mpi4py_1.3.1+hg20131106.orig/.hg_archival.txt0000644000000000000000000000022512211706251016377 0ustar 00000000000000repo: a4059b56eeb3844ffbab94f9260c3e709b988375 node: c66c1be9d40ae69aa61c45572fbe14ef3209378e branch: default latesttag: 1.3.1 latesttagdistance: 11 mpi4py_1.3.1+hg20131106.orig/.hgignore0000644000000000000000000000036012211706251015114 0ustar 00000000000000syntax: glob build docs/*.html docs/*.pdf docs/*.info docs/*.[137] docs/apiref docs/usrman dist MANIFEST __pycache__ *.pyc *.pyo .tox #src/mpi4py.MPE.c #src/mpi4py.MPI.c #src/include/mpi4py/mpi4py.MPI.h #src/include/mpi4py/mpi4py.MPI_api.h mpi4py_1.3.1+hg20131106.orig/.hgtags0000644000000000000000000000050512211706251014570 0ustar 00000000000000b511962953fed3cb5e1526b49fd8346181b8721b 1.0.0 01e1e85af0763ea2152b78ea26cacf30e4de03be 1.1.0 209ee7c190bede6ff2eb8ed3bfb7194a155545a1 1.2 7d1e259f4ecb1d44c0aebe59c8e9dfc1f0cc065c 1.2.1 ba666350b5c77e77e80874cca3cfac330ed14aff 1.2.2 f45d732db492d508430e7ef96cd83fbfa706166b 1.3 4e940dec3386ded5a1dc186a2ae6e11d5eab07d4 1.3.1 mpi4py_1.3.1+hg20131106.orig/HISTORY.txt0000644000000000000000000001750212211706251015221 0ustar 00000000000000======================= HISTORY: MPI for Python ======================= :Author: Lisandro Dalcin :Contact: dalcinl@gmail.com :Web Site: http://mpi4py.googlecode.com/ :Organization: CIMEC :Address: CCT CONICET, 3000 Santa Fe, Argentina Release 1.3.1 [2013-08-07] ========================== * Regenerate C wrappers with Cython 0.19.1 to support Python 3.3. * Install ``*.pxd`` files in ``/mpi4py`` to ease the support for Cython's ``cimport`` statement in code requiring to access mpi4py internals. * As a side-effect of using Cython 0.19.1, ancient Python 2.3 is no longer supported. If you really need it, you can install an older Cython and run ``python setup.py build_src --force``. Release 1.3 [2012-01-20] ======================== * Now ``Comm.recv()`` accept a buffer to receive the message. * Add ``Comm.irecv()`` and ``Request.{wait|test}[any|all]()``. * Add ``Intracomm.Spawn_multiple()``. * Better buffer handling for PEP 3118 and legacy buffer interfaces. * Add support for attribute attribute caching on communicators, datatypes and windows. * Install MPI-enabled Python interpreter as ``/mpi4py/bin/python-mpi``. * Windows: Support for building with Open MPI. Release 1.2.2 [2010-09-13] ========================== * Add ``mpi4py.get_config()`` to retrieve information (compiler wrappers, includes, libraries, etc) about the MPI implementation employed to build mpi4py. * Workaround Python libraries with missing GILState-related API calls in case of non-threaded Python builds. * Windows: look for MPICH2, DeinoMPI, Microsoft HPC Pack at their default install locations under %ProgramFiles. * MPE: fix hacks related to old API's, these hacks are broken when MPE is built with a MPI implementations other than MPICH2. * HP-MPI: fix for missing Fortran datatypes, use dlopen() to load the MPI shared library before MPI_Init() * Many distutils-related fixes, cleanup, and enhancements, better logics to find MPI compiler wrappers. * Support for ``pip install mpi4py``. Release 1.2.1 [2010-02-26] ========================== * Fix declaration in Cython include file. This declaration, while valid for Cython, broke the simple-minded parsing used in conf/mpidistutils.py to implement configure-tests for availability of MPI symbols. * Update SWIG support and make it compatible with Python 3. Also generate an warning for SWIG < 1.3.28. * Fix distutils-related issues in Mac OS X. 
Now the ARCHFLAGS environment variable is honored for all of Python's ``config/Makefile`` variables. * Fix issues with Open MPI < 1.4.2 related to error checking and ``MPI_XXX_NULL`` handles. Release 1.2 [2009-12-29] ======================== * Automatic MPI datatype discovery for NumPy arrays and PEP-3118 buffers. Now buffer-like objects can be messaged directly; it is no longer required to explicitly pass a 2/3-list/tuple like ``[data, MPI.DOUBLE]``, or ``[data, count, MPI.DOUBLE]``. Only basic types are supported, i.e., all C/C99-native signed/unsigned integral types and single/double precision real/complex floating types. Many thanks to Eilif Muller for the initial feedback. * Nonblocking send of pickled Python objects. Many thanks to Andreas Kloeckner for the initial patch and enlightening discussion about this enhancement. * ``Request`` instances now hold a reference to the Python object exposing the buffer involved in point-to-point communication or parallel I/O. Many thanks to Andreas Kloeckner for the initial feedback. * Support for logging of user-defined states and events using `MPE `_. Runtime (i.e., without requiring a recompile!) activation of logging of all MPI calls is supported on POSIX platforms implementing ``dlopen()``. * Support for all the new features in MPI-2.2 (new C99 and F90 datatypes, distributed graph topology, local reduction operation, and other minor enhancements). * Fix the annoying issues related to Open MPI and Python dynamic loading of extension modules on platforms supporting ``dlopen()``. * Fix SLURM dynamic loading issues on SiCortex. Many thanks to Ian Langmore for providing me shell access. Release 1.1.0 [2009-06-06] ========================== * Fix bug in ``Comm.Iprobe()`` that caused segfaults as Python C-API calls were issued with the GIL released (issue #2). * Add ``Comm.bsend()`` and ``Comm.ssend()`` for buffered and synchronous send semantics when communicating general Python objects. * Now the call ``Info.Get(key)`` returns a *single* value (i.e., instead of a 2-tuple); this value is ``None`` if ``key`` is not in the ``Info`` object, or a string otherwise. Previously, the call redundantly returned ``(None, False)`` for missing key-value pairs; ``None`` is enough to signal a missing entry. * Add support for parametrized Fortran datatypes. * Add support for decoding user-defined datatypes. * Add support for user-defined reduction operations on memory buffers. However, at most 16 user-defined reduction operations can be created. Ask the author for more room if you need it. Release 1.0.0 [2009-03-20] ========================== This is the first release of the all-new, Cython-based, implementation of *MPI for Python*. Unfortunately, this implementation is not backward-compatible with the previous one. The list below summarizes the more important changes that can impact user codes. * Some communication calls had *overloaded* functionality. Now there is a clear distinction between communication of general Python objects with *pickle*, and (fast, near C-speed) communication of buffer-like objects (e.g., NumPy arrays). - for communicating general Python objects, you have to use all-lowercase methods, like ``send()``, ``recv()``, ``bcast()``, etc. - for communicating array data, you have to use ``Send()``, ``Recv()``, ``Bcast()``, etc. methods.
Buffer arguments to these calls must be explicitly specified by using a 2/3-list/tuple like ``[data, MPI.DOUBLE]``, or ``[data, count, MPI.DOUBLE]`` (the former one uses the byte-size of ``data`` and the extent of the MPI datatype to define the ``count``). * Indexing a communicator with an integer returned a special object associating the communication with a target rank, alleviating you from specifying source/destination/root arguments in point-to-point and collective communications. This functionality is no longer available, expressions like:: MPI.COMM_WORLD[0].Send(...) MPI.COMM_WORLD[0].Recv(...) MPI.COMM_WORLD[0].Bcast(...) have to be replaced by:: MPI.COMM_WORLD.Send(..., dest=0) MPI.COMM_WORLD.Recv(..., source=0) MPI.COMM_WORLD.Bcast(..., root=0) * Automatic MPI initialization (i.e., at import time) requests the maximum level of MPI thread support (i.e., it is done by calling ``MPI_Init_thread()`` and passing ``MPI_THREAD_MULTIPLE``). In case you need to change this behavior, you can tweak the contents of the ``mpi4py.rc`` module. * In order to obtain the values of predefined attributes attached to the world communicator, now you have to use the ``Get_attr()`` method on the ``MPI.COMM_WORLD`` instance:: tag_ub = MPI.COMM_WORLD.Get_attr(MPI.TAG_UB) * In the previous implementation, ``MPI.COMM_WORLD`` and ``MPI.COMM_SELF`` were associated to **duplicates** of the (C-level) ``MPI_COMM_WORLD`` and ``MPI_COMM_SELF`` predefined communicator handles. Now this is no longer the case, ``MPI.COMM_WORLD`` and ``MPI.COMM_SELF`` proxies the **actual** ``MPI_COMM_WORLD`` and ``MPI_COMM_SELF`` handles. * Convenience aliases ``MPI.WORLD`` and ``MPI.SELF`` were removed. Use instead ``MPI.COMM_WORLD`` and ``MPI.COMM_SELF``. * Convenience constants ``MPI.WORLD_SIZE`` and ``MPI.WORLD_RANK`` were removed. Use instead ``MPI.COMM_WORLD.Get_size()`` and ``MPI.COMM_WORLD.Get_rank()``. mpi4py_1.3.1+hg20131106.orig/LICENSE.txt0000644000000000000000000000311012211706251015130 0ustar 00000000000000======================= LICENSE: MPI for Python ======================= :Author: Lisandro Dalcin :Contact: dalcinl@gmail.com :Web Site: http://mpi4py.googlecode.com/ :Organization: CIMEC :Address: CCT CONICET, 3000 Santa Fe, Argentina Copyright (c) 2013, Lisandro Dalcin. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
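The communication conventions summarized in HISTORY.txt above (lowercase ``send()``/``recv()``/``bcast()`` for pickled Python objects, capitalized ``Send()``/``Recv()``/``Bcast()`` with buffer arguments for array data, explicit ``dest``/``source``/``root`` arguments, and ``Get_attr()`` for predefined attributes) are illustrated by the following sketch. It is not part of the original archive; it assumes NumPy is installed and that the script is launched with at least two MPI processes (e.g. ``mpiexec -n 2 python example.py``)::

    # Illustrative sketch only (not from the mpi4py sources); assumes NumPy
    # and at least two MPI processes.
    from mpi4py import MPI
    import numpy

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    assert comm.Get_size() >= 2

    # Lowercase methods communicate general Python objects via pickle.
    if rank == 0:
        comm.send({'answer': 42}, dest=1, tag=0)
    elif rank == 1:
        obj = comm.recv(source=0, tag=0)

    # Capitalized methods communicate buffer-like objects at near C speed.
    buf = numpy.zeros(10, dtype='d')
    if rank == 0:
        buf[:] = numpy.arange(10, dtype='d')
        comm.Send([buf, MPI.DOUBLE], dest=1, tag=1)  # explicit datatype
    elif rank == 1:
        comm.Recv(buf, source=0, tag=1)              # automatic discovery (1.2+)

    # Predefined attributes are queried with Get_attr() on the communicator.
    tag_ub = comm.Get_attr(MPI.TAG_UB)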
mpi4py_1.3.1+hg20131106.orig/README.txt0000644000000000000000000000152112211706251015007 0ustar 00000000000000======================= README: MPI for Python ======================= :Author: Lisandro Dalcin :Contact: dalcinl@gmail.com :Web Site: http://mpi4py.googlecode.com/ :Organization: CIMEC :Address: CCT CONICET, 3000 Santa Fe, Argentina Thank you for downloading the *MPI for Python* project archive. As this is a work in progress, please check the `project website`_ for updates. .. _project website: http://mpi4py.googlecode.com/ - To build and install this package, you must meet the following requirements. + A Python 2.4 to 2.7 and 3.0 to 3.3 distribution. + A working MPI 1.2/1.3/2.0/2.1/2.2 implementation. - This package uses standard `distutils`. For detailed instructions about requirements and the building/install process, read the file ``docs/source/usrman/install.rst``. mpi4py_1.3.1+hg20131106.orig/THANKS.txt0000644000000000000000000000125112211706251015042 0ustar 00000000000000====================== THANKS: MPI for Python ====================== :Author: Lisandro Dalcin :Contact: dalcinl@gmail.com :Web Site: http://mpi4py.googlecode.com/ :Organization: CIMEC :Address: CCT CONICET, 3000 Santa Fe, Argentina I would like to thank everybody who contributed in any way, with code, hints, testing, bug reports, ideas, moral support, or even complaints... -- Lisandro Brian Granger Albert Strasheim Frank Eisenmenger Fernando Perez Matthew Turk Wonseok Shin Sam Skillman Greg Tener Amir Khosrowshahi Eilif Muller Andreas Klöckner Christoph Statz Thomas Spura Yaroslav Halchenko Aron Ahmadia Christoph Deil mpi4py_1.3.1+hg20131106.orig/conf/CMakeLists.txt0000644000000000000000000001476612211706251017015 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com CMAKE_MINIMUM_REQUIRED(VERSION 2.6) PROJECT(mpi4py) FIND_PACKAGE(PythonInterp) FIND_PACKAGE(PythonLibs) FIND_PACKAGE(MPI) SET(mpi4py_SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/src") SET(mpi4py_BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}/mpi4py") FILE(GLOB mpi4py_PYTHON_FILES RELATIVE ${mpi4py_SOURCE_DIR} ${mpi4py_SOURCE_DIR}/*.py) FILE(GLOB mpi4py_HEADER_FILES RELATIVE ${mpi4py_SOURCE_DIR} ${mpi4py_SOURCE_DIR}/include/mpi4py/*.px[di] ${mpi4py_SOURCE_DIR}/include/mpi4py/*.pyx ${mpi4py_SOURCE_DIR}/include/mpi4py/*.[hi] ${mpi4py_SOURCE_DIR}/*.pxd ) FOREACH(file ${mpi4py_PYTHON_FILES} ${mpi4py_HEADER_FILES} ) SET(src "${mpi4py_SOURCE_DIR}/${file}") SET(tgt "${mpi4py_BINARY_DIR}/${file}") ADD_CUSTOM_COMMAND( DEPENDS ${src} OUTPUT ${tgt} COMMAND ${CMAKE_COMMAND} ARGS -E copy ${src} ${tgt} COMMENT "copy: ${file}" ) SET(mpi4py_OUTPUT_FILES ${mpi4py_OUTPUT_FILES} ${tgt}) ENDFOREACH(file) FOREACH(file ${mpi4py_PYTHON_FILES}) SET(mpi4py_py ${mpi4py_py} "${mpi4py_BINARY_DIR}/${file}") SET(mpi4py_pyc ${mpi4py_pyc} "${mpi4py_BINARY_DIR}/${file}c") SET(mpi4py_pyo ${mpi4py_pyo} "${mpi4py_BINARY_DIR}/${file}o") ENDFOREACH(file) ADD_CUSTOM_COMMAND( COMMAND ${CMAKE_COMMAND} ARGS -E echo 'from compileall import compile_dir' > compile_py COMMAND ${CMAKE_COMMAND} ARGS -E echo 'compile_dir(\"${mpi4py_BINARY_DIR}\")' >> compile_py COMMAND ${PYTHON_EXECUTABLE} ARGS compile_py COMMAND ${PYTHON_EXECUTABLE} ARGS -O compile_py COMMAND ${CMAKE_COMMAND} ARGS -E remove compile_py DEPENDS ${mpi4py_py} OUTPUT ${mpi4py_pyc} ${mpi4py_pyo} ) SET(mpi4py_OUTPUT_FILES ${mpi4py_OUTPUT_FILES} ${mpi4py_pyc} ${mpi4py_pyo}) FIND_PROGRAM(MPI_COMPILER_CC NAMES mpicc HINTS "${MPI_BASE_DIR}" PATH_SUFFIXES bin DOC "MPI C compiler wrapper") MARK_AS_ADVANCED(MPI_COMPILER_CC) 
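# The FIND_PROGRAM/MARK_AS_ADVANCED pairs above and below locate the MPI
# compiler wrappers (mpicc, mpicxx/mpic++/mpiCC, mpif77, mpif90); the wrapper
# paths found here are written into the generated mpi.cfg by the custom
# command defined further down in this file.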
FIND_PROGRAM(MPI_COMPILER_CXX NAMES mpicxx mpic++ mpiCC HINTS "${MPI_BASE_DIR}" PATH_SUFFIXES bin DOC "MPI C++ compiler wrapper") MARK_AS_ADVANCED(MPI_COMPILER_CXX) find_program(MPI_COMPILER_F77 NAMES mpif77 HINTS "${MPI_BASE_DIR}" PATH_SUFFIXES bin DOC "MPI F77 compiler wrapper") MARK_AS_ADVANCED(MPI_COMPILER_F77) FIND_PROGRAM(MPI_COMPILER_F90 NAMES mpif90 HINTS "${MPI_BASE_DIR}" PATH_SUFFIXES bin DOC "MPI F90 compiler wrapper") MARK_AS_ADVANCED(MPI_COMPILER_F90) FOREACH(file "mpi.cfg") SET(tgt "${mpi4py_BINARY_DIR}/${file}") ADD_CUSTOM_COMMAND( OUTPUT ${tgt} COMMAND ${CMAKE_COMMAND} ARGS -E echo '[mpi]' > "${tgt}" COMMAND ${CMAKE_COMMAND} ARGS -E echo 'mpicc = ${MPI_COMPILER_CC}' >> ${tgt} COMMAND ${CMAKE_COMMAND} ARGS -E echo 'mpicxx = ${MPI_COMPILER_CXX}' >> ${tgt} COMMAND ${CMAKE_COMMAND} ARGS -E echo 'mpif77 = ${MPI_COMPILER_F77}' >> ${tgt} COMMAND ${CMAKE_COMMAND} ARGS -E echo 'mpif90 = ${MPI_COMPILER_F90}' >> ${tgt} COMMENT "write: ${file}" ) SET(mpi4py_OUTPUT_FILES ${mpi4py_OUTPUT_FILES} ${tgt}) ENDFOREACH(file) ADD_CUSTOM_TARGET(mpi4py ALL DEPENDS ${mpi4py_OUTPUT_FILES}) INCLUDE_DIRECTORIES( ${MPI_INCLUDE_PATH} ${PYTHON_INCLUDE_PATH} "${mpi4py_SOURCE_DIR}" ) # --- mpi4py.MPI --- PYTHON_ADD_MODULE(mpi4py.MPI MODULE "${mpi4py_SOURCE_DIR}/MPI.c") SET_TARGET_PROPERTIES( mpi4py.MPI PROPERTIES OUTPUT_NAME "MPI" PREFIX "" COMPILE_FLAGS "${MPI_COMPILE_FLAGS}" LINK_FLAGS "${MPI_LINK_FLAGS}" LIBRARY_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}" RUNTIME_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}" LINKER_LANGUAGE C ) TARGET_LINK_LIBRARIES(mpi4py.MPI ${PYTHON_LIBRARY}) TARGET_LINK_LIBRARIES(mpi4py.MPI ${MPI_LIBRARIES}) # --- mpi4py.MPE --- PYTHON_ADD_MODULE(mpi4py.MPE MODULE ${mpi4py_SOURCE_DIR}/MPE.c) SET_TARGET_PROPERTIES( mpi4py.MPE PROPERTIES OUTPUT_NAME "MPE" PREFIX "" COMPILE_FLAGS "${MPE_COMPILE_FLAGS}" "${MPI_COMPILE_FLAGS}" LINK_FLAGS "${MPE_LINK_FLAGS}" "${MPI_LINK_FLAGS}" LIBRARY_OUTPUT_DIRECTORY ${mpi4py_BINARY_DIR} RUNTIME_OUTPUT_DIRECTORY ${mpi4py_BINARY_DIR} LINKER_LANGUAGE C ) TARGET_LINK_LIBRARIES(mpi4py.MPE ${PYTHON_LIBRARY}) TARGET_LINK_LIBRARIES(mpi4py.MPE ${MPE_LIBRARY}) TARGET_LINK_LIBRARIES(mpi4py.MPE ${MPI_LIBRARIES}) # --- mpi4py.dl --- PYTHON_ADD_MODULE(mpi4py.dl MODULE "${mpi4py_SOURCE_DIR}/dynload.c") SET_TARGET_PROPERTIES( mpi4py.dl PROPERTIES OUTPUT_NAME "dl" PREFIX "" LIBRARY_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}" RUNTIME_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}" LINKER_LANGUAGE C ) TARGET_LINK_LIBRARIES(mpi4py.dl ${PYTHON_LIBRARY}) TARGET_LINK_LIBRARIES(mpi4py.dl ${CMAKE_DL_LIBS}) # --- mpi4py/bin/python-mpi --- ADD_EXECUTABLE(python-mpi "${mpi4py_SOURCE_DIR}/python.c") SET_TARGET_PROPERTIES( python-mpi PROPERTIES COMPILE_FLAGS "${MPI_COMPILE_FLAGS}" LINK_FLAGS "${MPI_LINK_FLAGS}" RUNTIME_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}/bin" LINKER_LANGUAGE C ) TARGET_LINK_LIBRARIES(python-mpi ${PYTHON_LIBRARY}) TARGET_LINK_LIBRARIES(python-mpi ${MPI_LIBRARIES}) # --- mpi4py/lib-pmpi/libmpe.so --- ADD_LIBRARY(pmpi-mpe MODULE "${mpi4py_SOURCE_DIR}/pmpi-mpe.c") SET_TARGET_PROPERTIES( pmpi-mpe PROPERTIES OUTPUT_NAME "mpe" LIBRARY_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}/lib-pmpi" RUNTIME_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}/lib-pmpi" LINKER_LANGUAGE C ) TARGET_LINK_LIBRARIES(pmpi-mpe ${MPE_LIBRARIES}) TARGET_LINK_LIBRARIES(pmpi-mpe ${MPI_LIBRARIES}) # --- mpi4py/lib-pmpi/libvt.so --- ADD_LIBRARY(pmpi-vt MODULE "${mpi4py_SOURCE_DIR}/pmpi-vt.c") SET_TARGET_PROPERTIES( pmpi-vt PROPERTIES OUTPUT_NAME "vt" LIBRARY_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}/lib-pmpi" RUNTIME_OUTPUT_DIRECTORY 
"${mpi4py_BINARY_DIR}/lib-pmpi" LINKER_LANGUAGE C ) TARGET_LINK_LIBRARIES(pmpi-vt ${VT_LIBRARIES}) TARGET_LINK_LIBRARIES(pmpi-vt ${MPI_LIBRARIES}) # --- mpi4py/lib-pmpi/libvt-mpi.so --- ADD_LIBRARY(pmpi-vt-mpi MODULE "${mpi4py_SOURCE_DIR}/pmpi-vt-mpi.c") SET_TARGET_PROPERTIES( pmpi-vt-mpi PROPERTIES OUTPUT_NAME "vt-mpi" LIBRARY_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}/lib-pmpi" RUNTIME_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}/lib-pmpi" LINKER_LANGUAGE C ) TARGET_LINK_LIBRARIES(pmpi-vt-mpi ${VT_MPI_LIBRARIES}) TARGET_LINK_LIBRARIES(pmpi-vt-mpi ${MPI_LIBRARIES}) # --- mpi4py/lib-pmpi/libvt-hyb.so --- ADD_LIBRARY(pmpi-vt-hyb MODULE "${mpi4py_SOURCE_DIR}/pmpi-vt-hyb.c") SET_TARGET_PROPERTIES( pmpi-vt-hyb PROPERTIES OUTPUT_NAME "vt-hyb" LIBRARY_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}/lib-pmpi" RUNTIME_OUTPUT_DIRECTORY "${mpi4py_BINARY_DIR}/lib-pmpi" LINKER_LANGUAGE C ) TARGET_LINK_LIBRARIES(pmpi-vt-hyb ${VT_HYB_LIBRARIES}) TARGET_LINK_LIBRARIES(pmpi-vt-hyb ${MPI_LIBRARIES}) mpi4py_1.3.1+hg20131106.orig/conf/MANIFEST.in0000644000000000000000000000071712211706251016002 0ustar 00000000000000include setup*.py *.cfg *.txt tox.ini recursive-include demo *.py *.txt *.pyx *.i *.c *.cxx *.f90 recursive-include demo [M,m]akefile *.sh *.bat recursive-include conf *.py *.txt *.cfg *.in *.sh *.bat recursive-include src *.py *.pyx *.px[di] *.h *.c *.i recursive-include test *.py include docs/*.html include docs/*.pdf include docs/*.info include docs/*.[137] recursive-include docs/usrman * recursive-include docs/apiref * recursive-include docs/source * mpi4py_1.3.1+hg20131106.orig/conf/__init__.py0000644000000000000000000000007012211706251016345 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com mpi4py_1.3.1+hg20131106.orig/conf/cythonize.bat0000644000000000000000000000027012211706251016742 0ustar 00000000000000@echo off python -m cython --cleanup 3 -w src -I include %* mpi4py.MPE.pyx python -m cython --cleanup 3 -w src -I include %* mpi4py.MPI.pyx move src\mpi4py.MPI*.h src\include\mpi4pympi4py_1.3.1+hg20131106.orig/conf/cythonize.py0000644000000000000000000000416712211706251016635 0ustar 00000000000000#!/usr/bin/env python import sys, os def cythonize(source, includes=(), destdir_c=None, destdir_h=None, wdir=None): from Cython.Compiler.Main import \ CompilationOptions, default_options, \ compile, \ PyrexError from Cython.Compiler import Options cwd = os.getcwd() try: name, ext = os.path.splitext(source) outputs_c = [name+'.c'] outputs_h = [name+'.h', name+'_api.h'] # change working directory if wdir: os.chdir(wdir) # run Cython on source options = CompilationOptions(default_options) options.output_file = outputs_c[0] options.include_path = list(includes) Options.generate_cleanup_code = 3 any_failures = 0 try: result = compile(source, options) if result.num_errors > 0: any_failures = 1 except (EnvironmentError, PyrexError): e = sys.exc_info()[1] sys.stderr.write(str(e) + '\n') any_failures = 1 if any_failures: for output in outputs_c + outputs_h: try: os.remove(output) except OSError: pass return 1 # move ouputs for destdir, outputs in ( (destdir_c, outputs_c), (destdir_h, outputs_h)): if destdir is None: continue for output in outputs: dest = os.path.join( destdir, os.path.basename(output)) try: os.remove(dest) except OSError: pass os.rename(output, dest) # return 0 # finally: os.chdir(cwd) if __name__ == "__main__": sys.exit( cythonize('mpi4py.MPI.pyx', includes=['include'], destdir_h=os.path.join('include', 'mpi4py'), wdir='src') or cythonize('mpi4py.MPE.pyx', includes=['include'], 
wdir='src') ) mpi4py_1.3.1+hg20131106.orig/conf/cythonize.sh0000755000000000000000000000027412211706251016615 0ustar 00000000000000#!/bin/sh python -m cython --cleanup 3 -w src -Iinclude $@ mpi4py.MPE.pyx && \ python -m cython --cleanup 3 -w src -Iinclude $@ mpi4py.MPI.pyx && \ mv src/mpi4py.MPI*.h src/include/mpi4py mpi4py_1.3.1+hg20131106.orig/conf/epydoc.cfg0000644000000000000000000001052212211706251016203 0ustar 00000000000000[epydoc] # Epydoc section marker (required by ConfigParser) # The list of objects to document. Objects can be named using # dotted names, module filenames, or package directory names. # Alases for this option include "objects" and "values". modules: mpi4py # The type of output that should be generated. Should be one # of: html, text, latex, dvi, ps, pdf. #output: html # The path to the output directory. May be relative or absolute. #target: docs/html/ # An integer indicating how verbose epydoc should be. The default # value is 0; negative values will supress warnings and errors; # positive values will give more verbose output. verbosity: 0 # A boolean value indicating that Epydoc should show a tracaback # in case of unexpected error. By default don't show tracebacks #debug: 0 # If True, don't try to use colors or cursor control when doing # textual output. The default False assumes a rich text prompt #simple-term: 0 ### Generation options # The default markup language for docstrings, for modules that do # not define __docformat__. Defaults to epytext. docformat: reStructuredText # Whether or not parsing should be used to examine objects. parse: yes # Whether or not introspection should be used to examine objects. introspect: yes # Don't examine in any way the modules whose dotted name match this # regular expression pattern. exclude: # Don't perform introspection on the modules whose dotted name match this # regular expression pattern. #exclude-introspect # Don't perform parsing on the modules whose dotted name match this # regular expression pattern. #exclude-parse # The format for showing inheritance objects. # It should be one of: 'grouped', 'listed', 'included'. inheritance: listed # Whether or not to inclue private variables. (Even if included, # private variables will be hidden by default.) private: yes # Whether or not to list each module's imports. imports: no # Whether or not to include syntax highlighted source code in # the output (HTML only). sourcecode: yes # Whether or not to include a a page with Epydoc log, containing # effective option at the time of generation and the reported logs. include-log: no ### Output options # The documented project's name. name: MPI for Python # The documented project's URL. url: http://mpi4py.googlecode.com/ # The CSS stylesheet for HTML output. Can be the name of a builtin # stylesheet, or the name of a file. css: white # HTML code for the project link in the navigation bar. If left # unspecified, the project link will be generated based on the # project's name and URL. #link: My Cool Project # The "top" page for the documentation. Can be a URL, the name # of a module or class, or one of the special names "trees.html", # "indices.html", or "help.html" #top: os.path # An alternative help file. The named file should contain the # body of an HTML file; navigation bars will be added to it. #help: my_helpfile.html # Whether or not to include a frames-based table of contents. frames: yes # Whether each class should be listed in its own section when # generating LaTeX or PDF output. 
separate-classes: no ### API linking options # Define a new API document. A new interpreted text role # will be created #external-api: epydoc # Use the records in this file to resolve objects in the API named NAME. #external-api-file: epydoc:api-objects.txt # Use this URL prefix to configure the string returned for external API. #external-api-root: epydoc:http://epydoc.sourceforge.net/api ### Graph options # The list of graph types that should be automatically included # in the output. Graphs are generated using the Graphviz "dot" # executable. Graph types include: "classtree", "callgraph", # "umlclasstree". Use "all" to include all graph types graph: classtree # The path to the Graphviz "dot" executable, used to generate # graphs. #dotpath: /usr/local/bin/dot # The name of one or more pstat files (generated by the profile # or hotshot module). These are used to generate call graphs. #pstat: profile.out # Specify the font used to generate Graphviz graphs. # (e.g., helvetica or times). graph-font: Helvetica # Specify the font size used to generate Graphviz graphs. graph-font-size: 10 ### Return value options # The condition upon which Epydoc should exit with a non-zero # exit status. Possible values are error, warning, docstring_warning #fail-on: error mpi4py_1.3.1+hg20131106.orig/conf/epydocify.py0000755000000000000000000000600112211706251016604 0ustar 00000000000000#!/usr/bin/env python # -------------------------------------------------------------------- from mpi4py import MPI try: from signal import signal, SIGPIPE, SIG_IGN signal(SIGPIPE, SIG_IGN) except ImportError: pass # -------------------------------------------------------------------- try: from docutils.nodes import NodeVisitor NodeVisitor.unknown_visit = lambda self, node: None NodeVisitor.unknown_departure = lambda self, node: None except ImportError: pass try: # epydoc 3.0.1 + docutils 0.6 from docutils.nodes import Text try: from collections import UserString except ImportError: from UserString import UserString if not isinstance(Text, UserString): def Text_get_data(s): try: return s._data except AttributeError: return s.astext() def Text_set_data(s, d): s.astext = lambda: d s._data = d Text.data = property(Text_get_data, Text_set_data) except ImportError: pass # -------------------------------------------------------------------- from epydoc.docwriter import dotgraph import re dotgraph._DOT_VERSION_RE = \ re.compile(r'dot (?:- Graphviz )version ([\d\.]+)') try: dotgraph.DotGraph.DEFAULT_HTML_IMAGE_FORMAT dotgraph.DotGraph.DEFAULT_HTML_IMAGE_FORMAT = 'png' except AttributeError: DotGraph_to_html = dotgraph.DotGraph.to_html DotGraph_run_dot = dotgraph.DotGraph._run_dot def to_html(self, image_file, image_url, center=True): if image_file[-4:] == '.gif': image_file = image_file[:-4] + '.png' if image_url[-4:] == '.gif': image_url = image_url[:-4] + '.png' return DotGraph_to_html(self, image_file, image_url) def _run_dot(self, *options): if '-Tgif' in options: opts = list(options) for i, o in enumerate(opts): if o == '-Tgif': opts[i] = '-Tpng' options = type(options)(opts) return DotGraph_run_dot(self, *options) dotgraph.DotGraph.to_html = to_html dotgraph.DotGraph._run_dot = _run_dot # -------------------------------------------------------------------- import re _SIGNATURE_RE = re.compile( # Class name (for builtin methods) r'^\s*((?P\w+)\.)?' 
+ # The function name r'(?P\w+)' + # The parameters r'\(((?P(?:self|cls|mcs)),?)?(?P.*)\)' + # The return value (optional) r'(\s*(->)\s*(?P\S.*?))?'+ # The end marker r'\s*(\n|\s+(--|<=+>)\s+|$|\.\s+|\.\n)') from epydoc import docstringparser as dsp dsp._SIGNATURE_RE = _SIGNATURE_RE # -------------------------------------------------------------------- import sys, os import epydoc.cli def epydocify(): dirname = os.path.dirname(__file__) config = os.path.join(dirname, 'epydoc.cfg') sys.argv.append('--config=' + config) epydoc.cli.cli() if __name__ == '__main__': epydocify() # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/conf/mpiconfig.py0000644000000000000000000003345212211706251016573 0ustar 00000000000000import sys, os, platform, re from distutils import sysconfig from distutils.util import convert_path from distutils.util import split_quoted from distutils.spawn import find_executable from distutils import log try: from collections import OrderedDict except ImportError: OrderedDict = dict try: from configparser import ConfigParser from configparser import Error as ConfigParserError except ImportError: from ConfigParser import ConfigParser from ConfigParser import Error as ConfigParserError class Config(object): def __init__(self, logger=None): self.log = logger or log self.section = None self.filename = None self.compiler_info = OrderedDict(( ('mpicc' , None), ('mpicxx' , None), ('mpif77' , None), ('mpif90' , None), ('mpif95' , None), ('mpild' , None), )) self.library_info = OrderedDict(( ('define_macros' , []), ('undef_macros' , []), ('include_dirs' , []), ('libraries' , []), ('library_dirs' , []), ('runtime_library_dirs' , []), ('extra_compile_args' , []), ('extra_link_args' , []), ('extra_objects' , []), )) def __bool__(self): for v in self.compiler_info.values(): if v: return True for v in self.library_info.values(): if v: return True return False __nonzero__ = __bool__ def get(self, k, d=None): if k in self.compiler_info: return self.compiler_info[k] if k in self.library_info: return self.library_info[k] return d def info(self, log=None): if log is None: log = self.log mpicc = self.compiler_info.get('mpicc') mpicxx = self.compiler_info.get('mpicxx') mpif77 = self.compiler_info.get('mpif77') mpif90 = self.compiler_info.get('mpif90') mpif95 = self.compiler_info.get('mpif95') mpild = self.compiler_info.get('mpild') if mpicc: log.info("MPI C compiler: %s", mpicc) if mpicxx: log.info("MPI C++ compiler: %s", mpicxx) if mpif77: log.info("MPI F77 compiler: %s", mpif77) if mpif90: log.info("MPI F90 compiler: %s", mpif90) if mpif95: log.info("MPI F95 compiler: %s", mpif95) if mpild: log.info("MPI linker: %s", mpild) def update(self, config, **more): if hasattr(config, 'keys'): config = config.items() for option, value in config: if option in self.compiler_info: self.compiler_info[option] = value if option in self.library_info: self.library_info[option] = value if more: self.update(more) def setup(self, options, environ=None): if environ is None: environ = os.environ self.setup_library_info(options, environ) self.setup_compiler_info(options, environ) def setup_library_info(self, options, environ): filename = section = None mpiopt = getattr(options, 'mpi', None) mpiopt = environ.get('MPICFG', mpiopt) if mpiopt: if ',' in mpiopt: section, filename = mpiopt.split(',', 1) else: section = mpiopt if not filename: filename = "mpi.cfg" if not section: section = "mpi" mach = platform.machine() arch = platform.architecture()[0] plat = sys.platform 
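        # Normalize the platform name and, below, derive a list of section-name
        # suffixes (e.g. "mpi-linux-x86_64") that are tried when selecting a
        # section from the mpi.cfg configuration file.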
osnm = os.name if 'linux' == plat[:5]: plat = 'linux' elif 'sunos' == plat[:5]: plat = 'solaris' elif 'win' == plat[:3]: plat = 'windows' suffixes = [] suffixes.append(plat+'-'+mach) suffixes.append(plat+'-'+arch) suffixes.append(plat) suffixes.append(osnm+'-'+mach) suffixes.append(osnm+'-'+arch) suffixes.append(osnm) suffixes.append(mach) suffixes.append(arch) sections = [section+"-"+s for s in suffixes] sections += [section] self.load(filename, sections) if not self: if os.name == 'posix': self._setup_posix() if sys.platform == 'win32': self._setup_windows() def _setup_posix(self): pass def _setup_windows(self): from glob import glob ProgramFiles = os.environ.get('ProgramFiles', '') CCP_HOME = os.environ.get('CCP_HOME', '') for (name, prefix, suffix) in ( ('mpich3', ProgramFiles, 'MPICH'), ('mpich2', ProgramFiles, 'MPICH2'), ('openmpi', ProgramFiles, 'OpenMPI'), ('openmpi', ProgramFiles, 'OpenMPI*'), ('deinompi', ProgramFiles, 'DeinoMPI'), ('msmpi', CCP_HOME, ''), ('msmpi', ProgramFiles, 'Microsoft HPC Pack 2012'), ('msmpi', ProgramFiles, 'Microsoft HPC Pack 2012 SDK'), ('msmpi', ProgramFiles, 'Microsoft HPC Pack 2008 R2'), ('msmpi', ProgramFiles, 'Microsoft HPC Pack 2008'), ('msmpi', ProgramFiles, 'Microsoft HPC Pack 2008 SDK'), ): mpi_dir = os.path.join(prefix, suffix) if '*' in mpi_dir: dirs = glob(mpi_dir) if dirs: mpi_dir = max(dirs) if not (mpi_dir and os.path.isdir(mpi_dir)): continue define_macros = [] include_dirs = [os.path.join(mpi_dir, 'include')] library = 'mpi' library_dir = os.path.join(mpi_dir, 'lib') if name == 'openmpi': define_macros.append(('OMPI_IMPORTS', None)) library = 'libmpi' if name == 'msmpi': include_dirs.append(os.path.join(mpi_dir, 'inc')) library = 'msmpi' bits = platform.architecture()[0] if bits == '32bit': library_dir = os.path.join(library_dir, 'i386') if bits == '64bit': library_dir = os.path.join(library_dir, 'amd64') self.library_info.update( define_macros=define_macros, include_dirs=include_dirs, libraries=[library], library_dirs=[library_dir], ) self.section = name self.filename = [mpi_dir] break def setup_compiler_info(self, options, environ): def find_exe(cmd, path=None): if not cmd: return None parts = split_quoted(cmd) exe, args = parts[0], parts[1:] if not os.path.isabs(exe) and path: exe = os.path.basename(exe) exe = find_executable(exe, path) if not exe: return None return ' '.join([exe]+args) COMPILERS = ( ('mpicc', ['mpicc', 'mpcc_r']), ('mpicxx', ['mpicxx', 'mpic++', 'mpiCC', 'mpCC_r']), ('mpif77', ['mpif77', 'mpf77_r']), ('mpif90', ['mpif90', 'mpf90_r']), ('mpif95', ['mpif95', 'mpf95_r']), ('mpild', []), ) # compiler_info = {} PATH = environ.get('PATH', '') for name, _ in COMPILERS: cmd = (environ.get(name.upper()) or getattr(options, name, None) or self.compiler_info.get(name) or None) if cmd: exe = find_exe(cmd, path=PATH) if exe: path = os.path.dirname(exe) PATH = path + os.path.pathsep + PATH compiler_info[name] = exe else: self.log.error("error: '%s' not found", cmd) # if not self and not compiler_info: for name, candidates in COMPILERS: for cmd in candidates: cmd = find_exe(cmd) if cmd: compiler_info[name] = cmd break # self.compiler_info.update(compiler_info) def load(self, filename="mpi.cfg", section='mpi'): if isinstance(filename, str): filenames = filename.split(os.path.pathsep) else: filenames = list(filename) if isinstance(section, str): sections = section.split(',') else: sections = list(section) # try: parser = ConfigParser(dict_type=OrderedDict) except TypeError: parser = ConfigParser() try: read_ok = 
parser.read(filenames) except ConfigParserError: self.log.error( "error: parsing configuration file/s '%s'", os.path.pathsep.join(filenames)) return None for section in sections: if parser.has_section(section): break section = None if not section: self.log.error( "error: section/s '%s' not found in file/s '%s'", ','.join(sections), os.path.pathsep.join(filenames)) return None parser_items = list(parser.items(section, vars=None)) # compiler_info = type(self.compiler_info)() for option, value in parser_items: if option in self.compiler_info: compiler_info[option] = value # pathsep = os.path.pathsep expanduser = os.path.expanduser expandvars = os.path.expandvars library_info = type(self.library_info)() for k, v in parser_items: if k in ('define_macros', 'undef_macros', ): macros = [e.strip() for e in v.split(',')] if k == 'define_macros': for i, m in enumerate(macros): try: # -DFOO=bar idx = m.index('=') macro = (m[:idx], m[idx+1:] or None) except ValueError: # -DFOO macro = (m, None) macros[i] = macro library_info[k] = macros elif k in ('include_dirs', 'library_dirs', 'runtime_dirs', 'runtime_library_dirs', ): if k == 'runtime_dirs': k = 'runtime_library_dirs' pathlist = [p.strip() for p in v.split(pathsep)] library_info[k] = [expanduser(expandvars(p)) for p in pathlist if p] elif k == 'libraries': library_info[k] = [e.strip() for e in split_quoted(v)] elif k in ('extra_compile_args', 'extra_link_args', ): library_info[k] = split_quoted(v) elif k == 'extra_objects': library_info[k] = [expanduser(expandvars(e)) for e in split_quoted(v)] elif hasattr(self, k): library_info[k] = v.strip() else: pass # self.section = section self.filename = read_ok self.compiler_info.update(compiler_info) self.library_info.update(library_info) return compiler_info, library_info, section, read_ok def dump(self, filename=None, section='mpi'): # prepare configuration values compiler_info = self.compiler_info.copy() library_info = self.library_info.copy() for k in library_info: if k in ('define_macros', 'undef_macros', ): macros = library_info[k] if k == 'define_macros': for i, (m, v) in enumerate(macros): if v is None: macros[i] = m else: macros[i] = '%s=%s' % (m, v) library_info[k] = ','.join(macros) elif k in ('include_dirs', 'library_dirs', 'runtime_library_dirs', ): library_info[k] = os.path.pathsep.join(library_info[k]) elif isinstance(library_info[k], list): library_info[k] = ' '.join(library_info[k]) # fill configuration parser try: parser = ConfigParser(dict_type=OrderedDict) except TypeError: parser = ConfigParser() parser.add_section(section) for option, value in compiler_info.items(): if not value: continue parser.set(section, option, value) for option, value in library_info.items(): if not value: continue parser.set(section, option, value) # save configuration file if filename is None: parser.write(sys.stdout) elif hasattr(filename, 'write'): parser.write(filename) elif isinstance(filename, str): f = open(filename, 'wt') try: parser.write(f) finally: f.close() return parser if __name__ == '__main__': import optparse parser = optparse.OptionParser() parser.add_option("--mpi", type="string") parser.add_option("--mpicc", type="string") parser.add_option("--mpicxx", type="string") parser.add_option("--mpif90", type="string") parser.add_option("--mpif77", type="string") parser.add_option("--mpild", type="string") (options, args) = parser.parse_args() logger = log.Log(log.INFO) conf = Config(logger) conf.setup(options) conf.dump() 
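# Illustrative usage sketch (added comment; not part of the original module).
# A minimal mpi.cfg as consumed by Config.load(); the option names below match
# the keys of Config.compiler_info and Config.library_info, and the wrapper
# paths are placeholders:
#
#     [mpi]
#     mpicc        = /usr/bin/mpicc
#     mpicxx       = /usr/bin/mpicxx
#     include_dirs = /usr/include/mpi
#     libraries    = mpi
#
# Given such a file, the class above can be driven by hand:
#
#     conf = Config()
#     conf.load('mpi.cfg', section='mpi')   # fills compiler_info/library_info
#     conf.dump()                           # echoes the effective [mpi] section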
mpi4py_1.3.1+hg20131106.orig/conf/mpidistutils.py0000644000000000000000000015731612211706251017360 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com """ Support for building mpi4py with distutils. """ # ----------------------------------------------------------------------------- import sys if sys.version[:3] == '3.0': from distutils import version version.cmp = lambda a, b : (a > b) - (a < b) del version del sys # ----------------------------------------------------------------------------- import sys, os, platform, re from distutils import sysconfig from distutils.util import convert_path from distutils.util import split_quoted from distutils.spawn import find_executable from distutils import log def fix_config_vars(names, values): values = list(values) if sys.platform == 'darwin': if 'ARCHFLAGS' in os.environ: ARCHFLAGS = os.environ['ARCHFLAGS'] for i, flag in enumerate(list(values)): if flag is None: continue flag, count = re.subn('-arch\s+\w+', ' ', flag) if count and ARCHFLAGS: flag = flag + ' ' + ARCHFLAGS values[i] = flag if 'SDKROOT' in os.environ: SDKROOT = os.environ['SDKROOT'] for i, flag in enumerate(list(values)): if flag is None: continue flag, count = re.subn('-isysroot [^ \t]*', ' ', flag) if count and SDKROOT: flag = flag + ' ' + '-isysroot ' + SDKROOT values[i] = flag return values if hasattr(sys, 'pypy_version_info'): config_vars = sysconfig.get_config_vars() for name in ('prefix', 'exec_prefix'): if name not in config_vars: config_vars[name] = os.path.normpath(getattr(sys, name)) def get_config_vars(*names): # Core Python configuration values = sysconfig.get_config_vars(*names) # Do any distutils flags fixup right now values = fix_config_vars(names, values) return values def fix_compiler_cmd(cc, mpicc): if not mpicc: return cc if not cc: return mpicc from os.path import basename cc = split_quoted(cc) i = 0 while basename(cc[i]) == 'env': i = 1 while '=' in cc[i]: i = i + 1 cc[i] = mpicc return ' '.join(cc) def fix_linker_cmd(ld, mpild): if not mpild: return ld if not ld: return mpild from os.path import basename ld = split_quoted(ld) i = 0 if (sys.platform.startswith('aix') and basename(ld[i]) == 'ld_so_aix'): i = i + 1 while basename(ld[i]) == 'env': i = i + 1 while '=' in ld[i]: i = i + 1 ld[i] = mpild return ' '.join(ld) def split_linker_cmd(ld): from os.path import basename ld = split_quoted(ld) if not ld: return '', '' i = 0 if (sys.platform.startswith('aix') and basename(ld[i]) == 'ld_so_aix'): i = i + 1 while basename(ld[i]) == 'env': i = i + 1 while '=' in ld[i]: i = i + 1 p = i + 1 ld, flags = ' '.join(ld[:p]), ' '.join(ld[p:]) return ld, flags from distutils.unixccompiler import UnixCCompiler rpath_option_orig = UnixCCompiler.runtime_library_dir_option def rpath_option(compiler, dir): option = rpath_option_orig(compiler, dir) if sys.platform.startswith('linux'): if option.startswith('-R'): option = option.replace('-R', '-Wl,-rpath,', 1) elif option.startswith('-Wl,-R,'): option = option.replace('-Wl,-R,', '-Wl,-rpath,', 1) return option UnixCCompiler.runtime_library_dir_option = rpath_option def customize_compiler(compiler, lang=None, mpicc=None, mpicxx=None, mpild=None, environ=None): if environ is None: environ = os.environ if compiler.compiler_type == 'unix': # Distutils configuration, actually obtained by parsing # :file:{prefix}/lib[32|64]/python{X}.{Y}/config/Makefile (cc, cxx, ccshared, ld, basecflags, opt) = get_config_vars ( 'CC', 'CXX', 'CCSHARED', 'LDSHARED', 'BASECFLAGS', 'OPT') cc = (cc or '').replace('-pthread', '') cxx 
= (cxx or '').replace('-pthread', '') ld = (ld or '').replace('-pthread', '') ld, ldshared = split_linker_cmd(ld) basecflags, opt = basecflags or '', opt or '' ccshared = ccshared or '' ldshared = ldshared or '' if hasattr(sys, 'pypy_version_info'): basecflags = '-Wall -Wimplicit' if not ccshared: ccshared = '-fPIC' if not ldshared: ldshared = '-shared' if sys.platform == 'darwin': ldshared += ' -Wl,-undefined,dynamic_lookup' # Compiler command overriding if not mpild and (mpicc or mpicxx): if lang == 'c': mpild = mpicc elif lang == 'c++': mpild = mpicxx else: mpild = mpicc or mpicxx if mpicc: cc = fix_compiler_cmd(cc, mpicc) if mpicxx: cxx = fix_compiler_cmd(cxx, mpicxx) if mpild: ld = fix_linker_cmd(ld, mpild) # Environment handling cppflags = cflags = cxxflags = ldflags = '' CPPFLAGS = environ.get('CPPFLAGS', '') CFLAGS = environ.get('CFLAGS', '') CXXFLAGS = environ.get('CXXFLAGS', '') LDFLAGS = environ.get('LDFLAGS', '') if CPPFLAGS: cppflags = cppflags + ' ' + CPPFLAGS cflags = cflags + ' ' + CPPFLAGS cxxflags = cxxflags + ' ' + CPPFLAGS ldflags = ldflags + ' ' + CPPFLAGS if CFLAGS: cflags = cflags + ' ' + CFLAGS ldflags = ldflags + ' ' + CFLAGS if CXXFLAGS: cxxflags = cxxflags + ' ' + CXXFLAGS ldflags = ldflags + ' ' + CXXFLAGS if LDFLAGS: ldflags = ldflags + ' ' + LDFLAGS basecflags = environ.get('BASECFLAGS', basecflags) opt = environ.get('OPT', opt ) ccshared = environ.get('CCSHARED', ccshared) ldshared = environ.get('LDSHARED', ldshared) cflags = ' '.join((basecflags, opt, cflags)) cxxflags = ' '.join((basecflags, opt, cxxflags)) cxxflags = cxxflags.replace('-Wstrict-prototypes', '') # Distutils compiler setup cpp = os.environ.get('CPP') or (cc + ' -E') cc_so = cc + ' ' + ccshared cxx_so = cxx + ' ' + ccshared ld_so = ld + ' ' + ldshared compiler.set_executables( preprocessor = cpp + ' ' + cppflags, compiler = cc + ' ' + cflags, compiler_so = cc_so + ' ' + cflags, compiler_cxx = cxx_so + ' ' + cxxflags, linker_so = ld_so + ' ' + ldflags, linker_exe = ld + ' ' + ldflags, ) try: compiler.compiler_cxx.remove('-Wstrict-prototypes') except: pass if compiler.compiler_type == 'mingw32': compiler.set_executables( preprocessor = 'gcc -mno-cygwin -E', ) if compiler.compiler_type in ('unix', 'cygwin', 'mingw32'): if lang == 'c++': def find_cmd_pos(cmd): pos = 0 if os.path.basename(cmd[pos]) == "env": pos = 1 while '=' in cmd[pos]: pos = pos + 1 return pos i = find_cmd_pos(compiler.compiler_so) j = find_cmd_pos(compiler.compiler_cxx) compiler.compiler_so[i] = compiler.compiler_cxx[j] try: compiler.compiler_so.remove('-Wstrict-prototypes') except: pass if (compiler.compiler_type == 'mingw32' and compiler.gcc_version >= '4.4'): # http://bugs.python.org/issue12641 for attr in ( 'preprocessor', 'compiler', 'compiler_cxx', 'compiler_so','linker_so', 'linker_exe'): try: getattr(compiler, attr).remove('-mno-cygwin') except: pass if compiler.compiler_type == 'msvc': if not compiler.initialized: compiler.initialize() compiler.ldflags_shared.append('/MANIFEST') compiler.ldflags_shared_debug.append('/MANIFEST') # ----------------------------------------------------------------------------- try: from mpiconfig import Config except ImportError: from conf.mpiconfig import Config def configuration(command_obj, verbose=True): config = Config() config.setup(command_obj) if verbose: if config.section and config.filename: log.info("MPI configuration: [%s] from '%s'", config.section, ','.join(config.filename)) config.info(log) return config def configure_compiler(compiler, config, lang=None): # mpicc = 
config.get('mpicc') mpicxx = config.get('mpicxx') mpild = config.get('mpild') customize_compiler(compiler, lang, mpicc=mpicc, mpicxx=mpicxx, mpild=mpild) # for k, v in config.get('define_macros', []): compiler.define_macro(k, v) for v in config.get('undef_macros', []): compiler.undefine_macro(v) for v in config.get('include_dirs', []): compiler.add_include_dir(v) for v in config.get('libraries', []): compiler.add_library(v) for v in config.get('library_dirs', []): compiler.add_library_dir(v) for v in config.get('runtime_library_dirs', []): compiler.add_runtime_library_dir(v) for v in config.get('extra_objects', []): compiler.add_link_object(v) if compiler.compiler_type in \ ('unix', 'intel', 'cygwin', 'mingw32'): cc_args = config.get('extra_compile_args', []) ld_args = config.get('extra_link_args', []) compiler.compiler += cc_args compiler.compiler_so += cc_args compiler.compiler_cxx += cc_args compiler.linker_so += ld_args compiler.linker_exe += ld_args return compiler # ----------------------------------------------------------------------------- try: from mpiscanner import Scanner except ImportError: try: from conf.mpiscanner import Scanner except ImportError: class Scanner(object): def parse_file(self, *args): raise NotImplementedError( "You forgot to grab 'mpiscanner.py'") class ConfigureMPI(object): SRCDIR = 'src' SOURCES = [os.path.join('include', 'mpi4py', 'libmpi.pxd')] DESTDIR = 'src' CONFIG_H = os.path.join('config', 'config.h') MISSING_H = 'missing.h' def __init__(self, config_cmd): self.scanner = Scanner() for filename in self.SOURCES: fullname = os.path.join(self.SRCDIR, filename) self.scanner.parse_file(fullname) self.config_cmd = config_cmd def run(self): results = [] for name, code in self.scanner.itertests(): log.info("checking for '%s' ..." 
% name) body = self.gen_one(results, code) ok = self.run_one(body) if not ok: log.info("**** failed check for '%s'" % name) results.append((name, ok)) return results def dump(self, results): destdir = self.DESTDIR config_h = os.path.join(destdir, self.CONFIG_H) missing_h = os.path.join(destdir, self.MISSING_H) log.info("writing '%s'", config_h) self.scanner.dump_config_h(config_h, results) log.info("writing '%s'", missing_h) self.scanner.dump_missing_h(missing_h, None) def gen_one(self, results, code): # configtest_h = "_configtest.h" self.config_cmd.temp_files.insert(0, configtest_h) fh = open(configtest_h, "w") try: sep = "/* " + ('-'*72)+ " */\n" fh.write(sep) self.scanner.dump_config_h(fh, results) fh.write(sep) self.scanner.dump_missing_h(fh, results) fh.write(sep) finally: fh.close() # body = ['#include "%s"' % configtest_h, 'int main(int argc, char **argv) {', '\n'.join([' ' + line for line in code.split('\n')]), ' return 0;', '}'] body = '\n'.join(body) + '\n' return body def run_one(self, body, lang='c'): ok = self.config_cmd.try_link(body, headers=['mpi.h'], lang=lang) return ok # ----------------------------------------------------------------------------- cmd_mpi_opts = [ ('mpild=', None, "MPI linker command, " "overridden by environment variable 'MPILD' " "(defaults to 'mpicc' or 'mpicxx' if any is available)"), ('mpif77=', None, "MPI F77 compiler command, " "overridden by environment variable 'MPIF77' " "(defaults to 'mpif77' if available)"), ('mpif90=', None, "MPI F90 compiler command, " "overridden by environment variable 'MPIF90' " "(defaults to 'mpif90' if available)"), ('mpif95=', None, "MPI F95 compiler command, " "overridden by environment variable 'MPIF95' " "(defaults to 'mpif95' if available)"), ('mpicxx=', None, "MPI C++ compiler command, " "overridden by environment variable 'MPICXX' " "(defaults to 'mpicxx', 'mpiCC', or 'mpic++' if any is available)"), ('mpicc=', None, "MPI C compiler command, " "overridden by environment variables 'MPICC' " "(defaults to 'mpicc' if available)"), ('mpi=', None, "specify a configuration section, " "and an optional list of configuration files " + "(e.g. 
--mpi=section,file1" + os.path.pathsep + "file2), " + "to look for MPI includes/libraries, " "overridden by environment variable 'MPICFG' " "(defaults to section 'mpi' in configuration file 'mpi.cfg')"), ('configure', None, "exhaustive test for checking missing MPI constants/types/functions"), ] def cmd_get_mpi_options(cmd_opts): optlist = [] for (option, _, _) in cmd_opts: if option[-1] == '=': option = option[:-1] option = option.replace('-','_') optlist.append(option) return optlist def cmd_initialize_mpi_options(cmd): mpiopts = cmd_get_mpi_options(cmd_mpi_opts) for op in mpiopts: setattr(cmd, op, None) def cmd_set_undefined_mpi_options(cmd, basecmd): mpiopts = cmd_get_mpi_options(cmd_mpi_opts) optlist = tuple(zip(mpiopts, mpiopts)) cmd.set_undefined_options(basecmd, *optlist) # ----------------------------------------------------------------------------- from distutils.core import setup as fcn_setup from distutils.core import Distribution as cls_Distribution from distutils.core import Extension as cls_Extension from distutils.core import Command from distutils.command import config as cmd_config from distutils.command import build as cmd_build from distutils.command import install as cmd_install from distutils.command import sdist as cmd_sdist from distutils.command import clean as cmd_clean from distutils.command import build_py as cmd_build_py from distutils.command import build_clib as cmd_build_clib from distutils.command import build_ext as cmd_build_ext from distutils.command import install_data as cmd_install_data from distutils.command import install_lib as cmd_install_lib from distutils.errors import DistutilsError from distutils.errors import DistutilsSetupError from distutils.errors import DistutilsPlatformError from distutils.errors import DistutilsOptionError from distutils.errors import CCompilerError # ----------------------------------------------------------------------------- # Distribution class supporting a 'executables' keyword class Distribution(cls_Distribution): def __init__ (self, attrs=None): # support for pkg data self.package_data = {} # PEP 314 self.provides = None self.requires = None self.obsoletes = None # supports 'executables' keyword self.executables = None cls_Distribution.__init__(self, attrs) def has_executables(self): return self.executables and len(self.executables) > 0 def is_pure (self): return (cls_Distribution.is_pure(self) and not self.has_executables()) # Extension class class Extension(cls_Extension): def __init__ (self, **kw): optional = kw.pop('optional', None) configure = kw.pop('configure', None) cls_Extension.__init__(self, **kw) self.optional = optional self.configure = configure # Library class class Library(Extension): def __init__ (self, **kw): kind = kw.pop('kind', "static") package = kw.pop('package', None) dest_dir = kw.pop('dest_dir', None) Extension.__init__(self, **kw) self.kind = kind self.package = package self.dest_dir = dest_dir # Executable class class Executable(Extension): def __init__ (self, **kw): package = kw.pop('package', None) dest_dir = kw.pop('dest_dir', None) Extension.__init__(self, **kw) self.package = package self.dest_dir = dest_dir # setup function def setup(**attrs): if 'distclass' not in attrs: attrs['distclass'] = Distribution if 'cmdclass' not in attrs: attrs['cmdclass'] = {} cmdclass = attrs['cmdclass'] for cmd in (config, build, install, test, clean, sdist, build_src, build_py, build_clib, build_ext, build_exe, install_lib, install_data, install_exe, ): if cmd.__name__ not in cmdclass: 
cmdclass[cmd.__name__] = cmd return fcn_setup(**attrs) # ----------------------------------------------------------------------------- # A minimalistic MPI program :-) ConfigTest = """\ int main(int argc, char **argv) { int ierr; ierr = MPI_Init(&argc, &argv); if (ierr) return -1; ierr = MPI_Finalize(); if (ierr) return -1; return 0; } """ class config(cmd_config.config): user_options = cmd_config.config.user_options + cmd_mpi_opts def initialize_options (self): cmd_config.config.initialize_options(self) cmd_initialize_mpi_options(self) self.noisy = 0 def finalize_options (self): cmd_config.config.finalize_options(self) if not self.noisy: self.dump_source = 0 def _clean(self, *a, **kw): if sys.platform.startswith('win'): for fn in ('_configtest.exe.manifest', ): if os.path.exists(fn): self.temp_files.append(fn) cmd_config.config._clean(self, *a, **kw) def check_header (self, header, headers=None, include_dirs=None): if headers is None: headers = [] log.info("checking for header '%s' ..." % header) body = "int main(int n, char**v) { return 0; }" ok = self.try_compile(body, list(headers) + [header], include_dirs) log.info(ok and 'success!' or 'failure.') return ok def check_macro (self, macro, headers=None, include_dirs=None): log.info("checking for macro '%s' ..." % macro) body = ("#ifndef %s\n" "#error macro '%s' not defined\n" "#endif\n") % (macro, macro) body += "int main(int n, char**v) { return 0; }\n" ok = self.try_compile(body, headers, include_dirs) return ok def check_library (self, library, library_dirs=None, headers=None, include_dirs=None, other_libraries=[], lang="c"): log.info("checking for library '%s' ..." % library) body = "int main(int n, char**v) { return 0; }" ok = self.try_link(body, headers, include_dirs, [library]+other_libraries, library_dirs, lang=lang) return ok def check_function (self, function, headers=None, include_dirs=None, libraries=None, library_dirs=None, decl=0, call=0, lang="c"): log.info("checking for function '%s' ..." % function) body = [] if decl: if call: proto = "int %s (void);" else: proto = "int %s;" if lang == "c": proto = "\n".join([ "#ifdef __cplusplus", "extern \"C\"", "#endif", proto]) body.append(proto % function) body.append( "int main (int n, char**v) {") if call: body.append(" (void)%s();" % function) else: body.append(" %s;" % function) body.append( " return 0;") body.append( "}") body = "\n".join(body) + "\n" ok = self.try_link(body, headers, include_dirs, libraries, library_dirs, lang=lang) return ok def check_symbol (self, symbol, type="int", headers=None, include_dirs=None, libraries=None, library_dirs=None, decl=0, lang="c"): log.info("checking for symbol '%s' ..." 
% symbol) body = [] if decl: body.append("%s %s;" % (type, symbol)) body.append("int main (int n, char**v) {") body.append(" %s v; v = %s;" % (type, symbol)) body.append(" return 0;") body.append("}") body = "\n".join(body) + "\n" ok = self.try_link(body, headers, include_dirs, libraries, library_dirs, lang=lang) return ok check_hdr = check_header check_lib = check_library check_func = check_function check_sym = check_symbol def run (self): # config = configuration(self, verbose=True) # test MPI C compiler self.compiler = getattr( self.compiler, 'compiler_type', self.compiler) self._check_compiler() configure_compiler(self.compiler, config, lang='c') self.try_link(ConfigTest, headers=['mpi.h'], lang='c') # test MPI C++ compiler self.compiler = getattr( self.compiler, 'compiler_type', self.compiler) self._check_compiler() configure_compiler(self.compiler, config, lang='c++') self.try_link(ConfigTest, headers=['mpi.h'], lang='c++') class build(cmd_build.build): user_options = cmd_build.build.user_options + cmd_mpi_opts def initialize_options(self): cmd_build.build.initialize_options(self) cmd_initialize_mpi_options(self) def finalize_options(self): cmd_build.build.finalize_options(self) config_cmd = self.get_finalized_command('config') if isinstance(config_cmd, config): cmd_set_undefined_mpi_options(self, 'config') def has_executables (self): return self.distribution.has_executables() sub_commands = \ [('build_src', lambda *args: True)] + \ cmd_build.build.sub_commands + \ [('build_exe', has_executables), ] class build_src(Command): description = "build C sources from Cython files" user_options = [ ('force', 'f', "forcibly build everything (ignore file timestamps)"), ] boolean_options = ['force'] def initialize_options(self): self.force = False def finalize_options(self): self.set_undefined_options('build', ('force', 'force'), ) def run(self): pass class build_py(cmd_build_py.build_py): if sys.version[:3] < '2.4': def initialize_options(self): self.package_data = None cmd_build_py.build_py.initialize_options(self) def finalize_options (self): cmd_build_py.build_py.finalize_options(self) self.package_data = self.distribution.package_data self.data_files = self.get_data_files() def run(self): cmd_build_py.build_py.run(self) if self.packages: self.build_package_data() def get_data_files (self): """Generate list of '(package,src_dir,build_dir,filenames)' tuples""" data = [] if not self.packages: return data for package in self.packages: # Locate package source directory src_dir = self.get_package_dir(package) # Compute package build directory build_dir = os.path.join(*([self.build_lib] + package.split('.'))) # Length of path to strip from found files plen = len(src_dir)+1 # Strip directory from globbed filenames filenames = [ file[plen:] for file in self.find_data_files(package, src_dir) ] data.append((package, src_dir, build_dir, filenames)) return data def find_data_files (self, package, src_dir): """Return filenames for package's data files in 'src_dir'""" from glob import glob globs = (self.package_data.get('', []) + self.package_data.get(package, [])) files = [] for pattern in globs: # Each pattern has to be converted to a platform-specific path filelist = glob(os.path.join(src_dir, convert_path(pattern))) # Files that match more than one pattern are only added once files.extend([fn for fn in filelist if fn not in files]) return files def get_package_dir (self, package): """Return the directory, relative to the top of the source distribution, where package 'package' should be found (at 
least according to the 'package_dir' option, if any).""" import string path = string.split(package, '.') if not self.package_dir: if path: return os.path.join(*path) else: return '' else: tail = [] while path: try: pdir = self.package_dir[string.join(path, '.')] except KeyError: tail.insert(0, path[-1]) del path[-1] else: tail.insert(0, pdir) return os.path.join(*tail) else: pdir = self.package_dir.get('') if pdir is not None: tail.insert(0, pdir) if tail: return os.path.join(*tail) else: return '' def build_package_data (self): """Copy data files into build directory""" lastdir = None for package, src_dir, build_dir, filenames in self.data_files: for filename in filenames: target = os.path.join(build_dir, filename) self.mkpath(os.path.dirname(target)) self.copy_file(os.path.join(src_dir, filename), target, preserve_mode=False) # Command class to build libraries class build_clib(cmd_build_clib.build_clib): user_options = [ ('build-clib-a=', 's', "directory to build C/C++ static libraries to"), ('build-clib-so=', 's', "directory to build C/C++ shared libraries to"), ] user_options += cmd_build_clib.build_clib.user_options + cmd_mpi_opts def initialize_options (self): self.libraries = None self.libraries_a = [] self.libraries_so = [] self.library_dirs = None self.rpath = None self.link_objects = None self.build_lib = None self.build_clib_a = None self.build_clib_so = None cmd_build_clib.build_clib.initialize_options(self) cmd_initialize_mpi_options(self) def finalize_options (self): cmd_build_clib.build_clib.finalize_options(self) build_cmd = self.get_finalized_command('build') if isinstance(build_cmd, build): cmd_set_undefined_mpi_options(self, 'build') # self.set_undefined_options('build', ('build_lib', 'build_lib'), ('build_lib', 'build_clib_a'), ('build_lib', 'build_clib_so')) # if self.libraries: libraries = self.libraries[:] self.libraries = [] self.check_library_list (libraries) for i, lib in enumerate(libraries): if isinstance(lib, Library): if lib.kind == "static": self.libraries_a.append(lib) else: self.libraries_so.append(lib) else: self.libraries.append(lib) def check_library_list (self, libraries): ListType, TupleType = type([]), type(()) if not isinstance(libraries, ListType): raise DistutilsSetupError( "'libraries' option must be a list of " "Library instances or 2-tuples") for lib in libraries: # if isinstance(lib, Library): lib_name = lib.name build_info = lib.__dict__ elif isinstance(lib, TupleType) and len(lib) == 2: lib_name, build_info = lib else: raise DistutilsSetupError( "each element of 'libraries' option must be an " "Library instance or 2-tuple") # if not isinstance(lib_name, str): raise DistutilsSetupError( "first element of each tuple in 'libraries' " "must be a string (the library name)") if '/' in lib_name or (os.sep != '/' and os.sep in lib_name): raise DistutilsSetupError( "bad library name '%s': " "may not contain directory separators" % lib[0]) if not isinstance(build_info, dict): raise DistutilsSetupError( "second element of each tuple in 'libraries' " "must be a dictionary (build info)") lib_type = build_info.get('kind', 'static') if lib_type not in ('static', 'shared', 'dylib'): raise DistutilsSetupError( "in 'kind' option (library '%s'), " "'kind' must be one of " " \"static\", \"shared\", \"dylib\"" % lib_name) sources = build_info.get('sources') if (sources is None or type(sources) not in (ListType, TupleType)): raise DistutilsSetupError( "in 'libraries' option (library '%s'), " "'sources' must be present and must be " "a list of source filenames" % 
lib_name) depends = build_info.get('depends') if (depends is not None and type(depends) not in (ListType, TupleType)): raise DistutilsSetupError( "in 'libraries' option (library '%s'), " "'depends' must be a list " "of source filenames" % lib_name) def run (self): cmd_build_clib.build_clib.run(self) if (not self.libraries_a and not self.libraries_so): return # from distutils.ccompiler import new_compiler self.compiler = new_compiler(compiler=self.compiler, dry_run=self.dry_run, force=self.force) # if self.define is not None: for (name, value) in self.define: self.compiler.define_macro(name, value) if self.undef is not None: for macro in self.undef: self.compiler.undefine_macro(macro) if self.include_dirs is not None: self.compiler.set_include_dirs(self.include_dirs) if self.library_dirs is not None: self.compiler.set_library_dirs(self.library_dirs) if self.rpath is not None: self.compiler.set_runtime_library_dirs(self.rpath) if self.link_objects is not None: self.compiler.set_link_objects(self.link_objects) # config = configuration(self, verbose=True) configure_compiler(self.compiler, config) # self.build_libraries(self.libraries) self.build_libraries(self.libraries_a) self.build_libraries(self.libraries_so) def build_libraries (self, libraries): for lib in libraries: # old-style if not isinstance(lib, Library): cmd_build_clib.build_clib.build_libraries(self, [lib]) continue # new-style try: self.build_library(lib) except (DistutilsError, CCompilerError): if not lib.optional: raise e = sys.exc_info()[1] self.warn('building library "%s" failed' % lib.name) self.warn('%s' % e) def config_library (self, lib): if lib.configure: config_cmd = self.get_finalized_command('config') config_cmd.compiler = self.compiler # fix compiler return lib.configure(lib, config_cmd) def build_library(self, lib): from distutils.dep_util import newer_group sources = [convert_path(p) for p in lib.sources] depends = [convert_path(p) for p in lib.depends] depends = sources + depends if lib.kind == "static": build_dir = self.build_clib_a else: build_dir = self.build_clib_so lib_fullpath = self.get_lib_fullpath(lib, build_dir) if not (self.force or newer_group(depends, lib_fullpath, 'newer')): log.debug("skipping '%s' %s library (up-to-date)", lib.name, lib.kind) return ok = self.config_library(lib) log.info("building '%s' %s library", lib.name, lib.kind) # First, compile the source code to object files in the library # directory. (This should probably change to putting object # files in a temporary build directory.) macros = lib.define_macros[:] for undef in lib.undef_macros: macros.append((undef,)) objects = self.compiler.compile( sources, depends=lib.depends, output_dir=self.build_temp, macros=macros, include_dirs=lib.include_dirs, extra_preargs=None, extra_postargs=lib.extra_compile_args, debug=self.debug, ) if lib.kind == "static": # Now "link" the object files together # into a static library. 
self.compiler.create_static_lib( objects, lib.name, output_dir=os.path.dirname(lib_fullpath), debug=self.debug, ) else: extra_objects = lib.extra_objects[:] export_symbols = lib.export_symbols[:] extra_link_args = lib.extra_link_args[:] objects.extend(extra_objects) if (self.compiler.compiler_type == 'msvc' and export_symbols is not None): output_dir = os.path.dirname(lib_fullpath) implib_filename = self.compiler.library_filename(lib.name) implib_file = os.path.join(output_dir, lib_fullpath) extra_link_args.append ('/IMPLIB:' + implib_file) # Detect target language, if not provided src_language = self.compiler.detect_language(sources) language = (lib.language or src_language) # Now "link" the object files together # into a shared library. self.compiler.link( self.compiler.SHARED_LIBRARY, objects, lib_fullpath, # libraries=lib.libraries, library_dirs=lib.library_dirs, runtime_library_dirs=lib.runtime_library_dirs, export_symbols=export_symbols, extra_preargs=None, extra_postargs=extra_link_args, debug=self.debug, target_lang=language, ) return def get_lib_fullpath (self, lib, build_dir): package_dir = (lib.package or '').split('.') dest_dir = convert_path(lib.dest_dir or '') output_dir = os.path.join(build_dir, *package_dir+[dest_dir]) lib_type = lib.kind if sys.platform != 'darwin': if lib_type == 'dylib': lib_type = 'shared' compiler = self.compiler # XXX lib_fullpath = compiler.library_filename( lib.name, lib_type=lib_type, output_dir=output_dir) return lib_fullpath def get_source_files (self): filenames = cmd_build_clib.build_clib.get_source_files(self) self.check_library_list(self.libraries) self.check_library_list(self.libraries_a) self.check_library_list(self.libraries_so) for (lib_name, build_info) in self.libraries: filenames.extend(build_info.get(sources, [])) for lib in self.libraries_so + self.libraries_a: filenames.extend(lib.sources) return filenames def get_outputs (self): outputs = [] for lib in self.libraries_a: lib_fullpath = self.get_lib_fullpath(lib, self.build_clib_a) outputs.append(lib_fullpath) for lib in self.libraries_so: lib_fullpath = self.get_lib_fullpath(lib, self.build_clib_so) outputs.append(lib_fullpath) return outputs # Command class to build extension modules class build_ext(cmd_build_ext.build_ext): user_options = cmd_build_ext.build_ext.user_options + cmd_mpi_opts def initialize_options(self): cmd_build_ext.build_ext.initialize_options(self) cmd_initialize_mpi_options(self) def finalize_options(self): cmd_build_ext.build_ext.finalize_options(self) build_cmd = self.get_finalized_command('build') if isinstance(build_cmd, build): cmd_set_undefined_mpi_options(self, 'build') # if ((sys.platform.startswith('linux') or sys.platform.startswith('gnu') or sys.platform.startswith('sunos')) and sysconfig.get_config_var('Py_ENABLE_SHARED')): py_version = sysconfig.get_python_version() bad_pylib_dir = os.path.join(sys.prefix, "lib", "python" + py_version, "config") try: self.library_dirs.remove(bad_pylib_dir) except ValueError: pass pylib_dir = sysconfig.get_config_var("LIBDIR") if pylib_dir not in self.library_dirs: self.library_dirs.append(pylib_dir) if pylib_dir not in self.rpath: self.rpath.append(pylib_dir) if sys.exec_prefix == '/usr': self.library_dirs.remove(pylib_dir) self.rpath.remove(pylib_dir) def run (self): if self.distribution.has_c_libraries(): build_clib = self.get_finalized_command('build_clib') if build_clib.libraries: build_clib.run() cmd_build_ext.build_ext.run(self) def build_extensions(self): # First, sanity-check the 'extensions' list 
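        # After the sanity check below, configuration() is parsed and applied
        # to the compiler via configure_compiler(); when self.configure is
        # set, ConfigureMPI probes for missing MPI symbols, dumps the results,
        # and the HAVE_CONFIG_H macro is defined; finally each extension is
        # built, with failures of 'optional' extensions downgraded to warnings.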
self.check_extensions_list(self.extensions) # parse configuration file and configure compiler config = configuration(self, verbose=True) configure_compiler(self.compiler, config) if self.compiler.compiler_type == "unix": so_ext = sysconfig.get_config_var('SO') self.compiler.shared_lib_extension = so_ext self.config = config # XXX # extra configuration, check for all MPI symbols if self.configure: log.info('testing for missing MPI symbols') config_cmd = self.get_finalized_command('config') config_cmd.compiler = self.compiler # fix compiler configure = ConfigureMPI(config_cmd) results = configure.run() configure.dump(results) # macro = 'HAVE_CONFIG_H' log.info("defining preprocessor macro '%s'" % macro) self.compiler.define_macro(macro, 1) # build extensions for ext in self.extensions: try: self.build_extension(ext) except (DistutilsError, CCompilerError): if not ext.optional: raise e = sys.exc_info()[1] self.warn('building extension "%s" failed' % ext.name) self.warn('%s' % e) def config_extension (self, ext): configure = getattr(ext, 'configure', None) if configure: config_cmd = self.get_finalized_command('config') config_cmd.compiler = self.compiler # fix compiler configure(ext, config_cmd) def build_extension (self, ext): from distutils.dep_util import newer_group fullname = self.get_ext_fullname(ext.name) filename = os.path.join( self.build_lib, self.get_ext_filename(fullname)) depends = ext.sources + ext.depends if not (self.force or newer_group(depends, filename, 'newer')): log.debug("skipping '%s' extension (up-to-date)", ext.name) return # self.config_extension(ext) cmd_build_ext.build_ext.build_extension(self, ext) # # XXX -- this is a Vile HACK! if ext.name == 'mpi4py.MPI': dest_dir = os.path.dirname(filename) self.mkpath(dest_dir) mpi_cfg = os.path.join(dest_dir, 'mpi.cfg') log.info("writing %s" % mpi_cfg) if not self.dry_run: self.config.dump(filename=mpi_cfg) def get_outputs(self): outputs = cmd_build_ext.build_ext.get_outputs(self) for ext in self.extensions: # XXX -- this is a Vile HACK! 
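            # Mirror of the hack in build_extension() above: the 'mpi.cfg'
            # file written next to the built mpi4py.MPI module is reported
            # here as an additional output, so that installation and output
            # record-keeping pick it up.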
if ext.name == 'mpi4py.MPI': fullname = self.get_ext_fullname(ext.name) filename = os.path.join( self.build_lib, self.get_ext_filename(fullname)) dest_dir = os.path.dirname(filename) mpi_cfg = os.path.join(dest_dir, 'mpi.cfg') outputs.append(mpi_cfg) return outputs # Command class to build executables class build_exe(build_ext): description = "build binary executable components" user_options = [ ('build-exe=', None, "build directory for executable components"), ] + build_ext.user_options def initialize_options (self): build_ext.initialize_options(self) self.build_base = None self.build_exe = None def finalize_options (self): build_ext.finalize_options(self) self.configure = None self.set_undefined_options('build', ('build_base','build_base'), ('build_lib', 'build_exe')) #from distutils.util import get_platform #plat_specifier = ".%s-%s" % (get_platform(), sys.version[0:3]) #if hasattr(sys, 'gettotalrefcount') and sys.version[0:3] > '2.5': # plat_specifier += '-pydebug' #if self.build_exe is None: # self.build_exe = os.path.join(self.build_base, # 'exe' + plat_specifier) self.executables = self.distribution.executables # XXX This is a hack self.extensions = self.distribution.executables self.check_extensions_list = self.check_executables_list self.build_extension = self.build_executable self.get_ext_filename = self.get_exe_filename self.build_lib = self.build_exe def check_executables_list (self, executables): ListType, TupleType = type([]), type(()) if type(executables) is not ListType: raise DistutilsSetupError( "'executables' option must be a list of Executable instances") for exe in executables: if not isinstance(exe, Executable): raise DistutilsSetupError( "'executables' items must be Executable instances") if (exe.sources is None or type(exe.sources) not in (ListType, TupleType)): raise DistutilsSetupError( ("in 'executables' option (executable '%s'), " + "'sources' must be present and must be " + "a list of source filenames") % exe.name) def get_exe_filename(self, exe_name): exe_ext = sysconfig.get_config_var('EXE') or '' return exe_name + exe_ext def get_exe_fullpath(self, exe, build_dir=None): build_dir = build_dir or self.build_exe package_dir = (exe.package or '').split('.') dest_dir = convert_path(exe.dest_dir or '') output_dir = os.path.join(build_dir, *package_dir+[dest_dir]) exe_filename = self.get_exe_filename(exe.name) return os.path.join(output_dir, exe_filename) def config_executable (self, exe): build_ext.config_extension(self, exe) def build_executable (self, exe): from distutils.dep_util import newer_group sources = list(exe.sources) depends = list(exe.depends) exe_fullpath = self.get_exe_fullpath(exe) depends = sources + depends if not (self.force or newer_group(depends, exe_fullpath, 'newer')): log.debug("skipping '%s' executable (up-to-date)", exe.name) return self.config_executable(exe) log.info("building '%s' executable", exe.name) # Next, compile the source code to object files. # XXX not honouring 'define_macros' or 'undef_macros' -- the # CCompiler API needs to change to accommodate this, and I # want to do one thing at a time! macros = exe.define_macros[:] for undef in exe.undef_macros: macros.append((undef,)) # Two possible sources for extra compiler arguments: # - 'extra_compile_args' in Extension object # - CFLAGS environment variable (not particularly # elegant, but people seem to expect it and I # guess it's useful) # The environment variable should take precedence, and # any sensible compiler will give precedence to later # command line args. 
Hence we combine them in order: extra_args = exe.extra_compile_args[:] objects = self.compiler.compile( sources, output_dir=self.build_temp, macros=macros, include_dirs=exe.include_dirs, debug=self.debug, extra_postargs=extra_args, depends=exe.depends) self._built_objects = objects[:] # XXX -- this is a Vile HACK! # # Remove msvcrXX.dll when building executables with MinGW # if self.compiler.compiler_type == 'mingw32': try: del self.compiler.dll_libraries[:] except: pass # Now link the object files together into a "shared object" -- # of course, first we have to figure out all the other things # that go into the mix. if exe.extra_objects: objects.extend(exe.extra_objects) extra_args = exe.extra_link_args[:] # Get special linker flags for building a executable with # bundled Python library, also fix location of needed # python.exp file on AIX ldshflag = sysconfig.get_config_var('LINKFORSHARED') or '' ldshflag = ldshflag.replace('-Xlinker ', '-Wl,') if sys.platform == 'darwin': # fix wrong framework paths fwkprefix = sysconfig.get_config_var('PYTHONFRAMEWORKPREFIX') fwkdir = sysconfig.get_config_var('PYTHONFRAMEWORKDIR') if fwkprefix and fwkdir and fwkdir != 'no-framework': for flag in split_quoted(ldshflag): if flag.startswith(fwkdir): fwkpath = os.path.join(fwkprefix, flag) ldshflag = ldshflag.replace(flag, fwkpath) if sys.platform.startswith('aix'): python_lib = sysconfig.get_python_lib(standard_lib=1) python_exp = os.path.join(python_lib, 'config', 'python.exp') ldshflag = ldshflag.replace('Modules/python.exp', python_exp) # Detect target language, if not provided language = exe.language or self.compiler.detect_language(sources) self.compiler.link( self.compiler.EXECUTABLE, objects, exe_fullpath, output_dir=None, libraries=self.get_libraries(exe), library_dirs=exe.library_dirs, runtime_library_dirs=exe.runtime_library_dirs, extra_preargs=split_quoted(ldshflag), extra_postargs=extra_args, debug=self.debug, target_lang=language) def get_outputs (self): outputs = [] for exe in self.executables: outputs.append(self.get_exe_fullpath(exe)) return outputs class install(cmd_install.install): user_options = cmd_install.install.user_options + [ ('single-version-externally-managed', None, "setuptools compatibility option"), ] boolean_options = cmd_install.install.boolean_options + [ 'single-version-externally-managed', ] def initialize_options(self): cmd_install.install.initialize_options(self) self.single_version_externally_managed = None self.no_compile = None def has_lib (self): return (cmd_install.install.has_lib(self) and self.has_exe()) def has_exe (self): return self.distribution.has_executables() sub_commands = \ cmd_install.install.sub_commands[:] + \ [('install_exe', has_exe), ] # XXX disable install_exe subcommand !!! 
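    # The entry just appended is removed again, so a plain 'install' run does
    # not trigger install_exe automatically; the install_exe command class is
    # still defined below.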
del sub_commands[-1] class install_lib(cmd_install_lib.install_lib): def get_outputs(self): outputs = cmd_install_lib.install_lib.get_outputs(self) for (build_cmd, build_dir) in (('build_clib', 'build_lib'), ('build_exe', 'build_exe')): outs = self._mutate_outputs(1, build_cmd, build_dir, self.install_dir) build_cmd = self.get_finalized_command(build_cmd) build_files = build_cmd.get_outputs() outputs.extend(outs) return outputs class install_data (cmd_install_data.install_data): def finalize_options (self): self.set_undefined_options('install', ('install_lib', 'install_dir'), ('root', 'root'), ('force', 'force'), ) class install_exe(cmd_install_lib.install_lib): description = "install binary executable components" user_options = [ ('install-dir=', 'd', "directory to install to"), ('build-dir=','b', "build directory (where to install from)"), ('force', 'f', "force installation (overwrite existing files)"), ('skip-build', None, "skip the build steps"), ] boolean_options = ['force', 'skip-build'] negative_opt = { } def initialize_options (self): self.install_dir = None self.build_dir = None self.force = 0 self.skip_build = None def finalize_options (self): self.set_undefined_options('build_exe', ('build_exe', 'build_dir')) self.set_undefined_options('install', ('force', 'force'), ('skip_build', 'skip_build'), ('install_scripts', 'install_dir')) def run (self): self.build() self.install() def build (self): if not self.skip_build: if self.distribution.has_executables(): self.run_command('build_exe') def install (self): self.outfiles = [] if self.distribution.has_executables(): build_exe = self.get_finalized_command('build_exe') for exe in build_exe.executables: exe_fullpath = build_exe.get_exe_fullpath(exe) exe_filename = os.path.basename(exe_fullpath) if (os.name == "posix" and exe_filename.startswith("python-")): install_name = exe_filename.replace( "python-","python%s-" % sys.version[:3]) link = None else: install_name = exe_fullpath link = None source = exe_fullpath target = os.path.join(self.install_dir, install_name) self.mkpath(self.install_dir) out, done = self.copy_file(source, target, link=link) self.outfiles.append(out) def get_outputs (self): return self.outfiles def get_inputs (self): inputs = [] if self.distribution.has_executables(): build_exe = self.get_finalized_command('build_exe') inputs.extend(build_exe.get_outputs()) return inputs class test(Command): description = "run the test suite" user_options = [ ('args=', None, "options"), ] def initialize_options(self): self.args = None def finalize_options(self): if self.args: self.args = split_quoted(self.args) else: self.args = [] def run(self): pass class sdist(cmd_sdist.sdist): def run (self): build_src = self.get_finalized_command('build_src') build_src.run() cmd_sdist.sdist.run(self) class clean(cmd_clean.clean): description = "clean up temporary files from 'build' command" user_options = \ cmd_clean.clean.user_options[:2] + [ ('build-exe=', None, "build directory for executable components " "(default: 'build_exe.build-exe')"), ] + cmd_clean.clean.user_options[2:] def initialize_options(self): cmd_clean.clean.initialize_options(self) self.build_exe = None def finalize_options(self): cmd_clean.clean.finalize_options(self) self.set_undefined_options('build_exe', ('build_exe', 'build_exe')) def run(self): from distutils.dir_util import remove_tree # remove the build/temp. 
directory # (unless it's already gone) if os.path.exists(self.build_temp): remove_tree(self.build_temp, dry_run=self.dry_run) else: log.debug("'%s' does not exist -- can't clean it", self.build_temp) if self.all: # remove build directories for directory in (self.build_lib, self.build_exe, self.build_scripts, self.bdist_base, ): if os.path.exists(directory): remove_tree(directory, dry_run=self.dry_run) else: log.debug("'%s' does not exist -- can't clean it", directory) # just for the heck of it, try to remove the base build directory: # we might have emptied it right now, but if not we don't care if not self.dry_run: try: os.rmdir(self.build_base) log.info("removing '%s'", self.build_base) except OSError: pass # ----------------------------------------------------------------------------- try: import msilib Directory_make_short = msilib.Directory.make_short def make_short(self, file): parts = file.split('.') if len(parts) > 1: file = '_'.join(parts[:-1])+'.'+parts[-1] return Directory_make_short(self, file) msilib.Directory.make_short = make_short except: pass # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/conf/mpiregexes.py0000644000000000000000000000524612211706251016770 0ustar 00000000000000import re def anyof(*args): return r'(?:%s)' % '|'.join(args) def join(*args): tokens = [] for tok in args: if isinstance(tok, (list, tuple)): tok = '(%s)' % r'\s*'.join(tok) tokens.append(tok) return r'\s*'.join(tokens) lparen = r'\(' rparen = r'\)' colon = r'\:' asterisk = r'\*' ws = r'\s*' sol = r'^' eol = r'$' enum = join('enum', colon) typedef = 'ctypedef' pointer = asterisk struct = join(typedef, 'struct') basic_type = r'(?:void|int|char\s*\*{1,3})' integral_type = r'MPI_(?:Aint|Offset|Count|Fint)' struct_type = r'MPI_(?:Status)' opaque_type = r'MPI_(?:Datatype|Request|Message|Op|Info|Group|Errhandler|Comm|Win|File)' any_mpi_type = r'(?:%s|%s|%s)' % (struct_type, integral_type, opaque_type) upper_name = r'MPI_[A-Z0-9_]+' camel_name = r'MPI_[A-Z][a-z0-9_]+' usrfun_name = camel_name + r'_(?:function|fn)' arg_list = r'.*' ret_type = r'void|int|double' canyint = anyof(r'int', r'long(?:\s+long)?') canyptr = join(r'\w+', pointer+'?') annotation = r'\#\:\=' fallback_value = r'\(?[A-Za-z0-9_\+\-\(\)\*]+\)?' fallback = r'(?:%s)?' 
% join (annotation, [fallback_value]) INTEGRAL_TYPE = join( typedef, [canyint], [integral_type], fallback, eol) STRUCT_TYPE = join( struct, [struct_type], colon, eol) OPAQUE_TYPE = join( typedef, canyptr, [opaque_type], eol) FUNCTION_TYPE = join( typedef, [ret_type], [camel_name], lparen, [arg_list], rparen, fallback, eol) ENUM_VALUE = join(sol, enum, [upper_name], fallback, eol) HANDLE_VALUE = join(sol, [opaque_type], [upper_name], fallback, eol) BASIC_PTRVAL = join(sol, [basic_type, pointer], [upper_name], fallback, eol) INTEGRAL_PTRVAL = join(sol, [integral_type, pointer], [upper_name], fallback, eol) STRUCT_PTRVAL = join(sol, [struct_type, pointer], [upper_name], fallback, eol) FUNCT_PTRVAL = join(sol, [usrfun_name, pointer], [upper_name], fallback, eol) FUNCTION_PROTO = join(sol, [ret_type], [camel_name], lparen, [arg_list], rparen, fallback, eol) fint_type = r'MPI_Fint' fmpi_type = opaque_type.replace('Datatype', 'Type') c2f_name = fmpi_type+'_c2f' f2c_name = fmpi_type+'_f2c' FUNCTION_C2F = join(sol, [fint_type], [c2f_name], lparen, [opaque_type], rparen, fallback, eol) FUNCTION_F2C = join(sol, [opaque_type], [f2c_name], lparen, [fint_type], rparen, fallback, eol) IGNORE = anyof(join(sol, r'cdef.*', eol), join(sol, struct, r'_mpi_\w+_t', eol), join(sol, 'int', r'MPI_(?:SOURCE|TAG|ERROR)', eol), join(sol, r'#.*', eol), join(sol, eol)) # compile the RE's glb = globals() all = [key for key in dict(glb) if key.isupper()] for key in all: glb[key] = re.compile(glb[key]) mpi4py_1.3.1+hg20131106.orig/conf/mpiscanner.py0000644000000000000000000002632612211706251016761 0ustar 00000000000000# Very, very naive RE-based way for collecting declarations inside # 'cdef extern from *' Cython blocks in in source files, and next # generate compatibility headers for MPI-2 partially implemented or # built, or MPI-1 implementations, perhaps providing a subset of MPI-2 from textwrap import dedent from warnings import warn try: import mpiregexes as Re except ImportError: from conf import mpiregexes as Re class Node(object): REGEX = None def match(self, line): m = self.REGEX.search(line) if m: return m.groups() match = classmethod(match) HEADER = None CONFIG = None MISSING = None MISSING_HEAD = """\ #ifndef PyMPI_HAVE_%(name)s #undef %(cname)s """ MISSING_TAIL = """ #endif """ def init(self, name, **kargs): assert name is not None self.name = name self.__dict__.update(kargs) def header(self): line = dedent(self.HEADER) % vars(self) line = line.replace('\n', '') line = line.replace(' ', ' ') return line + '\n' def config(self): return dedent(self.CONFIG) % vars(self) def missing(self): head = dedent(self.MISSING_HEAD) body = dedent(self.MISSING) tail = dedent(self.MISSING_TAIL) return (head+body+tail) % vars(self) class NodeType(Node): CONFIG = """\ %(ctype)s v; %(ctype)s* p; char* c = (char*)&v; c[0] = 0; p = &v; *p = v;""" def __init__(self, ctype): self.init(name=ctype, cname=ctype, ctype=ctype,) class NodeStructType(NodeType): HEADER = """\ typedef struct {%(cfields)s ...; } %(ctype)s;""" MISSING = """\ typedef struct PyMPI_%(ctype)s { %(cfields)s } PyMPI_%(ctype)s; #define %(ctype)s PyMPI_%(ctype)s""" def __init__(self, ctype, cfields): super(NodeStructType, self).__init__(ctype) self.cfields = '\n'.join([' %s %s;' % field for field in cfields]) class NodeFuncType(NodeType): HEADER = """\ typedef %(crett)s (%(cname)s)(%(cargs)s);""" MISSING = """\ typedef %(crett)s (PyMPI_%(cname)s)(%(cargs)s); #define %(cname)s PyMPI_%(cname)s""" def __init__(self, crett, cname, cargs, calias=None): 
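        """Node for a typedef'd MPI function type: 'crett' is the C return
        type, 'cname' the typedef name (the node's ctype is 'cname*'), and
        'cargs' the C argument list ('void' when empty); if 'calias' is given,
        the missing-symbol fallback becomes a plain #define to that alias."""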
self.init(name=cname, cname=cname, ctype=cname+'*',) self.crett = crett self.cargs = cargs or 'void' if calias is not None: self.MISSING = '#define %(cname)s %(calias)s' self.calias = calias class NodeValue(Node): HEADER = """\ const %(ctype)s %(cname)s;""" CONFIG = """\ %(ctype)s v; %(ctype)s* p; v = %(cname)s; p = &v; *p = %(cname)s;""" MISSING = '#define %(cname)s (%(calias)s)' def __init__(self, ctype, cname, calias): self.init(name=cname, cname=cname, ctype=ctype, calias=calias) if ctype.endswith('*'): self.HEADER = "%(ctype)s const %(cname)s;" def ctypefix(ct): ct = ct.strip() ct = ct.replace('[][3]',' (*)[3]') ct = ct.replace('[]','*') return ct class NodeFuncProto(Node): HEADER = """\ %(crett)s %(cname)s(%(cargs)s);""" CONFIG = """\ %(crett)s v; v = %(cname)s(%(cargscall)s); if (v) v= (%(crett)s) 0;""" MISSING = ' '. join(['#define %(cname)s(%(cargsnamed)s)', 'PyMPI_UNAVAILABLE("%(name)s"%(comma)s%(cargsnamed)s)']) def __init__(self, crett, cname, cargs, calias=None): self.init(name=cname, cname=cname) self.crett = crett self.cargs = cargs or 'void' if cargs == 'void': cargs = '' if cargs: cargs = cargs.split(',') if cargs[-1].strip() == '...': del cargs[-1] else: cargs = [] self.cargstype = cargs nargs = len(cargs) if nargs: self.comma = ',' else: self.comma = '' cargscall = ['(%s)0' % ctypefix(a) for a in cargs] self.cargscall = ','.join(cargscall) cargsnamed = ['a%d' % (a+1) for a in range(nargs)] self.cargsnamed = ','.join(cargsnamed) if calias is not None: self.MISSING = '#define %(cname)s %(calias)s' self.calias = calias class IntegralType(NodeType): REGEX = Re.INTEGRAL_TYPE HEADER = """\ typedef %(cbase)s %(ctype)s;""" MISSING = """\ typedef %(ctdef)s PyMPI_%(ctype)s; #define %(ctype)s PyMPI_%(ctype)s""" def __init__(self, cbase, ctype, calias=None): super(IntegralType, self).__init__(ctype) self.cbase = cbase if calias is not None: self.ctdef = calias else: self.ctdef = cbase class StructType(NodeStructType): REGEX = Re.STRUCT_TYPE def __init__(self, ctype): cnames = ['MPI_SOURCE', 'MPI_TAG', 'MPI_ERROR'] cfields = list(zip(['int']*3, cnames)) super(StructType, self).__init__(ctype, cfields) class OpaqueType(NodeType): REGEX = Re.OPAQUE_TYPE HEADER = """\ typedef struct{...;} %(ctype)s;""" MISSING = """\ typedef void *PyMPI_%(ctype)s; #define %(ctype)s PyMPI_%(ctype)s""" class FunctionType(NodeFuncType): REGEX = Re.FUNCTION_TYPE class EnumValue(NodeValue): REGEX = Re.ENUM_VALUE def __init__(self, cname, calias): self.init(name=cname, cname=cname, ctype='int', calias=calias) class HandleValue(NodeValue): REGEX = Re.HANDLE_VALUE MISSING = '#define %(cname)s ((%(ctype)s)%(calias)s)' class BasicPtrVal(NodeValue): REGEX = Re.BASIC_PTRVAL MISSING = '#define %(cname)s ((%(ctype)s)%(calias)s)' class IntegralPtrVal(NodeValue): REGEX = Re.INTEGRAL_PTRVAL MISSING = '#define %(cname)s ((%(ctype)s)%(calias)s)' class StructPtrVal(NodeValue): REGEX = Re.STRUCT_PTRVAL class FunctionPtrVal(NodeValue): REGEX = Re.FUNCT_PTRVAL class FunctionProto(NodeFuncProto): REGEX = Re.FUNCTION_PROTO class FunctionC2F(NodeFuncProto): REGEX = Re.FUNCTION_C2F MISSING = ' '.join(['#define %(cname)s(%(cargsnamed)s)', '((%(crett)s)0)']) class FunctionF2C(NodeFuncProto): REGEX = Re.FUNCTION_F2C MISSING = ' '.join(['#define %(cname)s(%(cargsnamed)s)', '%(cretv)s']) def __init__(self, *a, **k): NodeFuncProto.__init__(self, *a, **k) self.cretv = self.crett.upper() + '_NULL' class Scanner(object): NODE_TYPES = [ IntegralType, StructType, OpaqueType, HandleValue, EnumValue, BasicPtrVal, IntegralPtrVal, StructPtrVal, 
FunctionType, FunctionPtrVal, FunctionProto, FunctionC2F, FunctionF2C, ] def __init__(self): self.nodes = [] self.nodemap = {} def parse_file(self, filename): fileobj = open(filename) try: self.parse_lines(fileobj) finally: fileobj.close() def parse_lines(self, lines): for line in lines: self.parse_line(line) def parse_line(self, line): if Re.IGNORE.match(line): return nodemap = self.nodemap nodelist = self.nodes for nodetype in self.NODE_TYPES: args = nodetype.match(line) if args: node = nodetype(*args) assert node.name not in nodemap, node.name nodemap[node.name] = len(nodelist) nodelist.append(node) break if not args: warn('unmatched line:\n%s' % line) def __iter__(self): return iter(self.nodes) def itertests(self): for node in self: yield (node.name, node.config()) def dump_header_h(self, fileobj): if isinstance(fileobj, str): fileobj = open(fileobj, 'w') try: self.dump_header_h(fileobj) finally: fileobj.close() return for node in self: fileobj.write(node.header()) CONFIG_HEAD = """\ #ifndef PyMPI_CONFIG_H #define PyMPI_CONFIG_H """ CONFIG_MACRO = 'PyMPI_HAVE_%s' CONFIG_TAIL = """\ #endif /* !PyMPI_CONFIG_H */ """ def dump_config_h(self, fileobj, suite): if isinstance(fileobj, str): fileobj = open(fileobj, 'w') try: self.dump_config_h(fileobj, suite) finally: fileobj.close() return head = dedent(self.CONFIG_HEAD) macro = dedent(self.CONFIG_MACRO) tail = dedent(self.CONFIG_TAIL) fileobj.write(head) if suite is None: for node in self: line = "#undef %s\n" % ((macro % node.name)) fileobj.write(line) else: for name, result in suite: assert name in self.nodemap if result: line = "#define %s 1\n" % ((macro % name)) else: line = "#undef %s\n" % ((macro % name)) fileobj.write(line) fileobj.write(tail) MISSING_HEAD = """\ #ifndef PyMPI_MISSING_H #define PyMPI_MISSING_H #ifndef PyMPI_UNUSED # if defined(__GNUC__) # if !defined(__cplusplus) || (__GNUC__>3||(__GNUC__==3&&__GNUC_MINOR__>=4)) # define PyMPI_UNUSED __attribute__ ((__unused__)) # else # define PyMPI_UNUSED # endif # elif defined(__INTEL_COMPILER) || defined(__ICC) # define PyMPI_UNUSED __attribute__ ((__unused__)) # else # define PyMPI_UNUSED # endif #endif static PyMPI_UNUSED int PyMPI_UNAVAILABLE(PyMPI_UNUSED const char *name,...) 
{ return -1; } """ MISSING_TAIL = """\ #endif /* !PyMPI_MISSING_H */ """ def dump_missing_h(self, fileobj, suite): if isinstance(fileobj, str): fileobj = open(fileobj, 'w') try: self.dump_missing_h(fileobj, suite) finally: fileobj.close() return head = dedent(self.MISSING_HEAD) tail = dedent(self.MISSING_TAIL) # fileobj.write(head) if suite is None: for node in self: fileobj.write(node.missing()) else: nodelist = self.nodes nodemap = self.nodemap for name, result in suite: assert name in nodemap, name if not result: node = nodelist[nodemap[name]] fileobj.write(node.missing()) fileobj.write(tail) # ----------------------------------------- if __name__ == '__main__': import sys, os sources = [os.path.join('src', 'include', 'mpi4py', 'libmpi.pxd')] log = lambda msg: sys.stderr.write(msg + '\n') scanner = Scanner() for filename in sources: log('parsing file %s' % filename) scanner.parse_file(filename) log('processed %d definitions' % len(scanner.nodes)) config_h = os.path.join('src', 'config', 'config.h') log('writing file %s' % config_h) scanner.dump_config_h(config_h, None) missing_h = os.path.join('src', 'missing.h') log('writing file %s' % missing_h) scanner.dump_missing_h(missing_h, None) libmpi_h = os.path.join('.', 'libmpi.h') log('writing file %s' % libmpi_h) scanner.dump_header_h(libmpi_h) # ----------------------------------------- mpi4py_1.3.1+hg20131106.orig/demo/README.txt0000644000000000000000000000034512211706251015736 0ustar 00000000000000Issuing at the command line:: $ mpiexec -n 5 python helloworld.py will launch a five-process run of the Python interpreter and execute the test script ``helloworld.py``, a parallelized version of the *Hello World!* program. mpi4py_1.3.1+hg20131106.orig/demo/compute-pi/README.txt0000644000000000000000000000006312211706251020015 0ustar 00000000000000Different approaches for computing PI in parallel. mpi4py_1.3.1+hg20131106.orig/demo/compute-pi/cpi-cco.py0000644000000000000000000000236512211706251020215 0ustar 00000000000000#!/usr/bin/env python """ Parallel PI computation using Collective Communication Operations (CCO) within Python objects exposing memory buffers (requires NumPy). usage:: $ mpiexec -n python cpi-buf.py """ from mpi4py import MPI from math import pi as PI from numpy import array def get_n(): prompt = "Enter the number of intervals: (0 quits) " try: n = int(input(prompt)) if n < 0: n = 0 except: n = 0 return n def comp_pi(n, myrank=0, nprocs=1): h = 1.0 / n s = 0.0 for i in range(myrank + 1, n + 1, nprocs): x = h * (i - 0.5) s += 4.0 / (1.0 + x**2) return s * h def prn_pi(pi, PI): message = "pi is approximately %.16f, error is %.16f" print (message % (pi, abs(pi - PI))) comm = MPI.COMM_WORLD nprocs = comm.Get_size() myrank = comm.Get_rank() n = array(0, dtype=int) pi = array(0, dtype=float) mypi = array(0, dtype=float) while True: if myrank == 0: _n = get_n() n.fill(_n) comm.Bcast([n, MPI.INT], root=0) if n == 0: break _mypi = comp_pi(n, myrank, nprocs) mypi.fill(_mypi) comm.Reduce([mypi, MPI.DOUBLE], [pi, MPI.DOUBLE], op=MPI.SUM, root=0) if myrank == 0: prn_pi(pi, PI) mpi4py_1.3.1+hg20131106.orig/demo/compute-pi/cpi-dpm.py0000644000000000000000000001070712211706251020230 0ustar 00000000000000#!/usr/bin/env python """ Parallel PI computation using Dynamic Process Management (DPM) within Python objects exposing memory buffers (requires NumPy). 
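Two execution models are demonstrated: a parent/child model in which a single
parent spawns the workers with Spawn(), and a client/server model in which a
server publishes a port name that a client looks up and connects to.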
usage: + parent/child model:: $ mpiexec -n 1 python cpi-dpm.py [nchilds] + client/server model:: $ [xterm -e] mpiexec -n python cpi-dpm.py server [-v] & $ [xterm -e] mpiexec -n 1 python cpi-dpm.py client [-v] """ import sys from mpi4py import MPI import numpy as N def get_n(): prompt = "Enter the number of intervals: (0 quits) " try: n = int(input(prompt)) if n < 0: n = 0 except: n = 0 return n def view(pi, np=None, wt=None): from math import pi as PI prn = sys.stdout.write if pi is not None: prn("computed pi is: %.16f\n" % pi) prn("absolute error: %.16f\n" % abs(pi - PI)) if np is not None: prn("computing units: %d processes\n" % np) if wt is not None: prn("wall clock time: %g seconds\n" % wt) sys.stdout.flush() def comp_pi(n, comm, root=0): nprocs = comm.Get_size() myrank = comm.Get_rank() n = N.array(n, 'i') comm.Bcast([n, MPI.INT], root=root) if n == 0: return 0.0 h = 1.0 / n; s = 0.0; for i in range(myrank, n, nprocs): x = h * (i + 0.5); s += 4.0 / (1.0 + x**2); mypi = s * h mypi = N.array(mypi, 'd') pi = N.array(0, 'd') comm.Reduce([mypi, MPI.DOUBLE], [pi, MPI.DOUBLE], root=root, op=MPI.SUM) return pi def master(icomm): n = get_n() wt = MPI.Wtime() n = N.array(n, 'i') icomm.Send([n, MPI.INT], dest=0) pi = N.array(0, 'd') icomm.Recv([pi, MPI.DOUBLE], source=0) wt = MPI.Wtime() - wt if n == 0: return np = icomm.Get_remote_size() view(pi, np, wt) def worker(icomm): myrank = icomm.Get_rank() if myrank == 0: source = dest = 0 else: source = dest = MPI.PROC_NULL n = N.array(0, 'i') icomm.Recv([n, MPI.INT], source=source) pi = comp_pi(n, comm=MPI.COMM_WORLD, root=0) pi = N.array(pi, 'd') icomm.Send([pi, MPI.DOUBLE], dest=dest) # Parent/Child def main_parent(nprocs=1): assert nprocs > 0 assert MPI.COMM_WORLD.Get_size() == 1 icomm = MPI.COMM_WORLD.Spawn(command=sys.executable, args=[__file__, 'child'], maxprocs=nprocs) master(icomm) icomm.Disconnect() def main_child(): icomm = MPI.Comm.Get_parent() assert icomm != MPI.COMM_NULL worker(icomm) icomm.Disconnect() # Client/Server def main_server(COMM): nprocs = COMM.Get_size() myrank = COMM.Get_rank() service, port, info = None, None, MPI.INFO_NULL if myrank == 0: port = MPI.Open_port(info) log(COMM, "open port '%s'", port) service = 'cpi' MPI.Publish_name(service, info, port) log(COMM, "service '%s' published.", service) else: port = '' log(COMM, "waiting for client connection ...") icomm = COMM.Accept(port, info, root=0) log(COMM, "client connection accepted.") worker(icomm) log(COMM, "disconnecting from client ...") icomm.Disconnect() log(COMM, "client disconnected.") if myrank == 0: MPI.Unpublish_name(service, info, port) log(COMM, "service '%s' unpublished", port) MPI.Close_port(port) log(COMM, "closed port '%s' ", port) def main_client(COMM): assert COMM.Get_size() == 1 service, info = 'cpi', MPI.INFO_NULL port = MPI.Lookup_name(service, info) log(COMM, "service '%s' found in port '%s'.", service, port) log(COMM, "connecting to server ...") icomm = COMM.Connect(port, info, root=0) log(COMM, "server connected.") master(icomm) log(COMM, "disconnecting from server ...") icomm.Disconnect() log(COMM, "server disconnected.") def main(): assert len(sys.argv) <= 2 if 'server' in sys.argv: main_server(MPI.COMM_WORLD) elif 'client' in sys.argv: main_client(MPI.COMM_WORLD) elif 'child' in sys.argv: main_child() else: try: nchilds = int(sys.argv[1]) except: nchilds = 2 main_parent(nchilds) VERBOSE = False def log(COMM, fmt, *args): if not VERBOSE: return if COMM.rank != 0: return sys.stdout.write(fmt % args) sys.stdout.write('\n') sys.stdout.flush() if 
__name__ == '__main__': if '-v' in sys.argv: VERBOSE = True sys.argv.remove('-v') main() mpi4py_1.3.1+hg20131106.orig/demo/compute-pi/cpi-rma.py0000644000000000000000000000310412211706251020220 0ustar 00000000000000#!/usr/bin/env python """ Parallel PI computation using Remote Memory Access (RMA) within Python objects exposing memory buffers (requires NumPy). usage:: $ mpiexec -n python cpi-rma.py """ from mpi4py import MPI from math import pi as PI from numpy import array def get_n(): prompt = "Enter the number of intervals: (0 quits) " try: n = int(input(prompt)); if n < 0: n = 0 except: n = 0 return n def comp_pi(n, myrank=0, nprocs=1): h = 1.0 / n; s = 0.0; for i in range(myrank + 1, n + 1, nprocs): x = h * (i - 0.5); s += 4.0 / (1.0 + x**2); return s * h def prn_pi(pi, PI): message = "pi is approximately %.16f, error is %.16f" print (message % (pi, abs(pi - PI))) nprocs = MPI.COMM_WORLD.Get_size() myrank = MPI.COMM_WORLD.Get_rank() n = array(0, dtype=int) pi = array(0, dtype=float) mypi = array(0, dtype=float) if myrank == 0: win_n = MPI.Win.Create(n, comm=MPI.COMM_WORLD) win_pi = MPI.Win.Create(pi, comm=MPI.COMM_WORLD) else: win_n = MPI.Win.Create(None, comm=MPI.COMM_WORLD) win_pi = MPI.Win.Create(None, comm=MPI.COMM_WORLD) while True: if myrank == 0: _n = get_n() n.fill(_n) pi.fill(0.0) win_n.Fence() if myrank != 0: win_n.Get([n, MPI.INT], 0) win_n.Fence() if n == 0: break _mypi = comp_pi(n, myrank, nprocs) mypi.fill(_mypi) win_pi.Fence() win_pi.Accumulate([mypi, MPI.DOUBLE], 0, op=MPI.SUM) win_pi.Fence() if myrank == 0: prn_pi(pi, PI) win_n.Free() win_pi.Free() mpi4py_1.3.1+hg20131106.orig/demo/compute-pi/makefile0000644000000000000000000000027412211706251020023 0ustar 00000000000000.PHONY: test MPIEXEC=mpiexec -n 1 PYTHON=python test: echo 100 | ${MPIEXEC} ${PYTHON} cpi-cco.py echo 100 | ${MPIEXEC} ${PYTHON} cpi-rma.py echo 100 | ${MPIEXEC} ${PYTHON} cpi-dpm.py mpi4py_1.3.1+hg20131106.orig/demo/cython/helloworld.pyx0000644000000000000000000000375012211706251020464 0ustar 00000000000000cdef extern from "mpi-compat.h": pass # --------- # Python-level module import # (file: mpi4py/MPI.so) from mpi4py import MPI # Python-level objects and code size = MPI.COMM_WORLD.Get_size() rank = MPI.COMM_WORLD.Get_rank() pname = MPI.Get_processor_name() hwmess = "Hello, World! I am process %d of %d on %s." print (hwmess % (rank, size, pname)) # --------- # Cython-level cimport # this make available mpi4py's Python extension types # (file: mpi4py/include/mpi4py/MPI.pxd) from mpi4py cimport MPI from mpi4py.MPI cimport Intracomm as IntracommType # C-level cdef, typed, Python objects cdef MPI.Comm WORLD = MPI.COMM_WORLD cdef IntracommType SELF = MPI.COMM_SELF # --------- # Cython-level cimport with PXD file # this make available the native MPI C API # with namespace-protection (stuff accessed as mpi.XXX) # (file: mpi4py/include/mpi4py/mpi_c.pxd) from mpi4py cimport libmpi as mpi cdef mpi.MPI_Comm world1 = WORLD.ob_mpi cdef int ierr1=0 cdef int size1 = 0 ierr1 = mpi.MPI_Comm_size(mpi.MPI_COMM_WORLD, &size1) cdef int rank1 = 0 ierr1 = mpi.MPI_Comm_rank(mpi.MPI_COMM_WORLD, &rank1) cdef int rlen1=0 cdef char pname1[mpi.MPI_MAX_PROCESSOR_NAME] ierr1 = mpi.MPI_Get_processor_name(pname1, &rlen1) pname1[rlen1] = 0 # just in case ;-) hwmess = "Hello, World! I am process %d of %d on %s." 
print (hwmess % (rank1, size1, pname1)) # --------- # Cython-level include with PXI file # this make available the native MPI C API # without namespace-protection (stuff accessed as in C) # (file: mpi4py/include/mpi4py/mpi.pxi) include "mpi4py/mpi.pxi" cdef MPI_Comm world2 = WORLD.ob_mpi cdef int ierr2=0 cdef int size2 = 0 ierr2 = MPI_Comm_size(MPI_COMM_WORLD, &size2) cdef int rank2 = 0 ierr2 = MPI_Comm_rank(MPI_COMM_WORLD, &rank2) cdef int rlen2=0 cdef char pname2[MPI_MAX_PROCESSOR_NAME] ierr2 = MPI_Get_processor_name(pname2, &rlen2) pname2[rlen2] = 0 # just in case ;-) hwmess = "Hello, World! I am process %d of %d on %s." print (hwmess % (rank2, size2, pname2)) # --------- mpi4py_1.3.1+hg20131106.orig/demo/cython/makefile0000644000000000000000000000133312211706251017242 0ustar 00000000000000.PHONY: default default: build test clean PYTHON = python PYTHON = python PYTHON_CONFIG = ${PYTHON} ../python-config MPI4PY_INCLUDE = -I${shell ${PYTHON} -c 'import mpi4py; print( mpi4py.get_include() )'} CYTHON = cython .PHONY: src src: helloworld.c helloworld.c: helloworld.pyx ${CYTHON} ${MPI4PY_INCLUDE} $< MPICC = mpicc CFLAGS = -fPIC ${shell ${PYTHON_CONFIG} --includes} LDFLAGS = -shared ${shell ${PYTHON_CONFIG} --libs} SO = ${shell ${PYTHON_CONFIG} --extension-suffix} .PHONY: build build: helloworld${SO} helloworld${SO}: helloworld.c ${MPICC} ${CFLAGS} -I${MPI4PY_INCLUDE} -o $@ $< ${LDFLAGS} .PHONY: test test: build ${PYTHON} -c 'import helloworld' .PHONY: clean clean: ${RM} helloworld.c helloworld${SO} mpi4py_1.3.1+hg20131106.orig/demo/cython/mpi-compat.h0000644000000000000000000000044012211706251017757 0ustar 00000000000000/* Author: Lisandro Dalcin */ /* Contact: dalcinl@gmail.com */ #ifndef MPI_COMPAT_H #define MPI_COMPAT_H #include #if (MPI_VERSION < 3) && !defined(PyMPI_HAVE_MPI_Message) typedef void *PyMPI_MPI_Message; #define MPI_Message PyMPI_MPI_Message #endif #endif/*MPI_COMPAT_H*/ mpi4py_1.3.1+hg20131106.orig/demo/embedding/helloworld.c0000644000000000000000000000200212211706251020465 0ustar 00000000000000/* * You can use safely use mpi4py between multiple * Py_Initialize()/Py_Finalize() calls ... * but do not blame me for the memory leaks ;-) * */ #include #include const char helloworld[] = \ "from mpi4py import MPI \n" "hwmess = 'Hello, World! I am process %d of %d on %s.' \n" "myrank = MPI.COMM_WORLD.Get_rank() \n" "nprocs = MPI.COMM_WORLD.Get_size() \n" "procnm = MPI.Get_processor_name() \n" "print (hwmess % (myrank, nprocs, procnm)) \n" ""; int main(int argc, char *argv[]) { int i,n=5; MPI_Init(&argc, &argv); for (i=0; i #include int main(int argc, char *argv[]) { int size, rank, len; char name[MPI_MAX_PROCESSOR_NAME]; #if defined(MPI_VERSION) && (MPI_VERSION >= 2) int provided; MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided); #else MPI_Init(&argc, &argv); #endif MPI_Comm_size(MPI_COMM_WORLD, &size); MPI_Comm_rank(MPI_COMM_WORLD, &rank); MPI_Get_processor_name(name, &len); printf("Hello, World! 
I am process %d of %d on %s.\n", rank, size, name); MPI_Finalize(); return 0; } /* * Local Variables: * mode: C * c-basic-offset: 2 * indent-tabs-mode: nil * End: */ mpi4py_1.3.1+hg20131106.orig/demo/helloworld.cxx0000644000000000000000000000121112211706251017130 0ustar 00000000000000#include #include int main(int argc, char *argv[]) { #if defined(MPI_VERSION) && (MPI_VERSION >= 2) MPI::Init_thread(MPI_THREAD_MULTIPLE); #else MPI::Init(); #endif int size = MPI::COMM_WORLD.Get_size(); int rank = MPI::COMM_WORLD.Get_rank(); int len; char name[MPI_MAX_PROCESSOR_NAME]; MPI::Get_processor_name(name, len); std::cout << "Hello, World! " << "I am process " << rank << " of " << size << " on " << name << "." << std::endl; MPI::Finalize(); return 0; } // Local Variables: // mode: C++ // c-basic-offset: 2 // indent-tabs-mode: nil // End: mpi4py_1.3.1+hg20131106.orig/demo/helloworld.f900000644000000000000000000000102712211706251016731 0ustar 00000000000000program main use mpi implicit none integer :: provided, ierr, size, rank, len character (len=MPI_MAX_PROCESSOR_NAME) :: name call MPI_Init_thread(MPI_THREAD_MULTIPLE, provided, ierr) call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr) call MPI_Comm_size(MPI_COMM_WORLD, size, ierr) call MPI_Get_processor_name(name, len, ierr) write(*, '(2A,I2,A,I2,3A)') & 'Hello, World! ', & 'I am process ', rank, & ' of ', size, & ' on ', name(1:len), '.' call MPI_Finalize(ierr) end program main mpi4py_1.3.1+hg20131106.orig/demo/helloworld.py0000644000000000000000000000043112211706251016761 0ustar 00000000000000#!/usr/bin/env python """ Parallel Hello World """ from mpi4py import MPI import sys size = MPI.COMM_WORLD.Get_size() rank = MPI.COMM_WORLD.Get_rank() name = MPI.Get_processor_name() sys.stdout.write( "Hello, World! I am process %d of %d on %s.\n" % (rank, size, name)) mpi4py_1.3.1+hg20131106.orig/demo/init-fini/makefile0000644000000000000000000000063412211706251017627 0ustar 00000000000000MPIEXEC=mpiexec NP_FLAG=-n NP=3 PYTHON=python .PHONY: test test: ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test_0.py ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test_1.py ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test_2a.py ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test_2b.py ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test_3.py ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test_4.py ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test_5.py mpi4py_1.3.1+hg20131106.orig/demo/init-fini/runtests.bat0000644000000000000000000000032512211706251020503 0ustar 00000000000000@echo off setlocal ENABLEEXTENSIONS set PYTHON=python @echo on %PYTHON% test_0.py %PYTHON% test_1.py %PYTHON% test_2a.py %PYTHON% test_2b.py %PYTHON% test_3.py %PYTHON% test_4.py %PYTHON% test_5.py mpi4py_1.3.1+hg20131106.orig/demo/init-fini/runtests.sh0000755000000000000000000000053412211706251020354 0ustar 00000000000000#!/bin/sh MPIEXEC=mpiexec NP_FLAG=-n NP=3 PYTHON=python set -x $MPIEXEC $NP_FLAG $NP $PYTHON test_0.py $MPIEXEC $NP_FLAG $NP $PYTHON test_1.py $MPIEXEC $NP_FLAG $NP $PYTHON test_2a.py $MPIEXEC $NP_FLAG $NP $PYTHON test_2b.py $MPIEXEC $NP_FLAG $NP $PYTHON test_3.py $MPIEXEC $NP_FLAG $NP $PYTHON test_4.py $MPIEXEC $NP_FLAG $NP $PYTHON test_5.py mpi4py_1.3.1+hg20131106.orig/demo/init-fini/test_0.py0000644000000000000000000000005512211706251017674 0ustar 00000000000000from mpi4py import rc from mpi4py import MPI mpi4py_1.3.1+hg20131106.orig/demo/init-fini/test_1.py0000644000000000000000000000041612211706251017676 0ustar 00000000000000from mpi4py import rc rc.initialize = False from mpi4py import MPI assert not MPI.Is_initialized() assert not 
MPI.Is_finalized() MPI.Init() assert MPI.Is_initialized() assert not MPI.Is_finalized() MPI.Finalize() assert MPI.Is_initialized() assert MPI.Is_finalized() mpi4py_1.3.1+hg20131106.orig/demo/init-fini/test_2a.py0000644000000000000000000000045012211706251020036 0ustar 00000000000000from mpi4py import rc rc.initialize = False from mpi4py import MPI assert not MPI.Is_initialized() assert not MPI.Is_finalized() MPI.Init_thread(MPI.THREAD_MULTIPLE) assert MPI.Is_initialized() assert not MPI.Is_finalized() MPI.Finalize() assert MPI.Is_initialized() assert MPI.Is_finalized() mpi4py_1.3.1+hg20131106.orig/demo/init-fini/test_2b.py0000644000000000000000000000064212211706251020042 0ustar 00000000000000from mpi4py import rc rc.initialize = False from mpi4py import MPI assert not MPI.Is_initialized() assert not MPI.Is_finalized() MPI.Init_thread() assert MPI.Is_initialized() assert not MPI.Is_finalized() import sys name, _ = MPI.get_vendor() if name == 'MPICH2' and sys.platform[:3]!='win': assert MPI.Query_thread() == MPI.THREAD_MULTIPLE MPI.Finalize() assert MPI.Is_initialized() assert MPI.Is_finalized() mpi4py_1.3.1+hg20131106.orig/demo/init-fini/test_3.py0000644000000000000000000000017512211706251017702 0ustar 00000000000000from mpi4py import rc rc.finalize = False from mpi4py import MPI assert MPI.Is_initialized() assert not MPI.Is_finalized() mpi4py_1.3.1+hg20131106.orig/demo/init-fini/test_4.py0000644000000000000000000000030312211706251017674 0ustar 00000000000000from mpi4py import rc rc.finalize = False from mpi4py import MPI assert MPI.Is_initialized() assert not MPI.Is_finalized() MPI.Finalize() assert MPI.Is_initialized() assert MPI.Is_finalized() mpi4py_1.3.1+hg20131106.orig/demo/init-fini/test_5.py0000644000000000000000000000047312211706251017705 0ustar 00000000000000from mpi4py import rc del rc.initialize del rc.threaded del rc.thread_level del rc.finalize from mpi4py import MPI assert MPI.Is_initialized() assert not MPI.Is_finalized() import sys name, _ = MPI.get_vendor() if name == 'MPICH2' and sys.platform[:3]!='win': assert MPI.Query_thread() == MPI.THREAD_MULTIPLE mpi4py_1.3.1+hg20131106.orig/demo/makefile0000644000000000000000000000122612211706251015737 0ustar 00000000000000.PHONY: default PYTHON=python default: ${MAKE} PYTHON=${PYTHON} -C compute-pi ${MAKE} PYTHON=${PYTHON} -C mandelbrot ${MAKE} PYTHON=${PYTHON} -C nxtval ${MAKE} PYTHON=${PYTHON} -C reductions ${MAKE} PYTHON=${PYTHON} -C sequential ${MAKE} PYTHON=${PYTHON} -C spawning ${MAKE} PYTHON=${PYTHON} -C wrap-c ${MAKE} PYTHON=${PYTHON} -C wrap-f2py ${MAKE} PYTHON=${PYTHON} -C wrap-swig ${MAKE} PYTHON=${PYTHON} -C wrap-boost ${MAKE} PYTHON=${PYTHON} -C wrap-cython ${MAKE} PYTHON=${PYTHON} -C cython ${MAKE} PYTHON=${PYTHON} -C embedding ${MAKE} PYTHON=${PYTHON} -C mpi-ref-v1 ${MAKE} PYTHON=${PYTHON} -C init-fini ${MAKE} PYTHON=${PYTHON} -C threads mpi4py_1.3.1+hg20131106.orig/demo/mandelbrot/makefile0000644000000000000000000000064012211706251020065 0ustar 00000000000000.PHONY: default build test clean default: build test clean build: mandelbrot-worker.exe MPIF90=mpif90 FFLAGS= -O3 mandelbrot-worker.exe: mandelbrot-worker.f90 ${MPIF90} ${FFLAGS} -o $@ $< PYTHON=python MPIEXEC=mpiexec NPFLAG= -n test: build ${MPIEXEC} ${NPFLAG} 1 ${PYTHON} mandelbrot-master.py ${MPIEXEC} ${NPFLAG} 7 ${PYTHON} mandelbrot.py ${PYTHON} mandelbrot-seq.py clean: ${RM} mandelbrot-worker.exe mpi4py_1.3.1+hg20131106.orig/demo/mandelbrot/mandelbrot-master.py0000644000000000000000000000267212211706251022366 0ustar 00000000000000from mpi4py import MPI 
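# Master side of the dynamic-process-management Mandelbrot demo: it spawns
# the Fortran worker executable (mandelbrot-worker.exe, built from
# mandelbrot-worker.f90 by the accompanying makefile), broadcasts the region
# and image parameters over the resulting intercommunicator, then gathers the
# per-rank row counts, row indices and row data back with Gather/Gatherv
# before reassembling the image and (optionally) plotting it with matplotlib.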
import numpy as np x1 = -2.0 x2 = 1.0 y1 = -1.0 y2 = 1.0 w = 600 h = 400 maxit = 255 import os dirname = os.path.abspath(os.path.dirname(__file__)) executable = os.path.join(dirname, 'mandelbrot-worker.exe') # spawn worker worker = MPI.COMM_SELF.Spawn(executable, maxprocs=7) size = worker.Get_remote_size() # send parameters rmsg = np.array([x1, x2, y1, y2], dtype='f') imsg = np.array([w, h, maxit], dtype='i') worker.Bcast([rmsg, MPI.REAL], root=MPI.ROOT) worker.Bcast([imsg, MPI.INTEGER], root=MPI.ROOT) # gather results counts = np.empty(size, dtype='i') indices = np.empty(h, dtype='i') cdata = np.empty([h, w], dtype='i') worker.Gather(sendbuf=None, recvbuf=[counts, MPI.INTEGER], root=MPI.ROOT) worker.Gatherv(sendbuf=None, recvbuf=[indices, (counts, None), MPI.INTEGER], root=MPI.ROOT) worker.Gatherv(sendbuf=None, recvbuf=[cdata, (counts * w, None), MPI.INTEGER], root=MPI.ROOT) # disconnect worker worker.Disconnect() # reconstruct full result M = np.zeros([h, w], dtype='i') M[indices, :] = cdata # eye candy (requires matplotlib) try: from matplotlib import pyplot as plt plt.imshow(M, aspect='equal') plt.spectral() try: import signal def action(*args): raise SystemExit signal.signal(signal.SIGALRM, action) signal.alarm(2) except: pass plt.show() except: pass mpi4py_1.3.1+hg20131106.orig/demo/mandelbrot/mandelbrot-seq.py0000644000000000000000000000170512211706251021657 0ustar 00000000000000import numpy as np import time tic = time.time() x1 = -2.0 x2 = 1.0 y1 = -1.0 y2 = 1.0 w = 150 h = 100 maxit = 127 def mandelbrot(x, y, maxit): c = x + y*1j z = 0 + 0j it = 0 while abs(z) < 2 and it < maxit: z = z**2 + c it += 1 return it dx = (x2 - x1) / w dy = (y2 - y1) / h C = np.empty([h, w], dtype='i') for k in np.arange(h): y = y1 + k * dy for j in np.arange(w): x = x1 + j * dx C[k, j] = mandelbrot(x, y, maxit) M = C toc = time.time() print('wall clock time: %8.2f seconds' % (toc-tic)) # eye candy (requires matplotlib) if 1: try: from matplotlib import pyplot as plt plt.imshow(M, aspect='equal') plt.spectral() try: import signal def action(*args): raise SystemExit signal.signal(signal.SIGALRM, action) signal.alarm(2) except: pass plt.show() except: pass mpi4py_1.3.1+hg20131106.orig/demo/mandelbrot/mandelbrot-worker.f900000644000000000000000000000447312211706251022353 0ustar 00000000000000! $ mpif90 -o mandelbrot.exe mandelbrot.f90 program main use MPI implicit none integer master, nprocs, myrank, ierr real :: rmsg(4), x1, x2, y1, y2 integer :: imsg(3), w, h, maxit integer :: N integer, allocatable :: I(:) integer, allocatable :: C(:,:) integer :: j, k real :: x, dx, y, dy call MPI_Init(ierr) call MPI_Comm_get_parent(master, ierr) if (master == MPI_COMM_NULL) then print *, "parent communicator is MPI_COMM_NULL" call MPI_Abort(MPI_COMM_WORLD, 1, ierr) end if call MPI_Comm_size(master, nprocs, ierr) call MPI_Comm_rank(master, myrank, ierr) ! receive parameters and unpack call MPI_Bcast(rmsg, 4, MPI_REAL, 0, master, ierr) call MPI_Bcast(imsg, 3, MPI_INTEGER, 0, master, ierr) x1 = rmsg(1); x2 = rmsg(2) y1 = rmsg(3); y2 = rmsg(4) w = imsg(1); h = imsg(2); maxit = imsg(3) dx = (x2-x1)/real(w) dy = (y2-y1)/real(h) ! number of lines to compute here N = h / nprocs if (modulo(h, nprocs) > myrank) then N = N + 1 end if ! indices of lines to compute here allocate( I(0:N-1) ) I = (/ (k, k=myrank, h-1, nprocs) /) ! compute local lines allocate( C(0:w-1, 0:N-1) ) do k = 0, N-1 y = y1 + real(I(k)) * dy do j = 0, w-1 x = x1 + real(j) * dx C(j, k) = mandelbrot(x, y, maxit) end do end do ! 
send number of lines computed here call MPI_Gather(N, 1, MPI_INTEGER, & MPI_BOTTOM, 0, MPI_BYTE, & 0, master, ierr) ! send indices of lines computed here call MPI_Gatherv(I, N, MPI_INTEGER, & MPI_BOTTOM, MPI_BOTTOM, MPI_BOTTOM, MPI_BYTE, & 0, master, ierr) ! send data of lines computed here call MPI_Gatherv(C, N*w, MPI_INTEGER, & MPI_BOTTOM, MPI_BOTTOM, MPI_BOTTOM, MPI_BYTE, & 0, master, ierr) deallocate(C) deallocate(I) ! we are done call MPI_Comm_disconnect(master, ierr) call MPI_Finalize(ierr) contains function mandelbrot(x, y, maxit) result (it) implicit none real, intent(in) :: x, y integer, intent(in) :: maxit integer :: it complex :: z, c z = cmplx(0, 0) c = cmplx(x, y) it = 0 do while (abs(z) < 2.0 .and. it < maxit) z = z*z + c it = it + 1 end do end function mandelbrot end program main mpi4py_1.3.1+hg20131106.orig/demo/mandelbrot/mandelbrot.py0000644000000000000000000000501112211706251021063 0ustar 00000000000000from mpi4py import MPI import numpy as np tic = MPI.Wtime() x1 = -2.0 x2 = 1.0 y1 = -1.0 y2 = 1.0 w = 150 h = 100 maxit = 127 def mandelbrot(x, y, maxit): c = x + y*1j z = 0 + 0j it = 0 while abs(z) < 2 and it < maxit: z = z**2 + c it += 1 return it comm = MPI.COMM_WORLD size = comm.Get_size() rank = comm.Get_rank() rmsg = np.empty(4, dtype='f') imsg = np.empty(3, dtype='i') if rank == 0: rmsg[:] = [x1, x2, y1, y2] imsg[:] = [w, h, maxit] comm.Bcast([rmsg, MPI.FLOAT], root=0) comm.Bcast([imsg, MPI.INT], root=0) x1, x2, y1, y2 = [float(r) for r in rmsg] w, h, maxit = [int(i) for i in imsg] dx = (x2 - x1) / w dy = (y2 - y1) / h # number of lines to compute here N = h // size + (h % size > rank) N = np.array(N, dtype='i') # indices of lines to compute here I = np.arange(rank, h, size, dtype='i') # compute local lines C = np.empty([N, w], dtype='i') for k in np.arange(N): y = y1 + I[k] * dy for j in np.arange(w): x = x1 + j * dx C[k, j] = mandelbrot(x, y, maxit) # gather results at root counts = 0 indices = None cdata = None if rank == 0: counts = np.empty(size, dtype='i') indices = np.empty(h, dtype='i') cdata = np.empty([h, w], dtype='i') comm.Gather(sendbuf=[N, MPI.INT], recvbuf=[counts, MPI.INT], root=0) comm.Gatherv(sendbuf=[I, MPI.INT], recvbuf=[indices, (counts, None), MPI.INT], root=0) comm.Gatherv(sendbuf=[C, MPI.INT], recvbuf=[cdata, (counts*w, None), MPI.INT], root=0) # reconstruct full result at root if rank == 0: M = np.zeros([h,w], dtype='i') M[indices, :] = cdata toc = MPI.Wtime() wct = comm.gather(toc-tic, root=0) if rank == 0: for task, time in enumerate(wct): print('wall clock time: %8.2f seconds (task %d)' % (time, task)) def mean(seq): return sum(seq)/len(seq) print ('all tasks, mean: %8.2f seconds' % mean(wct)) print ('all tasks, min: %8.2f seconds' % min(wct)) print ('all tasks, max: %8.2f seconds' % max(wct)) print ('all tasks, sum: %8.2f seconds' % sum(wct)) # eye candy (requires matplotlib) if rank == 0: try: from matplotlib import pyplot as plt plt.imshow(M, aspect='equal') plt.spectral() try: import signal def action(*args): raise SystemExit signal.signal(signal.SIGALRM, action) signal.alarm(2) except: pass plt.show() except: pass MPI.COMM_WORLD.Barrier() mpi4py_1.3.1+hg20131106.orig/demo/mpe-logging/cpilog.py0000644000000000000000000000526212211706251020277 0ustar 00000000000000#!/usr/bin/env python from __future__ import with_statement # Python 2.5 and later # If you want MPE to log MPI calls, you have to add the two lines # below at the very beginning of your main bootstrap script. 
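# (The "two lines" are the plain 'import mpi4py' and the mpi4py.profile()
# call that follow; they must execute before 'from mpi4py import MPI' so
# that the MPE wrappers are active when MPI is initialized.)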
import mpi4py mpi4py.profile('MPE', logfile='cpilog') # Import the MPI extension module from mpi4py import MPI if 0: # <- use '1' to disable logging of MPI calls MPI.Pcontrol(0) # Import the MPE extension module from mpi4py import MPE if 1: # <- use '0' to disable user-defined logging # This has to be explicitly called ! MPE.initLog(logfile='cpilog') # Set the log file name (note: no extension) MPE.setLogFileName('cpilog') # Import the 'array' module from array import array # This is just to make the logging # output a bit more interesting from time import sleep # User-defined MPE events cpi_begin = MPE.newLogEvent("ComputePi-Begin", "yellow") cpi_end = MPE.newLogEvent("ComputePi-End", "pink") # User-defined MPE states synchronization = MPE.newLogState("Synchronize", "orange") communication = MPE.newLogState("Comunicate", "red") computation = MPE.newLogState("Compute", "green") comm = MPI.COMM_WORLD nprocs = comm.Get_size() myrank = comm.Get_rank() n = array('i', [0]) pi = array('d', [0]) mypi = array('d', [0]) def comp_pi(n, myrank=0, nprocs=1): h = 1.0 / n; s = 0.0; for i in range(myrank + 1, n + 1, nprocs): x = h * (i - 0.5); s += 4.0 / (1.0 + x**2); return s * h with synchronization: comm.Barrier() for N in [10000]*10: with cpi_begin: pass if myrank == 0: n[0] = N with communication: comm.Bcast([n, MPI.INT], root=0) with computation: mypi[0] = comp_pi(n[0], myrank, nprocs) with communication: comm.Reduce([mypi, MPI.DOUBLE], [pi, MPI.DOUBLE], op=MPI.SUM, root=0) with cpi_end: pass with synchronization: comm.Barrier() sleep(0.01) # ----------------------- # # Python 2.3/2.4 version # # ----------------------- # ## synchronization.enter() ## comm.Barrier() ## synchronization.exit() ## ## for N in [50000]*10: ## ## cpi_begin.log() ## ## if myrank == 0: ## n[0] = N ## ## communication.enter() ## comm.Bcast([n, MPI.INT], root=0) ## communication.exit() ## ## computation.enter() ## mypi[0] = comp_pi(n[0], myrank, nprocs) ## computation.exit() ## ## communication.enter() ## comm.Reduce([mypi, MPI.DOUBLE], ## [pi, MPI.DOUBLE], ## op=MPI.SUM, root=0) ## communication.exit() ## ## cpi_end.log() ## ## synchronization.enter() ## comm.Barrier() ## synchronization.exit() ## ## sleep(0.01) mpi4py_1.3.1+hg20131106.orig/demo/mpe-logging/makefile0000644000000000000000000000132512211706251020144 0ustar 00000000000000MPIEXEC = mpiexec PYTHON = python N = 8 .PHONY: default default: build test clean .PHONY: run-cpilog run-ring run-threads run run: run-cpilog run-ring run-threads run-cpilog: ${MPIEXEC} -n ${N} ${PYTHON} cpilog.py run-ring: ${MPIEXEC} -n ${N} ${PYTHON} ring.py run-threads: ${MPIEXEC} -n ${N} ${PYTHON} threads.py .PHONY: view-cpilog view-ring view-threads view view: view-cpilog view-ring view-threads view-cpilog: cpilog.slog2 jumpshot $< view-ring: ring.slog2 jumpshot $< view-threads: threads.slog2 jumpshot $< cpilog.clog2: run-cpilog ring.clog2: run-ring threads.clog2: run-threads %.slog2: %.clog2 clog2TOslog2 $< .PHONY: build build: run .PHONY: test test: .PHONY: clean clean: ${RM} *.[cs]log2 mpi4py_1.3.1+hg20131106.orig/demo/mpe-logging/ring.py0000644000000000000000000000131012211706251017747 0ustar 00000000000000#!/usr/bin/env python import os os.environ['MPE_LOGFILE_PREFIX'] = 'ring' import mpi4py mpi4py.profile('mpe') from mpi4py import MPI from array import array comm = MPI.COMM_WORLD size = comm.Get_size() rank = comm.Get_rank() src = rank-1 dest = rank+1 if rank == 0: src = size-1 if rank == size-1: dest = 0 try: from numpy import zeros a1 = zeros(1000000, 'd') a2 = zeros(1000000, 'd') 
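    # NumPy not available: the fallback below builds equivalent
    # 1,000,000-element buffers out of array.array via repetition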
except ImportError: from array import array a1 = array('d', [0]*1000); a1 *= 1000 a2 = array('d', [0]*1000); a2 *= 1000 comm.Sendrecv(sendbuf=a1, recvbuf=a2, source=src, dest=dest) MPI.Request.Waitall([ comm.Isend(a1, dest=dest), comm.Irecv(a2, source=src), ]) mpi4py_1.3.1+hg20131106.orig/demo/mpe-logging/threads.py0000644000000000000000000000137112211706251020451 0ustar 00000000000000import sys import mpi4py mpi4py.profile('MPE', logfile='threads') from mpi4py import MPI from array import array try: import threading except ImportError: sys.stderr.write("threading module not available\n") sys.exit(0) send_msg = array('i', [7]*1000); send_msg *= 1000 recv_msg = array('i', [0]*1000); recv_msg *= 1000 def self_send(comm, rank): comm.Send([send_msg, MPI.INT], dest=rank, tag=0) def self_recv(comm, rank): comm.Recv([recv_msg, MPI.INT], source=rank, tag=0) comm = MPI.COMM_WORLD rank = comm.Get_rank() send_thread = threading.Thread(target=self_send, args=(comm, rank)) recv_thread = threading.Thread(target=self_recv, args=(comm, rank)) send_thread.start() recv_thread.start() recv_thread.join() send_thread.join() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/README.txt0000644000000000000000000000066012211706251017621 0ustar 00000000000000@Book{MPI-Ref-V1, title = {{MPI} - The Complete Reference: Volume 1, The {MPI} Core}, author = {Marc Snir and Steve Otto and Steven Huss-Lederman and David Walker and Jack Dongarra}, edition = {2nd.}, year = 1998, publisher = {MIT Press}, volume = {1, The MPI Core}, series = {Scientific and Engineering Computation}, address = {Cambridge, MA, USA}, } mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-2.01.py0000644000000000000000000000175712211706251017477 0ustar 00000000000000## mpiexec -n 2 python ex-2.01.py # Process 0 sends a message to process 1 # -------------------------------------------------------------------- from mpi4py import MPI import array if MPI.COMM_WORLD.Get_size() < 2: raise SystemExit # -------------------------------------------------------------------- s = "Hello there" msg = array.array('c', '\0'*20) tag = 99 status = MPI.Status() myrank = MPI.COMM_WORLD.Get_rank() if myrank == 0: msg[:len(s)] = array.array('c', s) MPI.COMM_WORLD.Send([msg, len(s)+1, MPI.CHAR], 1, tag) elif myrank == 1: MPI.COMM_WORLD.Recv([msg, 20, MPI.CHAR], 0, tag, status) # -------------------------------------------------------------------- if myrank == 1: assert list(msg[:len(s)]) == list(s) assert msg[len(s)] == '\0' assert status.source == 0 assert status.tag == tag assert status.error == MPI.SUCCESS assert status.Get_count(MPI.CHAR) == len(s)+1 # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-2.08.py0000644000000000000000000000246512211706251017503 0ustar 00000000000000## mpiexec -n 2 python ex-2.08.py # An exchange of messages # -------------------------------------------------------------------- from mpi4py import MPI import array if MPI.COMM_WORLD.Get_size() < 2: raise SystemExit # -------------------------------------------------------------------- sendbuf = array.array('d', [0]*10) recvbuf = array.array('d', [0]*10) tag = 0 status = MPI.Status() myrank = MPI.COMM_WORLD.Get_rank() if myrank == 0: sendbuf[:] = array.array('d', range(len(sendbuf))) MPI.COMM_WORLD.Send([sendbuf, MPI.DOUBLE], 1, tag) MPI.COMM_WORLD.Recv([recvbuf, MPI.DOUBLE], 1, tag, status) elif myrank == 1: MPI.COMM_WORLD.Recv([recvbuf, MPI.DOUBLE], 0, tag, status) sendbuf[:] = recvbuf MPI.COMM_WORLD.Send([sendbuf, MPI.DOUBLE], 0, 
tag) # -------------------------------------------------------------------- if myrank == 0: assert status.source == 1 assert status.tag == tag assert status.error == MPI.SUCCESS assert status.Get_count(MPI.DOUBLE) == len(recvbuf) assert sendbuf == recvbuf elif myrank == 1: assert status.source == 0 assert status.tag == tag assert status.error == MPI.SUCCESS assert status.Get_count(MPI.DOUBLE) == len(recvbuf) assert sendbuf == recvbuf # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-2.16.py0000644000000000000000000000351412211706251017476 0ustar 00000000000000## mpiexec -n 4 python ex-2.16.py # Jacobi code # version of parallel code using sendrecv and null proceses. # -------------------------------------------------------------------- from mpi4py import MPI try: import numpy except ImportError: raise SystemExit # -------------------------------------------------------------------- n = 5 * MPI.COMM_WORLD.Get_size() # compute number of processes and myrank p = MPI.COMM_WORLD.Get_size() myrank = MPI.COMM_WORLD.Get_rank() # compute size of local block m = n/p if myrank < (n - p * m): m = m + 1 #compute neighbors if myrank == 0: left = MPI.PROC_NULL else: left = myrank - 1 if myrank == p - 1: right = MPI.PROC_NULL else: right = myrank + 1 # allocate local arrays A = numpy.empty((n+2, m+2), dtype='d', order='fortran') B = numpy.empty((n, m), dtype='d', order='fortran') A.fill(1) A[0, :] = A[-1, :] = 0 A[:, 0] = A[:, -1] = 0 # main loop converged = False while not converged: # compute, B = 0.25 * ( N + S + E + W) N, S = A[:-2, 1:-1], A[2:, 1:-1] E, W = A[1:-1, :-2], A[1:-1, 2:] numpy.add(N, S, B) numpy.add(E, B, B) numpy.add(W, B, B) B *= 0.25 A[1:-1, 1:-1] = B # communicate tag = 0 MPI.COMM_WORLD.Sendrecv([B[:, -1], MPI.DOUBLE], right, tag, [A[:, 0], MPI.DOUBLE], left, tag) MPI.COMM_WORLD.Sendrecv((B[:, 0], MPI.DOUBLE), left, tag, (A[:, -1], MPI.DOUBLE), right, tag) # convergence myconv = numpy.allclose(B, 0) loc_conv = numpy.asarray(myconv, dtype='i') glb_conv = numpy.asarray(0, dtype='i') MPI.COMM_WORLD.Allreduce([loc_conv, MPI.INT], [glb_conv, MPI.INT], op=MPI.LAND) converged = bool(glb_conv) # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-2.29.py0000644000000000000000000000223012211706251017474 0ustar 00000000000000## mpiexec -n 3 python ex-2.29.py # Use a blocking probe to wait for an incoming message # -------------------------------------------------------------------- from mpi4py import MPI import array if MPI.COMM_WORLD.Get_size() < 3: raise SystemExit # -------------------------------------------------------------------- comm = MPI.COMM_WORLD rank = comm.Get_rank() if rank == 0: i = array.array('i', [7]*5) comm.Send([i, MPI.INT], 2, 0) elif rank == 1: x = array.array('f', [7]*5) comm.Send([x, MPI.FLOAT], 2, 0) elif rank == 2: i = array.array('i', [0]*5) x = array.array('f', [0]*5) status = MPI.Status() for j in range(2): comm.Probe(MPI.ANY_SOURCE, 0, status) if status.Get_source() == 0: comm.Recv([i, MPI.INT], 0, 0, status) else: comm.Recv([x, MPI.FLOAT], 1, 0, status) # -------------------------------------------------------------------- if rank == 2: for v in i: assert v == 7 for v in x: assert v == 7 assert status.source in (0, 1) assert status.tag == 0 assert status.error == 0 # -------------------------------------------------------------------- 
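# --------------------------------------------------------------------
# Extra sketch (not part of the original demo set): ex-2.29.py above
# probes with fixed-size buffers; the minimal variant below shows how a
# blocking Probe plus Status.Get_count() can size the receive buffer
# before posting the matching Recv. Run with at least two processes,
# e.g. "mpiexec -n 2 python probe_getcount.py" (hypothetical file name).
from mpi4py import MPI
import array

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
if comm.Get_size() >= 2:
    if rank == 0:
        data = array.array('i', [7]*5)
        comm.Send([data, MPI.INT], dest=1, tag=0)
    elif rank == 1:
        status = MPI.Status()
        # wait for any incoming message with tag 0
        comm.Probe(source=MPI.ANY_SOURCE, tag=0, status=status)
        n = status.Get_count(MPI.INT)       # how many ints are pending
        buf = array.array('i', [0]*n)       # allocate exactly that many
        comm.Recv([buf, MPI.INT], source=status.Get_source(), tag=0)
        assert list(buf) == [7]*5
# --------------------------------------------------------------------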
mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-2.32.py0000644000000000000000000000441312211706251017473 0ustar 00000000000000# Jacobi computation, using persitent requests from mpi4py import MPI try: import numpy except ImportError: raise SystemExit n = 5 * MPI.COMM_WORLD.Get_size() # compute number of processes and myrank p = MPI.COMM_WORLD.Get_size() myrank = MPI.COMM_WORLD.Get_rank() # compute size of local block m = n/p if myrank < (n - p * m): m = m + 1 #compute neighbors if myrank == 0: left = MPI.PROC_NULL else: left = myrank - 1 if myrank == p - 1: right = MPI.PROC_NULL else: right = myrank + 1 # allocate local arrays A = numpy.empty((n+2, m+2), dtype=float, order='fortran') B = numpy.empty((n, m), dtype=float, order='fortran') A.fill(1) A[0, :] = A[-1, :] = 0 A[:, 0] = A[:, -1] = 0 # create persintent requests tag = 0 sreq1 = MPI.COMM_WORLD.Send_init((B[:, 0], MPI.DOUBLE), left, tag) sreq2 = MPI.COMM_WORLD.Send_init((B[:, -1], MPI.DOUBLE), right, tag) rreq1 = MPI.COMM_WORLD.Recv_init((A[:, 0], MPI.DOUBLE), left, tag) rreq2 = MPI.COMM_WORLD.Recv_init((A[:, -1], MPI.DOUBLE), right, tag) reqlist = [sreq1, sreq2, rreq1, rreq2] for req in reqlist: assert req != MPI.REQUEST_NULL # main loop converged = False while not converged: # compute boundary columns N, S = A[ :-2, 1], A[2:, 1] E, W = A[1:-1, 0], A[1:-1, 2] C = B[:, 0] numpy.add(N, S, C) numpy.add(C, E, C) numpy.add(C, W, C) C *= 0.25 N, S = A[ :-2, -2], A[2:, -2] E, W = A[1:-1, -3], A[1:-1, -1] C = B[:, -1] numpy.add(N, S, C) numpy.add(C, E, C) numpy.add(C, W, C) C *= 0.25 # start communication #MPI.Prequest.Startall(reqlist) for r in reqlist: r.Start() # compute interior N, S = A[ :-2, 2:-2], A[2, 2:-2] E, W = A[1:-1, 2:-2], A[1:-1, 2:-2] C = B[:, 1:-1] numpy.add(N, S, C) numpy.add(E, C, C) numpy.add(W, C, C) C *= 0.25 A[1:-1, 1:-1] = B # complete communication MPI.Prequest.Waitall(reqlist) # convergence myconv = numpy.allclose(B, 0) loc_conv = numpy.asarray(myconv, dtype='i') glb_conv = numpy.asarray(0, dtype='i') MPI.COMM_WORLD.Allreduce([loc_conv, MPI.INT], [glb_conv, MPI.INT], op=MPI.LAND) converged = bool(glb_conv) # free persintent requests for req in reqlist: req.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-2.34.py0000644000000000000000000000224612211706251017477 0ustar 00000000000000## mpiexec -n 2 python ex-2.34.py # Use of ready-mode and synchonous-mode # -------------------------------------------------------------------- from mpi4py import MPI try: import numpy except ImportError: raise SystemExit if MPI.COMM_WORLD.Get_size() < 2: raise SystemExit # -------------------------------------------------------------------- comm = MPI.COMM_WORLD buff = numpy.empty((1000,2), dtype='f', order='fortran') rank = comm.Get_rank() if rank == 0: req1 = comm.Irecv([buff[:, 0], MPI.FLOAT], 1, 1) req2 = comm.Irecv([buff[:, 1], MPI.FLOAT], 1, 2) status = [MPI.Status(), MPI.Status()] MPI.Request.Waitall([req1, req2], status) elif rank == 1: buff[:, 0] = 5 buff[:, 1] = 7 comm.Ssend([buff[:, 1], MPI.FLOAT], 0, 2) comm.Rsend([buff[:, 0], MPI.FLOAT], 0, 1) # -------------------------------------------------------------------- all = numpy.all if rank == 0: assert all(buff[:, 0] == 5) assert all(buff[:, 1] == 7) assert status[0].source == 1 assert status[0].tag == 1 assert status[1].source == 1 assert status[1].tag == 2 # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-2.35.py0000644000000000000000000000133712211706251017500 0ustar 00000000000000## mpiexec -n 1 
python ex-2.35.py # Calls to attach and detach buffers # -------------------------------------------------------------------- from mpi4py import MPI try: from numpy import empty except ImportError: from array import array def empty(size, dtype): return array(dtype, [0]*size) # -------------------------------------------------------------------- BUFSISE = 10000 + MPI.BSEND_OVERHEAD buff = empty(BUFSISE, dtype='b') MPI.Attach_buffer(buff) buff2 = MPI.Detach_buffer() MPI.Attach_buffer(buff2) MPI.Detach_buffer() # -------------------------------------------------------------------- assert len(buff2) == BUFSISE # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.01.py0000644000000000000000000000136612211706251017474 0ustar 00000000000000from mpi4py import MPI try: import numpy except ImportError: raise SystemExit # send a upper triangular matrix N = 10 a = numpy.empty((N, N), dtype=float, order='c') b = numpy.zeros((N, N), dtype=float, order='c') a.flat = numpy.arange(a.size, dtype=float) # compute start and size of each row i = numpy.arange(N) blocklen = N - i disp = N * i + i # create datatype for upper triangular part upper = MPI.DOUBLE.Create_indexed(blocklen, disp) upper.Commit() # send and recv matrix myrank = MPI.COMM_WORLD.Get_rank() MPI.COMM_WORLD.Sendrecv((a, 1, upper), myrank, 0, [b, 1, upper], myrank, 0) assert numpy.allclose(numpy.triu(b), numpy.triu(a)) assert numpy.allclose(numpy.tril(b, -1), numpy.zeros((N,N))) upper.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.02.py0000644000000000000000000000035512211706251017472 0ustar 00000000000000from mpi4py import MPI # Type = { (double, 0), (char, 8) } blens = (1, 1) disps = (0, MPI.DOUBLE.size) types = (MPI.DOUBLE, MPI.CHAR) dtype = MPI.Datatype.Create_struct(blens, disps, types) if 'ex-3.02' in __file__: dtype.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.03.py0000644000000000000000000000017512211706251017473 0ustar 00000000000000execfile('ex-3.02.py') assert dtype.size == MPI.DOUBLE.size + MPI.CHAR.size assert dtype.extent >= dtype.size dtype.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.04.py0000644000000000000000000000022712211706251017472 0ustar 00000000000000execfile('ex-3.02.py') count = 3 newtype = dtype.Create_contiguous(count) assert newtype.extent == dtype.extent * count dtype.Free() newtype.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.05.py0000644000000000000000000000027712211706251017500 0ustar 00000000000000execfile('ex-3.02.py') count = 2 blklen = 3 stride = 4 newtype = dtype.Create_vector(count, blklen, stride) assert newtype.size == dtype.size * count * blklen dtype.Free() newtype.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.06.py0000644000000000000000000000030012211706251017464 0ustar 00000000000000execfile('ex-3.02.py') count = 3 blklen = 1 stride = -2 newtype = dtype.Create_vector(count, blklen, stride) assert newtype.size == dtype.size * count * blklen dtype.Free() newtype.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.07.py0000644000000000000000000000031712211706251017475 0ustar 00000000000000execfile('ex-3.02.py') count = 2 blklen = 3 stride = 4 * dtype.extent newtype = dtype.Create_hvector(count, blklen, stride) assert newtype.size == dtype.size * count * blklen dtype.Free() newtype.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.08.py0000644000000000000000000000160212211706251017474 0ustar 00000000000000from mpi4py import MPI try: import numpy except ImportError: raise 
SystemExit # extract the section a[0:6:2, 0:5:2] and store it in e[:,:] a = numpy.empty((6, 5), dtype=float, order='fortran') e = numpy.empty((3, 3), dtype=float, order='fortran') a.flat = numpy.arange(a.size, dtype=float) lb, sizeofdouble = MPI.DOUBLE.Get_extent() # create datatype for a 1D section oneslice = MPI.DOUBLE.Create_vector(3, 1, 2) # create datatype for a 2D section twoslice = oneslice.Create_hvector(3, 1, 12*sizeofdouble) twoslice.Commit() # send and recv on same process myrank = MPI.COMM_WORLD.Get_rank() status = MPI.Status() MPI.COMM_WORLD.Sendrecv([a, 1, twoslice], myrank, 0, (e, MPI.DOUBLE), myrank, 0, status) assert numpy.allclose(a[::2, ::2], e) assert status.Get_count(twoslice) == 1 assert status.Get_count(MPI.DOUBLE) == e.size oneslice.Free() twoslice.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.09.py0000644000000000000000000000204012211706251017472 0ustar 00000000000000from mpi4py import MPI try: import numpy except ImportError: raise SystemExit # transpose a matrix a into b a = numpy.empty((100, 100), dtype=float, order='fortran') b = numpy.empty((100, 100), dtype=float, order='fortran') a.flat = numpy.arange(a.size, dtype=float) lb, sizeofdouble = MPI.DOUBLE.Get_extent() # create datatype dor one row # (vector with 100 double entries and stride 100) row = MPI.DOUBLE.Create_vector(100, 1, 100) # create datatype for matrix in row-major order # (one hundred copies of the row datatype, strided one word # apart; the succesive row datatypes are interleaved) xpose = row.Create_hvector(100, 1, sizeofdouble) xpose.Commit() # send matrix in row-major order and receive in column major order abuf = (a, xpose) bbuf = (b, MPI.DOUBLE) myrank = MPI.COMM_WORLD.Get_rank() status = MPI.Status() MPI.COMM_WORLD.Sendrecv(abuf, myrank, 0, bbuf, myrank, 0, status) assert numpy.allclose(a, b.transpose()) assert status.Get_count(xpose) == 1 assert status.Get_count(MPI.DOUBLE) == b.size row.Free() xpose.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.11.py0000644000000000000000000000016012211706251017464 0ustar 00000000000000execfile('ex-3.02.py') B = (3, 1) D = (4, 0) newtype = dtype.Create_indexed(B, D) dtype.Free() newtype.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.12.py0000644000000000000000000000020012211706251017460 0ustar 00000000000000execfile('ex-3.02.py') B = (3, 1) D = (4 * dtype.extent, 0) newtype = dtype.Create_hindexed(B, D) dtype.Free() newtype.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/ex-3.13.py0000644000000000000000000000044212211706251017471 0ustar 00000000000000from mpi4py import MPI blens = (1, 1) disps = (0, MPI.DOUBLE.size) types = (MPI.DOUBLE, MPI.CHAR) type1 = MPI.Datatype.Create_struct(blens, disps, types) B = (2, 1, 3) D = (0, 16, 26) T = (MPI.FLOAT, type1, MPI.CHAR) dtype = MPI.Datatype.Create_struct(B, D, T) type1.Free() dtype.Free() mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/makefile0000644000000000000000000000060012211706251017615 0ustar 00000000000000.PHONY: default build test clean test_seq test_mpi default: build test clean build: PYTHON = python MPIEXEC = mpiexec NP_FLAG = -n NP = 3 test_seq: ${MAKE} MPIEXEC= NP_FLAG= NP= test_mpi test_mpi: -@for i in `ls ex-*.py`; do \ echo ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} $$i; \ ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} $$i; \ done test: test_seq test_mpi clean: mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/runtests.bat0000644000000000000000000000251612211706251020504 0ustar 00000000000000@echo off setlocal ENABLEEXTENSIONS set MPI=Microsoft HPC Pack 2008 SDK set MPI=DeinoMPI 
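REM Only the last "set MPI=..." (and, below, the last "set PYTHON=...")
REM assignment takes effect; edit those lines to pick the MPI runtime
REM and Python interpreter installed on this machine.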
set MPI=MPICH2 set PATH="%ProgramFiles%\%MPI%\bin";%PATH% set MPIEXEC=mpiexec set NP_FLAG=-n set NP=5 set PYTHON=C:\Python25\python.exe set PYTHON=C:\Python26\python.exe set PYTHON=C:\Python27\python.exe set PYTHON=C:\Python30\python.exe set PYTHON=C:\Python31\python.exe set PYTHON=C:\Python32\python.exe set PYTHON=python @echo on set MPIEXEC= set NP_FLAG= set NP= %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-2.01.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-2.08.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-2.16.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-2.29.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-2.32.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-2.34.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-2.35.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.01.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.02.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.03.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.04.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.05.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.06.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.07.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.08.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.09.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.11.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.12.py %MPIEXEC% %NP_FLAG% %NP% %PYTHON% ex-3.13.py mpi4py_1.3.1+hg20131106.orig/demo/mpi-ref-v1/runtests.sh0000755000000000000000000000151512211706251020351 0ustar 00000000000000#!/bin/sh MPIEXEC=mpiexec NP_FLAG=-n NP=3 PYTHON=python set -x $MPIEXEC $NP_FLAG $NP $PYTHON ex-2.01.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-2.08.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-2.16.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-2.29.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-2.32.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-2.34.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-2.35.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.01.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.02.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.03.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.04.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.05.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.06.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.07.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.08.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.09.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.11.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.12.py $MPIEXEC $NP_FLAG $NP $PYTHON ex-3.13.py mpi4py_1.3.1+hg20131106.orig/demo/nxtval/makefile0000644000000000000000000000054012211706251017251 0ustar 00000000000000MPIEXEC=mpiexec NP_FLAG=-n NP=5 PYTHON=python .PHONY: test test: ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} nxtval-threads.py ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} nxtval-dynproc.py ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} nxtval-onesided.py ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} nxtval-scalable.py # ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} nxtval-mpi3.py mpi4py_1.3.1+hg20131106.orig/demo/nxtval/nxtval-dynproc.py0000644000000000000000000000402712211706251021117 0ustar 00000000000000# -------------------------------------------------------------------- from mpi4py import MPI import sys, os class Counter(object): def __init__(self, comm): assert not comm.Is_inter() self.comm = comm.Dup() # start counter process script = os.path.abspath(__file__) if script[-4:] in ('.pyc', '.pyo'): script = script[:-1] self.child = self.comm.Spawn(sys.executable, [script, '--child'], 1) def free(self): self.comm.Barrier() # stop counter process rank = self.child.Get_rank() if rank == 0: self.child.send(None, 0, 1) self.child.Disconnect() # self.comm.Free() def next(self): # incr = 1 self.child.send(incr, 0, 0) ival = self.child.recv(None, 0, 0) nxtval = ival # return nxtval # -------------------------------------------------------------------- def 
_counter_child(): parent = MPI.Comm.Get_parent() assert parent != MPI.COMM_NULL try: counter = 0 status = MPI.Status() any_src, any_tag = MPI.ANY_SOURCE, MPI.ANY_TAG while True: # server loop incr = parent.recv(None, any_src, any_tag, status) if status.tag == 1: break parent.send(counter, status.source, 0) counter += incr finally: parent.Disconnect() if __name__ == '__main__': if (len(sys.argv) > 1 and sys.argv[0] == __file__ and sys.argv[1] == '--child'): _counter_child() sys.exit(0) # -------------------------------------------------------------------- def test(): vals = [] counter = Counter(MPI.COMM_WORLD) for i in range(5): c = counter.next() vals.append(c) counter.free() # vals = MPI.COMM_WORLD.allreduce(vals) assert sorted(vals) == list(range(len(vals))) if __name__ == '__main__': test() # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/demo/nxtval/nxtval-mpi3.py0000644000000000000000000000413012211706251020304 0ustar 00000000000000from mpi4py import MPI from array import array as _array import struct as _struct # -------------------------------------------------------------------- class Counter(object): def __init__(self, comm): rank = comm.Get_rank() itemsize = MPI.INT.Get_size() if rank == 0: n = 1 else: n = 0 self.win = MPI.Win.Allocate(n*itemsize, itemsize, MPI.INFO_NULL, comm) if rank == 0: mem = self.win.memory mem[:] = _struct.pack('i', 0) def free(self): self.win.Free() def next(self, increment=1): incr = _array('i', [increment]) nval = _array('i', [0]) self.win.Lock(MPI.LOCK_EXCLUSIVE, 0, 0) self.win.Get_accumulate([incr, 1, MPI.INT], [nval, 1, MPI.INT], 0, op=MPI.SUM) self.win.Unlock(0) return nval[0] # ----------------------------------------------------------------------------- class Mutex(object): def __init__(self, comm): self.counter = Counter(comm) def __enter__(self): self.lock() return self def __exit__(self, *exc): self.unlock() return None def free(self): self.counter.free() def lock(self): value = self.counter.next(+1) while value != 0: value = self.counter.next(-1) value = self.counter.next(+1) def unlock(self): self.counter.next(-1) # ----------------------------------------------------------------------------- def test_counter(): vals = [] counter = Counter(MPI.COMM_WORLD) for i in range(5): c = counter.next() vals.append(c) counter.free() vals = MPI.COMM_WORLD.allreduce(vals) assert sorted(vals) == list(range(len(vals))) def test_mutex(): mutex = Mutex(MPI.COMM_WORLD) mutex.lock() mutex.unlock() mutex.free() if __name__ == '__main__': test_counter() test_mutex() # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/demo/nxtval/nxtval-onesided.py0000644000000000000000000000364112211706251021234 0ustar 00000000000000# -------------------------------------------------------------------- from mpi4py import MPI from array import array as _array import struct as _struct class Counter(object): def __init__(self, comm): # size = comm.Get_size() rank = comm.Get_rank() # itemsize = MPI.INT.Get_size() if rank == 0: mem = MPI.Alloc_mem(itemsize*size, MPI.INFO_NULL) mem[:] = _struct.pack('i', 0) * size else: mem = MPI.BOTTOM self.win = MPI.Win.Create(mem, itemsize, MPI.INFO_NULL, comm) # blens = [rank, size-rank-1] disps = [0, rank+1] self.dt_get = MPI.INT.Create_indexed(blens, disps).Commit() # self.myval = 0 def free(self): self.dt_get.Free() mem = self.win.memory self.win.Free() if mem: MPI.Free_mem(mem) def next(self): # group = self.win.Get_group() size = 
group.Get_size() rank = group.Get_rank() group.Free() # incr = _array('i', [1]) vals = _array('i', [0])*size self.win.Lock(MPI.LOCK_EXCLUSIVE, 0, 0) self.win.Accumulate([incr, 1, MPI.INT], 0, [rank, 1, MPI.INT], MPI.SUM) self.win.Get([vals, 1, self.dt_get], 0, [ 0, 1, self.dt_get]) self.win.Unlock(0) # vals[rank] = self.myval self.myval += 1 nxtval = sum(vals) # return nxtval # -------------------------------------------------------------------- def test(): vals = [] counter = Counter(MPI.COMM_WORLD) for i in range(5): c = counter.next() vals.append(c) counter.free() vals = MPI.COMM_WORLD.allreduce(vals) assert sorted(vals) == list(range(len(vals))) if __name__ == '__main__': test() # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/demo/nxtval/nxtval-scalable.py0000644000000000000000000001005612211706251021206 0ustar 00000000000000from mpi4py import MPI # ----------------------------------------------------------------------------- import struct as _struct try: from numpy import empty as _empty def _array_new(size, typecode, init=0): a = _empty(size, typecode) a.fill(init) return a def _array_set(ary, value): ary.fill(value) def _array_sum(ary): return ary.sum() except ImportError: from array import array as _array def _array_new(size, typecode, init=0): return _array(typecode, [init]) * size def _array_set(ary, value): for i, _ in enumerate(ary): ary[i] = value def _array_sum(ary): return sum(ary, 0) # ----------------------------------------------------------------------------- class Counter(object): def __init__(self, comm, init=0): # size = comm.Get_size() rank = comm.Get_rank() mask = 1 while mask < size: mask <<= 1 mask >>= 1 idx = 0 get_idx = [] acc_idx = [] while mask >= 1: left = idx + 1 right = idx + (mask<<1) if rank < mask: acc_idx.append( left ) get_idx.append( right ) idx = left else: acc_idx.append( right ) get_idx.append( left ) idx = right rank = rank % mask mask >>= 1 # typecode = 'i' datatype = MPI.INT itemsize = datatype.Get_size() # root = 0 rank = comm.Get_rank() if rank == root: nlevels = len(get_idx) + 1 nentries = (1< large_message_size: skip = skip_large loop = loop_large iterations = list(range(loop+skip)) s_msg = MPI.IN_PLACE r_msg = [r_buf, size, MPI.BYTE] # comm.Barrier() for i in iterations: if i == skip: t_start = MPI.Wtime() comm.Allgather(s_msg, r_msg) t_end = MPI.Wtime() comm.Barrier() # if myid == 0: latency = (t_end - t_start) * 1e6 / loop print ('%-10d%20.2f' % (size, latency)) def message_sizes(max_size): return [0] + [(1< large_message_size: skip = skip_large loop = loop_large iterations = list(range(loop+skip)) s_msg = [s_buf, size, MPI.BYTE] r_msg = [r_buf, size, MPI.BYTE] # comm.Barrier() for i in iterations: if i == skip: t_start = MPI.Wtime() comm.Alltoall(s_msg, r_msg) t_end = MPI.Wtime() comm.Barrier() # if myid == 0: latency = (t_end - t_start) * 1e6 / loop print ('%-10d%20.2f' % (size, latency)) def message_sizes(max_size): return [0] + [(1< large_message_size: skip = skip_large loop = loop_large iterations = list(range(loop+skip)) disp = 0 for i in range (numprocs): s_counts[i] = r_counts[i] = size s_displs[i] = r_displs[i] = disp disp += size s_msg = [s_buf, (s_counts, s_displs), MPI.BYTE] r_msg = [r_buf, (r_counts, r_displs), MPI.BYTE] # comm.Barrier() for i in iterations: if i == skip: t_start = MPI.Wtime() comm.Alltoallv(s_msg, r_msg) t_end = MPI.Wtime() comm.Barrier() # if myid == 0: latency = (t_end - t_start) * 1e6 / loop print ('%-10d%20.2f' % (size, latency)) def 
message_sizes(max_size): return [0] + [(1< large_message_size: skip = skip_large loop = loop_large iterations = list(range(loop+skip)) msg = [buf, size, MPI.BYTE] # comm.Barrier() for i in iterations: if i == skip: t_start = MPI.Wtime() comm.Bcast(msg, 0) t_end = MPI.Wtime() comm.Barrier() # if myid == 0: latency = (t_end - t_start) * 1e6 / loop print ('%-10d%20.2f' % (size, latency)) def message_sizes(max_size): return [0] + [(1< MAX_MSG_SIZE: break if size > large_message_size: skip = skip_large loop = loop_large window_size = window_size_large iterations = list(range(loop+skip)) window_sizes = list(range(window_size)) s_msg = [s_buf, size, MPI.BYTE] r_msg = [r_buf, size, MPI.BYTE] send_request = [MPI.REQUEST_NULL] * window_size recv_request = [MPI.REQUEST_NULL] * window_size # comm.Barrier() if myid == 0: for i in iterations: if i == skip: t_start = MPI.Wtime() for j in window_sizes: recv_request[j] = comm.Irecv(r_msg, 1, 10) for j in window_sizes: send_request[j] = comm.Isend(s_msg, 1, 100) MPI.Request.Waitall(send_request) MPI.Request.Waitall(recv_request) t_end = MPI.Wtime() elif myid == 1: for i in iterations: for j in window_sizes: recv_request[j] = comm.Irecv(r_msg, 0, 100) for j in window_sizes: send_request[j] = comm.Isend(s_msg, 0, 10) MPI.Request.Waitall(send_request) MPI.Request.Waitall(recv_request) # if myid == 0: MB = size / 1e6 * loop * window_size s = t_end - t_start print ('%-10d%20.2f' % (size, MB/s)) def allocate(n): try: import mmap return mmap.mmap(-1, n) except (ImportError, EnvironmentError): try: from numpy import zeros return zeros(n, 'B') except ImportError: from array import array return array('B', [0]) * n if __name__ == '__main__': osu_bw() mpi4py_1.3.1+hg20131106.orig/demo/osu_bw.py0000644000000000000000000000471712211706251016117 0ustar 00000000000000# http://mvapich.cse.ohio-state.edu/benchmarks/ from mpi4py import MPI def osu_bw( BENCHMARH = "MPI Bandwidth Test", skip = 10, loop = 100, window_size = 64, skip_large = 2, loop_large = 20, window_size_large = 64, large_message_size = 8192, MAX_MSG_SIZE = 1<<22, ): comm = MPI.COMM_WORLD myid = comm.Get_rank() numprocs = comm.Get_size() if numprocs != 2: if myid == 0: errmsg = "This test requires exactly two processes" else: errmsg = None raise SystemExit(errmsg) s_buf = allocate(MAX_MSG_SIZE) r_buf = allocate(MAX_MSG_SIZE) if myid == 0: print ('# %s' % (BENCHMARH,)) if myid == 0: print ('# %-8s%20s' % ("Size [B]", "Bandwidth [MB/s]")) message_sizes = [2**i for i in range(30)] for size in message_sizes: if size > MAX_MSG_SIZE: break if size > large_message_size: skip = skip_large loop = loop_large window_size = window_size_large iterations = list(range(loop+skip)) window_sizes = list(range(window_size)) requests = [MPI.REQUEST_NULL] * window_size # comm.Barrier() if myid == 0: s_msg = [s_buf, size, MPI.BYTE] r_msg = [r_buf, 4, MPI.BYTE] for i in iterations: if i == skip: t_start = MPI.Wtime() for j in window_sizes: requests[j] = comm.Isend(s_msg, 1, 100) MPI.Request.Waitall(requests) comm.Recv(r_msg, 1, 101) t_end = MPI.Wtime() elif myid == 1: s_msg = [s_buf, 4, MPI.BYTE] r_msg = [r_buf, size, MPI.BYTE] for i in iterations: for j in window_sizes: requests[j] = comm.Irecv(r_msg, 0, 100) MPI.Request.Waitall(requests) comm.Send(s_msg, 0, 101) # if myid == 0: MB = size / 1e6 * loop * window_size s = t_end - t_start print ('%-10d%20.2f' % (size, MB/s)) def allocate(n): try: import mmap return mmap.mmap(-1, n) except (ImportError, EnvironmentError): try: from numpy import zeros return zeros(n, 'B') except 
ImportError: from array import array return array('B', [0]) * n if __name__ == '__main__': osu_bw() mpi4py_1.3.1+hg20131106.orig/demo/osu_gather.py0000644000000000000000000000370112211706251016751 0ustar 00000000000000# http://mvapich.cse.ohio-state.edu/benchmarks/ from mpi4py import MPI def osu_bcast( BENCHMARH = "MPI Gather Latency Test", skip = 1000, loop = 10000, skip_large = 10, loop_large = 100, large_message_size = 8192, MAX_MSG_SIZE = 1<<20, ): comm = MPI.COMM_WORLD myid = comm.Get_rank() numprocs = comm.Get_size() if numprocs < 2: if myid == 0: errmsg = "This test requires at least two processes" else: errmsg = None raise SystemExit(errmsg) if myid == 0: r_buf = allocate(MAX_MSG_SIZE*numprocs) else: s_buf = allocate(MAX_MSG_SIZE) if myid == 0: print ('# %s' % (BENCHMARH,)) if myid == 0: print ('# %-8s%20s' % ("Size [B]", "Latency [us]")) for size in message_sizes(MAX_MSG_SIZE): if size > large_message_size: skip = skip_large loop = loop_large iterations = list(range(loop+skip)) if myid == 0: s_msg = MPI.IN_PLACE r_msg = [r_buf, size, MPI.BYTE] else: s_msg = [s_buf, size, MPI.BYTE] r_msg = None # comm.Barrier() for i in iterations: if i == skip: t_start = MPI.Wtime() comm.Gather(s_msg, r_msg, 0) t_end = MPI.Wtime() comm.Barrier() # if myid == 0: latency = (t_end - t_start) * 1e6 / loop print ('%-10d%20.2f' % (size, latency)) def message_sizes(max_size): return [0] + [(1< MAX_MSG_SIZE: break if size > large_message_size: skip = skip_large loop = loop_large iterations = list(range(loop+skip)) s_msg = [s_buf, size, MPI.BYTE] r_msg = [r_buf, size, MPI.BYTE] # comm.Barrier() if myid == 0: for i in iterations: if i == skip: t_start = MPI.Wtime() comm.Send(s_msg, 1, 1) comm.Recv(r_msg, 1, 1) t_end = MPI.Wtime() elif myid == 1: for i in iterations: comm.Recv(r_msg, 0, 1) comm.Send(s_msg, 0, 1) # if myid == 0: latency = (t_end - t_start) * 1e6 / (2 * loop) print ('%-10d%20.2f' % (size, latency)) def allocate(n): try: import mmap return mmap.mmap(-1, n) except (ImportError, EnvironmentError): try: from numpy import zeros return zeros(n, 'B') except ImportError: from array import array return array('B', [0]) * n if __name__ == '__main__': osu_latency() mpi4py_1.3.1+hg20131106.orig/demo/osu_multi_lat.py0000644000000000000000000000426212211706251017474 0ustar 00000000000000# http://mvapich.cse.ohio-state.edu/benchmarks/ from mpi4py import MPI def osu_multi_lat( BENCHMARH = "MPI Multi Latency Test", skip_small = 100, loop_small = 10000, skip_large = 10, loop_large = 1000, large_message_size = 8192, MAX_MSG_SIZE = 1<<22, ): comm = MPI.COMM_WORLD myid = comm.Get_rank() nprocs = comm.Get_size() pairs = nprocs/2 s_buf = allocate(MAX_MSG_SIZE) r_buf = allocate(MAX_MSG_SIZE) if myid == 0: print ('# %s' % (BENCHMARH,)) if myid == 0: print ('# %-8s%20s' % ("Size [B]", "Latency [us]")) message_sizes = [0] + [2**i for i in range(30)] for size in message_sizes: if size > MAX_MSG_SIZE: break if size > large_message_size: skip = skip_large loop = loop_large else: skip = skip_small loop = loop_small iterations = list(range(loop+skip)) s_msg = [s_buf, size, MPI.BYTE] r_msg = [r_buf, size, MPI.BYTE] # comm.Barrier() if myid < pairs: partner = myid + pairs for i in iterations: if i == skip: t_start = MPI.Wtime() comm.Send(s_msg, partner, 1) comm.Recv(r_msg, partner, 1) t_end = MPI.Wtime() else: partner = myid - pairs for i in iterations: if i == skip: t_start = MPI.Wtime() comm.Recv(r_msg, partner, 1) comm.Send(s_msg, partner, 1) t_end = MPI.Wtime() # latency = (t_end - t_start) * 1e6 / (2 * loop) total_lat = 
comm.reduce(latency, root=0, op=MPI.SUM) if myid == 0: avg_lat = total_lat/(pairs * 2) print ('%-10d%20.2f' % (size, avg_lat)) def allocate(n): try: import mmap return mmap.mmap(-1, n) except (ImportError, EnvironmentError): try: from numpy import zeros return zeros(n, 'B') except ImportError: from array import array return array('B', [0]) * n if __name__ == '__main__': osu_multi_lat() mpi4py_1.3.1+hg20131106.orig/demo/osu_scatter.py0000644000000000000000000000370312211706251017146 0ustar 00000000000000# http://mvapich.cse.ohio-state.edu/benchmarks/ from mpi4py import MPI def osu_bcast( BENCHMARH = "MPI Scatter Latency Test", skip = 1000, loop = 10000, skip_large = 10, loop_large = 100, large_message_size = 8192, MAX_MSG_SIZE = 1<<20, ): comm = MPI.COMM_WORLD myid = comm.Get_rank() numprocs = comm.Get_size() if numprocs < 2: if myid == 0: errmsg = "This test requires at least two processes" else: errmsg = None raise SystemExit(errmsg) if myid == 0: s_buf = allocate(MAX_MSG_SIZE*numprocs) else: r_buf = allocate(MAX_MSG_SIZE) if myid == 0: print ('# %s' % (BENCHMARH,)) if myid == 0: print ('# %-8s%20s' % ("Size [B]", "Latency [us]")) for size in message_sizes(MAX_MSG_SIZE): if size > large_message_size: skip = skip_large loop = loop_large iterations = list(range(loop+skip)) if myid == 0: s_msg = [s_buf, size, MPI.BYTE] r_msg = MPI.IN_PLACE else: s_msg = None r_msg = [r_buf, size, MPI.BYTE] # comm.Barrier() for i in iterations: if i == skip: t_start = MPI.Wtime() comm.Scatter(s_msg, r_msg, 0) t_end = MPI.Wtime() comm.Barrier() # if myid == 0: latency = (t_end - t_start) * 1e6 / loop print ('%-10d%20.2f' % (size, latency)) def message_sizes(max_size): return [0] + [(1< target: partial = op(tmp, partial) recvobj = op(tmp, recvobj) else: tmp = op(partial, tmp) partial = tmp mask <<= 1 return recvobj def exscan(self, sendobj=None, recvobj=None, op=MPI.SUM): size = self.size rank = self.rank tag = MPI.COMM_WORLD.Get_attr(MPI.TAG_UB)-1 if op in (MPI.MINLOC, MPI.MAXLOC): sendobj = (sendobj, rank) recvobj = sendobj partial = sendobj mask = 1 flag = False while mask < size: target = rank ^ mask if target < size: tmp = self.sendrecv(partial, dest=target, source=target, sendtag=tag, recvtag=tag) if rank > target: partial = op(tmp, partial) if rank != 0: if not flag: recvobj = tmp flag = True else: recvobj = op(tmp, recvobj) else: tmp = op(partial, tmp) partial = tmp mask <<= 1 if rank == 0: recvobj = None return recvobj mpi4py_1.3.1+hg20131106.orig/demo/reductions/runtests.bat0000644000000000000000000000077012211706251021000 0ustar 00000000000000@echo off setlocal ENABLEEXTENSIONS set MPI=Microsoft HPC Pack 2008 SDK set MPI=DeinoMPI set MPI=MPICH2 set PATH="%ProgramFiles%\%MPI%\bin";%PATH% set MPIEXEC=mpiexec set NP_FLAG=-n set NP=5 set PYTHON=C:\Python25\python.exe set PYTHON=C:\Python26\python.exe set PYTHON=C:\Python27\python.exe set PYTHON=C:\Python30\python.exe set PYTHON=C:\Python31\python.exe set PYTHON=C:\Python32\python.exe set PYTHON=python @echo on %MPIEXEC% %NP_FLAG% %NP% %PYTHON% test_reductions.py -q mpi4py_1.3.1+hg20131106.orig/demo/reductions/runtests.sh0000755000000000000000000000016612211706251020646 0ustar 00000000000000#!/bin/sh MPIEXEC=mpiexec NP_FLAG=-n NP=5 PYTHON=python set -x $MPIEXEC $NP_FLAG $NP $PYTHON test_reductions.py -q mpi4py_1.3.1+hg20131106.orig/demo/reductions/test_reductions.py0000644000000000000000000001414312211706251022210 0ustar 00000000000000#import mpi4py #mpi4py.profile("mpe") from mpi4py import MPI import unittest import sys, os sys.path.insert(0, 
os.path.dirname(__file__)) from reductions import Intracomm del sys.path[0] class BaseTest(object): def test_reduce(self): rank = self.comm.rank size = self.comm.size for root in range(size): msg = rank res = self.comm.reduce(sendobj=msg, root=root) if self.comm.rank == root: self.assertEqual(res, sum(range(size))) else: self.assertEqual(res, None) def test_reduce_min(self): rank = self.comm.rank size = self.comm.size for root in range(size): msg = rank res = self.comm.reduce(sendobj=msg, op=MPI.MIN, root=root) if self.comm.rank == root: self.assertEqual(res, 0) else: self.assertEqual(res, None) def test_reduce_max(self): rank = self.comm.rank size = self.comm.size for root in range(size): msg = rank res = self.comm.reduce(sendobj=msg, op=MPI.MAX, root=root) if self.comm.rank == root: self.assertEqual(res, size-1) else: self.assertEqual(res, None) def test_reduce_minloc(self): rank = self.comm.rank size = self.comm.size for root in range(size): msg = rank res = self.comm.reduce(sendobj=msg, op=MPI.MINLOC, root=root) if self.comm.rank == root: self.assertEqual(res, (0, 0)) else: self.assertEqual(res, None) def test_reduce_maxloc(self): rank = self.comm.rank size = self.comm.size for root in range(size): msg = rank res = self.comm.reduce(sendobj=msg, op=MPI.MAXLOC, root=root) if self.comm.rank == root: self.assertEqual(res, (size-1, size-1)) else: self.assertEqual(res, None) def test_allreduce(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.allreduce(sendobj=msg) self.assertEqual(res, sum(range(size))) def test_allreduce_min(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.allreduce(sendobj=msg, op=MPI.MIN) self.assertEqual(res, 0) def test_allreduce_max(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.allreduce(sendobj=msg, op=MPI.MAX) self.assertEqual(res, size-1) def test_allreduce_minloc(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.allreduce(sendobj=msg, op=MPI.MINLOC) self.assertEqual(res, (0, 0)) def test_allreduce_maxloc(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.allreduce(sendobj=msg, op=MPI.MAXLOC) self.assertEqual(res, (size-1, size-1)) def test_scan(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.scan(sendobj=msg) self.assertEqual(res, sum(list(range(size))[:rank+1])) def test_scan_min(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.scan(sendobj=msg, op=MPI.MIN) self.assertEqual(res, 0) def test_scan_max(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.scan(sendobj=msg, op=MPI.MAX) self.assertEqual(res, rank) def test_scan_minloc(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.scan(sendobj=msg, op=MPI.MINLOC) self.assertEqual(res, (0, 0)) def test_scan_maxloc(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.scan(sendobj=msg, op=MPI.MAXLOC) self.assertEqual(res, (rank, rank)) def test_exscan(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.exscan(sendobj=msg) if self.comm.rank == 0: self.assertEqual(res, None) else: self.assertEqual(res, sum(list(range(size))[:rank])) def test_exscan_min(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.exscan(sendobj=msg, op=MPI.MIN) if self.comm.rank == 0: self.assertEqual(res, None) else: self.assertEqual(res, 0) def test_exscan_max(self): rank = self.comm.rank size = self.comm.size msg = 
rank res = self.comm.exscan(sendobj=msg, op=MPI.MAX) if self.comm.rank == 0: self.assertEqual(res, None) else: self.assertEqual(res, rank-1) def test_exscan_minloc(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.exscan(sendobj=msg, op=MPI.MINLOC) if self.comm.rank == 0: self.assertEqual(res, None) else: self.assertEqual(res, (0, 0)) def test_exscan_maxloc(self): rank = self.comm.rank size = self.comm.size msg = rank res = self.comm.exscan(sendobj=msg, op=MPI.MAXLOC) if self.comm.rank == 0: self.assertEqual(res, None) else: self.assertEqual(res, (rank-1, rank-1)) class TestS(BaseTest, unittest.TestCase): def setUp(self): self.comm = Intracomm(MPI.COMM_SELF) class TestW(BaseTest, unittest.TestCase): def setUp(self): self.comm = Intracomm(MPI.COMM_WORLD) class TestSD(BaseTest, unittest.TestCase): def setUp(self): self.comm = Intracomm(MPI.COMM_SELF.Dup()) def tearDown(self): self.comm.Free() class TestWD(BaseTest, unittest.TestCase): def setUp(self): self.comm = Intracomm(MPI.COMM_WORLD.Dup()) def tearDown(self): self.comm.Free() if __name__ == "__main__": unittest.main() mpi4py_1.3.1+hg20131106.orig/demo/sequential/makefile0000644000000000000000000000022512211706251020107 0ustar 00000000000000MPIEXEC=mpiexec NP_FLAG=-n NP=5 PYTHON=python .PHONY: test test: ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test_seq.py ${RM} -r *.py[co] __pycache__ mpi4py_1.3.1+hg20131106.orig/demo/sequential/runtests.bat0000644000000000000000000000075612211706251020777 0ustar 00000000000000@echo off setlocal ENABLEEXTENSIONS set MPI=Microsoft HPC Pack 2008 SDK set MPI=DeinoMPI set MPI=MPICH2 set PATH="%ProgramFiles%\%MPI%\bin";%PATH% set MPIEXEC=mpiexec set NP_FLAG=-n set NP=5 set PYTHON=C:\Python25\python.exe set PYTHON=C:\Python26\python.exe set PYTHON=C:\Python27\python.exe set PYTHON=C:\Python30\python.exe set PYTHON=C:\Python31\python.exe set PYTHON=C:\Python32\python.exe set PYTHON=python @echo on %MPIEXEC% %NP_FLAG% %NP% %PYTHON% test_seq.py mpi4py_1.3.1+hg20131106.orig/demo/sequential/runtests.sh0000755000000000000000000000015412211706251020636 0ustar 00000000000000#!/bin/sh MPIEXEC=mpiexec NP_FLAG=-n NP=5 PYTHON=python set -x $MPIEXEC $NP_FLAG $NP $PYTHON test_seq.py mpi4py_1.3.1+hg20131106.orig/demo/sequential/seq.py0000644000000000000000000000243412211706251017555 0ustar 00000000000000class Seq(object): """ Sequential execution """ def __init__(self, comm, ng=1, tag=0): ng = int(ng) tag = int(tag) assert ng >= 1 assert ng <= comm.Get_size() self.comm = comm self.ng = ng self.tag = tag def __enter__(self): self.begin() return self def __exit__(self, *exc): self.end() return None def begin(self): """ Begin a sequential execution of a section of code """ comm = self.comm size = comm.Get_size() if size == 1: return rank = comm.Get_rank() ng = self.ng tag = self.tag if rank != 0: comm.Recv([None, 'B'], rank - 1, tag) if rank != (size - 1) and (rank % ng) < (ng - 1): comm.Send([None, 'B'], rank + 1, tag) def end(self): """ End a sequential execution of a section of code """ comm = self.comm size = comm.Get_size() if size == 1: return rank = comm.Get_rank() ng = self.ng tag = self.tag if rank == (size - 1) or (rank % ng) == (ng - 1): comm.Send([None, 'B'], (rank + 1) % size, tag) if rank == 0: comm.Recv([None, 'B'], size - 1, tag) mpi4py_1.3.1+hg20131106.orig/demo/sequential/test_seq.py0000644000000000000000000000074112211706251020613 0ustar 00000000000000#import mpi4py #mpi4py.profile("mpe") from mpi4py import MPI import unittest import sys, os sys.path.insert(0, 
os.path.dirname(__file__)) from seq import Seq del sys.path[0] def test(): size = MPI.COMM_WORLD.Get_size() rank = MPI.COMM_WORLD.Get_rank() name = MPI.Get_processor_name() with Seq(MPI.COMM_WORLD, 1, 10): print( "Hello, World! I am process %d of %d on %s." % (rank, size, name)) if __name__ == "__main__": test() mpi4py_1.3.1+hg20131106.orig/demo/spawning/cpi-master.c0000644000000000000000000000131712211706251020276 0ustar 00000000000000#include #include #include #include int main(int argc, char *argv[]) { char cmd[32] = "cpi-worker-c.exe"; MPI_Comm worker; int n; double pi; MPI_Init(&argc, &argv); if (argc > 1) strcpy(cmd, argv[1]); printf("%s -> %s\n", argv[0], cmd); MPI_Comm_spawn(cmd, MPI_ARGV_NULL, 5, MPI_INFO_NULL, 0, MPI_COMM_SELF, &worker, MPI_ERRCODES_IGNORE); n = 100; MPI_Bcast(&n, 1, MPI_INT, MPI_ROOT, worker); MPI_Reduce(MPI_BOTTOM, &pi, 1, MPI_DOUBLE, MPI_SUM, MPI_ROOT, worker); MPI_Comm_disconnect(&worker); printf("pi: %.16f, error: %.16f\n", pi, fabs(M_PI-pi)); MPI_Finalize(); return 0; } mpi4py_1.3.1+hg20131106.orig/demo/spawning/cpi-master.cxx0000644000000000000000000000123712211706251020657 0ustar 00000000000000#include #include #include #include int main(int argc, char *argv[]) { MPI::Init(); char cmd[32] = "cpi-worker-cxx.exe"; if (argc > 1) std::strcpy(cmd, argv[1]); std::printf("%s -> %s\n", argv[0], cmd); MPI::Intercomm worker; worker = MPI::COMM_SELF.Spawn(cmd, MPI::ARGV_NULL, 5, MPI::INFO_NULL, 0); int n = 100; worker.Bcast(&n, 1, MPI::INT, MPI::ROOT); double pi; worker.Reduce(MPI::BOTTOM, &pi, 1, MPI::DOUBLE, MPI::SUM, MPI::ROOT); worker.Disconnect(); std::printf("pi: %.16f, error: %.16f\n", pi, std::fabs(M_PI-pi)); MPI::Finalize(); return 0; } mpi4py_1.3.1+hg20131106.orig/demo/spawning/cpi-master.f900000644000000000000000000000174012211706251020452 0ustar 00000000000000PROGRAM main USE mpi implicit none real (kind=8), parameter :: PI = 3.1415926535897931D0 integer argc character(len=32) argv(0:1) character(len=32) cmd integer ierr, n, worker real(kind=8) cpi call MPI_INIT(ierr) argc = iargc() + 1 call getarg(0, argv(0)) call getarg(1, argv(1)) cmd = 'cpi-worker-f90.exe' if (argc > 1) then cmd = argv(1) end if write(*,'(A,A,A)') trim(argv(0)), ' -> ', trim(cmd) call MPI_COMM_SPAWN(cmd, MPI_ARGV_NULL, 5, & MPI_INFO_NULL, 0, & MPI_COMM_SELF, worker, & MPI_ERRCODES_IGNORE, ierr) n = 100 call MPI_BCAST(n, 1, MPI_INTEGER, & MPI_ROOT, worker, ierr) call MPI_REDUCE(MPI_BOTTOM, cpi, 1, MPI_DOUBLE_PRECISION, & MPI_SUM, MPI_ROOT, worker, ierr) call MPI_COMM_DISCONNECT(worker, ierr) write(*,'(A,F18.16,A,F18.16)') 'pi: ', cpi, ', error: ', abs(PI-cpi) call MPI_FINALIZE(ierr) END PROGRAM main mpi4py_1.3.1+hg20131106.orig/demo/spawning/cpi-master.py0000644000000000000000000000100312211706251020474 0ustar 00000000000000from mpi4py import MPI from array import array from math import pi as PI from sys import argv cmd = 'cpi-worker-py.exe' if len(argv) > 1: cmd = argv[1] print("%s -> %s" % (argv[0], cmd)) worker = MPI.COMM_SELF.Spawn(cmd, None, 5) n = array('i', [100]) worker.Bcast([n,MPI.INT], root=MPI.ROOT) pi = array('d', [0.0]) worker.Reduce(sendbuf=None, recvbuf=[pi, MPI.DOUBLE], op=MPI.SUM, root=MPI.ROOT) pi = pi[0] worker.Disconnect() print('pi: %.16f, error: %.16f' % (pi, abs(PI-pi))) mpi4py_1.3.1+hg20131106.orig/demo/spawning/cpi-worker.c0000644000000000000000000000112512211706251020311 0ustar 00000000000000#include int main(int argc, char *argv[]) { int myrank, nprocs; int n, i; double h, s, pi; MPI_Comm master; MPI_Init(&argc, &argv); MPI_Comm_get_parent(&master); 
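  /* "master" is the inter-communicator to the spawning process; the
     size/rank queried below are those of this worker's local group, and
     the Bcast/Reduce calls on it pair with the master's MPI_ROOT-rooted
     collectives. */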
MPI_Comm_size(master, &nprocs); MPI_Comm_rank(master, &myrank); MPI_Bcast(&n, 1, MPI_INT, 0, master); h = 1.0 / (double) n; s = 0.0; for (i = myrank+1; i < n+1; i += nprocs) { double x = h * (i - 0.5); s += 4.0 / (1.0 + x*x); } pi = s * h; MPI_Reduce(&pi, MPI_BOTTOM, 1, MPI_DOUBLE, MPI_SUM, 0, master); MPI_Comm_disconnect(&master); MPI_Finalize(); return 0; } mpi4py_1.3.1+hg20131106.orig/demo/spawning/cpi-worker.cxx0000644000000000000000000000104712211706251020674 0ustar 00000000000000#include int main(int argc, char *argv[]) { MPI::Init(); MPI::Intercomm master = MPI::Comm::Get_parent(); int nprocs = master.Get_size(); int myrank = master.Get_rank(); int n; master.Bcast(&n, 1, MPI_INT, 0); double h = 1.0 / (double) n; double s = 0.0; for (int i = myrank+1; i < n+1; i += nprocs) { double x = h * (i - 0.5); s += 4.0 / (1.0 + x*x); } double pi = s * h; master.Reduce(&pi, MPI_BOTTOM, 1, MPI_DOUBLE, MPI_SUM, 0); master.Disconnect(); MPI::Finalize(); return 0; } mpi4py_1.3.1+hg20131106.orig/demo/spawning/cpi-worker.f900000644000000000000000000000125212211706251020466 0ustar 00000000000000PROGRAM main USE mpi implicit none integer ierr integer n, i, master, myrank, nprocs real (kind=8) h, s, x, cpi call MPI_INIT(ierr) call MPI_COMM_GET_PARENT(master, ierr) call MPI_COMM_SIZE(master, nprocs, ierr) call MPI_COMM_RANK(master, myrank, ierr) call MPI_BCAST(n, 1, MPI_INTEGER, & 0, master, ierr) h = 1 / DFLOAT(n) s = 0.0 DO i=myrank+1,n,nprocs x = h * (DFLOAT(i) - 0.5) s = s + 4.0 / (1.0 + x*x) END DO cpi = s * h call MPI_REDUCE(cpi, MPI_BOTTOM, 1, MPI_DOUBLE_PRECISION, & MPI_SUM, 0, master, ierr) call MPI_COMM_DISCONNECT(master, ierr) call MPI_FINALIZE(ierr) END PROGRAM main mpi4py_1.3.1+hg20131106.orig/demo/spawning/cpi-worker.py0000644000000000000000000000072212211706251020521 0ustar 00000000000000from mpi4py import MPI from array import array master = MPI.Comm.Get_parent() nprocs = master.Get_size() myrank = master.Get_rank() n = array('i', [0]) master.Bcast([n, MPI.INT], root=0) n = n[0] h = 1.0 / n s = 0.0 for i in range(myrank+1, n+1, nprocs): x = h * (i - 0.5) s += 4.0 / (1.0 + x**2) pi = s * h pi = array('d', [pi]) master.Reduce(sendbuf=[pi, MPI.DOUBLE], recvbuf=None, op=MPI.SUM, root=0) master.Disconnect() mpi4py_1.3.1+hg20131106.orig/demo/spawning/makefile0000644000000000000000000000206512211706251017567 0ustar 00000000000000.PHONY: default build test clean MPIEXEC=mpiexec -n 1 default: build test clean MASTERS = cpi-master-py.exe cpi-master-c.exe cpi-master-cxx.exe cpi-master-f90.exe WORKERS = cpi-worker-py.exe cpi-worker-c.exe cpi-worker-cxx.exe cpi-worker-f90.exe build: ${MASTERS} ${WORKERS} LANGS=py c cxx f90 test: build @for i in ${LANGS}; do \ for j in ${LANGS}; do \ ${MPIEXEC} ./cpi-master-$$i.exe ./cpi-worker-$$j.exe; \ done; \ done clean: ${RM} -r ${MASTERS} ${WORKERS} MPICC=mpicc MPICXX=mpicxx MPIF90=mpif90 # Python cpi-master-py.exe: cpi-master.py echo '#!'`which python` > $@ cat $< >> $@ chmod +x $@ cpi-worker-py.exe: cpi-worker.py echo '#!'`which python` > $@ cat $< >> $@ chmod +x $@ # C cpi-master-c.exe: cpi-master.c ${MPICC} $< -o $@ cpi-worker-c.exe: cpi-worker.c ${MPICC} $< -o $@ # C++ cpi-master-cxx.exe: cpi-master.cxx ${MPICXX} $< -o $@ cpi-worker-cxx.exe: cpi-worker.cxx ${MPICXX} $< -o $@ # Fortran 90 cpi-master-f90.exe: cpi-master.f90 ${MPIF90} $< -o $@ cpi-worker-f90.exe: cpi-worker.f90 ${MPIF90} $< -o $@ mpi4py_1.3.1+hg20131106.orig/demo/threads/makefile0000644000000000000000000000017012211706251017366 0ustar 00000000000000.PHONY: default build test clean default: 
build test clean PYTHON=python build: test: ${PYTHON} sendrecv.py clean:mpi4py_1.3.1+hg20131106.orig/demo/threads/sendrecv.py0000644000000000000000000000213712211706251020056 0ustar 00000000000000from mpi4py import MPI import sys if MPI.Query_thread() < MPI.THREAD_MULTIPLE: sys.stderr.write("MPI does not provide enough thread support\n") sys.exit(0) try: import threading except ImportError: sys.stderr.write("threading module not available\n") sys.exit(0) try: import numpy except ImportError: sys.stderr.write("NumPy package not available\n") sys.exit(0) send_msg = numpy.arange(1000000, dtype='i') recv_msg = numpy.zeros_like(send_msg) start_event = threading.Event() def self_send(): start_event.wait() comm = MPI.COMM_WORLD rank = comm.Get_rank() comm.Send([send_msg, MPI.INT], dest=rank, tag=0) def self_recv(): start_event.wait() comm = MPI.COMM_WORLD rank = comm.Get_rank() comm.Recv([recv_msg, MPI.INT], source=rank, tag=0) send_thread = threading.Thread(target=self_send) recv_thread = threading.Thread(target=self_recv) for t in (recv_thread, send_thread): t.start() assert not numpy.allclose(send_msg, recv_msg) start_event.set() for t in (recv_thread, send_thread): t.join() assert numpy.allclose(send_msg, recv_msg) mpi4py_1.3.1+hg20131106.orig/demo/vampirtrace/cpilog.py0000644000000000000000000000207012211706251020401 0ustar 00000000000000#!/usr/bin/env python # If you want VampirTrace to log MPI calls, you have to add the two # lines below at the very beginning of your main bootstrap script. import mpi4py mpi4py.rc.threaded = False mpi4py.rc.profile('vt-mpi', logfile='cpilog') # Import the MPI extension module from mpi4py import MPI # Import the 'array' module from array import array # This is just to make the logging # output a bit more interesting from time import sleep comm = MPI.COMM_WORLD nprocs = comm.Get_size() myrank = comm.Get_rank() n = array('i', [0]) pi = array('d', [0]) mypi = array('d', [0]) def comp_pi(n, myrank=0, nprocs=1): h = 1.0 / n; s = 0.0; for i in range(myrank + 1, n + 1, nprocs): x = h * (i - 0.5); s += 4.0 / (1.0 + x**2); return s * h comm.Barrier() for N in [10000]*10: if myrank == 0: n[0] = N comm.Bcast([n, MPI.INT], root=0) mypi[0] = comp_pi(n[0], myrank, nprocs) comm.Reduce([mypi, MPI.DOUBLE], [pi, MPI.DOUBLE], op=MPI.SUM, root=0) comm.Barrier() sleep(0.01) mpi4py_1.3.1+hg20131106.orig/demo/vampirtrace/makefile0000644000000000000000000000131712211706251020255 0ustar 00000000000000MPIEXEC = mpiexec PYTHON = python N = 8 .PHONY: default default: build test clean .PHONY: run-cpilog run-ring run-threads run run: run-cpilog run-ring run-threads run-cpilog: ${MPIEXEC} -n ${N} ${PYTHON} cpilog.py run-ring: ${MPIEXEC} -n ${N} ${PYTHON} ring.py run-threads: ${MPIEXEC} -n ${N} ${PYTHON} threads.py .PHONY: view-cpilog view-ring view-threads view view: view-cpilog view-ring view-threads view-cpilog: cpilog.otf view-ring: ring.otf view-threads: threads.otf cpilog.otf: run-cpilog ring.otf: run-ring threads.otf: run-threads .PHONY: build build: .PHONY: test test: run .PHONY: clean clean: ${RM} *.otf *.uctl *.*.def.z *.*.events.z *.*.marker.z ${RM} *.thumb *.*.def *.*.events mpi4py_1.3.1+hg20131106.orig/demo/vampirtrace/ring.py0000644000000000000000000000146512211706251020072 0ustar 00000000000000#!/usr/bin/env python # If you want VampirTrace to log MPI calls, you have to add the two # lines below at the very beginning of your main bootstrap script. 
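# (Explanatory note added to this demo, not part of the original script:
#  the mpi4py.rc settings below are honored when mpi4py.MPI is first imported;
#  rc.threaded = False requests plain MPI_Init() rather than MPI_Init_thread(),
#  and rc.profile('vt-mpi', logfile='ring') dlopen()s the VampirTrace MPI
#  profiling library so that the MPI calls made by this script are traced.)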
import mpi4py mpi4py.rc.threaded = False mpi4py.rc.profile('vt-mpi', logfile='ring') from mpi4py import MPI comm = MPI.COMM_WORLD size = comm.Get_size() rank = comm.Get_rank() src = rank-1 dest = rank+1 if rank == 0: src = size-1 if rank == size-1: dest = 0 try: from numpy import zeros a1 = zeros(1000000, 'd') a2 = zeros(1000000, 'd') except ImportError: from array import array a1 = array('d', [0]*1000); a1 *= 1000 a2 = array('d', [0]*1000); a2 *= 1000 comm.Sendrecv(sendbuf=a1, recvbuf=a2, source=src, dest=dest) MPI.Request.Waitall([ comm.Isend(a1, dest=dest), comm.Irecv(a2, source=src), ]) mpi4py_1.3.1+hg20131106.orig/demo/vampirtrace/threads.py0000644000000000000000000000162212211706251020560 0ustar 00000000000000#!/usr/bin/env python import mpi4py mpi4py.rc.threaded = True mpi4py.rc.thread_level = "funneled" mpi4py.rc.profile('vt-hyb', logfile='threads') from mpi4py import MPI from threading import Thread MPI.COMM_WORLD.Barrier() # Understanding the Python GIL # David Beazley, http://www.dabeaz.com # PyCon 2010, Atlanta, Georgia # http://www.dabeaz.com/python/UnderstandingGIL.pdf # Consider this trivial CPU-bound function def countdown(n): while n > 0: n -= 1 # Run it once with a lot of work COUNT = 10000000 # 10 millon tic = MPI.Wtime() countdown(COUNT) toc = MPI.Wtime() print ("sequential: %f seconds" % (toc-tic)) # Now, subdivide the work across two threads t1 = Thread(target=countdown, args=(COUNT//2,)) t2 = Thread(target=countdown, args=(COUNT//2,)) tic = MPI.Wtime() for t in (t1, t2): t.start() for t in (t1, t2): t.join() toc = MPI.Wtime() print ("threaded: %f seconds" % (toc-tic)) mpi4py_1.3.1+hg20131106.orig/demo/wrap-boost/helloworld.cxx0000644000000000000000000000173312211706251021236 0ustar 00000000000000#include #include static void sayhello(MPI_Comm comm) { if (comm == MPI_COMM_NULL) { std::cout << "You passed MPI_COMM_NULL !!!" << std::endl; return; } int size; MPI_Comm_size(comm, &size); int rank; MPI_Comm_rank(comm, &rank); int plen; char pname[MPI_MAX_PROCESSOR_NAME]; MPI_Get_processor_name(pname, &plen); std::cout << "Hello, World! " << "I am process " << rank << " of " << size << " on " << pname << "." 
<< std::endl; } #include #include using namespace boost::python; static void hw_sayhello(object py_comm) { PyObject* py_obj = py_comm.ptr(); MPI_Comm *comm_p = PyMPIComm_Get(py_obj); if (comm_p == NULL) throw_error_already_set(); sayhello(*comm_p); } BOOST_PYTHON_MODULE(helloworld) { if (import_mpi4py() < 0) return; /* Python 2.X */ def("sayhello", hw_sayhello); } /* * Local Variables: * mode: C++ * End: */ mpi4py_1.3.1+hg20131106.orig/demo/wrap-boost/makefile0000644000000000000000000000134312211706251020034 0ustar 00000000000000.PHONY: default default: build test clean PYTHON = python PYTHON_CONFIG = ${PYTHON} ../python-config MPI4PY_INCLUDE = ${shell ${PYTHON} -c 'import mpi4py; print( mpi4py.get_include() )'} BOOST_INCS = BOOST_LIBS = -lboost_python -lboost_python-mt MPICXX = mpicxx CXXFLAGS = -fPIC ${shell ${PYTHON_CONFIG} --includes} ${BOOST_INCS} LDFLAGS = -shared ${shell ${PYTHON_CONFIG} --libs} ${BOOST_LIBS} SO = ${shell ${PYTHON_CONFIG} --extension-suffix} .PHONY: build build: helloworld${SO} helloworld${SO}: helloworld.cxx ${MPICXX} ${CXXFLAGS} -I${MPI4PY_INCLUDE} -o $@ $< ${LDFLAGS} MPIEXEC = mpiexec NP_FLAG = -n NP = 5 .PHONY: test test: build ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test.py .PHONY: clean clean: ${RM} helloworld${SO} mpi4py_1.3.1+hg20131106.orig/demo/wrap-boost/test.py0000644000000000000000000000033212211706251017662 0ustar 00000000000000from mpi4py import MPI import helloworld as hw null = MPI.COMM_NULL hw.sayhello(null) comm = MPI.COMM_WORLD hw.sayhello(comm) try: hw.sayhello(list()) except: pass else: assert 0, "exception not raised" mpi4py_1.3.1+hg20131106.orig/demo/wrap-c/helloworld.c0000644000000000000000000000432312211706251017750 0ustar 00000000000000#define MPICH_SKIP_MPICXX 1 #define OMPI_SKIP_MPICXX 1 #include /* -------------------------------------------------------------------------- */ static void sayhello(MPI_Comm comm) { int size, rank; char pname[MPI_MAX_PROCESSOR_NAME]; int len; if (comm == MPI_COMM_NULL) { printf("You passed MPI_COMM_NULL !!!\n"); return; } MPI_Comm_size(comm, &size); MPI_Comm_rank(comm, &rank); MPI_Get_processor_name(pname, &len); pname[len] = 0; printf("Hello, World! 
I am process %d of %d on %s.\n", rank, size, pname); } /* -------------------------------------------------------------------------- */ static PyObject * hw_sayhello(PyObject *self, PyObject *args) { PyObject *py_comm = NULL; MPI_Comm *comm_p = NULL; if (!PyArg_ParseTuple(args, "O:sayhello", &py_comm)) return NULL; comm_p = PyMPIComm_Get(py_comm); if (comm_p == NULL) return NULL; sayhello(*comm_p); Py_INCREF(Py_None); return Py_None; } static struct PyMethodDef hw_methods[] = { {"sayhello", (PyCFunction)hw_sayhello, METH_VARARGS, NULL}, {NULL, NULL, 0, NULL} /* sentinel */ }; #if PY_MAJOR_VERSION < 3 /* --- Python 2 --- */ PyMODINIT_FUNC inithelloworld(void) { PyObject *m = NULL; /* Initialize mpi4py C-API */ if (import_mpi4py() < 0) goto bad; /* Module initialization */ m = Py_InitModule("helloworld", hw_methods); if (m == NULL) goto bad; return; bad: return; } #else /* --- Python 3 --- */ static struct PyModuleDef hw_module = { PyModuleDef_HEAD_INIT, "helloworld", /* m_name */ NULL, /* m_doc */ -1, /* m_size */ hw_methods /* m_methods */, NULL, /* m_reload */ NULL, /* m_traverse */ NULL, /* m_clear */ NULL /* m_free */ }; PyMODINIT_FUNC PyInit_helloworld(void) { PyObject *m = NULL; /* Initialize mpi4py's C-API */ if (import_mpi4py() < 0) goto bad; /* Module initialization */ m = PyModule_Create(&hw_module); if (m == NULL) goto bad; return m; bad: return NULL; } #endif /* -------------------------------------------------------------------------- */ /* Local variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/demo/wrap-c/makefile0000644000000000000000000000120212211706251017122 0ustar 00000000000000.PHONY: default default: build test clean PYTHON = python PYTHON_CONFIG = ${PYTHON} ../python-config MPI4PY_INCLUDE = ${shell ${PYTHON} -c 'import mpi4py; print( mpi4py.get_include() )'} MPICC = mpicc CFLAGS = -fPIC ${shell ${PYTHON_CONFIG} --includes} LDFLAGS = -shared ${shell ${PYTHON_CONFIG} --libs} SO = ${shell ${PYTHON_CONFIG} --extension-suffix} .PHONY: build build: helloworld${SO} helloworld${SO}: helloworld.c ${MPICC} ${CFLAGS} -I${MPI4PY_INCLUDE} -o $@ $< ${LDFLAGS} MPIEXEC = mpiexec NP_FLAG = -n NP = 5 .PHONY: test test: build ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test.py .PHONY: clean clean: ${RM} helloworld${SO} mpi4py_1.3.1+hg20131106.orig/demo/wrap-c/test.py0000644000000000000000000000033212211706251016756 0ustar 00000000000000from mpi4py import MPI import helloworld as hw null = MPI.COMM_NULL hw.sayhello(null) comm = MPI.COMM_WORLD hw.sayhello(comm) try: hw.sayhello(list()) except: pass else: assert 0, "exception not raised" mpi4py_1.3.1+hg20131106.orig/demo/wrap-cffi/apigen.py0000644000000000000000000000146512211706251017737 0ustar 00000000000000import sys, os.path as p wdir = p.abspath(p.dirname(__file__)) topdir = p.normpath(p.join(wdir, p.pardir, p.pardir)) srcdir = p.join(topdir, 'src') sys.path.insert(0, p.join(topdir, 'conf')) from mpiscanner import Scanner scanner = Scanner() libmpi_pxd = p.join(srcdir, 'include', 'mpi4py', 'libmpi.pxd') scanner.parse_file(libmpi_pxd) libmpi_h = p.join(wdir, 'libmpi.h') scanner.dump_header_h(libmpi_h) #try: # from cStringIO import StringIO #except ImportError: # from io import StringIO #libmpi_h = StringIO() #scanner.dump_header_h(libmpi_h) #print libmpi_h.read() libmpi_c = p.join(wdir, 'libmpi.c') f = open(libmpi_c, 'w') f.write( """\ #include #include "%(srcdir)s/config.h" #include "%(srcdir)s/missing.h" #include "%(srcdir)s/fallback.h" #include "%(srcdir)s/compat.h" """ % vars() ) f.close() 
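# Usage note (added): running "python apigen.py" regenerates libmpi.h and
# libmpi.c in this directory from mpi4py's libmpi.pxd declarations; the
# accompanying makefile then has libmpi.py feed both generated files to cffi.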
mpi4py_1.3.1+hg20131106.orig/demo/wrap-cffi/libmpi.py0000644000000000000000000000367512211706251017755 0ustar 00000000000000import os.path as _pth import cffi as _cffi _wdir = _pth.abspath(_pth.dirname(__file__)) def _ffi_create(header, source, **kargs): ffi = _cffi.FFI() _ffi_define(ffi, header, **kargs) lib = _ffi_verify(ffi, source, **kargs) return ffi, lib def _ffi_define(ffi, csource, **kargs): opt = kargs.pop('override', False) ffi.cdef(csource, opt) def _ffi_verify(ffi, csource, **kargs): cc = kargs.pop('compiler', None) ld = kargs.pop('linker', None) _ffi_verify_push(cc, ld) try: lib = ffi.verify(csource, **kargs) finally: _ffi_verify_pop() return lib def _ffi_verify_push(cc, ld): from distutils import sysconfig from distutils.spawn import find_executable global customize_compiler_orig customize_compiler_orig = sysconfig.customize_compiler if not cc and not ld: return def customize_compiler(compiler): customize_compiler_orig(compiler) if cc: compiler.compiler_so[0] = find_executable(cc) if ld: compiler.linker_so[0]= find_executable(ld) sysconfig.customize_compiler = customize_compiler def _ffi_verify_pop(): from distutils import sysconfig global customize_compiler_orig sysconfig.customize_compiler = customize_compiler_orig del customize_compiler_orig def _read(filename): f = open(filename) try: return f.read() finally: f.close() ffi, mpi = _ffi_create( _read(_pth.join(_wdir, "libmpi.h")), _read(_pth.join(_wdir, "libmpi.c")), compiler='mpicc', linker='mpicc', #ext_package='mpi4py', #modulename='_cffi_mpi', #tmpdir=_wdir, #tmpdir='build' ) import sys as _sys _sys.modules[__name__+'.mpi'] = mpi del _sys #new = ffi.new #cast = ffi.cast #asbuffer = ffi.buffer #globals().update(mpi.__dict__) #del _sys, _cffi #del _ffi_setup #del _ffi_verify #del _ffi_verify_enter #del _ffi_verify_exit if __name__ == '__main__': mpi.MPI_Init(ffi.NULL, ffi.NULL); mpi.MPI_Finalize() mpi4py_1.3.1+hg20131106.orig/demo/wrap-cffi/makefile0000644000000000000000000000072112211706251017614 0ustar 00000000000000.PHONY: default default: build test clean PYTHON = python .PHONY: build build: libmpi.h libmpi.c ${PYTHON} libmpi.py libmpi.h: ${PYTHON} apigen.py libmpi.c: libmpi.h MPIEXEC = mpiexec NP_FLAG = -n .PHONY: test test: build ${MPIEXEC} ${NP_FLAG} 5 ${PYTHON} test_helloworld.py ${MPIEXEC} ${NP_FLAG} 4 ${PYTHON} test_ringtest.py ${MPIEXEC} ${NP_FLAG} 2 ${PYTHON} test_latency.py .PHONY: clean clean: ${RM} -r libmpi.h libmpi.c ${RM} -r *py[co] __pycache__ mpi4py_1.3.1+hg20131106.orig/demo/wrap-cffi/test_helloworld.py0000644000000000000000000000100312211706251021672 0ustar 00000000000000from libmpi import ffi from libmpi.mpi import * NULL = ffi.NULL size_p = ffi.new('int*') rank_p = ffi.new('int*') nlen_p = ffi.new('int*') name_p = ffi.new('char[]', MPI_MAX_PROCESSOR_NAME); MPI_Init(NULL, NULL); MPI_Comm_size(MPI_COMM_WORLD, size_p) MPI_Comm_rank(MPI_COMM_WORLD, rank_p) MPI_Get_processor_name(name_p, nlen_p) size = size_p[0] rank = rank_p[0] nlen = nlen_p[0] name = ffi.string(name_p[0:nlen]) print("Hello, World! I am process %d of %d on %s." 
% (rank, size, name)) MPI_Finalize() mpi4py_1.3.1+hg20131106.orig/demo/wrap-cffi/test_latency.py0000644000000000000000000000376312211706251021175 0ustar 00000000000000# http://mvapich.cse.ohio-state.edu/benchmarks/ from libmpi import ffi from libmpi.mpi import * def osu_latency( BENCHMARH = "MPI Latency Test", skip = 1000, loop = 10000, skip_large = 10, loop_large = 100, large_message_size = 8192, MAX_MSG_SIZE = 1<<22, ): myid = ffi.new('int*') numprocs = ffi.new('int*') MPI_Comm_rank(MPI_COMM_WORLD, myid) MPI_Comm_size(MPI_COMM_WORLD, numprocs) myid = myid[0] numprocs = numprocs[0] if numprocs != 2: if myid == 0: errmsg = "This test requires exactly two processes" else: errmsg = None raise SystemExit(errmsg) sbuf = ffi.new('unsigned char[]', MAX_MSG_SIZE) rbuf = ffi.new('unsigned char[]', MAX_MSG_SIZE) dtype = MPI_BYTE tag = 1 comm = MPI_COMM_WORLD status = MPI_STATUS_IGNORE if myid == 0: print ('# %s' % (BENCHMARH,)) if myid == 0: print ('# %-8s%20s' % ("Size [B]", "Latency [us]")) message_sizes = [0] + [2**i for i in range(30)] for size in message_sizes: if size > MAX_MSG_SIZE: break if size > large_message_size: skip = skip_large loop = loop_large iterations = list(range(loop+skip)) # MPI_Barrier(comm) if myid == 0: for i in iterations: if i == skip: t_start = MPI_Wtime() MPI_Send(sbuf, size, dtype, 1, tag, comm) MPI_Recv(rbuf, size, dtype, 1, tag, comm, status) t_end = MPI_Wtime() elif myid == 1: for i in iterations: MPI_Recv(rbuf, size, dtype, 0, tag, comm, status) MPI_Send(sbuf, size, dtype, 0, tag, comm) # if myid == 0: latency = (t_end - t_start) * 1e6 / (2 * loop) print ('%-10d%20.2f' % (size, latency)) def main(): MPI_Init(ffi.NULL, ffi.NULL) osu_latency() MPI_Finalize() if __name__ == '__main__': main() mpi4py_1.3.1+hg20131106.orig/demo/wrap-cffi/test_ringtest.py0000644000000000000000000000423612211706251021371 0ustar 00000000000000from libmpi import ffi from libmpi.mpi import * def ring(comm, count=1, loop=1, skip=0): size_p = ffi.new('int*') rank_p = ffi.new('int*') MPI_Comm_size(comm, size_p) MPI_Comm_rank(comm, rank_p) size = size_p[0] rank = rank_p[0] source = (rank - 1) % size dest = (rank + 1) % size sbuf = ffi.new('unsigned char[]', [42]*count) rbuf = ffi.new('unsigned char[]', [ 0]*count) iterations = list(range((loop+skip))) if size == 1: for i in iterations: if i == skip: tic = MPI_Wtime() MPI_Sendrecv(sbuf, count, MPI_BYTE, dest, 0, rbuf, count, MPI_BYTE, source, 0, comm, MPI_STATUS_IGNORE) else: if rank == 0: for i in iterations: if i == skip: tic = MPI_Wtime() MPI_Send(sbuf, count, MPI_BYTE, dest, 0, comm) MPI_Recv(rbuf, count, MPI_BYTE, source, 0, comm, MPI_STATUS_IGNORE) else: sbuf = rbuf for i in iterations: if i == skip: tic = MPI_Wtime() MPI_Recv(rbuf, count, MPI_BYTE, source, 0, comm, MPI_STATUS_IGNORE) MPI_Send(sbuf, count, MPI_BYTE, dest, 0, comm) toc = MPI_Wtime() if rank == 0 and ffi.string(sbuf) != ffi.string(rbuf): import warnings, traceback try: warnings.warn("received message does not match!") except UserWarning: traceback.print_exc() MPI_Abort(comm, 2) return toc - tic def ringtest(comm): size = ( 1 ) loop = ( 1 ) skip = ( 0 ) MPI_Barrier(comm) elapsed = ring(comm, size, loop, skip) size_p = ffi.new('int*') rank_p = ffi.new('int*') MPI_Comm_size(comm, size_p) MPI_Comm_rank(comm, rank_p) comm_size = size_p[0] comm_rank = rank_p[0] if comm_rank == 0: print ("time for %d loops = %g seconds (%d processes, %d bytes)" % (loop, elapsed, comm_size, size)) def main(): MPI_Init(ffi.NULL, ffi.NULL) ringtest(MPI_COMM_WORLD) MPI_Finalize() if __name__ == 
'__main__': main() mpi4py_1.3.1+hg20131106.orig/demo/wrap-cython/helloworld.pyx0000644000000000000000000000121612211706251021426 0ustar 00000000000000cdef extern from "mpi-compat.h": pass cimport mpi4py.MPI as MPI from mpi4py.libmpi cimport * cdef extern from "stdio.h": int printf(char*, ...) cdef void c_sayhello(MPI_Comm comm): cdef int size, rank, plen cdef char pname[MPI_MAX_PROCESSOR_NAME] if comm == MPI_COMM_NULL: printf(b"You passed MPI_COMM_NULL !!!\n",0) return MPI_Comm_size(comm, &size) MPI_Comm_rank(comm, &rank) MPI_Get_processor_name(pname, &plen) printf(b"Hello, World! I am process %d of %d on %s.\n", rank, size, pname) def sayhello(MPI.Comm comm not None ): cdef MPI_Comm c_comm = comm.ob_mpi c_sayhello(c_comm) mpi4py_1.3.1+hg20131106.orig/demo/wrap-cython/makefile0000644000000000000000000000137612211706251020220 0ustar 00000000000000.PHONY: default default: build test clean PYTHON = python PYTHON_CONFIG = ${PYTHON} ../python-config MPI4PY_INCLUDE = ${shell ${PYTHON} -c 'import mpi4py; print( mpi4py.get_include() )'} CYTHON = cython .PHONY: src src: helloworld.c helloworld.c: helloworld.pyx ${CYTHON} -I${MPI4PY_INCLUDE} $< MPICC = mpicc CFLAGS = -fPIC ${shell ${PYTHON_CONFIG} --includes} LDFLAGS = -shared ${shell ${PYTHON_CONFIG} --libs} SO = ${shell ${PYTHON_CONFIG} --extension-suffix} .PHONY: build build: helloworld${SO} helloworld${SO}: helloworld.c ${MPICC} ${CFLAGS} -I${MPI4PY_INCLUDE} -o $@ $< ${LDFLAGS} MPIEXEC = mpiexec NP_FLAG = -n NP = 5 .PHONY: test test: build ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test.py .PHONY: clean clean: ${RM} helloworld.c helloworld${SO} mpi4py_1.3.1+hg20131106.orig/demo/wrap-cython/mpi-compat.h0000644000000000000000000000044012211706251020726 0ustar 00000000000000/* Author: Lisandro Dalcin */ /* Contact: dalcinl@gmail.com */ #ifndef MPI_COMPAT_H #define MPI_COMPAT_H #include #if (MPI_VERSION < 3) && !defined(PyMPI_HAVE_MPI_Message) typedef void *PyMPI_MPI_Message; #define MPI_Message PyMPI_MPI_Message #endif #endif/*MPI_COMPAT_H*/ mpi4py_1.3.1+hg20131106.orig/demo/wrap-cython/test.py0000644000000000000000000000046212211706251020044 0ustar 00000000000000from mpi4py import MPI import helloworld as hw null = MPI.COMM_NULL hw.sayhello(null) comm = MPI.COMM_WORLD hw.sayhello(comm) try: hw.sayhello(None) except: pass else: assert 0, "exception not raised" try: hw.sayhello(list()) except: pass else: assert 0, "exception not raised" mpi4py_1.3.1+hg20131106.orig/demo/wrap-f2py/helloworld.f900000644000000000000000000000136612211706251020566 0ustar 00000000000000! ! $ f2py --f90exec=mpif90 -m helloworld -c helloworld.f90 ! subroutine sayhello(comm) use mpi implicit none integer :: comm integer :: rank, size, nlen, ierr character (len=MPI_MAX_PROCESSOR_NAME) :: pname if (comm == MPI_COMM_NULL) then print *, 'You passed MPI_COMM_NULL !!!' return end if call MPI_Comm_rank(comm, rank, ierr) call MPI_Comm_size(comm, size, ierr) call MPI_Get_processor_name(pname, nlen, ierr) print *, 'Hello, World!', & ' I am process ', rank, & ' of ', size, & ' on ', pname(1:nlen), '.' end subroutine sayhello ! program main ! use mpi ! implicit none ! integer ierr ! call MPI_Init(ierr) ! call sayhello(MPI_COMM_WORLD) ! call MPI_Finalize(ierr) ! 
end program main mpi4py_1.3.1+hg20131106.orig/demo/wrap-f2py/makefile0000644000000000000000000000073512211706251017572 0ustar 00000000000000.PHONY: default default: build test clean PYTHON = python PYTHON_CONFIG = ${PYTHON} ../python-config MPIF90 = mpif90 F2PY = f2py --fcompiler=gnu95 SO = ${shell ${PYTHON_CONFIG} --extension-suffix} .PHONY: build build: helloworld${SO} helloworld${SO}: helloworld.f90 ${F2PY} --f90exec=${MPIF90} -m helloworld -c $< MPIEXEC = mpiexec NP_FLAG = -n NP = 5 .PHONY: test test: build ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test.py .PHONY: clean clean: ${RM} helloworld${SO} mpi4py_1.3.1+hg20131106.orig/demo/wrap-f2py/test.py0000644000000000000000000000040412211706251017414 0ustar 00000000000000from mpi4py import MPI import helloworld as hw null = MPI.COMM_NULL fnull = null.py2f() hw.sayhello(fnull) comm = MPI.COMM_WORLD fcomm = comm.py2f() hw.sayhello(fcomm) try: hw.sayhello(list()) except: pass else: assert 0, "exception not raised" mpi4py_1.3.1+hg20131106.orig/demo/wrap-swig/helloworld.i0000644000000000000000000000121612211706251020503 0ustar 00000000000000%module helloworld %{ #define MPICH_SKIP_MPICXX 1 #define OMPI_SKIP_MPICXX 1 #include #include void sayhello(MPI_Comm comm) { int size, rank; char pname[MPI_MAX_PROCESSOR_NAME]; int len; if (comm == MPI_COMM_NULL) { printf("You passed MPI_COMM_NULL !!!\n"); return; } MPI_Comm_size(comm, &size); MPI_Comm_rank(comm, &rank); MPI_Get_processor_name(pname, &len); pname[len] = 0; printf("Hello, World! I am process %d of %d on %s.\n", rank, size, pname); } %} %include mpi4py/mpi4py.i %mpi4py_typemap(Comm, MPI_Comm); void sayhello(MPI_Comm comm); /* * Local Variables: * mode: C * End: */ mpi4py_1.3.1+hg20131106.orig/demo/wrap-swig/makefile0000644000000000000000000000147612211706251017666 0ustar 00000000000000.PHONY: default default: build test clean PYTHON = python PYTHON_CONFIG = ${PYTHON} ../python-config MPI4PY_INCLUDE = ${shell ${PYTHON} -c 'import mpi4py; print( mpi4py.get_include() )'} SWIG = swig SWIG_PY = ${SWIG} -python .PHONY: src src: helloworld_wrap.c helloworld_wrap.c: helloworld.i ${SWIG_PY} -I${MPI4PY_INCLUDE} -o $@ $< MPICC = mpicc CFLAGS = -fPIC ${shell ${PYTHON_CONFIG} --includes} LDFLAGS = -shared ${shell ${PYTHON_CONFIG} --libs} SO = ${shell ${PYTHON_CONFIG} --extension-suffix} .PHONY: build build: _helloworld${SO} _helloworld${SO}: helloworld_wrap.c ${MPICC} ${CFLAGS} -I${MPI4PY_INCLUDE} -o $@ $< ${LDFLAGS} MPIEXEC = mpiexec NP_FLAG = -n NP = 5 .PHONY: test test: build ${MPIEXEC} ${NP_FLAG} ${NP} ${PYTHON} test.py .PHONY: clean clean: ${RM} helloworld_wrap.c helloworld.py* _helloworld${SO} mpi4py_1.3.1+hg20131106.orig/demo/wrap-swig/test.py0000644000000000000000000000033212211706251017505 0ustar 00000000000000from mpi4py import MPI import helloworld as hw null = MPI.COMM_NULL hw.sayhello(null) comm = MPI.COMM_WORLD hw.sayhello(comm) try: hw.sayhello(list()) except: pass else: assert 0, "exception not raised" mpi4py_1.3.1+hg20131106.orig/docs/source/index.rst0000644000000000000000000000224412211706251017405 0ustar 00000000000000============== MPI for Python ============== :Author: Lisandro Dalcin :Contact: dalcinl@gmail.com :Organization: `CIMEC `_ :Address: CCT CONICET, 3000 Santa Fe, Argentina Online Documentation -------------------- Hosted at SciPy servers [http://mpi4py.scipy.org/]: Hosted at PyPI servers [http://packages.python.org/mpi4py]: + `User Manual (HTML)`_ (generated with Sphinx_). + `User Manual (PDF)`_ (generated with Sphinx_). + `API Reference`_ (generated with Epydoc_). .. 
_User Manual (HTML): usrman/index.html .. _User Manual (PDF): mpi4py.pdf .. _API Reference: apiref/index.html .. _Sphinx: http://sphinx.pocoo.org/ .. _Epydoc: http://epydoc.sourceforge.net/ Discussion and Support ---------------------- Hosted at Google Groups: + Group Page: http://groups.google.com/group/mpi4py + Mailing List: mpi4py@googlegroups.com Downloads and Development ------------------------- Hosted at Google Code: + Project Site: http://mpi4py.googlecode.com/ + Source Releases: http://code.google.com/p/mpi4py/downloads/ + Issue Tracker: http://code.google.com/p/mpi4py/issues/ + Mercurial Repository: http://code.google.com/p/mpi4py mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/Makefile0000644000000000000000000001272412211706251020515 0ustar 00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." 
qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/MPIforPython.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/MPIforPython.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/MPIforPython" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/MPIforPython" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/abstract.txt0000644000000000000000000000140512211706251021413 0ustar 00000000000000.. topic:: Abstract This document describes the *MPI for Python* package. *MPI for Python* provides bindings of the *Message Passing Interface* (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors. This package is constructed on top of the MPI-1/2 specifications and provides an object oriented interface which closely follows MPI-2 C++ bindings. It supports point-to-point (sends, receives) and collective (broadcasts, scatters, gathers) communications of any *picklable* Python object, as well as optimized communications of Python object exposing the single-segment buffer interface (NumPy arrays, builtin bytes/string/array objects) .. Local Variables: .. 
mode: rst .. End: mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/appendix.rst0000644000000000000000000001515212211706251021415 0ustar 00000000000000Appendix ======== .. _python-mpi: MPI-enabled Python interpreter ------------------------------ Some MPI-1 implementations (notably, MPICH 1) **do require** the actual command line arguments to be passed at the time :c:func:`MPI_Init()` is called. In this case, you will need to use a re-built, MPI-enabled Python interpreter executable. A basic implementation (targeting Python 2.X) of what is required is shown below: .. sourcecode:: c #include <Python.h> #include <mpi.h> int main(int argc, char *argv[]) { int status, flag; MPI_Init(&argc, &argv); status = Py_Main(argc, argv); MPI_Finalized(&flag); if (!flag) MPI_Finalize(); return status; } The source code above is straightforward; compiling it should also be. However, the linking step is trickier: special flags have to be passed to the linker depending on your platform. In order to relieve you of such low-level details, *MPI for Python* provides some pure distutils-based support to build and install an MPI-enabled Python interpreter executable:: $ cd mpi4py-X.X.X $ python setup.py build_exe [--mpi=|--mpicc=/path/to/mpicc] $ [sudo] python setup.py install_exe [--install-dir=$HOME/bin] After the above steps you should have the MPI-enabled interpreter installed as :file:`{prefix}/bin/python{X}.{X}-mpi` (or :file:`$HOME/bin/python{X}.{X}-mpi`). Assuming that :file:`{prefix}/bin` (or :file:`$HOME/bin`) is listed on your :envvar:`PATH`, you should be able to enter your MPI-enabled Python interactively, for example:: $ python2.6-mpi Python 2.6 (r26:66714, Jun 8 2009, 16:07:26) [GCC 4.4.0 20090506 (Red Hat 4.4.0-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.executable '/usr/bin/python2.6-mpi' >>> .. _macosx-universal-sdk: Mac OS X and Universal/SDK Python builds ---------------------------------------- Mac OS X users employing a Python distribution built with support for `Universal applications `_ could have trouble building *MPI for Python*, especially if they want to link against MPI libraries built without such support. Another source of trouble could be a Python build using a specific *deployment target* and *cross-development SDK* configuration. Workarounds for such issues are to temporarily set the environment variables :envvar:`MACOSX_DEPLOYMENT_TARGET`, :envvar:`SDKROOT` and/or :envvar:`ARCHFLAGS` to appropriate values in the shell before trying to build/install *MPI for Python*. An appropriate value for :envvar:`MACOSX_DEPLOYMENT_TARGET` should be greater than or equal to the one used to build Python, and less than or equal to your system version. The safest choice for end-users would be to use the system version (e.g., if you are on *Leopard*, you should try ``MACOSX_DEPLOYMENT_TARGET=10.5``). An appropriate value for :envvar:`SDKROOT` is the full path name of any of the SDKs under the :file:`/Developer/SDKs` directory (e.g., ``SDKROOT=/Developer/SDKs/MacOSX10.5.sdk``). The safest choice for end-users would be the one matching the system version, or alternatively the root directory (i.e., ``SDKROOT=/``).
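Before choosing values for the variables above, it can help to know which deployment target your Python interpreter itself was built with. One way to check (purely illustrative, not required by the build) is to query the build-time configuration recorded by :mod:`distutils`::

    $ python -c "from distutils import sysconfig; print(sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET'))"

The value printed (if any) is the minimum you should consider for :envvar:`MACOSX_DEPLOYMENT_TARGET`.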
Appropriate values for :envvar:`ARCHFLAGS` have the form ``-arch ``, where ```` should be chosen from the following table: ====== ========== ========= @ Intel PowerPC ====== ========== ========= 32-bit ``i386`` ``ppc`` 64-bit ``x86_64`` ``ppc64`` ====== ========== ========= For example, assuming your Mac is running **Snow Leopard** on a **64-bit Intel** processor and you want to override the hard-wired cross-development SDK in Python configuration, you can build and install *MPI for Python* using any of the alternatives below. Note that environment variables may need to be passed/set both at the build and install steps (because :program:`sudo` may not pass environment variables to subprocesses for security reasons) * Alternative 1:: $ env MACOSX_DEPLOYMENT_TARGET=10.6 \ SDKROOT=/ \ ARCHFLAGS='-arch x86_64' \ python setup.py build [options] $ sudo env MACOSX_DEPLOYMENT_TARGET=10.6 \ SDKROOT=/ \ ARCHFLAGS='-arch x86_64' \ python setup.py install [options] * Alternative 2:: $ export MACOSX_DEPLOYMENT_TARGET=10.6 $ export SDKROOT=/ $ export ARCHFLAGS='-arch x86_64' $ python setup.py build [options] $ sudo -s # enter interactive shell as root $ export MACOSX_DEPLOYMENT_TARGET=10.6 $ export SDKROOT=/ $ export ARCHFLAGS='-arch x86_64' $ python setup.py install [options] $ exit .. _building-mpi: Building MPI from sources ------------------------- In the list below you have some executive instructions for building some of the open-source MPI implementations out there with support for shared/dynamic libraries on POSIX environments. + *MPICH* :: $ tar -zxf mpich-X.X.X.tar.gz $ cd mpich-X.X.X $ ./configure --enable-shared --prefix=/usr/local/mpich $ make $ make install + *Open MPI* :: $ tar -zxf openmpi-X.X.X tar.gz $ cd openmpi-X.X.X $ ./configure --prefix=/usr/local/openmpi $ make all $ make install + *LAM/MPI* :: $ tar -zxf lam-X.X.X.tar.gz $ cd lam-X.X.X $ ./configure --enable-shared --prefix=/usr/local/lam $ make $ make install + *MPICH 1* :: $ tar -zxf mpich-X.X.X.tar.gz $ cd mpich-X.X.X $ ./configure --enable-sharedlib --prefix=/usr/local/mpich1 $ make $ make install Perhaps you will need to set the :envvar:`LD_LIBRARY_PATH` environment variable (using :command:`export`, :command:`setenv` or what applies to your system) pointing to the directory containing the MPI libraries . In case of getting runtime linking errors when running MPI programs, the following lines can be added to the user login shell script (:file:`.profile`, :file:`.bashrc`, etc.). - *MPICH* :: MPI_DIR=/usr/local/mpich export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH - *Open MPI* :: MPI_DIR=/usr/local/openmpi export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH - *LAM/MPI* :: MPI_DIR=/usr/local/lam export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH - *MPICH 1* :: MPI_DIR=/usr/local/mpich1 export LD_LIBRARY_PATH=$MPI_DIR/lib/shared:$LD_LIBRARY_PATH: export MPICH_USE_SHLIB=yes .. warning:: MPICH 1 support for dynamic libraries is not completely transparent. Users should set the environment variable :envvar:`MPICH_USE_SHLIB` to ``yes`` in order to avoid link problems when using the :program:`mpicc` compiler wrapper. mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/conf.py0000644000000000000000000001746612211706251020364 0ustar 00000000000000# -*- coding: utf-8 -*- # # MPI for Python documentation build configuration file, created by # sphinx-quickstart on Sat May 11 08:44:19 2013. # # This file is execfile()d with the current directory set to its containing dir. 
# # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os try: from mpi4py import __version__ as mpi4py_version except: mpi4py_version = 'X.X.X' # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [] # Add any paths that contain templates here, relative to this directory. #templates_path = ['_templates'] templates_path = [] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'MPI for Python' copyright = u'2013, Lisandro Dalcin' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = mpi4py_version[:3] # The full version, including alpha/beta/rc tags. release = mpi4py_version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. 
#html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['_static'] html_static_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. #htmlhelp_basename = '' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', 'papersize': 'a4', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', 'printmodindex': '', 'printindex': '', 'preamble' : r'\usepackage{sphinxfix}', } latex_additional_files = ['sphinxfix.sty'] # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('manual', 'mpi4py.tex', u'MPI for Python', u'Lisandro Dalcin', 'howto'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). 
man_pages = [ ('manual', 'mpi4py', u'MPI for Python', [u'Lisandro Dalcin'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('manual', 'mpi4py', u'MPI for Python', u'Lisandro Dalcin', 'mpi4py', 'MPI for Python.', 'Software libraries'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/index.rst0000644000000000000000000000070312211706251020710 0ustar 00000000000000MPI for Python ============== :Author: Lisandro Dalcin :Contact: dalcinl@gmail.com :Web Site: http://mpi4py.googlecode.com :Organization: `CIMEC `_ :Address: CCT CONICET, (3000) Santa Fe, Argentina :Date: |today| .. include:: abstract.txt Contents ======== .. include:: toctree.txt .. Indices and tables .. ================== .. .. * :ref:`genindex` .. * :ref:`modindex` .. * :ref:`search` mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/install.rst0000644000000000000000000001507312211706251021255 0ustar 00000000000000Installation ============ Requirements ------------ You need to have the following software properly installed in order to build *MPI for Python*: * A working MPI distribution, preferably a MPI-2 one built with shared/dynamic libraries. .. note:: If you want to build some MPI implementation from sources, check the instructions at :ref:`building-mpi` in the appendix. * A Python 2.4 to 2.7 or 3.0 to 3.3 distribution, with Python library preferably built with shared/dynamic libraries. .. note:: **Mac OS X** users employing a Python distribution built with **universal binaries** may need to temporarily set the environment variables :envvar:`MACOSX_DEPLOYMENT_TARGET`, :envvar:`SDKROOT`, and :envvar:`ARCHFLAGS` to appropriate values in the shell before trying to build/install *MPI for Python*. Check the instructions at :ref:`macosx-universal-sdk` in the appendix. .. note:: Some MPI-1 implementations **do require** the actual command line arguments to be passed in :c:func:`MPI_Init()`. In this case, you will need to use a rebuilt, MPI-enabled, Python interpreter executable. *MPI for Python* has some support for alleviating you from this task. Check the instructions at :ref:`python-mpi` in the appendix. Using **pip** or **easy_install** --------------------------------- If you already have a working MPI (either if you installed it from sources or by using a pre-built package from your favourite GNU/Linux distribution) and the :program:`mpicc` compiler wrapper is on your search path, you can use :program:`pip`:: $ [sudo] pip install mpi4py or alternatively *setuptools* :program:`easy_install` (deprecated):: $ [sudo] easy_install mpi4py .. note:: If the :program:`mpicc` compiler wrapper is not on your search path (or if it has a different name) you can use :program:`env` to pass the environment variable :envvar:`MPICC` providing the full path to the MPI compiler wrapper executable:: $ [sudo] env MPICC=/path/to/mpicc pip install mpi4py $ [sudo] env MPICC=/path/to/mpicc easy_install mpi4py Using **distutils** ------------------- *MPI for Python* uses a standard distutils-based build system. 
However, some distutils commands (like *build*) have additional options: * :option:`--mpicc=` : lets you specify a special location or name for the :program:`mpicc` compiler wrapper. * :option:`--mpi=` : lets you pass a section with MPI configuration within a special configuration file. * :option:`--configure` : runs exhaustive tests checking for missing MPI types/constants/calls. This option should be passed in order to build *MPI for Python* against old MPI-1 implementations that possibly provide only a subset of MPI-2. Downloading ^^^^^^^^^^^ The *MPI for Python* package is available for download at the project website generously hosted by Google Code. You can use :program:`curl` or :program:`wget` to get a release tarball:: $ curl -O http://mpi4py.googlecode.com/files/mpi4py-X.X.X.tar.gz $ wget http://mpi4py.googlecode.com/files/mpi4py-X.X.X.tar.gz Building ^^^^^^^^ After unpacking the release tarball:: $ tar -zxf mpi4py-X.X.X.tar.gz $ cd mpi4py-X.X.X the distribution is ready for building. - If you use an MPI implementation providing a :program:`mpicc` compiler wrapper (e.g., MPICH, Open MPI, LAM), it will be used for compilation and linking. This is the preferred and easiest way of building *MPI for Python*. If :program:`mpicc` is located somewhere in your search path, simply run the *build* command:: $ python setup.py build If :program:`mpicc` is not in your search path or the compiler wrapper has a different name, you can run the *build* command specifying its location:: $ python setup.py build --mpicc=/where/you/have/mpicc - Alternatively, you can provide all the relevant information about your MPI distribution by editing the file called :file:`mpi.cfg`. You can use the default section ``[mpi]`` or add a new, custom section, for example ``[other_mpi]`` (see the examples provided in the :file:`mpi.cfg` file):: [mpi] include_dirs = /usr/local/mpi/include libraries = mpi library_dirs = /usr/local/mpi/lib runtime_library_dirs = /usr/local/mpi/lib [other_mpi] include_dirs = /opt/mpi/include ... libraries = mpi ... library_dirs = /opt/mpi/lib ... runtime_library_dirs = /opt/mpi/lib ... ... and then run the *build* command, perhaps specifying your custom configuration section:: $ python setup.py build --mpi=other_mpi Installing ^^^^^^^^^^ After building, the distribution is ready for installation. If you have root privileges (either by logging in as the root user or by using :command:`sudo`) and you want to install *MPI for Python* in your system for all users, just do:: $ python setup.py install The previous steps will install the :mod:`mpi4py` package at the standard location :file:`{prefix}/lib/python{X}.{X}/site-packages`. If you do not have root privileges or you want to install *MPI for Python* for your private use, you have two options depending on the target Python version. * For Python 2.6 and up:: $ python setup.py install --user * For Python 2.5 and below (assuming your home directory is available through the :envvar:`HOME` environment variable):: $ python setup.py install --home=$HOME Finally, add :file:`$HOME/lib/python` or :file:`$HOME/lib64/python` to your :envvar:`PYTHONPATH` environment variable. Testing ------- To quickly test the installation (Python 2.5 and up):: $ mpiexec -n 5 python -m mpi4py helloworld Hello, World! I am process 0 of 5 on localhost. Hello, World! I am process 1 of 5 on localhost. Hello, World! I am process 2 of 5 on localhost. Hello, World! I am process 3 of 5 on localhost. Hello, World! I am process 4 of 5 on localhost.
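The ``helloworld`` program run above boils down to just a few lines of mpi4py code. As a purely illustrative sketch (not the literal source of the bundled demo), an equivalent script would be:

.. sourcecode:: python

    from mpi4py import MPI

    comm = MPI.COMM_WORLD                 # communicator spanning all processes
    rank = comm.Get_rank()                # rank of this process
    size = comm.Get_size()                # total number of processes
    name = MPI.Get_processor_name()       # hostname reported by MPI
    print("Hello, World! I am process %d of %d on %s." % (rank, size, name))

Saving it to a file and running it with ``mpiexec -n 5 python`` should produce output like that shown above (the ordering of the lines may differ from run to run).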
If you installed from source, issuing at the command line:: $ mpiexec -n 5 python demo/helloworld.py or (in the case of ancient MPI-1 implementations):: $ mpirun -np 5 python demo/helloworld.py will launch a five-process run of the Python interpreter and run the test script :file:`demo/helloworld.py` from the source distribution. You can also run all the *unittest* scripts:: $ mpiexec -n 5 python test/runtests.py or, if you have the nose_ unit testing framework installed:: $ mpiexec -n 5 nosetests -w test .. _nose: http://nose.readthedocs.org/ or, if you have the `py.test`_ unit testing framework installed:: $ mpiexec -n 5 py.test test/ .. _py.test: http://pytest.org/ mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/intro.rst0000644000000000000000000002344312211706251020742 0ustar 00000000000000Introduction ============ Over the last few years, high performance computing has become an affordable resource for many more researchers in the scientific community than ever before. The conjunction of quality open source software and commodity hardware strongly influenced the now widespread popularity of Beowulf_ class clusters and clusters of workstations. Among many parallel computational models, message-passing has proven to be an effective one. This paradigm is especially suited for (but not limited to) distributed memory architectures and is used in today's most demanding scientific and engineering applications related to modeling, simulation, design, and signal processing. However, portable message-passing parallel programming used to be a nightmare because of the many incompatible options developers were faced with. Fortunately, this situation definitely changed after the MPI Forum released its standard specification. High performance computing is traditionally associated with software development using compiled languages. However, in typical application programs, only a small part of the code is time-critical enough to require the efficiency of compiled languages. The rest of the code is generally related to memory management, error handling, input/output, and user interaction, and those are usually the most error-prone and time-consuming lines of code to write and debug in the whole development process. Interpreted high-level languages can be really advantageous for this kind of task. For implementing general-purpose numerical computations, MATLAB [#]_ is the dominant interpreted programming language. On the open source side, Octave and Scilab are well known, freely distributed software packages providing compatibility with the MATLAB language. In this work, we present MPI for Python, a new package enabling applications to exploit multiple processors using standard MPI "look and feel" in Python scripts. .. [#] MATLAB is a registered trademark of The MathWorks, Inc. What is MPI? ------------ MPI_, [mpi-using]_ [mpi-ref]_ the *Message Passing Interface*, is a standardized and portable message-passing system designed to function on a wide variety of parallel computers. The standard defines the syntax and semantics of library routines and allows users to write portable programs in the main scientific programming languages (Fortran, C, or C++). Since its release, the MPI specification [mpi-std1]_ [mpi-std2]_ has become the leading standard for message-passing libraries for parallel computers. Implementations are available from vendors of high-performance computers and from well known open source projects like MPICH_ [mpi-mpich]_, `Open MPI`_ [mpi-openmpi]_ or LAM_ [mpi-lammpi]_. What is Python?
--------------- Python_ is a modern, easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming with dynamic typing and dynamic binding. It supports modules and packages, which encourages program modularity and code reuse. Python's elegant syntax, together with its interpreted nature, makes it an ideal language for scripting and rapid application development in many areas on most platforms. The Python interpreter and the extensive standard library are available in source or binary form without charge for all major platforms, and can be freely distributed. It is easily extended with new functions and data types implemented in C or C++. Python is also suitable as an extension language for customizable applications. Python is an ideal candidate for writing the higher-level parts of large-scale scientific applications [Hinsen97]_ and driving simulations on parallel architectures [Beazley97]_ like clusters of PCs or SMPs. Python codes are quickly developed, easily maintained, and can achieve a high degree of integration with other libraries written in compiled languages. Related Projects ---------------- As this work started and evolved, some ideas were borrowed from well-known MPI- and Python-related open source projects from the Internet. * `OOMPI`_ + It has no relation to Python, but it is an excellent object-oriented approach to MPI. + It is a C++ class library specification layered on top of the C bindings that encapsulates MPI into a functional class hierarchy. + It provides a flexible and intuitive interface by adding some abstractions, like *Ports* and *Messages*, which enrich and simplify the syntax. * `Pypar`_ + Its interface is rather minimal. There is no support for communicators or process topologies. + It does not require the Python interpreter to be modified or recompiled, but does not permit interactive parallel runs. + General (*picklable*) Python objects of any type can be communicated. There is good support for numeric arrays; practically full MPI bandwidth can be achieved. * `pyMPI`_ + It rebuilds the Python interpreter, providing a built-in module for message passing. It does permit interactive parallel runs, which are useful for learning and debugging. + It provides an interface suitable for basic parallel programming. There is no full support for defining new communicators or process topologies. + General (picklable) Python objects can be messaged between processors. There is no support for numeric arrays. * `Scientific Python`_ + It provides a collection of Python modules that are useful for scientific computing. + There is an interface to MPI and BSP (*Bulk Synchronous Parallel programming*). + The interface is simple but incomplete and does not resemble the MPI specification. There is support for numeric arrays. Additionally, we would like to mention some available tools for scientific computing and software development with Python. + `NumPy`_ is a package that provides array manipulation and computational capabilities similar to those found in IDL, MATLAB, or Octave. Using NumPy, it is possible to write many efficient numerical data processing applications directly in Python without using any C, C++ or Fortran code. + `SciPy`_ is an open source library of scientific tools for Python, gathering a variety of high level science and engineering modules together as a single package.
It includes modules for graphics and plotting, optimization, integration, special functions, signal and image processing, genetic algorithms, ODE solvers, and others. + `Cython`_ is a language that makes writing C extensions for the Python language as easy as Python itself. The Cython language is very close to the Python language, but Cython additionally supports calling C functions and declaring C types on variables and class attributes. This allows the compiler to generate very efficient C code from Cython code. This makes Cython the ideal language for wrapping for external C libraries, and for fast C modules that speed up the execution of Python code. + `SWIG`_ is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages like Perl, Tcl/Tk, Ruby and Python. Issuing header files to SWIG is the simplest approach to interfacing C/C++ libraries from a Python module. .. External Links .. .............. .. _MPI: http://www.mpi-forum.org/ .. _MPICH: http://www.mpich.org/ .. _Open MPI: http://www.open-mpi.org/ .. _LAM: http://www.lam-mpi.org/ .. _Beowulf: http://www.beowulf.org/ .. _Python: http://www.python.org/ .. _NumPy: http://numpy.scipy.org/ .. _SciPy: http://www.scipy.org/ .. _Cython: http://www.cython.org/ .. _SWIG: http://www.swig.org/ .. _OOMPI: http://www.osl.iu.edu/research/oompi/ .. _Pypar: http://pypar.googlecode.com/ .. _pyMPI: http://sourceforge.net/projects/pympi/ .. _Scientific Python: http://dirac.cnrs-orleans.fr/plone/software/scientificpython/ .. References .. .......... .. [mpi-std1] MPI Forum. MPI: A Message Passing Interface Standard. International Journal of Supercomputer Applications, volume 8, number 3-4, pages 159-416, 1994. .. [mpi-std2] MPI Forum. MPI: A Message Passing Interface Standard. High Performance Computing Applications, volume 12, number 1-2, pages 1-299, 1998. .. [mpi-using] William Gropp, Ewing Lusk, and Anthony Skjellum. Using MPI: portable parallel programming with the message-passing interface. MIT Press, 1994. .. [mpi-ref] Mark Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra. MPI - The Complete Reference, volume 1, The MPI Core. MIT Press, 2nd. edition, 1998. .. [mpi-mpich] W. Gropp, E. Lusk, N. Doss, and A. Skjellum. A high-performance, portable implementation of the MPI message passing interface standard. Parallel Computing, 22(6):789-828, September 1996. .. [mpi-openmpi] Edgar Gabriel, Graham E. Fagg, George Bosilca, Thara Angskun, Jack J. Dongarra, Jeffrey M. Squyres, Vishal Sahay, Prabhanjan Kambadur, Brian Barrett, Andrew Lumsdaine, Ralph H. Castain, David J. Daniel, Richard L. Graham, and Timothy S. Woodall. Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation. In Proceedings, 11th European PVM/MPI Users' Group Meeting, Budapest, Hungary, September 2004. .. [mpi-lammpi] Greg Burns, Raja Daoud, and James Vaigl. LAM: An Open Cluster Environment for MPI. In Proceedings of Supercomputing Symposium, pages 379-386, 1994. .. [Hinsen97] Konrad Hinsen. The Molecular Modelling Toolkit: a case study of a large scientific application in Python. In Proceedings of the 6th International Python Conference, pages 29-35, San Jose, Ca., October 1997. .. [Beazley97] David M. Beazley and Peter S. Lomdahl. Feeding a large-scale physics application to Python. In Proceedings of the 6th International Python Conference, pages 21-29, San Jose, Ca., October 1997. 
mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/make.bat0000644000000000000000000001176412211706251020465 0ustar 00000000000000@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . set I18NSPHINXOPTS=%SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\MPIforPython.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\MPIforPython.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. 
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) :end mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/manual.rst0000644000000000000000000000012312211706251021052 0ustar 00000000000000MPI for Python ============== .. include:: abstract.txt .. include:: toctree.txt mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/mpi4py.rst0000644000000000000000000005044512211706251021033 0ustar 00000000000000Design and Interface Overview ============================= MPI for Python provides an object oriented approach to message passing which grounds on the standard MPI-2 C++ bindings. The interface was designed with focus in translating MPI syntax and semantics of standard MPI-2 bindings for C++ to Python. Any user of the standard C/C++ MPI bindings should be able to use this module without need of learning a new interface. Communicating Python Objects and Array Data ------------------------------------------- The Python standard library supports different mechanisms for data persistence. Many of them rely on disk storage, but *pickling* and *marshaling* can also work with memory buffers. The :mod:`pickle` (slower, written in pure Python) and :mod:`cPickle` (faster, written in C) modules provide user-extensible facilities to serialize generic Python objects using ASCII or binary formats. The :mod:`marshal` module provides facilities to serialize built-in Python objects using a binary format specific to Python, but independent of machine architecture issues. *MPI for Python* can communicate any built-in or used-defined Python object taking advantage of the features provided by the mod:`pickle` module. These facilities will be routinely used to build binary representations of objects to communicate (at sending processes), and restoring them back (at receiving processes). Although simple and general, the serialization approach (i.e., *pickling* and *unpickling*) previously discussed imposes important overheads in memory as well as processor usage, especially in the scenario of objects with large memory footprints being communicated. 
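As a minimal illustration of the pickle-based transport just described (a sketch only; the dictionary contents, tag value, and two-process layout are made up for this example)::

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # any picklable Python object can be passed to the all-lowercase methods
        obj = {'step': 42, 'coords': [0.5, 1.5, 2.5]}
        comm.send(obj, dest=1, tag=0)
    elif rank == 1:
        obj = comm.recv(source=0, tag=0)
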
Pickling generic Python objects, ranging from primitive or container built-in types to user-defined classes, necessarily requires computer resources. Processing is also needed for dispatching the appropriate serialization method (which depends on the type of the object) and doing the actual packing. Additional memory is always needed, and if its total amount is not known *a priori*, many reallocations can occur. Indeed, in the case of large numeric arrays, this is certainly unacceptable and precludes communication of objects occupying half or more of the available memory resources. *MPI for Python* supports direct communication of any object exporting the single-segment buffer interface. This interface is a standard Python mechanism provided by some types (e.g., strings and numeric arrays), allowing access from the C side to a contiguous memory buffer (i.e., address and length) containing the relevant data. This feature, in conjunction with the capability of constructing user-defined MPI datatypes describing complicated memory layouts, enables the implementation of many algorithms involving multidimensional numeric arrays (e.g., image processing, fast Fourier transforms, finite difference schemes on structured Cartesian grids) directly in Python, with negligible overhead, and almost as fast as compiled Fortran, C, or C++ codes. Communicators ------------- In *MPI for Python*, :class:`Comm` is the base class of communicators. The :class:`Intracomm` and :class:`Intercomm` classes are subclasses of the :class:`Comm` class. The :meth:`Is_inter` method (and :meth:`Is_intra`, provided for convenience but not part of the MPI specification) is defined for communicator objects and can be used to determine the particular communicator class. Two predefined intracommunicator instances are available: :const:`COMM_SELF` and :const:`COMM_WORLD`. From them, new communicators can be created as needed. The number of processes in a communicator and the calling process rank can be respectively obtained with the methods :meth:`Get_size` and :meth:`Get_rank`. The associated process group can be retrieved from a communicator by calling the :meth:`Get_group` method, which returns an instance of the :class:`Group` class. Set operations with :class:`Group` objects like :meth:`Union`, :meth:`Intersect` and :meth:`Difference` are fully supported, as well as the creation of new communicators from these groups using :meth:`Create`. New communicator instances can be obtained with the :meth:`Clone` method of :class:`Comm` objects, the :meth:`Dup` and :meth:`Split` methods of :class:`Intracomm` and :class:`Intercomm` objects, and the :meth:`Create_intercomm` and :meth:`Merge` methods of :class:`Intracomm` and :class:`Intercomm` objects respectively. Virtual topologies (the :class:`Cartcomm`, :class:`Graphcomm`, and :class:`Distgraphcomm` classes, which are specializations of the :class:`Intracomm` class) are fully supported. New instances can be obtained from intracommunicator instances with the factory methods :meth:`Create_cart` and :meth:`Create_graph` of the :class:`Intracomm` class. Point-to-Point Communications ----------------------------- Point-to-point communication is a fundamental capability of message passing systems. This mechanism enables the transmittal of data between a pair of processes, one side sending, the other receiving. MPI provides a set of *send* and *receive* functions allowing the communication of *typed* data with an associated *tag*.
The type information enables the conversion of data representation from one architecture to another in the case of heterogeneous computing environments; additionally, it allows the representation of non-contiguous data layouts and user-defined datatypes, thus avoiding the overhead of (otherwise unavoidable) packing/unpacking operations. The tag information allows selectivity of messages at the receiving end. Blocking Communications ^^^^^^^^^^^^^^^^^^^^^^^ MPI provides basic send and receive functions that are *blocking*. These functions block the caller until the data buffers involved in the communication can be safely reused by the application program. In *MPI for Python*, the :meth:`Send`, :meth:`Recv` and :meth:`Sendrecv` methods of communicator objects provide support for blocking point-to-point communications within :class:`Intracomm` and :class:`Intercomm` instances. These methods can communicate memory buffers. The variants :meth:`send`, :meth:`recv` and :meth:`sendrecv` can communicate generic Python objects. Nonblocking Communications ^^^^^^^^^^^^^^^^^^^^^^^^^^ On many systems, performance can be significantly increased by overlapping communication and computation. This is particularly true on systems where communication can be executed autonomously by an intelligent, dedicated communication controller. MPI provides *nonblocking* send and receive functions. They allow the possible overlap of communication and computation. Non-blocking communication always come in two parts: posting functions, which begin the requested operation; and test-for-completion functions, which allow to discover whether the requested operation has completed. In *MPI for Python*, the :meth:`Isend` and :meth:`Irecv` methods of the :class:`Comm` class initiate a send and receive operation respectively. These methods return a :class:`Request` instance, uniquely identifying the started operation. Its completion can be managed using the :meth:`Test`, :meth:`Wait`, and :meth:`Cancel` methods of the :class:`Request` class. The management of :class:`Request` objects and associated memory buffers involved in communication requires a careful, rather low-level coordination. Users must ensure that objects exposing their memory buffers are not accessed at the Python level while they are involved in nonblocking message-passing operations. Persistent Communications ^^^^^^^^^^^^^^^^^^^^^^^^^ Often a communication with the same argument list is repeatedly executed within an inner loop. In such cases, communication can be further optimized by using persistent communication, a particular case of nonblocking communication allowing the reduction of the overhead between processes and communication controllers. Furthermore , this kind of optimization can also alleviate the extra call overheads associated to interpreted, dynamic languages like Python. In *MPI for Python*, the :meth:`Send_init` and :meth:`Recv_init` methods of the :class:`Comm` class create a persistent request for a send and receive operation respectively. These methods return an instance of the :class:`Prequest` class, a subclass of the :class:`Request` class. The actual communication can be effectively started using the :meth:`Start` method, and its completion can be managed as previously described. Collective Communications -------------------------- Collective communications allow the transmittal of data between multiple processes of a group simultaneously. The syntax and semantics of collective functions is consistent with point-to-point communication. 
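For reference, the nonblocking and persistent point-to-point patterns described in the preceding subsections might look as follows (a minimal sketch with made-up buffer sizes and tags, assuming at least two processes)::

    from mpi4py import MPI
    import numpy

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    buf = numpy.zeros(8, dtype='d')

    # nonblocking send/receive managed through Request objects
    if rank == 0:
        buf[:] = 3.14
        req = comm.Isend([buf, MPI.DOUBLE], dest=1, tag=1)
        req.Wait()
    elif rank == 1:
        req = comm.Irecv([buf, MPI.DOUBLE], source=0, tag=1)
        req.Wait()

    # persistent variant: build the request once, start it many times
    if rank == 0:
        preq = comm.Send_init([buf, MPI.DOUBLE], dest=1, tag=2)
    elif rank == 1:
        preq = comm.Recv_init([buf, MPI.DOUBLE], source=0, tag=2)
    if rank < 2:
        for _ in range(3):
            preq.Start()
            preq.Wait()
        preq.Free()
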
Collective functions communicate *typed* data, but messages are not paired with an associated *tag*; selectivity of messages is implied in the calling order. Additionally, collective functions come in blocking versions only. The more commonly used collective communication operations are the following. * Barrier synchronization across all group members. * Global communication functions + Broadcast data from one member to all members of a group. + Gather data from all members to one member of a group. + Scatter data from one member to all members of a group. * Global reduction operations such as sum, maximum, minimum, etc. *MPI for Python* provides support for almost all collective calls. Unfortunately, the :meth:`Alltoallw` and :meth:`Reduce_scatter` methods are currently unimplemented. In *MPI for Python*, the :meth:`Bcast`, :meth:`Scatter`, :meth:`Gather`, :meth:`Allgather` and :meth:`Alltoall` methods of :class:`Comm` instances provide support for collective communications of memory buffers. The variants :meth:`bcast`, :meth:`scatter`, :meth:`gather`, :meth:`allgather` and :meth:`alltoall` can communicate generic Python objects. The vector variants (which can communicate different amounts of data to each process) :meth:`Scatterv`, :meth:`Gatherv`, :meth:`Allgatherv` and :meth:`Alltoallv` are also supported, they can only communicate objects exposing memory buffers. Global reduction operations on memory buffers are accessible through the :meth:`Reduce`, :meth:`Allreduce`, :meth:`Scan` and :meth:`Exscan` methods. The variants :meth:`reduce`, :meth:`allreduce`, :meth:`scan` and :meth:`exscan` can communicate generic Python objects; however, the actual required reduction computations are performed sequentially at some process. All the predefined (i.e., :const:`SUM`, :const:`PROD`, :const:`MAX`, etc.) reduction operations can be applied. Dynamic Process Management -------------------------- In the context of the MPI-1 specification, a parallel application is static; that is, no processes can be added to or deleted from a running application after it has been started. Fortunately, this limitation was addressed in MPI-2. The new specification added a process management model providing a basic interface between an application and external resources and process managers. This MPI-2 extension can be really useful, especially for sequential applications built on top of parallel modules, or parallel applications with a client/server model. The MPI-2 process model provides a mechanism to create new processes and establish communication between them and the existing MPI application. It also provides mechanisms to establish communication between two existing MPI applications, even when one did not *start* the other. In *MPI for Python*, new independent processes groups can be created by calling the :meth:`Spawn` method within an intracommunicator (i.e., an :class:`Intracomm` instance). This call returns a new intercommunicator (i.e., an :class:`Intercomm` instance) at the parent process group. The child process group can retrieve the matching intercommunicator by calling the :meth:`Get_parent` (class) method defined in the :class:`Comm` class. At each side, the new intercommunicator can be used to perform point to point and collective communications between the parent and child groups of processes. Alternatively, disjoint groups of processes can establish communication using a client/server approach. 
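Before turning to the client/server route, the spawn-based approach described above can be sketched as follows (the worker script name and process count are illustrative only, not taken from the original text)::

    # parent side: spawn three workers running a hypothetical 'worker.py'
    from mpi4py import MPI
    import sys

    children = MPI.COMM_SELF.Spawn(sys.executable,
                                   args=['worker.py'], maxprocs=3)
    children.Barrier()     # any point-to-point or collective call works here
    children.Disconnect()

and, in the hypothetical :file:`worker.py` run by the child processes::

    from mpi4py import MPI

    parent = MPI.Comm.Get_parent()   # intercommunicator to the parent group
    parent.Barrier()
    parent.Disconnect()
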
Any server application must first call the :func:`Open_port` function to open a *port* and the :func:`Publish_name` function to publish a provided *service*, and next call the :meth:`Accept` method within an :class:`Intracomm` instance. Any client applications can first find a published *service* by calling the :func:`Lookup_name` function, which returns the *port* where a server can be contacted; and next call the :meth:`Connect` method within an :class:`Intracomm` instance. Both :meth:`Accept` and :meth:`Connect` methods return an :class:`Intercomm` instance. When connection between client/server processes is no longer needed, all of them must cooperatively call the :meth:`Disconnect` method of the :class:`Comm` class. Additionally, server applications should release resources by calling the :func:`Unpublish_name` and :func:`Close_port` functions. One-Sided Communications ------------------------ One-sided communications (also called *Remote Memory Access*, *RMA*) supplements the traditional two-sided, send/receive based MPI communication model with a one-sided, put/get based interface. One-sided communication that can take advantage of the capabilities of highly specialized network hardware. Additionally, this extension lowers latency and software overhead in applications written using a shared-memory-like paradigm. The MPI specification revolves around the use of objects called *windows*; they intuitively specify regions of a process's memory that have been made available for remote read and write operations. The published memory blocks can be accessed through three functions for put (remote send), get (remote write), and accumulate (remote update or reduction) data items. A much larger number of functions support different synchronization styles; the semantics of these synchronization operations are fairly complex. In *MPI for Python*, one-sided operations are available by using instances of the :class:`Win` class. New window objects are created by calling the :meth:`Create` method at all processes within a communicator and specifying a memory buffer . When a window instance is no longer needed, the :meth:`Free` method should be called. The three one-sided MPI operations for remote write, read and reduction are available through calling the methods :meth:`Put`, :meth:`Get()`, and :meth:`Accumulate` respectively within a :class:`Win` instance. These methods need an integer rank identifying the target process and an integer offset relative the base address of the remote memory block being accessed. The one-sided operations read, write, and reduction are implicitly nonblocking, and must be synchronized by using two primary modes. Active target synchronization requires the origin process to call the :meth:`Start` and :meth:`Complete` methods at the origin process, and target process cooperates by calling the :meth:`Post` and :meth:`Wait` methods. There is also a collective variant provided by the :meth:`Fence` method. Passive target synchronization is more lenient, only the origin process calls the :meth:`Lock` and :meth:`Unlock` methods. Locks are used to protect remote accesses to the locked remote window and to protect local load/store accesses to a locked local window. Parallel Input/Output --------------------- The POSIX standard provides a model of a widely portable file system. However, the optimization needed for parallel input/output cannot be achieved with this generic interface. 
In order to ensure efficiency and scalability, the underlying parallel input/output system must provide a high-level interface supporting partitioning of file data among processes and a collective interface supporting complete transfers of global data structures between process memories and files. Additionally, further efficiencies can be gained via support for asynchronous input/output, strided accesses to data, and control over physical file layout on storage devices. This scenario motivated the inclusion in the MPI-2 standard of a custom interface in order to support more elaborated parallel input/output operations. The MPI specification for parallel input/output revolves around the use objects called *files*. As defined by MPI, files are not just contiguous byte streams. Instead, they are regarded as ordered collections of *typed* data items. MPI supports sequential or random access to any integral set of these items. Furthermore, files are opened collectively by a group of processes. The common patterns for accessing a shared file (broadcast, scatter, gather, reduction) is expressed by using user-defined datatypes. Compared to the communication patterns of point-to-point and collective communications, this approach has the advantage of added flexibility and expressiveness. Data access operations (read and write) are defined for different kinds of positioning (using explicit offsets, individual file pointers, and shared file pointers), coordination (non-collective and collective), and synchronism (blocking, nonblocking, and split collective with begin/end phases). In *MPI for Python*, all MPI input/output operations are performed through instances of the :class:`File` class. File handles are obtained by calling the :meth:`Open` method at all processes within a communicator and providing a file name and the intended access mode. After use, they must be closed by calling the :meth:`Close` method. Files even can be deleted by calling method :meth:`Delete`. After creation, files are typically associated with a per-process *view*. The view defines the current set of data visible and accessible from an open file as an ordered set of elementary datatypes. This data layout can be set and queried with the :meth:`Set_view` and :meth:`Get_view` methods respectively. Actual input/output operations are achieved by many methods combining read and write calls with different behavior regarding positioning, coordination, and synchronism. Summing up, *MPI for Python* provides the thirty (30) methods defined in MPI-2 for reading from or writing to files using explicit offsets or file pointers (individual or shared), in blocking or nonblocking and collective or noncollective versions. Environmental Management ------------------------ Initialization and Exit ^^^^^^^^^^^^^^^^^^^^^^^ Module functions :func:`Init` or :func:`Init_thread` and :func:`Finalize` provide MPI initialization and finalization respectively. Module functions :func:`Is_initialized()` and :func:`Is_finalized()` provide the respective tests for initialization and finalization. .. caution:: :c:func:`MPI_Init()` or :c:func:`MPI_Init_thread()` is actually called when you import the :mod:`MPI` module from the :mod:`mpi4py` package, but only if MPI is not already initialized. In such case, calling :func:`Init`/:func:`Init_thread` from Python is expected to generate an MPI error, and in turn an exception will be raised. .. 
note:: :c:func:`MPI_Finalize()` is registered (by using Python C/API function :c:func:`Py_AtExit()`) for being automatically called when Python processes exit, but only if :mod:`mpi4py` actually initialized Therefore, there is no need to call :func:`Finalize()` from Python to ensure MPI finalization. Implementation Information ^^^^^^^^^^^^^^^^^^^^^^^^^^ + The MPI version number can be retrieved from module function :func:`Get_version`. It returns a two-integer tuple ``(version,subversion)``. * The :func:`Get_processor_name` function can be used to access the processor name. * The values of predefined attributes attached to the world communicator can be obtained by calling the :meth:`Get_attr` method within the :const:`COMM_WORLD` instance. Timers ^^^^^^ MPI timer functionalities are available through the :func:`Wtime` and :func:`Wtick` functions. Error Handling ^^^^^^^^^^^^^^ Error handling functionality is almost completely supported. Errors originated in native MPI calls will raise an instance of the module exception class :exc:`Exception`, which is a subclass of the standard Python exception :exc:`RuntimeError`. .. caution:: Importing with ``from mpi4py.MPI import *`` will cause a name clashing with standard Python :exc:`Exception` base class. In order facilitate communicator sharing with other Python modules interfacing MPI-based parallel libraries, default MPI error handlers :const:`ERRORS_RETURN`, :const:`ERRORS_ARE_FATAL` can be assigned to and retrieved from communicators, windows and files with methods :meth:`{Class}.Set_errhandler` and :meth:`{Class}.Get_errhandler`. mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/sphinxfix.sty0000644000000000000000000000106012211706251021625 0ustar 00000000000000\setcounter{tocdepth}{2} \pagenumbering{arabic} \makeatletter \renewcommand{\theindex}{ \cleardoublepage \phantomsection \py@OldTheindex \addcontentsline{toc}{section}{\indexname} } \makeatother \makeatletter \renewcommand{\thebibliography}[1]{ \cleardoublepage \phantomsection \py@OldThebibliography{1} \addcontentsline{toc}{section}{\bibname} } \makeatother \makeatletter \renewcommand{\tableofcontents}{ \begingroup \parskip = 0mm \py@OldTableofcontents \endgroup \vfill \rule{\textwidth}{1pt} \newpage } \makeatother mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/toctree.txt0000644000000000000000000000017612211706251021261 0ustar 00000000000000.. toctree:: :maxdepth: 2 intro mpi4py install tutorial appendix .. Local Variables: .. mode: rst .. End: mpi4py_1.3.1+hg20131106.orig/docs/source/usrman/tutorial.rst0000644000000000000000000001504612211706251021452 0ustar 00000000000000.. _tutorial: Tutorial ======== .. warning:: Under construction. Contributions very welcome! *MPI for Python* supports convenient, *pickle*-based communication of generic Python object as well as fast, near C-speed, direct array data communication of buffer-provider objects (e.g., NumPy arrays). * Communication of generic Python objects You have to use **all-lowercase** methods (of the :class:`Comm` class), like :meth:`send()`, :meth:`recv()`, :meth:`bcast()`. Note that :meth:`isend()` is available, but :meth:`irecv()` is not. Collective calls like :meth:`scatter()`, :meth:`gather()`, :meth:`allgather()`, :meth:`alltoall()` expect/return a sequence of :attr:`Comm.size` elements at the root or all process. They return a single value, a list of :attr:`Comm.size` elements, or :const:`None`. 
Global reduction operations :meth:`reduce()` and :meth:`allreduce()` are naively implemented, the reduction is actually done at the designated root process or all processes. * Communication of buffer-provider objects You have to use method names starting with an **upper-case** letter (of the :class:`Comm` class), like :meth:`Send()`, :meth:`Recv()`, :meth:`Bcast()`. In general, buffer arguments to these calls must be explicitly specified by using a 2/3-list/tuple like ``[data, MPI.DOUBLE]``, or ``[data, count, MPI.DOUBLE]`` (the former one uses the byte-size of ``data`` and the extent of the MPI datatype to define the ``count``). Automatic MPI datatype discovery for NumPy arrays and PEP-3118 buffers is supported, but limited to basic C types (all C/C99-native signed/unsigned integral types and single/double precision real/complex floating types) and availability of matching datatypes in the underlying MPI implementation. In this case, the buffer-provider object can be passed directly as a buffer argument, the count and MPI datatype will be inferred. Point-to-Point Communication ---------------------------- * Python objects (:mod:`pickle` under the hood):: from mpi4py import MPI comm = MPI.COMM_WORLD rank = comm.Get_rank() if rank == 0: data = {'a': 7, 'b': 3.14} comm.send(data, dest=1, tag=11) elif rank == 1: data = comm.recv(source=0, tag=11) * NumPy arrays (the fast way!):: from mpi4py import MPI import numpy comm = MPI.COMM_WORLD rank = comm.Get_rank() # pass explicit MPI datatypes if rank == 0: data = numpy.arange(1000, dtype='i') comm.Send([data, MPI.INT], dest=1, tag=77) elif rank == 1: data = numpy.empty(1000, dtype='i') comm.Recv([data, MPI.INT], source=0, tag=77) # automatic MPI datatype discovery if rank == 0: data = numpy.arange(100, dtype=numpy.float64) comm.Send(data, dest=1, tag=13) elif rank == 1: data = numpy.empty(100, dtype=numpy.float64) comm.Recv(data, source=0, tag=13) Collective Communication ------------------------ * Broadcasting a Python dictionary:: from mpi4py import MPI comm = MPI.COMM_WORLD rank = comm.Get_rank() if rank == 0: data = {'key1' : [7, 2.72, 2+3j], 'key2' : ( 'abc', 'xyz')} else: data = None data = comm.bcast(data, root=0) * Scattering Python objects:: from mpi4py import MPI comm = MPI.COMM_WORLD size = comm.Get_size() rank = comm.Get_rank() if rank == 0: data = [(i+1)**2 for i in range(size)] else: data = None data = comm.scatter(data, root=0) assert data == (rank+1)**2 * Gathering Python objects:: from mpi4py import MPI comm = MPI.COMM_WORLD size = comm.Get_size() rank = comm.Get_rank() data = (rank+1)**2 data = comm.gather(data, root=0) if rank == 0: for i in range(size): assert data[i] == (i+1)**2 else: assert data is None * Parallel matrix-vector product:: from mpi4py import MPI import numpy def matvec(comm, A, x): m = A.shape[0] # local rows p = comm.Get_size() xg = numpy.zeros(m*p, dtype='d') comm.Allgather([x, MPI.DOUBLE], [xg, MPI.DOUBLE]) y = numpy.dot(A, xg) return y Dynamic Process Management -------------------------- * Compute Pi - Master (or parent, or client) side:: #!/usr/bin/env python from mpi4py import MPI import numpy import sys comm = MPI.COMM_SELF.Spawn(sys.executable, args=['cpi.py'], maxprocs=5) N = numpy.array(100, 'i') comm.Bcast([N, MPI.INT], root=MPI.ROOT) PI = numpy.array(0.0, 'd') comm.Reduce(None, [PI, MPI.DOUBLE], op=MPI.SUM, root=MPI.ROOT) print(PI) comm.Disconnect() * Compute Pi - Worker (or child, or server) side:: #!/usr/bin/env python from mpi4py import MPI import numpy comm = MPI.Comm.Get_parent() size = 
comm.Get_size() rank = comm.Get_rank() N = numpy.array(0, dtype='i') comm.Bcast([N, MPI.INT], root=0) h = 1.0 / N; s = 0.0 for i in range(rank, N, size): x = h * (i + 0.5) s += 4.0 / (1.0 + x**2) PI = numpy.array(s * h, dtype='d') comm.Reduce([PI, MPI.DOUBLE], None, op=MPI.SUM, root=0) comm.Disconnect() Wrapping with SWIG ------------------ * C source: .. sourcecode:: c /* file: helloworld.c */ void sayhello(MPI_Comm comm) { int size, rank; MPI_Comm_size(comm, &size); MPI_Comm_rank(comm, &rank); printf("Hello, World! " "I am process %d of %d.\n", rank, size); } * SWIG interface file: .. sourcecode:: c // file: helloworld.i %module helloworld %{ #include #include "helloworld.c" }% %include mpi4py/mpi4py.i %mpi4py_typemap(Comm, MPI_Comm); void sayhello(MPI_Comm comm); * Try it in the Python prompt:: >>> from mpi4py import MPI >>> import helloworld >>> helloworld.sayhello(MPI.COMM_WORLD) Hello, World! I am process 0 of 1. Wrapping with F2Py ------------------ * Fortran 90 source: .. sourcecode:: fortran ! file: helloworld.f90 subroutine sayhello(comm) use mpi implicit none integer :: comm, rank, size, ierr call MPI_Comm_size(comm, size, ierr) call MPI_Comm_rank(comm, rank, ierr) print *, 'Hello, World! I am process ',rank,' of ',size,'.' end subroutine sayhello * Try it in the Python prompt:: >>> from mpi4py import MPI >>> import helloworld >>> fcomm = MPI.COMM_WORLD.py2f() >>> helloworld.sayhello(fcomm) Hello, World! I am process 0 of 1. mpi4py_1.3.1+hg20131106.orig/makefile0000644000000000000000000000622612211706251015020 0ustar 00000000000000.PHONY: default default: build PYTHON = python # ---- .PHONY: config build test config: ${PYTHON} setup.py config ${CONFIGOPT} build: ${PYTHON} setup.py build ${BUILDOPT} test: ${MPIEXEC} ${VALGRIND} ${PYTHON} ${PWD}/test/runtests.py < /dev/null .PHONY: clean distclean srcclean fullclean clean: ${PYTHON} setup.py clean --all distclean: clean -${RM} -r build _configtest* *.py[co] -${RM} -r MANIFEST dist mpi4py.egg-info -${RM} -r conf/__pycache__ test/__pycache__ -find conf -name '*.py[co]' -exec rm -f {} ';' -find test -name '*.py[co]' -exec rm -f {} ';' -find src -name '*.py[co]' -exec rm -f {} ';' srcclean: ${RM} src/mpi4py.MPI.c ${RM} src/include/mpi4py/mpi4py.MPI.h ${RM} src/include/mpi4py/mpi4py.MPI_api.h ${RM} src/mpi4py.MPE.c fullclean: distclean srcclean docsclean -find . 
-name '*~' -exec rm -f {} ';' # ---- .PHONY: install uninstall install: build ${PYTHON} setup.py install --user ${INSTALLOPT} uninstall: -${RM} -r $(shell ${PYTHON} -m site --user-site)/mpi4py -${RM} -r $(shell ${PYTHON} -m site --user-site)/mpi4py-*-py*.egg-info # ---- .PHONY: docs docs-html docs-pdf docs-misc docs: docs-html docs-pdf docs-misc docs-html: rst2html sphinx-html epydoc-html docs-pdf: sphinx-pdf epydoc-pdf docs-misc: sphinx-man sphinx-info RST2HTML = rst2html RST2HTMLOPTS = --input-encoding=utf-8 RST2HTMLOPTS += --no-compact-lists RST2HTMLOPTS += --cloak-email-addresses .PHONY: rst2html rst2html: ${RST2HTML} ${RST2HTMLOPTS} ./LICENSE.txt > docs/LICENSE.html ${RST2HTML} ${RST2HTMLOPTS} ./HISTORY.txt > docs/HISTORY.html ${RST2HTML} ${RST2HTMLOPTS} ./THANKS.txt > docs/THANKS.html ${RST2HTML} ${RST2HTMLOPTS} docs/source/index.rst > docs/index.html SPHINXBUILD = sphinx-build SPHINXOPTS = .PHONY: sphinx sphinx-html sphinx-pdf sphinx-man sphinx-info sphinx: sphinx-html sphinx-pdf sphinx-man sphinx-info sphinx-html: ${PYTHON} -c 'import mpi4py.MPI' mkdir -p build/doctrees docs/usrman ${SPHINXBUILD} -b html -d build/doctrees ${SPHINXOPTS} \ docs/source/usrman docs/usrman ${RM} docs/usrman/.buildinfo sphinx-pdf: ${PYTHON} -c 'import mpi4py.MPI' mkdir -p build/doctrees build/latex ${SPHINXBUILD} -b latex -d build/doctrees ${SPHINXOPTS} \ docs/source/usrman build/latex ${MAKE} -C build/latex all-pdf > /dev/null mv build/latex/*.pdf docs/ sphinx-man: ${PYTHON} -c 'import mpi4py.MPI' mkdir -p build/doctrees build/man ${SPHINXBUILD} -b man -d build/doctrees ${SPHINXOPTS} \ docs/source/usrman build/man mv build/man/*.[137] docs/ sphinx-info: ${PYTHON} -c 'import mpi4py.MPI' mkdir -p build/doctrees build/texinfo ${SPHINXBUILD} -b texinfo -d build/doctrees ${SPHINXOPTS} \ docs/source/usrman build/texinfo ${MAKE} -C build/texinfo info > /dev/null mv build/texinfo/*.info docs/ EPYDOCBUILD = ${PYTHON} ./conf/epydocify.py EPYDOCOPTS = .PHONY: epydoc epydoc-html epydoc-pdf epydoc: epydoc-html epydoc-pdf epydoc-html: ${PYTHON} -c 'import mpi4py.MPI' mkdir -p docs/apiref ${EPYDOCBUILD} ${EPYDOCOPTS} --html -o docs/apiref epydoc-pdf: .PHONY: docsclean: -${RM} docs/*.info docs/*.[137] -${RM} docs/*.html docs/*.pdf -${RM} -r docs/usrman docs/apiref # ---- .PHONY: sdist sdist: src docs ${PYTHON} setup.py sdist ${SDISTOPT} # ---- mpi4py_1.3.1+hg20131106.orig/mpi.cfg0000644000000000000000000001314212211706251014561 0ustar 00000000000000# Some Linux distributions have RPM's for some MPI implementations. # In such a case, headers and libraries usually are in default system # locations, and you should not need any special configuration. # If you do not have MPI distribution in a default location, please # uncomment and fill-in appropriately the following lines. Yo can use # as examples the [mpich2], [openmpi], and [deinompi] sections # below the [mpi] section (wich is the one used by default). 
# If you specify multiple locations for includes and libraries, # please separate them with the path separator for your platform, # i.e., ':' on Unix-like systems and ';' on Windows # Default configuration # --------------------- [mpi] ## mpi_dir = /usr ## mpi_dir = /usr/local ## mpi_dir = /usr/local/mpi ## mpi_dir = /opt ## mpi_dir = /opt/mpi ## mpi_dir = = $ProgramFiles\MPI ## mpicc = %(mpi_dir)s/bin/mpicc ## mpicxx = %(mpi_dir)s/bin/mpicxx ## define_macros = ## undef_macros = ## include_dirs = %(mpi_dir)s/include ## libraries = mpi ## library_dirs = %(mpi_dir)s/lib ## runtime_library_dirs = %(mpi_dir)s/lib ## extra_compile_args = ## extra_link_args = ## extra_objects = # MPICH3 example # -------------- [mpich3] mpi_dir = /home/devel/mpi/mpich-3.0.1 mpicc = %(mpi_dir)s/bin/mpicc mpicxx = %(mpi_dir)s/bin/mpicxx #include_dirs = %(mpi_dir)s/include #libraries = mpich opa mpl rt pthread #library_dirs = %(mpi_dir)s/lib #runtime_library_dirs = %(library_dirs)s # MPICH2 example # -------------- [mpich2] mpi_dir = /home/devel/mpi/mpich2-1.4.1 mpicc = %(mpi_dir)s/bin/mpicc mpicxx = %(mpi_dir)s/bin/mpicxx #include_dirs = %(mpi_dir)s/include #libraries = mpich opa mpl #library_dirs = %(mpi_dir)s/lib #runtime_library_dirs = %(library_dirs)s # Open MPI example # ---------------- [openmpi] mpi_dir = /home/devel/mpi/openmpi-1.6.2 mpicc = %(mpi_dir)s/bin/mpicc mpicxx = %(mpi_dir)s/bin/mpicxx #include_dirs = %(mpi_dir)s/include #libraries = mpi library_dirs = %(mpi_dir)s/lib runtime_library_dirs = %(library_dirs)s # Sun MPI example # --------------- [sunmpi] #mpi_dir = /opt/SUNWhpc/HPC8.2.1/gnu mpi_dir = /opt/SUNWhpc/HPC8.1/sun mpicc = %(mpi_dir)s/bin/mpicc mpicxx = %(mpi_dir)s/bin/mpicxx #include_dirs = %(mpi_dir)s/include #libraries = mpi open-rte open-pal library_dirs = %(mpi_dir)s/lib runtime_library_dirs = %(library_dirs)s # HP MPI example # -------------- [hpmpi] mpi_dir = /opt/hpmpi mpicc = %(mpi_dir)s/bin/mpicc mpicxx = %(mpi_dir)s/bin/mpiCC #include_dirs = %(mpi_dir)s/include #libraries = hpmpio hpmpi dl #library_dirs = %(mpi_dir)s/lib #runtime_library_dirs = %(library_dirs)s # SGI MPI example # --------------- [sgimpi] define_macros = SGI_MPI=1 mpi_dir = /usr mpicc = icc mpicxx = icpc include_dirs = %(mpi_dir)s/include libraries = mpi library_dirs = %(mpi_dir)s/lib runtime_library_dirs = %(library_dirs)s # IBM POE/MPI example # ------------------- [poempi] mpicc = mpcc_r mpicxx = mpCC_r # MPICH3 example (Windows) # ------------------------ [mpich3-windows] mpi_dir = $ProgramFiles\MPICH include_dirs = %(mpi_dir)s\include libraries = mpi library_dirs = %(mpi_dir)s\lib # MPICH2 example (Windows) # ------------------------ [mpich2-windows] mpi_dir = $ProgramFiles\MPICH2 include_dirs = %(mpi_dir)s\include libraries = mpi library_dirs = %(mpi_dir)s\lib # Open MPI example (Windows) # ------------------------- [openmpi-windows-32bit] mpi_dir = $ProgramFiles\OpenMPI_v1.6.1-win32 #define_macros = OMPI_IMPORTS include_dirs = %(mpi_dir)s\include libraries = libmpi library_dirs = %(mpi_dir)s\lib [openmpi-windows-64bit] mpi_dir = $ProgramFiles\OpenMPI_v1.6.1-win64 #define_macros = OMPI_IMPORTS include_dirs = %(mpi_dir)s\include libraries = libmpi library_dirs = %(mpi_dir)s\lib # DeinoMPI example # ---------------- [deinompi] mpi_dir = $ProgramFiles\DeinoMPI include_dirs = %(mpi_dir)s\include libraries = mpi library_dirs = %(mpi_dir)s\lib # Microsoft MPI example # --------------------- [msmpi-32bit] mpi_dir = $ProgramFiles\Microsoft HPC Pack 2008 R2 include_dirs = %(mpi_dir)s\inc libraries = msmpi 
library_dirs = %(mpi_dir)s\lib\i386 [msmpi-64bit] mpi_dir = $ProgramFiles\Microsoft HPC Pack 2008 R2 include_dirs = %(mpi_dir)s\inc libraries = msmpi library_dirs = %(mpi_dir)s\lib\amd64 # SiCortex MPI example # -------------------- [sicortex] mpicc = mpicc --gnu mpicxx = mpicxx --gnu # LAM/MPI example # --------------- [lammpi] mpi_dir = /home/devel/mpi/lam-7.1.4 mpicc = %(mpi_dir)s/bin/mpicc mpicxx = %(mpi_dir)s/bin/mpic++ include_dirs = %(mpi_dir)s/include libraries = lammpio mpi lam library_dirs = %(mpi_dir)s/lib runtime_library_dirs = %(library_dirs)s # MPICH1 example # -------------- [mpich1] mpi_dir = /home/devel/mpi/mpich-1.2.7p1 mpicc = %(mpi_dir)s/bin/mpicc mpicxx = %(mpi_dir)s/bin/mpicxx include_dirs = %(mpi_dir)s/include libraries = mpich library_dirs = %(mpi_dir)s/lib/shared:%(mpi_dir)s/lib runtime_library_dirs = %(mpi_dir)s/lib/shared # Fake MPI, just for testing # -------------------------- [fakempi] mpicc = cc mpicxx = c++ include_dirs = misc/fakempi mpi4py_1.3.1+hg20131106.orig/setup.cfg0000644000000000000000000000100412211706251015126 0ustar 00000000000000[config] # mpicc = mpicc # mpicxx = mpicxx # mpif77 = mpif77 # mpif90 = mpif90 # mpif95 = mpif95 [build] debug = 0 # compiler = mingw32 [install] optimize = 1 [sdist] template = conf/MANIFEST.in force_manifest = 1 [nosetests] where = test [bdist_rpm] packager = Lisandro Dalcin vendor = CIMEC group = Libraries/Python doc_files = README.txt HISTORY.txt LICENSE.txt THANKS.txt # docs/index.html docs/mpi4py.pdf # docs/usrman docs/apiref mpi4py_1.3.1+hg20131106.orig/setup.py0000644000000000000000000004530012211706251015026 0ustar 00000000000000#!/usr/bin/env python # Author: Lisandro Dalcin # Contact: dalcinl@gmail.com """ MPI for Python ============== This package provides Python bindings for the **Message Passing Interface** (MPI) standard. It is implemented on top of the MPI-1/MPI-2 specification and exposes an API which grounds on the standard MPI-2 C++ bindings. 
This package supports: + Convenient communication of any *picklable* Python object - point-to-point (send & receive) - collective (broadcast, scatter & gather, reduction) + Fast communication of Python object exposing the *Python buffer interface* (NumPy arrays, builtin bytes/string/array objects) - point-to-point (blocking/nonbloking/persistent send & receive) - collective (broadcast, block/vector scatter & gather, reduction) + Process groups and communication domains - Creation of new intra/inter communicators - Cartesian & graph topologies + Parallel input/output: - read & write - blocking/nonbloking & collective/noncollective - individual/shared file pointers & explicit offset + Dynamic process management - spawn & spawn multiple - accept/connect - name publishing & lookup + One-sided operations (put, get, accumulate) You can install the `in-development version `_ of mpi4py with:: $ pip install mpi4py==dev or:: $ easy_install mpi4py==dev """ ## try: ## import setuptools ## except ImportError: ## pass import sys, os # -------------------------------------------------------------------- # Metadata # -------------------------------------------------------------------- def name(): return 'mpi4py' def version(): import re fh = open(os.path.join('src', '__init__.py')) try: data = fh.read() finally: fh.close() m = re.search(r"__version__\s*=\s*'(.*)'", data) return m.groups()[0] name = name() version = version() url = 'http://%(name)s.googlecode.com/' % vars() download = url + 'files/%(name)s-%(version)s.tar.gz' % vars() description = __doc__.split('\n')[1:-1]; del description[1:3] classifiers = """ Development Status :: 5 - Production/Stable Intended Audience :: Developers Intended Audience :: Science/Research License :: OSI Approved :: BSD License Operating System :: MacOS :: MacOS X Operating System :: Microsoft :: Windows Operating System :: POSIX Operating System :: POSIX :: Linux Operating System :: POSIX :: SunOS/Solaris Operating System :: Unix Programming Language :: C Programming Language :: Cython Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 3 Programming Language :: Python :: Implementation :: CPython Programming Language :: Python :: Implementation :: PyPy Topic :: Scientific/Engineering Topic :: Software Development :: Libraries :: Python Modules Topic :: System :: Distributed Computing """ keywords = """ scientific computing parallel computing message passing MPI """ platforms = """ Mac OS X Linux Solaris Unix Windows """ metadata = { 'name' : name, 'version' : version, 'description' : description.pop(0), 'long_description' : '\n'.join(description), 'url' : url, 'download_url' : download, 'classifiers' : [c for c in classifiers.split('\n') if c], 'keywords' : [k for k in keywords.split('\n') if k], 'platforms' : [p for p in platforms.split('\n') if p], 'license' : 'BSD', 'author' : 'Lisandro Dalcin', 'author_email' : 'dalcinl@gmail.com', 'maintainer' : 'Lisandro Dalcin', 'maintainer_email' : 'dalcinl@gmail.com', } metadata['requires'] = ['pickle',] metadata['provides'] = ['mpi4py', 'mpi4py.dl', 'mpi4py.rc', 'mpi4py.MPI', 'mpi4py.MPE', ] # -------------------------------------------------------------------- # Extension modules # -------------------------------------------------------------------- linux = sys.platform.startswith('linux') solaris = sys.platform.startswith('sunos') darwin = sys.platform.startswith('darwin') if linux: def whole_archive(name): return ['-Wl,-whole-archive', '-l%s' % name, '-Wl,-no-whole-archive', 
] elif darwin: def whole_archive(name): return [#'-Wl,-force_load', '-l%s' % name, ] elif solaris: def whole_archive(name): return ['-Wl,-zallextract', '-l%s' % name, '-Wl,-zdefaultextract', ] else: def whole_archive(name): return ['-l%s' % name] def configure_mpi(ext, config_cmd): from textwrap import dedent from distutils import log from distutils.errors import DistutilsPlatformError # log.info("checking for MPI compile and link ...") errmsg = ("Cannot find 'mpi.h' header. " "Check your configuration!!!") ok = config_cmd.check_header("mpi.h", headers=["stdlib.h"]) if not ok: raise DistutilsPlatformError(errmsg) # headers = ["stdlib.h", "mpi.h"] ConfigTest = dedent("""\ int main(int argc, char **argv) { (void)MPI_Init(&argc, &argv); (void)MPI_Finalize(); return 0; } """) errmsg = ("Cannot %s MPI programs. " "Check your configuration!!!") ok = config_cmd.try_compile(ConfigTest, headers=headers) if not ok: raise DistutilsPlatformError(errmsg % "compile") ok = config_cmd.try_link(ConfigTest, headers=headers) if not ok: raise DistutilsPlatformError(errmsg % "link") # log.info("checking for missing MPI functions/symbols ...") tests = ["defined(%s)" % macro for macro in ("OPEN_MPI", "MPICH2", "DEINO_MPI", "MSMPI_VER",)] tests += ["(defined(MPICH_NAME)&&(MPICH_NAME==3))"] ConfigTest = dedent('''\ #if !(%s) #error "Unknown MPI" #endif ''') % "||".join(tests) ok = config_cmd.try_compile(ConfigTest, headers=headers) if not ok: from conf.mpidistutils import ConfigureMPI configure = ConfigureMPI(config_cmd) results = configure.run() configure.dump(results) ext.define_macros += [('HAVE_CONFIG_H', 1)] else: for prefix, suffixes in ( ("MPI_Type_create_f90_", ("integer", "real", "complex")), ): for suffix in suffixes: function = prefix + suffix ok = config_cmd.check_function( function, decl=1, call=1) if not ok: macro = "PyMPI_MISSING_" + function ext.define_macros += [(macro, 1)] def configure_mpe(ext, config_cmd): from distutils import log log.info("checking for MPE availability ...") libraries = [] for libname in ('pthread', 'mpe', 'lmpe'): if config_cmd.check_library( libname, other_libraries=libraries): libraries.insert(0, libname) ok = (config_cmd.check_header("mpe.h", headers=["stdlib.h", "mpi.h"]) and config_cmd.check_function("MPE_Init_log", headers=["stdlib.h", "mpi.h", "mpe.h"], libraries=libraries, decl=0, call=1) ) if ok: ext.define_macros += [('HAVE_MPE', 1)] if ((linux or darwin or solaris) and libraries[0] == 'lmpe'): ext.extra_link_args += whole_archive('lmpe') for libname in libraries[1:]: ext.extra_link_args += ['-l' + libname] else: ext.libraries += libraries def configure_dl(ext, config_cmd): from distutils import log log.info("checking for dlopen() availability ...") ok = config_cmd.check_header("dlfcn.h") if ok : ext.define_macros += [('HAVE_DLFCN_H', 1)] ok = config_cmd.check_library('dl') if ok: ext.libraries += ['dl'] ok = config_cmd.check_function("dlopen", libraries=['dl'], decl=1, call=1) if ok: ext.define_macros += [('HAVE_DLOPEN', 1)] def configure_libmpe(lib, config_cmd): libraries = [] for libname in ('pthread', 'mpe', 'lmpe'): if config_cmd.check_library( libname, other_libraries=libraries): libraries.insert(0, libname) if 'mpe' in libraries: if ((linux or darwin or solaris) and libraries[0] == 'lmpe'): lib.extra_link_args += whole_archive('lmpe') for libname in libraries[1:]: lib.extra_link_args += ['-l' + libname] else: lib.libraries += libraries def configure_libvt(lib, config_cmd): if lib.name == 'vt': ok = False for vt_lib in ('vt-mpi', 'vt.mpi'): ok = 
config_cmd.check_library(vt_lib) if ok: break if not ok: return libraries = [] for libname in ('otf', 'z', 'dl'): ok = config_cmd.check_library(libname) if ok: libraries.append(libname) if linux or darwin or solaris: lib.extra_link_args += whole_archive(vt_lib) lib.extra_link_args += ['-l%s' % libname for libname in libraries] else: lib.libraries += [vt_lib] + libraries elif lib.name in ('vt-mpi', 'vt-hyb'): vt_lib = lib.name ok = config_cmd.check_library(vt_lib) if ok: lib.libraries = [vt_lib] def configure_pyexe(exe, config_cmd): from distutils import sysconfig from distutils.util import split_quoted if sys.platform.startswith('win'): return libraries = [] library_dirs = [] link_args = [] if not sysconfig.get_config_var('Py_ENABLE_SHARED'): py_version = sysconfig.get_python_version() py_abiflags = getattr(sys, 'abiflags', '') libraries = ['python' + py_version + py_abiflags] cfg_vars = sysconfig.get_config_vars() if sys.platform == 'darwin': fwkdir = cfg_vars.get('PYTHONFRAMEWORKDIR') if (fwkdir and fwkdir != 'no-framework' and fwkdir in cfg_vars.get('LINKFORSHARED', '')): del libraries[:] for var in ('LIBDIR', 'LIBPL'): library_dirs += split_quoted(cfg_vars.get(var, '')) for var in ('LDFLAGS', 'LIBS', 'MODLIBS', 'SYSLIBS', 'LDLAST'): link_args += split_quoted(cfg_vars.get(var, '')) exe.libraries += libraries exe.library_dirs += library_dirs exe.extra_link_args += link_args def ext_modules(): modules = [] # MPI extension module from glob import glob MPI = dict( name='mpi4py.MPI', sources=['src/MPI.c'], depends=(['src/mpi4py.MPI.c'] + glob('src/*.h') + glob('src/config/*.h') + glob('src/compat/*.h') ), configure=configure_mpi, ) modules.append(MPI) # MPE extension module MPE = dict( name='mpi4py.MPE', optional=True, sources=['src/MPE.c'], depends=['src/mpi4py.MPE.c', 'src/MPE/mpe-log.h', 'src/MPE/mpe-log.c', ], configure=configure_mpe, ) modules.append(MPE) # custom dl extension module dl = dict( name='mpi4py.dl', optional=True, sources=['src/dynload.c'], depends=['src/dynload.h'], configure=configure_dl, ) if os.name == 'posix': modules.append(dl) # return modules def libraries(): # MPE logging pmpi_mpe = dict( name='mpe', kind='dylib', optional=True, package='mpi4py', dest_dir='lib-pmpi', sources=['src/pmpi-mpe.c'], configure=configure_libmpe, ) # VampirTrace logging pmpi_vt = dict( name='vt', kind='dylib', optional=True, package='mpi4py', dest_dir='lib-pmpi', sources=['src/pmpi-vt.c'], configure=configure_libvt, ) pmpi_vt_mpi = dict( name='vt-mpi', kind='dylib', optional=True, package='mpi4py', dest_dir='lib-pmpi', sources=['src/pmpi-vt-mpi.c'], configure=configure_libvt, ) pmpi_vt_hyb = dict( name='vt-hyb', kind='dylib', optional=True, package='mpi4py', dest_dir='lib-pmpi', sources=['src/pmpi-vt-hyb.c'], configure=configure_libvt, ) # return [ pmpi_mpe, pmpi_vt, pmpi_vt_mpi, pmpi_vt_hyb, ] def executables(): # MPI-enabled Python interpreter pyexe = dict(name='python-mpi', optional=True, package='mpi4py', dest_dir='bin', sources=['src/python.c'], configure=configure_pyexe, ) # if hasattr(sys, 'pypy_version_info'): return [] return [pyexe] # -------------------------------------------------------------------- # Setup # -------------------------------------------------------------------- from conf.mpidistutils import setup from conf.mpidistutils import Extension as Ext from conf.mpidistutils import Library as Lib from conf.mpidistutils import Executable as Exe CYTHON = '0.15' def run_setup(): """ Call distutils.setup(*targs, **kwargs) """ if ('setuptools' in sys.modules): from os.path 
import exists, join metadata['zip_safe'] = False if not exists(join('src', 'mpi4py.MPI.c')): metadata['install_requires'] = ['Cython>='+CYTHON] # setup(packages = ['mpi4py'], package_dir = {'mpi4py' : 'src'}, package_data = {'mpi4py' : ['include/mpi4py/*.h', 'include/mpi4py/*.pxd', 'include/mpi4py/*.pyx', 'include/mpi4py/*.pxi', 'include/mpi4py/*.i', 'MPI.pxd', 'libmpi.pxd', 'mpi_c.pxd',]}, ext_modules = [Ext(**ext) for ext in ext_modules()], libraries = [Lib(**lib) for lib in libraries() ], executables = [Exe(**exe) for exe in executables()], **metadata) def chk_cython(VERSION): import re from distutils import log from distutils.version import LooseVersion from distutils.version import StrictVersion warn = lambda msg='': sys.stderr.write(msg+'\n') # cython_zip = 'cython.zip' if os.path.isfile(cython_zip): path = os.path.abspath(cython_zip) if sys.path[0] != path: sys.path.insert(0, path) log.info("adding '%s' to sys.path", cython_zip) # try: import Cython except ImportError: warn("*"*80) warn() warn(" You need to generate C source files with Cython!!") warn(" Download and install Cython ") warn() warn("*"*80) return False # try: CYTHON_VERSION = Cython.__version__ except AttributeError: from Cython.Compiler.Version import version as CYTHON_VERSION REQUIRED = VERSION m = re.match(r"(\d+\.\d+(?:\.\d+)?).*", CYTHON_VERSION) if m: Version = StrictVersion AVAILABLE = m.groups()[0] else: Version = LooseVersion AVAILABLE = CYTHON_VERSION if (REQUIRED is not None and Version(AVAILABLE) < Version(REQUIRED)): warn("*"*80) warn() warn(" You need to install Cython %s (you have version %s)" % (REQUIRED, CYTHON_VERSION)) warn(" Download and install Cython ") warn() warn("*"*80) return False # return True def run_cython(source, depends=(), includes=(), destdir_c=None, destdir_h=None, wdir=None, force=False, VERSION=None): from glob import glob from distutils import log from distutils import dep_util from distutils.errors import DistutilsError target = os.path.splitext(source)[0]+".c" cwd = os.getcwd() try: if wdir: os.chdir(wdir) alldeps = [source] for dep in depends: alldeps += glob(dep) if not (force or dep_util.newer_group(alldeps, target)): log.debug("skipping '%s' -> '%s' (up-to-date)", source, target) return finally: os.chdir(cwd) if not chk_cython(VERSION): raise DistutilsError('requires Cython>=%s' % VERSION) log.info("cythonizing '%s' -> '%s'", source, target) from conf.cythonize import cythonize err = cythonize(source, includes=includes, destdir_c=destdir_c, destdir_h=destdir_h, wdir=wdir) if err: raise DistutilsError( "Cython failure: '%s' -> '%s'" % (source, target)) def build_sources(cmd): from distutils.errors import DistutilsError from os.path import exists, isdir, join has_src = (exists(join('src', 'mpi4py.MPI.c')) and exists(join('src', 'mpi4py.MPE.c'))) has_vcs = (isdir('.hg') or isdir('.git') or isdir('.svn')) if (has_src and not has_vcs and not cmd.force): return # mpi4py.MPI source = 'mpi4py.MPI.pyx' depends = ("include/*/*.pxi", "include/*/*.pxd", "MPI/*.pyx", "MPI/*.pxi",) includes = ['include'] destdir_h = os.path.join('include', 'mpi4py') run_cython(source, depends, includes, destdir_c=None, destdir_h=destdir_h, wdir='src', force=cmd.force, VERSION=CYTHON) # mpi4py.MPE source = 'mpi4py.MPE.pyx' depends = ("MPE/*.pyx", "MPE/*.pxi",) includes = ['include'] run_cython(source, depends, includes, destdir_c=None, destdir_h=None, wdir='src', force=cmd.force, VERSION=CYTHON) from conf.mpidistutils import build_src build_src.run = build_sources def run_testsuite(cmd): from 
distutils.errors import DistutilsError sys.path.insert(0, 'test') try: from runtests import main finally: del sys.path[0] if cmd.dry_run: return args = cmd.args[:] or [] if cmd.verbose < 1: args.insert(0,'-q') if cmd.verbose > 1: args.insert(0,'-v') err = main(args) if err: raise DistutilsError("test") from conf.mpidistutils import test test.run = run_testsuite def main(): run_setup() if __name__ == '__main__': main() # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPE.c0000644000000000000000000000012012211706251014657 0ustar 00000000000000#define MPICH_SKIP_MPICXX 1 #define OMPI_SKIP_MPICXX 1 #include "mpi4py.MPE.c" mpi4py_1.3.1+hg20131106.orig/src/MPE/MPE.pyx0000644000000000000000000001041712211706251015710 0ustar 00000000000000__doc__ = """ Multi-Processing Environment """ # ----------------------------------------------------------------------------- cdef object MPI from mpi4py import MPI MPI = None # ----------------------------------------------------------------------------- include "mpe-log.pxi" include "helpers.pxi" # ----------------------------------------------------------------------------- cdef class LogEvent: cdef int commID cdef int eventID[1] cdef int isActive cdef object name cdef object color def __cinit__(self, name, color=None): self.commID = 0 self.eventID[0] = 0 self.isActive = 0 # cdef int commID = 2 # MPI_COMM_WORLD cdef char *cname = NULL cdef char *ccolor = b"blue" cdef char *cformat = NULL name = toBytes(name, &cname) color = toBytes(color, &ccolor) if not isReady(): return CHKERR( MPELog.NewEvent(commID, cname, ccolor, cformat, self.eventID) ) # self.commID = commID self.isActive = 1 self.name = name self.color = color def __call__(self): return self def __enter__(self): self.log() return self def __exit__(self, *exc): return None def log(self): if not self.isActive: return if not self.commID: return if not isReady(): return CHKERR( MPELog.LogEvent(self.commID, self.eventID[0], NULL) ) property name: def __get__(self): return self.name property active: def __get__(self): return self.isActive def __set__(self, bint active): self.isActive = active property eventID: def __get__(self): return self.eventID[0] cdef class LogState: cdef int commID cdef int stateID[2] cdef int isActive cdef object name cdef object color def __cinit__(self, name, color=None): self.commID = 0 self.stateID[0] = 0 self.stateID[1] = 0 self.isActive = 0 # cdef int commID = 2 # MPI_COMM_WORLD cdef char *cname = NULL cdef char *ccolor = b"red" cdef char *cformat = NULL name = toBytes(name, &cname) color = toBytes(color, &ccolor) if not isReady(): return CHKERR( MPELog.NewState(commID, cname, ccolor, cformat, self.stateID) ) # self.commID = commID self.isActive = 1 self.name = name self.color = color def __call__(self): return self def __enter__(self): self.enter() return self def __exit__(self, *exc): self.exit() return None def enter(self): if not self.isActive: return if not self.commID: return if not isReady(): return CHKERR( MPELog.LogEvent(self.commID, self.stateID[0], NULL) ) def exit(self): if not self.isActive: return if not self.commID: return if not isReady(): return CHKERR( MPELog.LogEvent(self.commID, self.stateID[1], NULL) ) property name: def __get__(self): return self.name property active: def __get__(self): return self.isActive def __set__(self, bint active): self.isActive = active property stateID: def __get__(self): return (self.stateID[0], self.stateID[1]) def initLog(logfile=None): initialize() setLogFileName(logfile) 
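# -----------------------------------------------------------------------------
# Editor's note -- a minimal usage sketch of the logging helpers defined in
# this module, as seen from user code (assumes mpi4py was built with MPE
# support; the log file and state names are illustrative only):
#
#     from mpi4py import MPI                  # importing MPI initializes MPI
#     from mpi4py import MPE
#
#     MPE.initLog(logfile="ring")             # initialize logging, set file name
#     compute = MPE.newLogState("compute", color="red")
#     MPE.startLog()
#     with compute:                           # LogState is a context manager
#         pass                                # ... user computation ...
#     MPE.stopLog()
#     # cleanup also runs at interpreter exit via Py_AtExit() (see helpers.pxi),
#     # or call MPE.finishLog() explicitly.
# -----------------------------------------------------------------------------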
def finishLog(): CHKERR( finalize() ) def setLogFileName(filename): cdef char *cfilename = NULL filename = toBytes(filename, &cfilename) CHKERR( MPELog.SetFileName(cfilename) ) def syncClocks(): if not isReady(): return CHKERR( MPELog.SyncClocks() ) def startLog(): if not isReady(): return CHKERR( MPELog.Start() ) def stopLog(): if not isReady(): return CHKERR( MPELog.Stop() ) def newLogEvent(name, color=None): cdef LogEvent event = LogEvent(name, color) return event def newLogState(name, color=None): cdef LogState state = LogState(name, color) return state # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPE/helpers.pxi0000644000000000000000000000755212211706251016717 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef inline object toBytes(object ob, char *p[]): if ob is None: return None if not isinstance(ob, bytes): ob = ob.encode() if p: p[0] = ob return ob # ----------------------------------------------------------------------------- cdef extern from *: void PyErr_SetObject(object, object) cdef int PyMPE_Raise(int ierr) except -1 with gil: PyErr_SetObject(RuntimeError, ierr) return 0 cdef inline int CHKERR(int ierr) nogil except -1: if ierr == 0: return 0 PyMPE_Raise(ierr) return -1 # ----------------------------------------------------------------------------- cdef extern from "stdio.h" nogil: ctypedef struct FILE FILE *stdin, *stdout, *stderr int fprintf(FILE *, char *, ...) int fflush(FILE *) cdef extern from "Python.h": ctypedef struct PyObject int Py_IsInitialized() nogil void PySys_WriteStderr(char*,...) void PySys_WriteStderr(char*,...) int Py_AtExit(void (*)()) cdef int logInitedHere = 0 # initialized from this module cdef int logFinishAtExit = 0 # finalize at Python process exit cdef inline int initialize() except -1: # Is logging active? if MPELog.Initialized() == 1: return 0 # Initialize logging library global logInitedHere cdef int ierr = 0 ierr = MPELog.Init() if ierr != 0: raise RuntimeError( "MPE logging initialization failed " "[error code: %d]" % ierr) logInitedHere = 1 # Register cleanup at Python exit global logFinishAtExit if not logFinishAtExit: if Py_AtExit(atexit) < 0: PySys_WriteStderr( b"warning: could not register " b"cleanup with Py_AtExit()\n", 0) logFinishAtExit = 1 return 1 cdef int finalize() nogil: # Is logging active? if MPELog.Initialized() != 1: return 0 # Do we initialized logging? 
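# (editor's note: finalize only if this module was the one that called
#  MPELog.Init(); if logging was initialized elsewhere, leave it alone)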
global logInitedHere if not logInitedHere: return 0 # Finalize logging library cdef int ierr = 0 ierr = MPELog.Finish() return ierr cdef void atexit() nogil: cdef int ierr = 0 ierr = finalize() if ierr != 0: fprintf(stderr, b"error: in MPE finalization " b"[code: %d]", ierr); fflush(stderr) cdef inline int isReady() nogil: return (MPELog.Initialized() == 1) # ----------------------------------------------------------------------------- cdef inline int packArgs(object arglist, char bytebuf[]) except -1: cdef int pos = 0 cdef char token = 0 cdef int count = 0 cdef char *data = NULL # cdef long idata = 0 cdef double fdata = 0 cdef char* sdata = NULL # cdef arg = None bytebuf[0] = 0 for arg in arglist: # if isinstance(arg, int): if sizeof(long) == 4: token = c'd' else: token = c'l' idata = arg count = 1 data = &idata elif isinstance(arg, float): token = c'E' fdata = arg count = 1 data = &fdata elif isinstance(arg, bytes): token = c's' sdata = arg count = len(arg) data = sdata else: token = 0 count = 0 data = NULL continue # CHKERR( MPELog.PackBytes(bytebuf, &pos, token, count, data) ) return 0 # ----------------------------------------------------------------------------- #cdef colornames = ["white", "black", "red", "yellow", "green", # "cyan", "blue", "magenta", "aquamarine", # "forestgreen", "orange", "maroon", "brown", # "pink", "coral", "gray" ] # # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPE/mpe-log.c0000644000000000000000000001216612211706251016234 0ustar 00000000000000#if HAVE_MPE #include #if defined(_WIN32) && defined(__GNUC__) #include #include #endif #include "mpe.h" #if defined(MPE_LOG_OK) #define MPE_VERSION 2 #elif defined(MPE_Log_OK) #define MPE_VERSION 1 #else #undef HAVE_MPE #define HAVE_MPE 0 #endif #endif #if HAVE_MPE && MPE_VERSION==2 /* This is a hack for old MPE2 API's distributed with MPICH2 < 1.0.6 */ #if (defined(MPICH2) && !defined(MPICH2_NUMVERSION)) || defined(DEINO_MPI) #define MPE_Describe_comm_state(comm,s0,s1,n,c,f) \ MPE_Describe_comm_state(comm,0,s0,s1,n,c,f) #define MPE_Describe_comm_event(comm,e,n,c,f) \ MPE_Describe_comm_event(comm,0,e,n,c,f) #define MPE_Log_comm_event(comm,e,b) \ MPE_Log_comm_event(comm,0,e,b) #endif #endif #include "mpe-log.h" #if HAVE_MPE static char logFileName[256] = { 0 }; #endif static int PyMPELog_Init(void) { int ierr = 0; #if HAVE_MPE if (MPE_Initialized_logging() != 1) ierr = MPE_Init_log(); #endif return ierr; } static int PyMPELog_Finish(void) { int ierr = 0; #if HAVE_MPE const char *filename = logFileName; if (!filename[0]) filename = "Unknown"; if (MPE_Initialized_logging() == 1) ierr = MPE_Finish_log((char *)filename); #endif return ierr; } static int PyMPELog_Initialized(void) { int status = 1; #if HAVE_MPE status = MPE_Initialized_logging(); #else status = 1; #endif /* HAVE_MPE */ return status; } static int PyMPELog_SetFileName(const char filename[]) { int ierr = 0; #if HAVE_MPE if (!filename) return ierr; strncpy(logFileName, filename, sizeof(logFileName)); logFileName[sizeof(logFileName)-1] = 0; #endif return ierr; } static int PyMPELog_SyncClocks(void) { int ierr = 0; #if HAVE_MPE #if MPE_VERSION==2 ierr = MPE_Log_sync_clocks(); #endif #endif /* HAVE_MPE */ return ierr; } static int PyMPELog_Start(void) { int ierr = 0; #if HAVE_MPE ierr = MPE_Start_log(); #endif /* HAVE_MPE */ return ierr; } static int PyMPELog_Stop(void) { int ierr = 0; #if HAVE_MPE ierr = MPE_Stop_log(); #endif /* HAVE_MPE */ return ierr; } #if HAVE_MPE static MPI_Comm 
PyMPELog_GetComm(int commID) { switch (commID) { case 0: return MPI_COMM_NULL; case 1: return MPI_COMM_SELF; case 2: return MPI_COMM_WORLD; default: return MPI_COMM_WORLD; } } #endif static int PyMPELog_NewState(int commID, const char name[], const char color[], const char format[], int stateID[2]) { int ierr = 0; #if HAVE_MPE MPI_Comm comm = PyMPELog_GetComm(commID); if (comm == MPI_COMM_NULL) return 0; #if MPE_VERSION==2 ierr = MPE_Log_get_state_eventIDs(&stateID[0], &stateID[1]); if (ierr == -99999) { ierr = 0; stateID[0] = stateID[1] = -99999; } if (ierr != 0) return ierr; ierr = MPE_Describe_comm_state(comm, stateID[0], stateID[1], name, color, format); #else stateID[0] = MPE_Log_get_event_number(); stateID[1] = MPE_Log_get_event_number(); ierr = MPE_Describe_state(stateID[0], stateID[1], (char *)name, (char *)color); #endif #endif /* HAVE_MPE */ return ierr; } static int PyMPELog_NewEvent(int commID, const char name[], const char color[], const char format[], int eventID[1]) { int ierr = 0; #if HAVE_MPE MPI_Comm comm = PyMPELog_GetComm(commID); if (comm == MPI_COMM_NULL) return 0; #if MPE_VERSION==2 ierr = MPE_Log_get_solo_eventID(&eventID[0]); if (ierr == -99999) { ierr = 0; eventID[0] = -99999; } if (ierr != 0) return ierr; ierr = MPE_Describe_comm_event(comm, eventID[0], name, color, format); #else eventID[0] = MPE_Log_get_event_number(); MPE_Describe_event (eventID[0], (char *)name); #endif #endif /* HAVE_MPE */ return ierr; } static int PyMPELog_LogEvent(int commID, const int eventID, const char bytebuf[]) { int ierr = 0; #if HAVE_MPE MPI_Comm comm = PyMPELog_GetComm(commID); if (comm == MPI_COMM_NULL) return 0; #if MPE_VERSION==2 ierr = MPE_Log_comm_event(comm, eventID, bytebuf); #else ierr = MPE_Log_event(eventID, 0, /*NULL*/0); #endif #endif /* HAVE_MPE */ return ierr; } static int PyMPELog_PackBytes(char bytebuf[], int *position, char tokentype, int count, const void *data) { int ierr = 0; #if HAVE_MPE #if MPE_VERSION==2 if (((unsigned)*position) <= sizeof(MPE_LOG_BYTES)) ierr = MPE_Log_pack(bytebuf, position, tokentype, count, data); #endif #endif /* HAVE_MPE */ return ierr; } static PyMPELogAPI PyMPELog_ = { PyMPELog_Init, PyMPELog_Finish, PyMPELog_Initialized, PyMPELog_SetFileName, PyMPELog_SyncClocks, PyMPELog_Start, PyMPELog_Stop, PyMPELog_NewState, PyMPELog_NewEvent, PyMPELog_LogEvent, PyMPELog_PackBytes }; static PyMPELogAPI *PyMPELog = &PyMPELog_; /* Local Variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/src/MPE/mpe-log.h0000644000000000000000000000140312211706251016231 0ustar 00000000000000#ifndef PyMPI_MPE_LOG_H #define PyMPI_MPE_LOG_H typedef struct PyMPELogAPI { int (*Init)(void); int (*Finish)(void); int (*Initialized)(void); int (*SetFileName)(const char[]); int (*SyncClocks)(void); int (*Start)(void); int (*Stop)(void); int (*NewState)(int, const char[], const char[], const char[], int[2]); int (*NewEvent)(int, const char[], const char[], const char[], int[1]); int (*LogEvent)(int, int, const char[]); int (*PackBytes)(char[], int *, char, int, const void *); } PyMPELogAPI; #endif /*! 
PyMPI_MPE_LOG_H */ mpi4py_1.3.1+hg20131106.orig/src/MPE/mpe-log.pxi0000644000000000000000000000115612211706251016607 0ustar 00000000000000cdef extern from "MPE/mpe-log.h" nogil: ctypedef struct PyMPELogAPI: int (*Init)() nogil int (*Finish)() nogil int (*Initialized)() nogil int (*SetFileName)(char[]) nogil int (*SyncClocks)() nogil int (*Start)() nogil int (*Stop)() nogil int (*NewState)(int, char[], char[], char[], int[2]) nogil int (*NewEvent)(int, char[], char[], char[], int[1]) nogil int (*LogEvent)(int, int, char[]) nogil int (*PackBytes)(char[], int *, char, int, void *) nogil cdef extern from "MPE/mpe-log.c" nogil: PyMPELogAPI *MPELog"(PyMPELog)" mpi4py_1.3.1+hg20131106.orig/src/MPI.c0000644000000000000000000000012012211706251014663 0ustar 00000000000000#define MPICH_SKIP_MPICXX 1 #define OMPI_SKIP_MPICXX 1 #include "mpi4py.MPI.c" mpi4py_1.3.1+hg20131106.orig/src/MPI.pxd0000644000000000000000000000013212211706251015237 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com include "include/mpi4py/MPI.pxd" mpi4py_1.3.1+hg20131106.orig/src/MPI/CAPI.pxi0000644000000000000000000001051112211706251015762 0ustar 00000000000000# ----------------------------------------------------------------------------- # Datatype cdef api object PyMPIDatatype_New(MPI_Datatype arg): cdef Datatype obj = Datatype.__new__(Datatype) obj.ob_mpi = arg return obj cdef api MPI_Datatype* PyMPIDatatype_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- # Status cdef api object PyMPIStatus_New(MPI_Status *arg): cdef Status obj = Status.__new__(Status) if (arg != NULL and arg != MPI_STATUS_IGNORE and arg != MPI_STATUSES_IGNORE): obj.ob_mpi = arg[0] else: pass # XXX should fail ? return obj cdef api MPI_Status* PyMPIStatus_Get(object arg) except? 
NULL: if arg is None: return MPI_STATUS_IGNORE return &(arg).ob_mpi # ----------------------------------------------------------------------------- # Request cdef api object PyMPIRequest_New(MPI_Request arg): cdef Request obj = Request.__new__(Request) obj.ob_mpi = arg return obj cdef api MPI_Request* PyMPIRequest_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- # Message cdef api object PyMPIMessage_New(MPI_Message arg): cdef Message obj = Message.__new__(Message) obj.ob_mpi = arg return obj cdef api MPI_Message* PyMPIMessage_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- # Op cdef api object PyMPIOp_New(MPI_Op arg): cdef Op obj = Op.__new__(Op) obj.ob_mpi = arg return obj cdef api MPI_Op* PyMPIOp_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- # Info cdef api object PyMPIInfo_New(MPI_Info arg): cdef Info obj = Info.__new__(Info) obj.ob_mpi = arg return obj cdef api MPI_Info* PyMPIInfo_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- # Group cdef api object PyMPIGroup_New(MPI_Group arg): cdef Group obj = Group.__new__(Group) obj.ob_mpi = arg return obj cdef api MPI_Group* PyMPIGroup_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- # Comm cdef api object PyMPIComm_New(MPI_Comm arg): cdef type cls = Comm cdef int inter = 0 cdef int topo = MPI_UNDEFINED if arg != MPI_COMM_NULL: CHKERR( MPI_Comm_test_inter(arg, &inter) ) if inter: cls = Intercomm else: CHKERR( MPI_Topo_test(arg, &topo) ) if topo == MPI_UNDEFINED: cls = Intracomm elif topo == MPI_CART: cls = Cartcomm elif topo == MPI_GRAPH: cls = Graphcomm elif topo == MPI_DIST_GRAPH: cls = Distgraphcomm else: cls = Intracomm cdef Comm obj = cls() obj.ob_mpi = arg return obj cdef api MPI_Comm* PyMPIComm_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- # Win cdef api object PyMPIWin_New(MPI_Win arg): cdef Win obj = Win.__new__(Win) obj.ob_mpi = arg return obj cdef api MPI_Win* PyMPIWin_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- # File cdef api object PyMPIFile_New(MPI_File arg): cdef File obj = File.__new__(File) obj.ob_mpi = arg return obj cdef api MPI_File* PyMPIFile_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- # Errhandler cdef api object PyMPIErrhandler_New(MPI_Errhandler arg): cdef Errhandler obj = Errhandler.__new__(Errhandler) obj.ob_mpi = arg return obj cdef api MPI_Errhandler* PyMPIErrhandler_Get(object arg) except NULL: return &(arg).ob_mpi # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/Comm.pyx0000644000000000000000000022014612211706251016170 0ustar 00000000000000# Communicator Comparisons # ------------------------ IDENT = MPI_IDENT #: Groups are identical, contexts are the same CONGRUENT = MPI_CONGRUENT #: Groups are identical, contexts are different SIMILAR = MPI_SIMILAR #: Groups are similar, rank order differs UNEQUAL = MPI_UNEQUAL #: Groups are different # Communicator Topologies # ----------------------- CART 
= MPI_CART #: Cartesian topology GRAPH = MPI_GRAPH #: General graph topology DIST_GRAPH = MPI_DIST_GRAPH #: Distributed graph topology # Graph Communicator Weights # -------------------------- UNWEIGHTED = __UNWEIGHTED__ #:"""Unweighted graph""" WEIGHTS_EMPTY = __WEIGHTS_EMPTY__ #:"""Empty graph weights""" # Communicator Split Type # ----------------------- COMM_TYPE_SHARED = MPI_COMM_TYPE_SHARED cdef class Comm: """ Communicator """ def __cinit__(self, Comm comm=None): self.ob_mpi = MPI_COMM_NULL if comm is not None: self.ob_mpi = comm.ob_mpi def __dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_Comm(&self.ob_mpi) ) def __richcmp__(self, other, int op): if not isinstance(self, Comm): return NotImplemented if not isinstance(other, Comm): return NotImplemented cdef Comm s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_COMM_NULL # Group # ----- def Get_group(self): """ Access the group associated with a communicator """ cdef Group group = Group.__new__(Group) with nogil: CHKERR( MPI_Comm_group(self.ob_mpi, &group.ob_mpi) ) return group property group: """communicator group""" def __get__(self): return self.Get_group() # Communicator Accessors # ---------------------- def Get_size(self): """ Return the number of processes in a communicator """ cdef int size = -1 CHKERR( MPI_Comm_size(self.ob_mpi, &size) ) return size property size: """number of processes in communicator""" def __get__(self): return self.Get_size() def Get_rank(self): """ Return the rank of this process in a communicator """ cdef int rank = MPI_PROC_NULL CHKERR( MPI_Comm_rank(self.ob_mpi, &rank) ) return rank property rank: """rank of this process in communicator""" def __get__(self): return self.Get_rank() @classmethod def Compare(cls, Comm comm1 not None, Comm comm2 not None): """ Compare two communicators """ cdef int flag = MPI_UNEQUAL with nogil: CHKERR( MPI_Comm_compare(comm1.ob_mpi, comm2.ob_mpi, &flag) ) return flag # Communicator Constructors # ------------------------- def Clone(self): """ Clone an existing communicator """ cdef Comm comm = type(self)() with nogil: CHKERR( MPI_Comm_dup(self.ob_mpi, &comm.ob_mpi) ) return comm def Dup(self, Info info=None): """ Duplicate an existing communicator """ cdef MPI_Info cinfo = arg_Info(info) cdef Comm comm = type(self)() if info is None: with nogil: CHKERR( MPI_Comm_dup( self.ob_mpi, &comm.ob_mpi) ) else: with nogil: CHKERR( MPI_Comm_dup_with_info( self.ob_mpi, cinfo, &comm.ob_mpi) ) return comm def Dup_with_info(self, Info info not None): """ Duplicate an existing communicator """ cdef Comm comm = type(self)() with nogil: CHKERR( MPI_Comm_dup_with_info( self.ob_mpi, info.ob_mpi, &comm.ob_mpi) ) return comm def Idup(self): """ Nonblocking duplicate an existing communicator """ cdef Comm comm = type(self)() cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Comm_idup( self.ob_mpi, &comm.ob_mpi, &request.ob_mpi) ) return (comm, request) def Create(self, Group group not None): """ Create communicator from group """ cdef Comm comm = type(self)() with nogil: CHKERR( MPI_Comm_create( self.ob_mpi, group.ob_mpi, &comm.ob_mpi) ) return comm def Create_group(self, Group group not None, int tag=0): """ Create communicator from group """ cdef Comm comm = type(self)() with nogil: CHKERR( MPI_Comm_create_group( self.ob_mpi, group.ob_mpi, tag, &comm.ob_mpi) ) return comm def Split(self, int 
color=0, int key=0): """ Split intracommunicator by color and key """ cdef Comm comm = type(self)() with nogil: CHKERR( MPI_Comm_split( self.ob_mpi, color, key, &comm.ob_mpi) ) return comm def Split_type(self, int split_type, int key=0, Info info=INFO_NULL): """ Split intracommunicator by color and key """ cdef MPI_Info cinfo = arg_Info(info) cdef Comm comm = type(self)() with nogil: CHKERR( MPI_Comm_split_type( self.ob_mpi, split_type, key, cinfo, &comm.ob_mpi) ) return comm # Communicator Destructor # ----------------------- def Free(self): """ Free a communicator """ with nogil: CHKERR( MPI_Comm_free(&self.ob_mpi) ) # Communicator Info # ----------------- def Set_info(self, Info info not None): """ Set new values for the hints associated with a communicator """ with nogil: CHKERR( MPI_Comm_set_info( self.ob_mpi, info.ob_mpi) ) def Get_info(self): """ Return the hints for a communicator that are currently being used by MPI """ cdef Info info = Info.__new__(Info) with nogil: CHKERR( MPI_Comm_get_info( self.ob_mpi, &info.ob_mpi) ) return info # Point to Point communication # ---------------------------- # Blocking Send and Receive Operations # ------------------------------------ def Send(self, buf, int dest=0, int tag=0): """ Blocking send .. note:: This function may block until the message is received. Whether or not `Send` blocks depends on several factors and is implementation dependent """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) with nogil: CHKERR( MPI_Send( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi) ) def Recv(self, buf, int source=0, int tag=0, Status status=None): """ Blocking receive .. note:: This function blocks until the message is received """ cdef _p_msg_p2p rmsg = message_p2p_recv(buf, source) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Recv( rmsg.buf, rmsg.count, rmsg.dtype, source, tag, self.ob_mpi, statusp) ) # Send-Receive # ------------ def Sendrecv(self, sendbuf, int dest=0, int sendtag=0, recvbuf=None, int source=0, int recvtag=0, Status status=None): """ Send and receive a message .. note:: This function is guaranteed not to deadlock in situations where pairs of blocking sends and receives may deadlock. .. caution:: A common mistake when using this function is to mismatch the tags with the source and destination ranks, which can result in deadlock. """ cdef _p_msg_p2p smsg = message_p2p_send(sendbuf, dest) cdef _p_msg_p2p rmsg = message_p2p_recv(recvbuf, source) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Sendrecv( smsg.buf, smsg.count, smsg.dtype, dest, sendtag, rmsg.buf, rmsg.count, rmsg.dtype, source, recvtag, self.ob_mpi, statusp) ) def Sendrecv_replace(self, buf, int dest=0, int sendtag=0, int source=0, int recvtag=0, Status status=None): """ Send and receive a message .. note:: This function is guaranteed not to deadlock in situations where pairs of blocking sends and receives may deadlock. .. caution:: A common mistake when using this function is to mismatch the tags with the source and destination ranks, which can result in deadlock. 
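.. note:: Editor's sketch (assuming NumPy is available) of a ring
   exchange with this call, where each process passes its buffer to
   the right neighbor and receives from the left one::

       import numpy
       from mpi4py import MPI
       comm = MPI.COMM_WORLD
       rank, size = comm.Get_rank(), comm.Get_size()
       buf = numpy.array([rank] * 4, dtype='i')
       comm.Sendrecv_replace(buf, dest=(rank+1) % size,
                             source=(rank-1) % size)
       # buf now holds the rank of the left neighbor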
""" cdef int rank = MPI_PROC_NULL if dest != MPI_PROC_NULL: rank = dest if source != MPI_PROC_NULL: rank = source cdef _p_msg_p2p rmsg = message_p2p_recv(buf, rank) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Sendrecv_replace( rmsg.buf, rmsg.count, rmsg.dtype, dest, sendtag, source, recvtag, self.ob_mpi, statusp) ) # Nonblocking Communications # -------------------------- def Isend(self, buf, int dest=0, int tag=0): """ Nonblocking send """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Isend( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = smsg return request def Irecv(self, buf, int source=0, int tag=0): """ Nonblocking receive """ cdef _p_msg_p2p rmsg = message_p2p_recv(buf, source) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Irecv( rmsg.buf, rmsg.count, rmsg.dtype, source, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = rmsg return request # Probe # ----- def Probe(self, int source=0, int tag=0, Status status=None): """ Blocking test for a message .. note:: This function blocks until the message arrives. """ cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Probe( source, tag, self.ob_mpi, statusp) ) def Iprobe(self, int source=0, int tag=0, Status status=None): """ Nonblocking test for a message """ cdef int flag = 0 cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Iprobe( source, tag, self.ob_mpi, &flag, statusp) ) return flag # Matching Probe # -------------- def Mprobe(self, int source=0, int tag=0, Status status=None): """ Blocking test for a message """ cdef MPI_Message cmessage = MPI_MESSAGE_NULL cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Mprobe( source, tag, self.ob_mpi, &cmessage, statusp) ) cdef Message message = Message.__new__(Message) message.ob_mpi = cmessage return message def Improbe(self, int source=0, int tag=0, Status status=None): """ Nonblocking test for a message """ cdef int flag = 0 cdef MPI_Message cmessage = MPI_MESSAGE_NULL cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Improbe( source, tag, self.ob_mpi, &flag, &cmessage, statusp) ) if flag == 0: return None cdef Message message = Message.__new__(Message) message.ob_mpi = cmessage return message # Persistent Communication # ------------------------ def Send_init(self, buf, int dest=0, int tag=0): """ Create a persistent request for a standard send """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) cdef Prequest request = Prequest.__new__(Prequest) with nogil: CHKERR( MPI_Send_init( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = smsg return request def Recv_init(self, buf, int source=0, int tag=0): """ Create a persistent request for a receive """ cdef _p_msg_p2p rmsg = message_p2p_recv(buf, source) cdef Prequest request = Prequest.__new__(Prequest) with nogil: CHKERR( MPI_Recv_init( rmsg.buf, rmsg.count, rmsg.dtype, source, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = rmsg return request # Communication Modes # ------------------- # Blocking calls def Bsend(self, buf, int dest=0, int tag=0): """ Blocking send in buffered mode """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) with nogil: CHKERR( MPI_Bsend( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi) ) def Ssend(self, buf, int dest=0, int tag=0): """ Blocking send in synchronous mode """ cdef _p_msg_p2p smsg = 
message_p2p_send(buf, dest) with nogil: CHKERR( MPI_Ssend( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi) ) def Rsend(self, buf, int dest=0, int tag=0): """ Blocking send in ready mode """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) with nogil: CHKERR( MPI_Rsend( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi) ) # Nonblocking calls def Ibsend(self, buf, int dest=0, int tag=0): """ Nonblocking send in buffered mode """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ibsend( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = smsg return request def Issend(self, buf, int dest=0, int tag=0): """ Nonblocking send in synchronous mode """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Issend( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = smsg return request def Irsend(self, buf, int dest=0, int tag=0): """ Nonblocking send in ready mode """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Irsend( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = smsg return request # Persistent Requests def Bsend_init(self, buf, int dest=0, int tag=0): """ Persistent request for a send in buffered mode """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) cdef Prequest request = Prequest.__new__(Prequest) with nogil: CHKERR( MPI_Bsend_init( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = smsg return request def Ssend_init(self, buf, int dest=0, int tag=0): """ Persistent request for a send in synchronous mode """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) cdef Prequest request = Prequest.__new__(Prequest) with nogil: CHKERR( MPI_Ssend_init( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = smsg return request def Rsend_init(self, buf, int dest=0, int tag=0): """ Persistent request for a send in ready mode """ cdef _p_msg_p2p smsg = message_p2p_send(buf, dest) cdef Prequest request = Prequest.__new__(Prequest) with nogil: CHKERR( MPI_Rsend_init( smsg.buf, smsg.count, smsg.dtype, dest, tag, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = smsg return request # Collective Communications # ------------------------- # Barrier Synchronization # ----------------------- def Barrier(self): """ Barrier synchronization """ with nogil: CHKERR( MPI_Barrier(self.ob_mpi) ) # Global Communication Functions # ------------------------------ def Bcast(self, buf, int root=0): """ Broadcast a message from one process to all other processes in a group """ cdef _p_msg_cco m = message_cco() m.for_bcast(buf, root, self.ob_mpi) with nogil: CHKERR( MPI_Bcast( m.sbuf, m.scount, m.stype, root, self.ob_mpi) ) def Gather(self, sendbuf, recvbuf, int root=0): """ Gather together values from a group of processes """ cdef _p_msg_cco m = message_cco() m.for_gather(0, sendbuf, recvbuf, root, self.ob_mpi) with nogil: CHKERR( MPI_Gather( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, root, self.ob_mpi) ) def Gatherv(self, sendbuf, recvbuf, int root=0): """ Gather Vector, gather data to one process from all other processes in a group providing different amount of data and displacements at the receiving sides """ cdef _p_msg_cco m = message_cco() m.for_gather(1, 
sendbuf, recvbuf, root, self.ob_mpi) with nogil: CHKERR( MPI_Gatherv( m.sbuf, m.scount, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, root, self.ob_mpi) ) def Scatter(self, sendbuf, recvbuf, int root=0): """ Scatter data from one process to all other processes in a group """ cdef _p_msg_cco m = message_cco() m.for_scatter(0, sendbuf, recvbuf, root, self.ob_mpi) with nogil: CHKERR( MPI_Scatter( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, root, self.ob_mpi) ) def Scatterv(self, sendbuf, recvbuf, int root=0): """ Scatter Vector, scatter data from one process to all other processes in a group providing different amount of data and displacements at the sending side """ cdef _p_msg_cco m = message_cco() m.for_scatter(1, sendbuf, recvbuf, root, self.ob_mpi) with nogil: CHKERR( MPI_Scatterv( m.sbuf, m.scounts, m.sdispls, m.stype, m.rbuf, m.rcount, m.rtype, root, self.ob_mpi) ) def Allgather(self, sendbuf, recvbuf): """ Gather to All, gather data from all processes and distribute it to all other processes in a group """ cdef _p_msg_cco m = message_cco() m.for_allgather(0, sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Allgather( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, self.ob_mpi) ) def Allgatherv(self, sendbuf, recvbuf): """ Gather to All Vector, gather data from all processes and distribute it to all other processes in a group providing different amount of data and displacements """ cdef _p_msg_cco m = message_cco() m.for_allgather(1, sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Allgatherv( m.sbuf, m.scount, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, self.ob_mpi) ) def Alltoall(self, sendbuf, recvbuf): """ All to All Scatter/Gather, send data from all to all processes in a group """ cdef _p_msg_cco m = message_cco() m.for_alltoall(0, sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Alltoall( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, self.ob_mpi) ) def Alltoallv(self, sendbuf, recvbuf): """ All to All Scatter/Gather Vector, send data from all to all processes in a group providing different amount of data and displacements """ cdef _p_msg_cco m = message_cco() m.for_alltoall(1, sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Alltoallv( m.sbuf, m.scounts, m.sdispls, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, self.ob_mpi) ) def Alltoallw(self, sendbuf, recvbuf): """ Generalized All-to-All communication allowing different counts, displacements and datatypes for each partner """ cdef _p_msg_ccow m = message_ccow() m.for_alltoallw(sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Alltoallw( m.sbuf, m.scounts, m.sdispls, m.stypes, m.rbuf, m.rcounts, m.rdispls, m.rtypes, self.ob_mpi) ) # Global Reduction Operations # --------------------------- def Reduce(self, sendbuf, recvbuf, Op op not None=SUM, int root=0): """ Reduce """ cdef _p_msg_cco m = message_cco() m.for_reduce(sendbuf, recvbuf, root, self.ob_mpi) with nogil: CHKERR( MPI_Reduce( m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, root, self.ob_mpi) ) def Allreduce(self, sendbuf, recvbuf, Op op not None=SUM): """ All Reduce """ cdef _p_msg_cco m = message_cco() m.for_allreduce(sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Allreduce( m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, self.ob_mpi) ) def Reduce_scatter_block(self, sendbuf, recvbuf, Op op not None=SUM): """ Reduce-Scatter Block (regular, non-vector version) """ cdef _p_msg_cco m = message_cco() m.for_reduce_scatter_block(sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Reduce_scatter_block( 
m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, self.ob_mpi) ) def Reduce_scatter(self, sendbuf, recvbuf, recvcounts=None, Op op not None=SUM): """ Reduce-Scatter (vector version) """ cdef _p_msg_cco m = message_cco() m.for_reduce_scatter(sendbuf, recvbuf, recvcounts, self.ob_mpi) with nogil: CHKERR( MPI_Reduce_scatter( m.sbuf, m.rbuf, m.rcounts, m.rtype, op.ob_mpi, self.ob_mpi) ) # Nonblocking Collectives # ----------------------- def Ibarrier(self): """ Nonblocking Barrier """ cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ibarrier(self.ob_mpi, &request.ob_mpi) ) return request def Ibcast(self, buf, int root=0): """ Nonblocking Broadcast """ cdef _p_msg_cco m = message_cco() m.for_bcast(buf, root, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ibcast( m.sbuf, m.scount, m.stype, root, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Igather(self, sendbuf, recvbuf, int root=0): """ Nonblocking Gather """ cdef _p_msg_cco m = message_cco() m.for_gather(0, sendbuf, recvbuf, root, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Igather( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, root, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Igatherv(self, sendbuf, recvbuf, int root=0): """ Nonblocking Gather Vector """ cdef _p_msg_cco m = message_cco() m.for_gather(1, sendbuf, recvbuf, root, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Igatherv( m.sbuf, m.scount, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, root, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Iscatter(self, sendbuf, recvbuf, int root=0): """ Nonblocking Scatter """ cdef _p_msg_cco m = message_cco() m.for_scatter(0, sendbuf, recvbuf, root, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Iscatter( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, root, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Iscatterv(self, sendbuf, recvbuf, int root=0): """ Nonblocking Scatter Vector """ cdef _p_msg_cco m = message_cco() m.for_scatter(1, sendbuf, recvbuf, root, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Iscatterv( m.sbuf, m.scounts, m.sdispls, m.stype, m.rbuf, m.rcount, m.rtype, root, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Iallgather(self, sendbuf, recvbuf): """ Nonblocking Gather to All """ cdef _p_msg_cco m = message_cco() m.for_allgather(0, sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Iallgather( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Iallgatherv(self, sendbuf, recvbuf): """ Nonblocking Gather to All Vector """ cdef _p_msg_cco m = message_cco() m.for_allgather(1, sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Iallgatherv( m.sbuf, m.scount, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, self.ob_mpi, &request.ob_mpi) ) return request def Ialltoall(self, sendbuf, recvbuf): """ Nonblocking All to All Scatter/Gather """ cdef _p_msg_cco m = message_cco() m.for_alltoall(0, sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ialltoall( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def 
Ialltoallv(self, sendbuf, recvbuf): """ Nonblocking All to All Scatter/Gather Vector """ cdef _p_msg_cco m = message_cco() m.for_alltoall(1, sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ialltoallv( m.sbuf, m.scounts, m.sdispls, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Ialltoallw(self, sendbuf, recvbuf): """ Nonblocking Generalized All-to-All """ cdef _p_msg_ccow m = message_ccow() m.for_alltoallw(sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ialltoallw( m.sbuf, m.scounts, m.sdispls, m.stypes, m.rbuf, m.rcounts, m.rdispls, m.rtypes, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Ireduce(self, sendbuf, recvbuf, Op op not None=SUM, int root=0): """ Nonblocking Reduce """ cdef _p_msg_cco m = message_cco() m.for_reduce(sendbuf, recvbuf, root, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ireduce( m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, root, self.ob_mpi, &request.ob_mpi) ) return request def Iallreduce(self, sendbuf, recvbuf, Op op not None=SUM): """ Nonblocking All Reduce """ cdef _p_msg_cco m = message_cco() m.for_allreduce(sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Iallreduce( m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, self.ob_mpi, &request.ob_mpi) ) return request def Ireduce_scatter_block(self, sendbuf, recvbuf, Op op not None=SUM): """ Nonblocking Reduce-Scatter Block (regular, non-vector version) """ cdef _p_msg_cco m = message_cco() m.for_reduce_scatter_block(sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ireduce_scatter_block( m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, self.ob_mpi, &request.ob_mpi) ) return request def Ireduce_scatter(self, sendbuf, recvbuf, recvcounts=None, Op op not None=SUM): """ Nonblocking Reduce-Scatter (vector version) """ cdef _p_msg_cco m = message_cco() m.for_reduce_scatter(sendbuf, recvbuf, recvcounts, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ireduce_scatter( m.sbuf, m.rbuf, m.rcounts, m.rtype, op.ob_mpi, self.ob_mpi, &request.ob_mpi) ) return request # Tests # ----- def Is_inter(self): """ Test to see if a comm is an intercommunicator """ cdef int flag = 0 CHKERR( MPI_Comm_test_inter(self.ob_mpi, &flag) ) return flag property is_inter: """is intercommunicator""" def __get__(self): return self.Is_inter() def Is_intra(self): """ Test to see if a comm is an intracommunicator """ return not self.Is_inter() property is_intra: """is intracommunicator""" def __get__(self): return self.Is_intra() def Get_topology(self): """ Determine the type of topology (if any) associated with a communicator """ cdef int topo = MPI_UNDEFINED CHKERR( MPI_Topo_test(self.ob_mpi, &topo) ) return topo property topology: """communicator topology type""" def __get__(self): return self.Get_topology() # Process Creation and Management # ------------------------------- @classmethod def Get_parent(cls): """ Return the parent intercommunicator for this process """ cdef MPI_Comm comm = MPI_COMM_NULL with nogil: CHKERR( MPI_Comm_get_parent(&comm) ) global __COMM_PARENT__ cdef Intercomm parent = __COMM_PARENT__ parent.ob_mpi = comm return parent def Disconnect(self): """ Disconnect from a communicator """ with nogil: CHKERR( MPI_Comm_disconnect( &self.ob_mpi) ) @classmethod 
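# Editor's note -- a hedged sketch: Join() turns an already-connected socket
# into an intercommunicator; ``sock`` below is an illustrative, connected
# ``socket.socket`` object, and the peer process must call Join() as well:
#
#     intercomm = MPI.Comm.Join(sock.fileno())
#     try:
#         pass                     # ... communicate over intercomm ...
#     finally:
#         intercomm.Disconnect()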
def Join(cls, int fd): """ Create a intercommunicator by joining two processes connected by a socket """ cdef Intercomm comm = Intercomm.__new__(Intercomm) with nogil: CHKERR( MPI_Comm_join( fd, &comm.ob_mpi) ) return comm # Attributes # ---------- def Get_attr(self, int keyval): """ Retrieve attribute value by key """ cdef void *attrval = NULL cdef int flag = 0 CHKERR( MPI_Comm_get_attr(self.ob_mpi, keyval, &attrval, &flag) ) if not flag: return None if attrval == NULL: return 0 # MPI-1 predefined attribute keyvals if ((keyval == MPI_TAG_UB) or (keyval == MPI_HOST) or (keyval == MPI_IO) or (keyval == MPI_WTIME_IS_GLOBAL)): return (attrval)[0] # MPI-2 predefined attribute keyvals elif ((keyval == MPI_UNIVERSE_SIZE) or (keyval == MPI_APPNUM) or (keyval == MPI_LASTUSEDCODE)): return (attrval)[0] # user-defined attribute keyval elif keyval in comm_keyval: return attrval else: return PyLong_FromVoidPtr(attrval) def Set_attr(self, int keyval, object attrval): """ Store attribute value associated with a key """ cdef void *ptrval = NULL cdef int incref = 0 if keyval in comm_keyval: ptrval = attrval incref = 1 else: ptrval = PyLong_AsVoidPtr(attrval) incref = 0 CHKERR( MPI_Comm_set_attr(self.ob_mpi, keyval, ptrval) ) if incref: Py_INCREF(attrval) def Delete_attr(self, int keyval): """ Delete attribute value associated with a key """ CHKERR( MPI_Comm_delete_attr(self.ob_mpi, keyval) ) @classmethod def Create_keyval(cls, copy_fn=None, delete_fn=None): """ Create a new attribute key for communicators """ cdef int keyval = MPI_KEYVAL_INVALID cdef MPI_Comm_copy_attr_function *_copy = comm_attr_copy_fn cdef MPI_Comm_delete_attr_function *_del = comm_attr_delete_fn cdef void *extra_state = NULL CHKERR( MPI_Comm_create_keyval(_copy, _del, &keyval, extra_state) ) comm_keyval_new(keyval, copy_fn, delete_fn) return keyval @classmethod def Free_keyval(cls, int keyval): """ Free and attribute key for communicators """ cdef int keyval_save = keyval CHKERR( MPI_Comm_free_keyval (&keyval) ) comm_keyval_del(keyval_save) return keyval # Error handling # -------------- def Get_errhandler(self): """ Get the error handler for a communicator """ cdef Errhandler errhandler = Errhandler.__new__(Errhandler) CHKERR( MPI_Comm_get_errhandler(self.ob_mpi, &errhandler.ob_mpi) ) return errhandler def Set_errhandler(self, Errhandler errhandler not None): """ Set the error handler for a communicator """ CHKERR( MPI_Comm_set_errhandler(self.ob_mpi, errhandler.ob_mpi) ) def Call_errhandler(self, int errorcode): """ Call the error handler installed on a communicator """ CHKERR( MPI_Comm_call_errhandler(self.ob_mpi, errorcode) ) def Abort(self, int errorcode=0): """ Terminate MPI execution environment .. warning:: This is a direct call, use it with care!!!. 
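.. note:: Editor's sketch of the usual pattern, bringing down every
   process when one rank hits an unrecoverable error (``do_work`` is
   an illustrative placeholder)::

       try:
           do_work()
       except Exception:
           MPI.COMM_WORLD.Abort(1)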
""" CHKERR( MPI_Abort(self.ob_mpi, errorcode) ) # Naming Objects # -------------- def Get_name(self): """ Get the print name for this communicator """ cdef char name[MPI_MAX_OBJECT_NAME+1] cdef int nlen = 0 CHKERR( MPI_Comm_get_name(self.ob_mpi, name, &nlen) ) return tompistr(name, nlen) def Set_name(self, name): """ Set the print name for this communicator """ cdef char *cname = NULL name = asmpistr(name, &cname, NULL) CHKERR( MPI_Comm_set_name(self.ob_mpi, cname) ) property name: """communicator name""" def __get__(self): return self.Get_name() def __set__(self, value): self.Set_name(value) # Fortran Handle # -------------- def py2f(self): """ """ return MPI_Comm_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef Comm comm = cls() comm.ob_mpi = MPI_Comm_f2c(arg) return comm # Python Communication # -------------------- # def send(self, obj=None, int dest=0, int tag=0): """Send""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_send(obj, dest, tag, comm) # def bsend(self, obj=None, int dest=0, int tag=0): """Send in buffered mode""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_bsend(obj, dest, tag, comm) # def ssend(self, obj=None, int dest=0, int tag=0): """Send in synchronous mode""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_ssend(obj, dest, tag, comm) # def recv(self, obj=None, int source=0, int tag=0, Status status=None): """Receive""" cdef MPI_Comm comm = self.ob_mpi cdef MPI_Status *statusp = arg_Status(status) return PyMPI_recv(obj, source, tag, comm, statusp) # def sendrecv(self, sendobj=None, int dest=0, int sendtag=0, recvobj=None, int source=0, int recvtag=0, Status status=None): """Send and Receive""" cdef MPI_Comm comm = self.ob_mpi cdef MPI_Status *statusp = arg_Status(status) return PyMPI_sendrecv(sendobj, dest, sendtag, recvobj, source, recvtag, comm, statusp) # def isend(self, obj=None, int dest=0, int tag=0): """Nonblocking send""" cdef MPI_Comm comm = self.ob_mpi cdef Request request = Request.__new__(Request) request.ob_buf = PyMPI_isend(obj, dest, tag, comm, &request.ob_mpi) return request # def ibsend(self, obj=None, int dest=0, int tag=0): """Nonblocking send in buffered mode""" cdef MPI_Comm comm = self.ob_mpi cdef Request request = Request.__new__(Request) request.ob_buf = PyMPI_ibsend(obj, dest, tag, comm, &request.ob_mpi) return request # def issend(self, obj=None, int dest=0, int tag=0): """Nonblocking send in synchronous mode""" cdef MPI_Comm comm = self.ob_mpi cdef Request request = Request.__new__(Request) request.ob_buf = PyMPI_issend(obj, dest, tag, comm, &request.ob_mpi) return request # def irecv(self, obj=None, int dest=0, int tag=0): cdef MPI_Comm comm = self.ob_mpi cdef Request request = Request.__new__(Request) request.ob_buf = PyMPI_irecv(obj, dest, tag, comm, &request.ob_mpi) return request # def mprobe(self, int source=0, int tag=0, Status status=None): cdef MPI_Comm comm = self.ob_mpi cdef MPI_Status *statusp = arg_Status(status) cdef Message message = Message.__new__(Message) message.ob_buf = PyMPI_mprobe(source, tag, comm, &message.ob_mpi, statusp) return message # def improbe(self, int source=0, int tag=0, Status status=None): cdef int flag = 0 cdef MPI_Comm comm = self.ob_mpi cdef MPI_Status *statusp = arg_Status(status) cdef Message message = Message.__new__(Message) message.ob_buf = PyMPI_improbe(source, tag, comm, &flag, &message.ob_mpi, statusp) if flag == 0: return None return message # def barrier(self): "Barrier" cdef MPI_Comm comm = self.ob_mpi return PyMPI_barrier(comm) # def bcast(self, obj=None, int root=0): 
"""Broadcast""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_bcast(obj, root, comm) # def gather(self, sendobj=None, recvobj=None, int root=0): """Gather""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_gather(sendobj, recvobj, root, comm) # def scatter(self, sendobj=None, recvobj=None, int root=0): """Scatter""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_scatter(sendobj, recvobj, root, comm) # def allgather(self, sendobj=None, recvobj=None): """Gather to All""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_allgather(sendobj, recvobj, comm) # def alltoall(self, sendobj=None, recvobj=None): """All to All Scatter/Gather""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_alltoall(sendobj, recvobj, comm) # def reduce(self, sendobj=None, recvobj=None, op=SUM, int root=0): """Reduce""" if op is None: op = SUM cdef MPI_Comm comm = self.ob_mpi return PyMPI_reduce(sendobj, recvobj, op, root, comm) # def allreduce(self, sendobj=None, recvobj=None, op=SUM): """Reduce to All""" if op is None: op = SUM cdef MPI_Comm comm = self.ob_mpi return PyMPI_allreduce(sendobj, recvobj, op, comm) cdef class Intracomm(Comm): """ Intracommunicator """ def __cinit__(self, Comm comm=None): cdef int inter = 0 if self.ob_mpi != MPI_COMM_NULL: CHKERR( MPI_Comm_test_inter(self.ob_mpi, &inter) ) if inter: raise TypeError( "expecting an intracommunicator") # Communicator Constructors # ------------------------- def Create_cart(self, dims, periods=None, bint reorder=False): """ Create cartesian communicator """ cdef int ndims = 0 ndims = len(dims) cdef int *idims = NULL dims = asarray_int(dims, ndims, &idims) cdef int *iperiods = NULL if periods is None: periods = [False] * ndims periods = asarray_int(periods, ndims, &iperiods) # cdef Cartcomm comm = Cartcomm.__new__(Cartcomm) with nogil: CHKERR( MPI_Cart_create( self.ob_mpi, ndims, idims, iperiods, reorder, &comm.ob_mpi) ) return comm def Create_graph(self, index, edges, bint reorder=False): """ Create graph communicator """ cdef int nnodes = 0, *iindex = NULL index = getarray_int(index, &nnodes, &iindex) cdef int nedges = 0, *iedges = NULL edges = getarray_int(edges, &nedges, &iedges) # extension: 'standard' adjacency arrays if iindex[0]==0 and iindex[nnodes-1]==nedges: nnodes -= 1; iindex += 1; # cdef Graphcomm comm = Graphcomm.__new__(Graphcomm) with nogil: CHKERR( MPI_Graph_create( self.ob_mpi, nnodes, iindex, iedges, reorder, &comm.ob_mpi) ) return comm def Create_dist_graph_adjacent(self, sources, destinations, sourceweights=None, destweights=None, Info info=INFO_NULL, bint reorder=False): """ Create distributed graph communicator """ cdef int indegree = 0, *isource = NULL cdef int outdegree = 0, *idest = NULL cdef int *isourceweight = MPI_UNWEIGHTED cdef int *idestweight = MPI_UNWEIGHTED if sources is not None: sources = getarray_int(sources, &indegree, &isource) sourceweights = asarray_weights( sourceweights, indegree, &isourceweight) if destinations is not None: destinations = getarray_int(destinations, &outdegree, &idest) destweights = asarray_weights( destweights, outdegree, &idestweight) cdef MPI_Info cinfo = arg_Info(info) # cdef Distgraphcomm comm = \ Distgraphcomm.__new__(Distgraphcomm) with nogil: CHKERR( MPI_Dist_graph_create_adjacent( self.ob_mpi, indegree, isource, isourceweight, outdegree, idest, idestweight, cinfo, reorder, &comm.ob_mpi) ) return comm def Create_dist_graph(self, sources, degrees, destinations, weights=None, Info info=INFO_NULL, bint reorder=False): """ Create distributed graph communicator """ cdef int nv = 0, ne = 0, i = 0 cdef int 
*isource = NULL, *idegree = NULL, cdef int *idest = NULL, *iweight = MPI_UNWEIGHTED nv = len(sources) sources = asarray_int(sources, nv, &isource) degrees = asarray_int(degrees, nv, &idegree) for i from 0 <= i < nv: ne += idegree[i] destinations = asarray_int(destinations, ne, &idest) weights = asarray_weights(weights, ne, &iweight) cdef MPI_Info cinfo = arg_Info(info) # cdef Distgraphcomm comm = \ Distgraphcomm.__new__(Distgraphcomm) with nogil: CHKERR( MPI_Dist_graph_create( self.ob_mpi, nv, isource, idegree, idest, iweight, cinfo, reorder, &comm.ob_mpi) ) return comm def Create_intercomm(self, int local_leader, Intracomm peer_comm not None, int remote_leader, int tag=0): """ Create intercommunicator """ cdef Intercomm comm = Intercomm.__new__(Intercomm) with nogil: CHKERR( MPI_Intercomm_create( self.ob_mpi, local_leader, peer_comm.ob_mpi, remote_leader, tag, &comm.ob_mpi) ) return comm # Global Reduction Operations # --------------------------- # Inclusive Scan def Scan(self, sendbuf, recvbuf, Op op not None=SUM): """ Inclusive Scan """ cdef _p_msg_cco m = message_cco() m.for_scan(sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Scan( m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, self.ob_mpi) ) # Exclusive Scan def Exscan(self, sendbuf, recvbuf, Op op not None=SUM): """ Exclusive Scan """ cdef _p_msg_cco m = message_cco() m.for_exscan(sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Exscan( m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, self.ob_mpi) ) # Nonblocking def Iscan(self, sendbuf, recvbuf, Op op not None=SUM): """ Inclusive Scan """ cdef _p_msg_cco m = message_cco() m.for_scan(sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Iscan( m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, self.ob_mpi, &request.ob_mpi) ) return request def Iexscan(self, sendbuf, recvbuf, Op op not None=SUM): """ Inclusive Scan """ cdef _p_msg_cco m = message_cco() m.for_exscan(sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Iexscan( m.sbuf, m.rbuf, m.rcount, m.rtype, op.ob_mpi, self.ob_mpi, &request.ob_mpi) ) return request # Python Communication # def scan(self, sendobj=None, recvobj=None, op=SUM): """Inclusive Scan""" if op is None: op = SUM cdef MPI_Comm comm = self.ob_mpi return PyMPI_scan(sendobj, recvobj, op, comm) # def exscan(self, sendobj=None, recvobj=None, op=SUM): """Exclusive Scan""" if op is None: op = SUM cdef MPI_Comm comm = self.ob_mpi return PyMPI_exscan(sendobj, recvobj, op, comm) # Neighborhood Collectives # ------------------------ def Neighbor_allgather(self, sendbuf, recvbuf): """ Neighbor Gather to All """ cdef _p_msg_cco m = message_cco() m.for_neighbor_allgather(0, sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Neighbor_allgather( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, self.ob_mpi) ) def Neighbor_allgatherv(self, sendbuf, recvbuf): """ Neighbor Gather to All Vector """ cdef _p_msg_cco m = message_cco() m.for_neighbor_allgather(1, sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Neighbor_allgatherv( m.sbuf, m.scount, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, self.ob_mpi) ) def Neighbor_alltoall(self, sendbuf, recvbuf): """ Neighbor All-to-All """ cdef _p_msg_cco m = message_cco() m.for_neighbor_alltoall(0, sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Neighbor_alltoall( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, self.ob_mpi) ) def Neighbor_alltoallv(self, sendbuf, recvbuf): """ Neighbor All-to-All Vector """ 
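# Editor's note -- a hedged sketch (NumPy assumed): neighborhood collectives
# use the topology attached to the communicator; on a periodic 1-D cartesian
# communicator every process has exactly two neighbors (left and right):
#
#     import numpy
#     from mpi4py import MPI
#     cart = MPI.COMM_WORLD.Create_cart([MPI.COMM_WORLD.size], periods=[True])
#     sbuf = numpy.array([cart.rank], dtype='i')
#     rbuf = numpy.empty(2, dtype='i')         # one slot per neighbor
#     cart.Neighbor_allgather(sbuf, rbuf)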
cdef _p_msg_cco m = message_cco() m.for_neighbor_alltoall(1, sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Neighbor_alltoallv( m.sbuf, m.scounts, m.sdispls, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, self.ob_mpi) ) def Neighbor_alltoallw(self, sendbuf, recvbuf): """ Neighbor All-to-All Generalized """ cdef _p_msg_ccow m = message_ccow() m.for_neighbor_alltoallw(sendbuf, recvbuf, self.ob_mpi) with nogil: CHKERR( MPI_Neighbor_alltoallw( m.sbuf, m.scounts, m.sdisplsA, m.stypes, m.rbuf, m.rcounts, m.rdisplsA, m.rtypes, self.ob_mpi) ) # Nonblocking Neighborhood Collectives def Ineighbor_allgather(self, sendbuf, recvbuf): """ Nonblocking Neighbor Gather to All """ cdef _p_msg_cco m = message_cco() m.for_neighbor_allgather(0, sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ineighbor_allgather( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Ineighbor_allgatherv(self, sendbuf, recvbuf): """ Nonblocking Neighbor Gather to All Vector """ cdef _p_msg_cco m = message_cco() m.for_neighbor_allgather(1, sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ineighbor_allgatherv( m.sbuf, m.scount, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Ineighbor_alltoall(self, sendbuf, recvbuf): """ Nonblocking Neighbor All-to-All """ cdef _p_msg_cco m = message_cco() m.for_neighbor_alltoall(0, sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ineighbor_alltoall( m.sbuf, m.scount, m.stype, m.rbuf, m.rcount, m.rtype, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Ineighbor_alltoallv(self, sendbuf, recvbuf): """ Nonblocking Neighbor All-to-All Vector """ cdef _p_msg_cco m = message_cco() m.for_neighbor_alltoall(1, sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ineighbor_alltoallv( m.sbuf, m.scounts, m.sdispls, m.stype, m.rbuf, m.rcounts, m.rdispls, m.rtype, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request def Ineighbor_alltoallw(self, sendbuf, recvbuf): """ Nonblocking Neighbor All-to-All Generalized """ cdef _p_msg_ccow m = message_ccow() m.for_neighbor_alltoallw(sendbuf, recvbuf, self.ob_mpi) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Ineighbor_alltoallw( m.sbuf, m.scounts, m.sdisplsA, m.stypes, m.rbuf, m.rcounts, m.rdisplsA, m.rtypes, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = m return request # Python Communication # def neighbor_allgather(self, sendobj=None, recvobj=None): """Neighbor Gather to All""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_neighbor_allgather(sendobj, recvobj, comm) # def neighbor_alltoall(self, sendobj=None, recvobj=None): """Neighbor All to All Scatter/Gather""" cdef MPI_Comm comm = self.ob_mpi return PyMPI_neighbor_alltoall(sendobj, recvobj, comm) # Establishing Communication # -------------------------- # Starting Processes def Spawn(self, command, args=None, int maxprocs=1, Info info=INFO_NULL, int root=0, errcodes=None): """ Spawn instances of a single MPI application """ cdef char *cmd = NULL cdef char **argv = MPI_ARGV_NULL cdef MPI_Info cinfo = arg_Info(info) cdef int *ierrcodes = MPI_ERRCODES_IGNORE # cdef int rank = MPI_UNDEFINED CHKERR( MPI_Comm_rank(self.ob_mpi, &rank) ) cdef tmp1, tmp2 if root == rank: command = asmpistr(command, &cmd, 
NULL) if args is not None: tmp1 = asarray_argv(args, &argv) if errcodes is not None: tmp2 = mkarray_int(maxprocs, &ierrcodes) # cdef Intercomm comm = Intercomm.__new__(Intercomm) with nogil: CHKERR( MPI_Comm_spawn( cmd, argv, maxprocs, cinfo, root, self.ob_mpi, &comm.ob_mpi, ierrcodes) ) # cdef int i=0 if errcodes is not None: errcodes[:] = [ierrcodes[i] for i from 0<=ilen(command) tmp1 = asarray_str(command, count, &cmds) tmp2 = asarray_argvs(args, count, &argvs) tmp3 = asarray_nprocs(maxprocs, count, &imaxprocs) tmp4 = asarray_Info(info, count, &infos) if errcodes is not None: if root != rank: count = len(maxprocs) tmp3 = asarray_nprocs(maxprocs, count, &imaxprocs) for i from 0 <= i < count: n += imaxprocs[i] tmp5 = mkarray_int(n, &ierrcodes) # cdef Intercomm comm = Intercomm.__new__(Intercomm) with nogil: CHKERR( MPI_Comm_spawn_multiple( count, cmds, argvs, imaxprocs, infos, root, self.ob_mpi, &comm.ob_mpi, ierrcodes) ) # cdef Py_ssize_t j=0, p=0 if errcodes is not None: errcodes[:] = [[]] * count for i from 0 <= i < count: n = imaxprocs[i] errcodes[i] = \ [ierrcodes[j] for j from p<=j<(p+n)] p += n # return comm # Server Routines def Accept(self, port_name, Info info=INFO_NULL, int root=0): """ Accept a request to form a new intercommunicator """ cdef char *cportname = NULL cdef MPI_Info cinfo = MPI_INFO_NULL cdef int rank = MPI_UNDEFINED CHKERR( MPI_Comm_rank(self.ob_mpi, &rank) ) if root == rank: port_name = asmpistr(port_name, &cportname, NULL) cinfo = arg_Info(info) cdef Intercomm comm = Intercomm.__new__(Intercomm) with nogil: CHKERR( MPI_Comm_accept( cportname, cinfo, root, self.ob_mpi, &comm.ob_mpi) ) return comm # Client Routines def Connect(self, port_name, Info info=INFO_NULL, int root=0): """ Make a request to form a new intercommunicator """ cdef char *cportname = NULL cdef MPI_Info cinfo = MPI_INFO_NULL cdef int rank = MPI_UNDEFINED CHKERR( MPI_Comm_rank(self.ob_mpi, &rank) ) if root == rank: port_name = asmpistr(port_name, &cportname, NULL) cinfo = arg_Info(info) cdef Intercomm comm = Intercomm.__new__(Intercomm) with nogil: CHKERR( MPI_Comm_connect( cportname, cinfo, root, self.ob_mpi, &comm.ob_mpi) ) return comm cdef class Cartcomm(Intracomm): """ Cartesian topology intracommunicator """ def __cinit__(self, Comm comm=None): cdef int topo = MPI_CART if self.ob_mpi != MPI_COMM_NULL: CHKERR( MPI_Topo_test(self.ob_mpi, &topo) ) if topo != MPI_CART: raise TypeError( "expecting a Cartesian communicator") # Cartesian Inquiry Functions # --------------------------- def Get_dim(self): """ Return number of dimensions """ cdef int dim = 0 CHKERR( MPI_Cartdim_get(self.ob_mpi, &dim) ) return dim property dim: """number of dimensions""" def __get__(self): return self.Get_dim() property ndim: """number of dimensions""" def __get__(self): return self.Get_dim() def Get_topo(self): """ Return information on the cartesian topology """ cdef int ndim = 0 CHKERR( MPI_Cartdim_get(self.ob_mpi, &ndim) ) cdef int *idims = NULL cdef tmp1 = mkarray_int(ndim, &idims) cdef int *iperiods = NULL cdef tmp2 = mkarray_int(ndim, &iperiods) cdef int *icoords = NULL cdef tmp3 = mkarray_int(ndim, &icoords) CHKERR( MPI_Cart_get(self.ob_mpi, ndim, idims, iperiods, icoords) ) cdef int i = 0 cdef object dims = [idims[i] for i from 0 <= i < ndim] cdef object periods = [iperiods[i] for i from 0 <= i < ndim] cdef object coords = [icoords[i] for i from 0 <= i < ndim] return (dims, periods, coords) property topo: """topology information""" def __get__(self): return self.Get_topo() property dims: """dimensions""" def 
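# ----------------------------------------------------------------------
# Usage sketch for Spawn() defined here (the 'worker.py' script name is
# hypothetical; children would obtain the matching intercommunicator
# with MPI.Comm.Get_parent()):
import sys
from mpi4py import MPI

children = MPI.COMM_SELF.Spawn(sys.executable, args=['worker.py'],
                               maxprocs=2)
children.bcast({'task': 'start'}, root=MPI.ROOT)   # parent side of bcast
children.Disconnect()
# ----------------------------------------------------------------------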
__get__(self): return self.Get_topo()[0] property periods: """periodicity""" def __get__(self): return self.Get_topo()[1] property coords: """coordinates""" def __get__(self): return self.Get_topo()[2] # Cartesian Translator Functions # ------------------------------ def Get_cart_rank(self, coords): """ Translate logical coordinates to ranks """ cdef int ndim = 0, *icoords = NULL CHKERR( MPI_Cartdim_get( self.ob_mpi, &ndim) ) coords = asarray_int(coords, ndim, &icoords) cdef int rank = MPI_PROC_NULL CHKERR( MPI_Cart_rank(self.ob_mpi, icoords, &rank) ) return rank def Get_coords(self, int rank): """ Translate ranks to logical coordinates """ cdef int i = 0, ndim = 0, *icoords = NULL CHKERR( MPI_Cartdim_get(self.ob_mpi, &ndim) ) cdef tmp = mkarray_int(ndim, &icoords) CHKERR( MPI_Cart_coords(self.ob_mpi, rank, ndim, icoords) ) cdef object coords = [icoords[i] for i from 0 <= i < ndim] return coords # Cartesian Shift Function # ------------------------ def Shift(self, int direction, int disp): """ Return a tuple (source,dest) of process ranks for data shifting with Comm.Sendrecv() """ cdef int source = MPI_PROC_NULL, dest = MPI_PROC_NULL CHKERR( MPI_Cart_shift(self.ob_mpi, direction, disp, &source, &dest) ) return (source, dest) # Cartesian Partition Function # ---------------------------- def Sub(self, remain_dims): """ Return cartesian communicators that form lower-dimensional subgrids """ cdef int ndim = 0, *iremdims = NULL CHKERR( MPI_Cartdim_get(self.ob_mpi, &ndim) ) remain_dims = asarray_int(remain_dims, ndim, &iremdims) cdef Cartcomm comm = Cartcomm.__new__(Cartcomm) with nogil: CHKERR( MPI_Cart_sub(self.ob_mpi, iremdims, &comm.ob_mpi) ) return comm # Cartesian Low-Level Functions # ----------------------------- def Map(self, dims, periods=None): """ Return an optimal placement for the calling process on the physical machine """ cdef int ndims = 0, *idims = NULL, *iperiods = NULL ndims = len(dims) dims = asarray_int(dims, ndims, &idims) if periods is None: periods = [False] * ndims periods = asarray_int(periods, ndims, &iperiods) cdef int rank = MPI_PROC_NULL CHKERR( MPI_Cart_map(self.ob_mpi, ndims, idims, iperiods, &rank) ) return rank # Cartesian Convenience Function def Compute_dims(int nnodes, dims): """ Return a balanced distribution of processes per coordinate direction """ cdef int i = 0, ndims = 0, *idims = NULL try: ndims = len(dims) except: ndims = dims dims = [0] * ndims cdef tmp = asarray_int(dims, ndims, &idims) CHKERR( MPI_Dims_create(nnodes, ndims, idims) ) dims = [idims[i] for i from 0 <= i < ndims] return dims cdef class Graphcomm(Intracomm): """ General graph topology intracommunicator """ def __cinit__(self, Comm comm=None): cdef int topo = MPI_GRAPH if self.ob_mpi != MPI_COMM_NULL: CHKERR( MPI_Topo_test(self.ob_mpi, &topo) ) if topo != MPI_GRAPH: raise TypeError( "expecting a general graph communicator") # Graph Inquiry Functions # ----------------------- def Get_dims(self): """ Return the number of nodes and edges """ cdef int nnodes = 0, nedges = 0 CHKERR( MPI_Graphdims_get(self.ob_mpi, &nnodes, &nedges) ) return (nnodes, nedges) property dims: """number of nodes and edges""" def __get__(self): return self.Get_topo() property nnodes: """number of nodes""" def __get__(self): return self.Get_topo()[0] property nedges: """number of edges""" def __get__(self): return self.Get_topo()[1] def Get_topo(self): """ Return index and edges """ cdef int nindex = 0, nedges = 0 CHKERR( MPI_Graphdims_get( self.ob_mpi, &nindex, &nedges) ) cdef int *iindex = NULL cdef tmp1 = 
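# ----------------------------------------------------------------------
# Usage sketch combining the Cartesian helpers defined here: Compute_dims
# picks a balanced process grid, Shift yields partners for a halo
# exchange, and Sub extracts lower-dimensional subgrids (Create_cart is
# defined elsewhere in this module):
from mpi4py import MPI

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), [0, 0])     # balanced 2-D grid
cart = comm.Create_cart(dims, periods=[True, True])
source, dest = cart.Shift(direction=0, disp=1)       # for Comm.Sendrecv()
rows = cart.Sub([False, True])                       # keep 2nd dimension
rows.Free(); cart.Free()
# ----------------------------------------------------------------------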
mkarray_int(nindex, &iindex) cdef int *iedges = NULL cdef tmp2 = mkarray_int(nedges, &iedges) CHKERR( MPI_Graph_get(self.ob_mpi, nindex, nedges, iindex, iedges) ) cdef int i = 0 cdef object index = [iindex[i] for i from 0 <= i < nindex] cdef object edges = [iedges[i] for i from 0 <= i < nedges] return (index, edges) property topo: """topology information""" def __get__(self): return self.Get_topo() property index: """index""" def __get__(self): return self.Get_topo()[0] property edges: """edges""" def __get__(self): return self.Get_topo() # Graph Information Functions # --------------------------- def Get_neighbors_count(self, int rank): """ Return number of neighbors of a process """ cdef int nneighbors = 0 CHKERR( MPI_Graph_neighbors_count(self.ob_mpi, rank, &nneighbors) ) return nneighbors property nneighbors: """number of neighbors""" def __get__(self): cdef int rank = self.Get_rank() return self.Get_neighbors_count(rank) def Get_neighbors(self, int rank): """ Return list of neighbors of a process """ cdef int i = 0, nneighbors = 0, *ineighbors = NULL CHKERR( MPI_Graph_neighbors_count( self.ob_mpi, rank, &nneighbors) ) cdef tmp = mkarray_int(nneighbors, &ineighbors) CHKERR( MPI_Graph_neighbors( self.ob_mpi, rank, nneighbors, ineighbors) ) cdef object neighbors = [ineighbors[i] for i from 0 <= i < nneighbors] return neighbors property neighbors: """neighbors""" def __get__(self): cdef int rank = self.Get_rank() return self.Get_neighbors(rank) # Graph Low-Level Functions # ------------------------- def Map(self, index, edges): """ Return an optimal placement for the calling process on the physical machine """ cdef int nnodes = 0, *iindex = NULL index = getarray_int(index, &nnodes, &iindex) cdef int nedges = 0, *iedges = NULL edges = getarray_int(edges, &nedges, &iedges) # extension: accept more 'standard' adjacency arrays if iindex[0]==0 and iindex[nnodes-1]==nedges: nnodes -= 1; iindex += 1; cdef int rank = MPI_PROC_NULL CHKERR( MPI_Graph_map(self.ob_mpi, nnodes, iindex, iedges, &rank) ) return rank cdef class Distgraphcomm(Intracomm): """ Distributed graph topology intracommunicator """ def __cinit__(self, Comm comm=None): cdef int topo = MPI_DIST_GRAPH if self.ob_mpi != MPI_COMM_NULL: CHKERR( MPI_Topo_test(self.ob_mpi, &topo) ) if topo != MPI_DIST_GRAPH: raise TypeError( "expecting a distributed graph communicator") # Topology Inquiry Functions # -------------------------- def Get_dist_neighbors_count(self): """ Return adjacency information for a distributed graph topology """ cdef int indegree = 0 cdef int outdegree = 0 cdef int weighted = 0 CHKERR( MPI_Dist_graph_neighbors_count( self.ob_mpi, &indegree, &outdegree, &weighted) ) return (indegree, outdegree, weighted) def Get_dist_neighbors(self): """ Return adjacency information for a distributed graph topology """ cdef int maxindegree = 0, maxoutdegree = 0, weighted = 0 CHKERR( MPI_Dist_graph_neighbors_count( self.ob_mpi, &maxindegree, &maxoutdegree, &weighted) ) # cdef int *sources = NULL, *destinations = NULL cdef int *sourceweights = MPI_UNWEIGHTED cdef int *destweights = MPI_UNWEIGHTED cdef tmp1, tmp2, tmp3, tmp4 tmp1 = mkarray_int(maxindegree, &sources) tmp2 = mkarray_int(maxoutdegree, &destinations) cdef int i = 0 if weighted: tmp3 = mkarray_int(maxindegree, &sourceweights) for i from 0 <= i < maxindegree: sourceweights[i] = 1 tmp4 = mkarray_int(maxoutdegree, &destweights) for i from 0 <= i < maxoutdegree: destweights[i] = 1 # CHKERR( MPI_Dist_graph_neighbors( self.ob_mpi, maxindegree, sources, sourceweights, maxoutdegree, 
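# ----------------------------------------------------------------------
# Usage sketch for the distributed-graph queries defined here (assumes an
# MPI-2.2 implementation; Create_dist_graph_adjacent is defined elsewhere
# in this module).  Each rank declares a ring neighborhood and reads the
# adjacency information back:
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
ring = comm.Create_dist_graph_adjacent(sources=[(rank - 1) % size],
                                       destinations=[(rank + 1) % size])
indegree, outdegree, weighted = ring.Get_dist_neighbors_count()
sources, destinations, weights = ring.Get_dist_neighbors()
ring.Free()
# ----------------------------------------------------------------------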
destinations, destweights) ) # cdef object src = [sources[i] for i from 0 <= i < maxindegree] cdef object dst = [destinations[i] for i from 0 <= i < maxoutdegree] if not weighted: return (src, dst, None) # cdef object sw = [sourceweights[i] for i from 0 <= i < maxindegree] cdef object dw = [destweights[i] for i from 0 <= i < maxoutdegree] return (src, dst, (sw, dw)) cdef class Intercomm(Comm): """ Intercommunicator """ def __cinit__(self, Comm comm=None): cdef int inter = 1 if self.ob_mpi != MPI_COMM_NULL: CHKERR( MPI_Comm_test_inter(self.ob_mpi, &inter) ) if not inter: raise TypeError( "expecting an intercommunicator") # Intercommunicator Accessors # --------------------------- def Get_remote_group(self): """ Access the remote group associated with the inter-communicator """ cdef Group group = Group.__new__(Group) CHKERR( MPI_Comm_remote_group(self.ob_mpi, &group.ob_mpi) ) return group property remote_group: """remote group""" def __get__(self): return self.Get_remote_group() def Get_remote_size(self): """ Intercommunicator remote size """ cdef int size = -1 CHKERR( MPI_Comm_remote_size(self.ob_mpi, &size) ) return size property remote_size: """number of remote processes""" def __get__(self): return self.Get_remote_size() # Communicator Constructors # ------------------------- def Merge(self, bint high=False): """ Merge intercommunicator """ cdef Intracomm comm = Intracomm.__new__(Intracomm) with nogil: CHKERR( MPI_Intercomm_merge( self.ob_mpi, high, &comm.ob_mpi) ) return comm cdef Comm __COMM_NULL__ = new_Comm ( MPI_COMM_NULL ) cdef Intracomm __COMM_SELF__ = new_Intracomm ( MPI_COMM_SELF ) cdef Intracomm __COMM_WORLD__ = new_Intracomm ( MPI_COMM_WORLD ) cdef Intercomm __COMM_PARENT__ = new_Intercomm ( MPI_COMM_NULL ) # Predefined communicators # ------------------------ COMM_NULL = __COMM_NULL__ #: Null communicator handle COMM_SELF = __COMM_SELF__ #: Self communicator handle COMM_WORLD = __COMM_WORLD__ #: World communicator handle # Buffer Allocation and Usage # --------------------------- BSEND_OVERHEAD = MPI_BSEND_OVERHEAD #: Upper bound of memory overhead for sending in buffered mode def Attach_buffer(memory): """ Attach a user-provided buffer for sending in buffered mode """ cdef void *base = NULL cdef int size = 0 attach_buffer(memory, &base, &size) with nogil: CHKERR( MPI_Buffer_attach(base, size) ) def Detach_buffer(): """ Remove an existing attached buffer """ cdef void *base = NULL cdef int size = 0 with nogil: CHKERR( MPI_Buffer_detach(&base, &size) ) return detach_buffer(base, size) # -------------------------------------------------------------------- # [5] Process Creation and Management # -------------------------------------------------------------------- # [5.4.2] Server Routines # ----------------------- def Open_port(Info info=INFO_NULL): """ Return an address that can be used to establish connections between groups of MPI processes """ cdef MPI_Info cinfo = arg_Info(info) cdef char cportname[MPI_MAX_PORT_NAME+1] with nogil: CHKERR( MPI_Open_port(cinfo, cportname) ) cportname[MPI_MAX_PORT_NAME] = 0 # just in case return mpistr(cportname) def Close_port(port_name): """ Close a port """ cdef char *cportname = NULL port_name = asmpistr(port_name, &cportname, NULL) with nogil: CHKERR( MPI_Close_port(cportname) ) # [5.4.4] Name Publishing # ----------------------- def Publish_name(service_name, Info info, port_name): """ Publish a service name """ cdef char *csrvcname = NULL service_name = asmpistr(service_name, &csrvcname, NULL) cdef char *cportname = NULL port_name = 
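# ----------------------------------------------------------------------
# Usage sketch for Attach_buffer()/Detach_buffer() defined here: attach a
# user buffer so that Comm.bsend() (buffered send of pickled objects,
# defined elsewhere in this module) has room for outgoing messages:
from mpi4py import MPI

comm = MPI.COMM_WORLD
buf = bytearray((1 << 16) + MPI.BSEND_OVERHEAD)
MPI.Attach_buffer(buf)
if comm.Get_size() > 1:
    if comm.Get_rank() == 0:
        comm.bsend({'hello': 'world'}, dest=1, tag=7)
    elif comm.Get_rank() == 1:
        message = comm.recv(source=0, tag=7)
MPI.Detach_buffer()   # blocks until buffered messages are delivered
# ----------------------------------------------------------------------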
asmpistr(port_name, &cportname, NULL) cdef MPI_Info cinfo = arg_Info(info) with nogil: CHKERR( MPI_Publish_name(csrvcname, cinfo, cportname) ) def Unpublish_name(service_name, Info info, port_name): """ Unpublish a service name """ cdef char *csrvcname = NULL service_name = asmpistr(service_name, &csrvcname, NULL) cdef char *cportname = NULL port_name = asmpistr(port_name, &cportname, NULL) cdef MPI_Info cinfo = arg_Info(info) with nogil: CHKERR( MPI_Unpublish_name(csrvcname, cinfo, cportname) ) def Lookup_name(service_name, Info info=INFO_NULL): """ Lookup a port name given a service name """ cdef char *csrvcname = NULL service_name = asmpistr(service_name, &csrvcname, NULL) cdef MPI_Info cinfo = arg_Info(info) cdef char cportname[MPI_MAX_PORT_NAME+1] with nogil: CHKERR( MPI_Lookup_name(csrvcname, cinfo, cportname) ) cportname[MPI_MAX_PORT_NAME] = 0 # just in case return mpistr(cportname) mpi4py_1.3.1+hg20131106.orig/src/MPI/Datatype.pyx0000644000000000000000000010610512211706251017046 0ustar 00000000000000# Storage order for arrays # ------------------------ ORDER_C = MPI_ORDER_C #: C order (a.k.a. row major) ORDER_FORTRAN = MPI_ORDER_FORTRAN #: Fortran order (a.k.a. column major) ORDER_F = MPI_ORDER_FORTRAN #: Convenience alias for ORDER_FORTRAN # Type classes for Fortran datatype matching # ------------------------------------------ TYPECLASS_INTEGER = MPI_TYPECLASS_INTEGER TYPECLASS_REAL = MPI_TYPECLASS_REAL TYPECLASS_COMPLEX = MPI_TYPECLASS_COMPLEX # Type of distributions (HPF-like arrays) # --------------------------------------- DISTRIBUTE_NONE = MPI_DISTRIBUTE_NONE #: Dimension not distributed DISTRIBUTE_BLOCK = MPI_DISTRIBUTE_BLOCK #: Block distribution DISTRIBUTE_CYCLIC = MPI_DISTRIBUTE_CYCLIC #: Cyclic distribution DISTRIBUTE_DFLT_DARG = MPI_DISTRIBUTE_DFLT_DARG #: Default distribution # Combiner values for datatype decoding # ------------------------------------- COMBINER_NAMED = MPI_COMBINER_NAMED COMBINER_DUP = MPI_COMBINER_DUP COMBINER_CONTIGUOUS = MPI_COMBINER_CONTIGUOUS COMBINER_VECTOR = MPI_COMBINER_VECTOR COMBINER_HVECTOR = MPI_COMBINER_HVECTOR COMBINER_HVECTOR_INTEGER = MPI_COMBINER_HVECTOR_INTEGER #: from Fortran call COMBINER_INDEXED = MPI_COMBINER_INDEXED COMBINER_HINDEXED_INTEGER = MPI_COMBINER_HINDEXED_INTEGER #: from Fortran call COMBINER_HINDEXED = MPI_COMBINER_HINDEXED COMBINER_INDEXED_BLOCK = MPI_COMBINER_INDEXED_BLOCK COMBINER_HINDEXED_BLOCK = MPI_COMBINER_HINDEXED_BLOCK COMBINER_STRUCT = MPI_COMBINER_STRUCT COMBINER_STRUCT_INTEGER = MPI_COMBINER_STRUCT_INTEGER #: from Fortran call COMBINER_SUBARRAY = MPI_COMBINER_SUBARRAY COMBINER_DARRAY = MPI_COMBINER_DARRAY COMBINER_RESIZED = MPI_COMBINER_RESIZED COMBINER_F90_REAL = MPI_COMBINER_F90_REAL COMBINER_F90_COMPLEX = MPI_COMBINER_F90_COMPLEX COMBINER_F90_INTEGER = MPI_COMBINER_F90_INTEGER cdef class Datatype: """ Datatype """ def __cinit__(self, Datatype datatype=None): self.ob_mpi = MPI_DATATYPE_NULL if datatype is not None: self.ob_mpi = datatype.ob_mpi def __dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_Datatype(&self.ob_mpi) ) def __richcmp__(self, other, int op): if not isinstance(self, Datatype): return NotImplemented if not isinstance(other, Datatype): return NotImplemented cdef Datatype s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_DATATYPE_NULL # Datatype Accessors # ------------------ def Get_size(self): """ Return the 
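# ----------------------------------------------------------------------
# Usage sketch, server side of the connection routines defined here: open
# a port, publish it under a hypothetical service name, accept one client,
# then clean up.  A client would call MPI.Lookup_name('ping-service') and
# then Connect(); a running name server is assumed for Publish_name() and
# Lookup_name() to succeed:
from mpi4py import MPI

port = MPI.Open_port()
MPI.Publish_name('ping-service', MPI.INFO_NULL, port)
client = MPI.COMM_WORLD.Accept(port)     # blocks until a client connects
client.Disconnect()
MPI.Unpublish_name('ping-service', MPI.INFO_NULL, port)
MPI.Close_port(port)
# ----------------------------------------------------------------------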
number of bytes occupied by entries in the datatype """ cdef MPI_Count size = 0 CHKERR( MPI_Type_size_x(self.ob_mpi, &size) ) return size property size: """size (in bytes)""" def __get__(self): cdef MPI_Count size = 0 CHKERR( MPI_Type_size_x(self.ob_mpi, &size) ) return size def Get_extent(self): """ Return lower bound and extent of datatype """ cdef MPI_Count lb = 0, extent = 0 CHKERR( MPI_Type_get_extent_x(self.ob_mpi, &lb, &extent) ) return (lb, extent) property extent: """extent""" def __get__(self): cdef MPI_Count lb = 0, extent = 0 CHKERR( MPI_Type_get_extent_x(self.ob_mpi, &lb, &extent) ) return extent property lb: """lower bound""" def __get__(self): cdef MPI_Count lb = 0, extent = 0 CHKERR( MPI_Type_get_extent_x(self.ob_mpi, &lb, &extent) ) return lb property ub: """upper bound""" def __get__(self): cdef MPI_Count lb = 0, extent = 0 CHKERR( MPI_Type_get_extent_x(self.ob_mpi, &lb, &extent) ) return lb + extent # Datatype Constructors # --------------------- def Dup(self): """ Duplicate a datatype """ cdef Datatype datatype = type(self)() CHKERR( MPI_Type_dup(self.ob_mpi, &datatype.ob_mpi) ) return datatype Create_dup = Dup #: convenience alias def Create_contiguous(self, int count): """ Create a contiguous datatype """ cdef Datatype datatype = type(self)() CHKERR( MPI_Type_contiguous(count, self.ob_mpi, &datatype.ob_mpi) ) return datatype def Create_vector(self, int count, int blocklength, int stride): """ Create a vector (strided) datatype """ cdef Datatype datatype = type(self)() CHKERR( MPI_Type_vector(count, blocklength, stride, self.ob_mpi, &datatype.ob_mpi) ) return datatype def Create_hvector(self, int count, int blocklength, Aint stride): """ Create a vector (strided) datatype """ cdef Datatype datatype = type(self)() CHKERR( MPI_Type_create_hvector(count, blocklength, stride, self.ob_mpi, &datatype.ob_mpi) ) return datatype def Create_indexed(self, blocklengths, displacements): """ Create an indexed datatype """ cdef int count = 0, *iblen = NULL, *idisp = NULL blocklengths = getarray_int(blocklengths, &count, &iblen) displacements = chkarray_int(displacements, count, &idisp) # cdef Datatype datatype = type(self)() CHKERR( MPI_Type_indexed(count, iblen, idisp, self.ob_mpi, &datatype.ob_mpi) ) return datatype def Create_hindexed(self, blocklengths, displacements): """ Create an indexed datatype with displacements in bytes """ cdef int count = 0, *iblen = NULL blocklengths = getarray_int(blocklengths, &count, &iblen) cdef MPI_Aint *idisp = NULL displacements = asarray_Aint(displacements, count, &idisp) # cdef Datatype datatype = type(self)() CHKERR( MPI_Type_create_hindexed(count, iblen, idisp, self.ob_mpi, &datatype.ob_mpi) ) return datatype def Create_indexed_block(self, int blocklength, displacements): """ Create an indexed datatype with constant-sized blocks """ cdef int count = 0, *idisp = NULL displacements = getarray_int(displacements, &count, &idisp) # cdef Datatype datatype = type(self)() CHKERR( MPI_Type_create_indexed_block(count, blocklength, idisp, self.ob_mpi, &datatype.ob_mpi) ) return datatype def Create_hindexed_block(self, int blocklength, displacements): """ Create an indexed datatype with constant-sized blocks and displacements in bytes """ cdef int count = 0 cdef MPI_Aint *idisp = NULL count = len(displacements) # XXX Overflow ? 
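# ----------------------------------------------------------------------
# Usage sketch for the datatype constructors and accessors defined here:
# build a strided vector type and inspect it through size and extent:
from mpi4py import MPI

column = MPI.DOUBLE.Create_vector(count=5, blocklength=1, stride=2)
column.Commit()
assert column.size == 5 * MPI.DOUBLE.size      # bytes actually occupied
lb, extent = column.Get_extent()               # extent == 9 * MPI.DOUBLE.size
column.Free()
# ----------------------------------------------------------------------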
displacements = asarray_Aint(displacements, count, &idisp) # cdef Datatype datatype = type(self)() CHKERR( MPI_Type_create_hindexed_block(count, blocklength, idisp, self.ob_mpi, &datatype.ob_mpi) ) return datatype @classmethod def Create_struct(cls, blocklengths, displacements, datatypes): """ Create an datatype from a general set of block sizes, displacements and datatypes """ cdef int count = 0, *iblen = NULL blocklengths = getarray_int(blocklengths, &count, &iblen) cdef MPI_Aint *idisp = NULL displacements = asarray_Aint(displacements, count, &idisp) cdef MPI_Datatype *ptype = NULL datatypes = asarray_Datatype(datatypes, count, &ptype) # cdef Datatype datatype = cls() CHKERR( MPI_Type_create_struct(count, iblen, idisp, ptype, &datatype.ob_mpi) ) return datatype # Subarray Datatype Constructor # ----------------------------- def Create_subarray(self, sizes, subsizes, starts, int order=ORDER_C): """ Create a datatype for a subarray of a regular, multidimensional array """ cdef int ndims = 0, *isizes = NULL cdef int *isubsizes = NULL, *istarts = NULL sizes = getarray_int(sizes, &ndims, &isizes ) subsizes = chkarray_int(subsizes, ndims, &isubsizes) starts = chkarray_int(starts, ndims, &istarts ) cdef int iorder = MPI_ORDER_C if order is not None: iorder = order # cdef Datatype datatype = type(self)() CHKERR( MPI_Type_create_subarray(ndims, isizes, isubsizes, istarts, iorder, self.ob_mpi, &datatype.ob_mpi) ) return datatype # Distributed Array Datatype Constructor # -------------------------------------- def Create_darray(self, int size, int rank, gsizes, distribs, dargs, psizes, int order=ORDER_C): """ Create a datatype representing an HPF-like distributed array on Cartesian process grids """ cdef int ndims = 0, *igsizes = NULL cdef int *idistribs = NULL, *idargs = NULL, *ipsizes = NULL gsizes = getarray_int(gsizes, &ndims, &igsizes ) distribs = chkarray_int(distribs, ndims, &idistribs ) dargs = chkarray_int(dargs, ndims, &idargs ) psizes = chkarray_int(psizes, ndims, &ipsizes ) # cdef Datatype datatype = type(self)() CHKERR( MPI_Type_create_darray(size, rank, ndims, igsizes, idistribs, idargs, ipsizes, order, self.ob_mpi, &datatype.ob_mpi) ) return datatype # Parametrized and size-specific Fortran Datatypes # ------------------------------------------------ @classmethod def Create_f90_integer(cls, int r): """ Return a bounded integer datatype """ cdef Datatype datatype = cls() CHKERR( MPI_Type_create_f90_integer(r, &datatype.ob_mpi) ) return datatype @classmethod def Create_f90_real(cls, int p, int r): """ Return a bounded real datatype """ cdef Datatype datatype = cls() CHKERR( MPI_Type_create_f90_real(p, r, &datatype.ob_mpi) ) return datatype @classmethod def Create_f90_complex(cls, int p, int r): """ Return a bounded complex datatype """ cdef Datatype datatype = cls() CHKERR( MPI_Type_create_f90_complex(p, r, &datatype.ob_mpi) ) return datatype @classmethod def Match_size(cls, int typeclass, int size): """ Find a datatype matching a specified size in bytes """ cdef Datatype datatype = cls() CHKERR( MPI_Type_match_size(typeclass, size, &datatype.ob_mpi) ) return datatype # Use of Derived Datatypes # ------------------------ def Commit(self): """ Commit the datatype """ CHKERR( MPI_Type_commit(&self.ob_mpi) ) return self def Free(self): """ Free the datatype """ CHKERR( MPI_Type_free(&self.ob_mpi) ) # Datatype Resizing # ----------------- def Create_resized(self, Aint lb, Aint extent): """ Create a datatype with a new lower bound and extent """ cdef Datatype datatype = type(self)() 
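# ----------------------------------------------------------------------
# Usage sketch for Create_subarray() defined here: a datatype selecting a
# 2x2 block starting at row 1, column 1 of a 4x4 C-ordered array of ints,
# usable e.g. as an MPI-IO filetype or for halo packing:
from mpi4py import MPI

block = MPI.INT.Create_subarray(sizes=[4, 4], subsizes=[2, 2],
                                starts=[1, 1], order=MPI.ORDER_C)
block.Commit()
# ... use `block` in Send/Recv calls or as a file view ...
block.Free()
# ----------------------------------------------------------------------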
CHKERR( MPI_Type_create_resized(self.ob_mpi, lb, extent, &datatype.ob_mpi) ) return datatype Resized = Create_resized #: compatibility alias def Get_true_extent(self): """ Return the true lower bound and extent of a datatype """ cdef MPI_Count lb = 0, extent = 0 CHKERR( MPI_Type_get_true_extent_x(self.ob_mpi, &lb, &extent) ) return (lb, extent) property true_extent: """true extent""" def __get__(self): cdef MPI_Count lb = 0, extent = 0 CHKERR( MPI_Type_get_true_extent_x(self.ob_mpi, &lb, &extent) ) return extent property true_lb: """true lower bound""" def __get__(self): cdef MPI_Count lb = 0, extent = 0 CHKERR( MPI_Type_get_true_extent_x(self.ob_mpi, &lb, &extent) ) return lb property true_ub: """true upper bound""" def __get__(self): cdef MPI_Count lb = 0, extent = 0 CHKERR( MPI_Type_get_true_extent_x(self.ob_mpi, &lb, &extent) ) return lb + extent # Decoding a Datatype # ------------------- def Get_envelope(self): """ Return information on the number and type of input arguments used in the call that created a datatype """ cdef int ni = 0, na = 0, nd = 0, combiner = MPI_UNDEFINED CHKERR( MPI_Type_get_envelope(self.ob_mpi, &ni, &na, &nd, &combiner) ) return (ni, na, nd, combiner) def Get_contents(self): """ Retrieve the actual arguments used in the call that created a datatype """ cdef int ni = 0, na = 0, nd = 0, combiner = MPI_UNDEFINED CHKERR( MPI_Type_get_envelope(self.ob_mpi, &ni, &na, &nd, &combiner) ) cdef int *i = NULL cdef MPI_Aint *a = NULL cdef MPI_Datatype *d = NULL cdef tmp1 = allocate(ni, sizeof(int), &i) cdef tmp2 = allocate(na, sizeof(MPI_Aint), &a) cdef tmp3 = allocate(nd, sizeof(MPI_Datatype), &d) CHKERR( MPI_Type_get_contents(self.ob_mpi, ni, na, nd, i, a, d) ) cdef int k = 0 cdef object integers = [i[k] for k from 0 <= k < ni] cdef object addresses = [a[k] for k from 0 <= k < na] cdef object datatypes = [new_Datatype(d[k]) for k from 0 <= k < nd] return (integers, addresses, datatypes) def decode(self): """ Convenience method for decoding a datatype """ # get the datatype envelope cdef int ni = 0, na = 0, nd = 0, combiner = MPI_UNDEFINED CHKERR( MPI_Type_get_envelope(self.ob_mpi, &ni, &na, &nd, &combiner) ) # return self immediately for named datatypes if combiner == MPI_COMBINER_NAMED: return self # get the datatype contents cdef int *i = NULL cdef MPI_Aint *a = NULL cdef MPI_Datatype *d = NULL cdef tmp1 = allocate(ni, sizeof(int), &i) cdef tmp2 = allocate(na, sizeof(MPI_Aint), &a) cdef tmp3 = allocate(nd, sizeof(MPI_Datatype), &d) CHKERR( MPI_Type_get_contents(self.ob_mpi, ni, na, nd, i, a, d) ) # manage in advance the contained datatypes cdef int k = 0, s1, e1, s2, e2, s3, e3, s4, e4 cdef object oldtype = None if (combiner == MPI_COMBINER_STRUCT or combiner == MPI_COMBINER_STRUCT_INTEGER): oldtype = [new_Datatype(d[k]) for k from 0 <= k < nd] elif (combiner != MPI_COMBINER_F90_INTEGER and combiner != MPI_COMBINER_F90_REAL and combiner != MPI_COMBINER_F90_COMPLEX): oldtype = new_Datatype(d[0]) # dispatch depending on the combiner value if combiner == MPI_COMBINER_DUP: return (oldtype, ('DUP'), {}) elif combiner == MPI_COMBINER_CONTIGUOUS: return (oldtype, ('CONTIGUOUS'), {('count') : i[0]}) elif combiner == MPI_COMBINER_VECTOR: return (oldtype, ('VECTOR'), {('count') : i[0], ('blocklength') : i[1], ('stride') : i[2]}) elif (combiner == MPI_COMBINER_HVECTOR or combiner == MPI_COMBINER_HVECTOR_INTEGER): return (oldtype, ('HVECTOR'), {('count') : i[0], ('blocklength') : i[1], ('stride') : a[0]}) elif combiner == MPI_COMBINER_INDEXED: s1 = 1; e1 = i[0] s2 = i[0]+1; e2 = 
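# ----------------------------------------------------------------------
# Usage sketch for the decoding machinery defined here: decode() reports
# the base type, the combiner name and the constructor arguments of a
# derived datatype, so it can be inspected programmatically:
from mpi4py import MPI

vtype = MPI.DOUBLE.Create_vector(3, 1, 4)
basetype, combiner, kwargs = vtype.decode()
# combiner == 'VECTOR'; kwargs == {'count': 3, 'blocklength': 1, 'stride': 4}
ni, na, nd, comb = vtype.Get_envelope()
vtype.Free()
# ----------------------------------------------------------------------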
2*i[0] return (oldtype, ('INDEXED'), {('blocklengths') : [i[k] for k from s1 <= k <= e1], ('displacements') : [i[k] for k from s2 <= k <= e2]}) elif (combiner == MPI_COMBINER_HINDEXED or combiner == MPI_COMBINER_HINDEXED_INTEGER): s1 = 1; e1 = i[0] s2 = 0; e2 = i[0]-1 return (oldtype, ('HINDEXED'), {('blocklengths') : [i[k] for k from s1 <= k <= e1], ('displacements') : [a[k] for k from s2 <= k <= e2]}) elif combiner == MPI_COMBINER_INDEXED_BLOCK: s2 = 2; e2 = i[0]+1 return (oldtype, ('INDEXED_BLOCK'), {('blocklength') : i[1], ('displacements') : [i[k] for k from s2 <= k <= e2]}) elif combiner == MPI_COMBINER_HINDEXED_BLOCK: s2 = 0; e2 = i[0]-1 return (oldtype, ('HINDEXED_BLOCK'), {('blocklength') : i[1], ('displacements') : [a[k] for k from s2 <= k <= e2]}) elif (combiner == MPI_COMBINER_STRUCT or combiner == MPI_COMBINER_STRUCT_INTEGER): s1 = 1; e1 = i[0] s2 = 0; e2 = i[0]-1 return (Datatype, ('STRUCT'), {('blocklengths') : [i[k] for k from s1 <= k <= e1], ('displacements') : [a[k] for k from s2 <= k <= e2], ('datatypes') : oldtype}) elif combiner == MPI_COMBINER_SUBARRAY: s1 = 1; e1 = i[0] s2 = i[0]+1; e2 = 2*i[0] s3 = 2*i[0]+1; e3 = 3*i[0] return (oldtype, ('SUBARRAY'), {('sizes') : [i[k] for k from s1 <= k <= e1], ('subsizes') : [i[k] for k from s2 <= k <= e2], ('starts') : [i[k] for k from s3 <= k <= e3], ('order') : i[3*i[0]+1]}) elif combiner == MPI_COMBINER_DARRAY: s1 = 3; e1 = i[2]+2 s2 = i[2]+3; e2 = 2*i[2]+2 s3 = 2*i[2]+3; e3 = 3*i[2]+2 s4 = 3*i[2]+3; e4 = 4*i[2]+2 return (oldtype, ('DARRAY'), {('size') : i[0], ('rank') : i[1], ('gsizes') : [i[k] for k from s1 <= k <= e1], ('distribs') : [i[k] for k from s2 <= k <= e2], ('dargs') : [i[k] for k from s3 <= k <= e3], ('psizes') : [i[k] for k from s4 <= k <= e4], ('order') : i[4*i[2]+3]}) elif combiner == MPI_COMBINER_RESIZED: return (oldtype, ('RESIZED'), {('lb') : a[0], ('extent') : a[1]}) elif combiner == MPI_COMBINER_F90_INTEGER: return (Datatype, ('F90_INTEGER'), {('r') : i[0]}) elif combiner == MPI_COMBINER_F90_REAL: return (Datatype, ('F90_REAL'), {('p') : i[0], ('r') : i[1]}) elif combiner == MPI_COMBINER_F90_COMPLEX: return (Datatype, ('F90_COMPLEX'), {('p') : i[0], ('r') : i[1]}) # Pack and Unpack # --------------- def Pack(self, inbuf, outbuf, int position, Comm comm not None): """ Pack into contiguous memory according to datatype. """ cdef MPI_Aint lb = 0, extent = 0 CHKERR( MPI_Type_get_extent(self.ob_mpi, &lb, &extent) ) # cdef void *ibptr = NULL, *obptr = NULL cdef MPI_Aint iblen = 0, oblen = 0 cdef ob1 = getbuffer_r(inbuf, &ibptr, &iblen) cdef ob2 = getbuffer_w(outbuf, &obptr, &oblen) cdef int icount = (iblen/extent), osize = oblen # CHKERR( MPI_Pack(ibptr, icount, self.ob_mpi, obptr, osize, &position, comm.ob_mpi) ) return position def Unpack(self, inbuf, int position, outbuf, Comm comm not None): """ Unpack from contiguous memory according to datatype. """ cdef MPI_Aint lb = 0, extent = 0 CHKERR( MPI_Type_get_extent(self.ob_mpi, &lb, &extent) ) # cdef void *ibptr = NULL, *obptr = NULL cdef MPI_Aint iblen = 0, oblen = 0 cdef ob1 = getbuffer_r(inbuf, &ibptr, &iblen) cdef ob2 = getbuffer_w(outbuf, &obptr, &oblen) cdef int isize = iblen, ocount = (oblen/extent) # CHKERR( MPI_Unpack(ibptr, isize, &position, obptr, ocount, self.ob_mpi, comm.ob_mpi) ) return position def Pack_size(self, int count, Comm comm not None): """ Returns the upper bound on the amount of space (in bytes) needed to pack a message according to datatype. 
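# ----------------------------------------------------------------------
# Usage sketch for Pack()/Unpack() defined here, together with
# Pack_size(): move typed data through a byte buffer.  The array module
# provides plain writable buffers without extra dependencies:
from array import array
from mpi4py import MPI

comm = MPI.COMM_WORLD
data = array('i', [1, 2, 3, 4])
packbuf = bytearray(MPI.INT.Pack_size(len(data), comm))
position = MPI.INT.Pack(data, packbuf, 0, comm)
out = array('i', [0] * len(data))
MPI.INT.Unpack(packbuf, 0, out, comm)        # out now equals data
# ----------------------------------------------------------------------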
""" cdef int size = 0 CHKERR( MPI_Pack_size(count, self.ob_mpi, comm.ob_mpi, &size) ) return size # Canonical Pack and Unpack # ------------------------- def Pack_external(self, datarep, inbuf, outbuf, Aint position): """ Pack into contiguous memory according to datatype, using a portable data representation (**external32**). """ cdef char *cdatarep = NULL datarep = asmpistr(datarep, &cdatarep, NULL) cdef MPI_Aint lb = 0, extent = 0 CHKERR( MPI_Type_get_extent(self.ob_mpi, &lb, &extent) ) # cdef void *ibptr = NULL, *obptr = NULL cdef MPI_Aint iblen = 0, oblen = 0 cdef ob1 = getbuffer_r(inbuf, &ibptr, &iblen) cdef ob2 = getbuffer_w(outbuf, &obptr, &oblen) cdef int icount = (iblen/extent) # XXX overflow? cdef MPI_Aint osize = oblen # CHKERR( MPI_Pack_external(cdatarep, ibptr, icount, self.ob_mpi, obptr, osize, &position) ) return position def Unpack_external(self, datarep, inbuf, Aint position, outbuf): """ Unpack from contiguous memory according to datatype, using a portable data representation (**external32**). """ cdef char *cdatarep = NULL datarep = asmpistr(datarep, &cdatarep, NULL) cdef MPI_Aint lb = 0, extent = 0 CHKERR( MPI_Type_get_extent(self.ob_mpi, &lb, &extent) ) # cdef void *ibptr = NULL, *obptr = NULL cdef MPI_Aint iblen = 0, oblen = 0 cdef ob1 = getbuffer_r(inbuf, &ibptr, &iblen) cdef ob2 = getbuffer_w(outbuf, &obptr, &oblen) cdef MPI_Aint isize = iblen, cdef int ocount = (oblen/extent) # XXX overflow? # CHKERR( MPI_Unpack_external(cdatarep, ibptr, isize, &position, obptr, ocount, self.ob_mpi) ) return position def Pack_external_size(self, datarep, int count): """ Returns the upper bound on the amount of space (in bytes) needed to pack a message according to datatype, using a portable data representation (**external32**). """ cdef char *cdatarep = NULL cdef MPI_Aint size = 0 datarep = asmpistr(datarep, &cdatarep, NULL) CHKERR( MPI_Pack_external_size(cdatarep, count, self.ob_mpi, &size) ) return size # Attributes # ---------- def Get_attr(self, int keyval): """ Retrieve attribute value by key """ cdef void *attrval = NULL cdef int flag = 0 CHKERR( MPI_Type_get_attr(self.ob_mpi, keyval, &attrval, &flag) ) if not flag: return None if not attrval: return 0 # handle predefined keyvals if 0: pass # likely be a user-defined keyval elif keyval in type_keyval: return attrval else: return PyLong_FromVoidPtr(attrval) def Set_attr(self, int keyval, object attrval): """ Store attribute value associated with a key """ cdef void *ptrval = NULL cdef int incref = 0 if keyval in type_keyval: ptrval = attrval incref = 1 else: ptrval = PyLong_AsVoidPtr(attrval) incref = 0 CHKERR( MPI_Type_set_attr(self.ob_mpi, keyval, ptrval) ) if incref: Py_INCREF(attrval) def Delete_attr(self, int keyval): """ Delete attribute value associated with a key """ CHKERR( MPI_Type_delete_attr(self.ob_mpi, keyval) ) @classmethod def Create_keyval(cls, copy_fn=None, delete_fn=None): """ Create a new attribute key for datatypes """ cdef int keyval = MPI_KEYVAL_INVALID cdef MPI_Type_copy_attr_function *_copy = type_attr_copy_fn cdef MPI_Type_delete_attr_function *_del = type_attr_delete_fn cdef void *extra_state = NULL CHKERR( MPI_Type_create_keyval(_copy, _del, &keyval, extra_state) ) type_keyval_new(keyval, copy_fn, delete_fn) return keyval @classmethod def Free_keyval(cls, int keyval): """ Free and attribute key for datatypes """ cdef int keyval_save = keyval CHKERR( MPI_Type_free_keyval (&keyval) ) type_keyval_del(keyval_save) return keyval # Naming Objects # -------------- def Get_name(self): """ Get the print name 
for this datatype """ cdef char name[MPI_MAX_OBJECT_NAME+1] cdef int nlen = 0 CHKERR( MPI_Type_get_name(self.ob_mpi, name, &nlen) ) return tompistr(name, nlen) def Set_name(self, name): """ Set the print name for this datatype """ cdef char *cname = NULL name = asmpistr(name, &cname, NULL) CHKERR( MPI_Type_set_name(self.ob_mpi, cname) ) property name: """datatype name""" def __get__(self): return self.Get_name() def __set__(self, value): self.Set_name(value) # Fortran Handle # -------------- def py2f(self): """ """ return MPI_Type_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef Datatype datatype = cls() datatype.ob_mpi = MPI_Type_f2c(arg) return datatype # Address Function # ---------------- def Get_address(location): """ Get the address of a location in memory """ cdef void *baseptr = NULL cdef tmp = getbuffer_r(location, &baseptr, NULL) cdef MPI_Aint address = 0 CHKERR( MPI_Get_address(baseptr, &address) ) return address cdef Datatype __DATATYPE_NULL__ = new_Datatype( MPI_DATATYPE_NULL ) cdef Datatype __UB__ = new_Datatype( MPI_UB ) cdef Datatype __LB__ = new_Datatype( MPI_LB ) cdef Datatype __PACKED__ = new_Datatype( MPI_PACKED ) cdef Datatype __BYTE__ = new_Datatype( MPI_BYTE ) cdef Datatype __AINT__ = new_Datatype( MPI_AINT ) cdef Datatype __OFFSET__ = new_Datatype( MPI_OFFSET ) cdef Datatype __COUNT__ = new_Datatype( MPI_COUNT ) cdef Datatype __CHAR__ = new_Datatype( MPI_CHAR ) cdef Datatype __WCHAR__ = new_Datatype( MPI_WCHAR ) cdef Datatype __SIGNED_CHAR__ = new_Datatype( MPI_SIGNED_CHAR ) cdef Datatype __SHORT__ = new_Datatype( MPI_SHORT ) cdef Datatype __INT__ = new_Datatype( MPI_INT ) cdef Datatype __LONG__ = new_Datatype( MPI_LONG ) cdef Datatype __LONG_LONG__ = new_Datatype( MPI_LONG_LONG ) cdef Datatype __UNSIGNED_CHAR__ = new_Datatype( MPI_UNSIGNED_CHAR ) cdef Datatype __UNSIGNED_SHORT__ = new_Datatype( MPI_UNSIGNED_SHORT ) cdef Datatype __UNSIGNED__ = new_Datatype( MPI_UNSIGNED ) cdef Datatype __UNSIGNED_LONG__ = new_Datatype( MPI_UNSIGNED_LONG ) cdef Datatype __UNSIGNED_LONG_LONG__ = new_Datatype( MPI_UNSIGNED_LONG_LONG ) cdef Datatype __FLOAT__ = new_Datatype( MPI_FLOAT ) cdef Datatype __DOUBLE__ = new_Datatype( MPI_DOUBLE ) cdef Datatype __LONG_DOUBLE__ = new_Datatype( MPI_LONG_DOUBLE ) cdef Datatype __C_BOOL__ = new_Datatype( MPI_C_BOOL ) cdef Datatype __INT8_T__ = new_Datatype( MPI_INT8_T ) cdef Datatype __INT16_T__ = new_Datatype( MPI_INT16_T ) cdef Datatype __INT32_T__ = new_Datatype( MPI_INT32_T ) cdef Datatype __INT64_T__ = new_Datatype( MPI_INT64_T ) cdef Datatype __UINT8_T__ = new_Datatype( MPI_UINT8_T ) cdef Datatype __UINT16_T__ = new_Datatype( MPI_UINT16_T ) cdef Datatype __UINT32_T__ = new_Datatype( MPI_UINT32_T ) cdef Datatype __UINT64_T__ = new_Datatype( MPI_UINT64_T ) cdef Datatype __C_COMPLEX__ = new_Datatype( MPI_C_COMPLEX ) cdef Datatype __C_FLOAT_COMPLEX__ = new_Datatype( MPI_C_FLOAT_COMPLEX ) cdef Datatype __C_DOUBLE_COMPLEX__ = new_Datatype( MPI_C_DOUBLE_COMPLEX ) cdef Datatype __C_LONG_DOUBLE_COMPLEX__ = new_Datatype( MPI_C_LONG_DOUBLE_COMPLEX ) cdef Datatype __SHORT_INT__ = new_Datatype( MPI_SHORT_INT ) cdef Datatype __TWOINT__ = new_Datatype( MPI_2INT ) cdef Datatype __LONG_INT__ = new_Datatype( MPI_LONG_INT ) cdef Datatype __FLOAT_INT__ = new_Datatype( MPI_FLOAT_INT ) cdef Datatype __DOUBLE_INT__ = new_Datatype( MPI_DOUBLE_INT ) cdef Datatype __LONG_DOUBLE_INT__ = new_Datatype( MPI_LONG_DOUBLE_INT ) cdef Datatype __CHARACTER__ = new_Datatype( MPI_CHARACTER ) cdef Datatype __LOGICAL__ = new_Datatype( MPI_LOGICAL ) cdef Datatype 
__INTEGER__ = new_Datatype( MPI_INTEGER ) cdef Datatype __REAL__ = new_Datatype( MPI_REAL ) cdef Datatype __DOUBLE_PRECISION__ = new_Datatype( MPI_DOUBLE_PRECISION ) cdef Datatype __COMPLEX__ = new_Datatype( MPI_COMPLEX ) cdef Datatype __DOUBLE_COMPLEX__ = new_Datatype( MPI_DOUBLE_COMPLEX ) cdef Datatype __LOGICAL1__ = new_Datatype( MPI_LOGICAL1 ) cdef Datatype __LOGICAL2__ = new_Datatype( MPI_LOGICAL2 ) cdef Datatype __LOGICAL4__ = new_Datatype( MPI_LOGICAL4 ) cdef Datatype __LOGICAL8__ = new_Datatype( MPI_LOGICAL8 ) cdef Datatype __INTEGER1__ = new_Datatype( MPI_INTEGER1 ) cdef Datatype __INTEGER2__ = new_Datatype( MPI_INTEGER2 ) cdef Datatype __INTEGER4__ = new_Datatype( MPI_INTEGER4 ) cdef Datatype __INTEGER8__ = new_Datatype( MPI_INTEGER8 ) cdef Datatype __INTEGER16__ = new_Datatype( MPI_INTEGER16 ) cdef Datatype __REAL2__ = new_Datatype( MPI_REAL2 ) cdef Datatype __REAL4__ = new_Datatype( MPI_REAL4 ) cdef Datatype __REAL8__ = new_Datatype( MPI_REAL8 ) cdef Datatype __REAL16__ = new_Datatype( MPI_REAL16 ) cdef Datatype __COMPLEX4__ = new_Datatype( MPI_COMPLEX4 ) cdef Datatype __COMPLEX8__ = new_Datatype( MPI_COMPLEX8 ) cdef Datatype __COMPLEX16__ = new_Datatype( MPI_COMPLEX16 ) cdef Datatype __COMPLEX32__ = new_Datatype( MPI_COMPLEX32 ) include "typemap.pxi" # Predefined datatype handles # --------------------------- DATATYPE_NULL = __DATATYPE_NULL__ #: Null datatype handle # Deprecated datatypes (since MPI-2) UB = __UB__ #: upper-bound marker LB = __LB__ #: lower-bound marker # MPI-specific datatypes PACKED = __PACKED__ BYTE = __BYTE__ AINT = __AINT__ OFFSET = __OFFSET__ COUNT = __COUNT__ # Elementary C datatypes CHAR = __CHAR__ WCHAR = __WCHAR__ SIGNED_CHAR = __SIGNED_CHAR__ SHORT = __SHORT__ INT = __INT__ LONG = __LONG__ LONG_LONG = __LONG_LONG__ UNSIGNED_CHAR = __UNSIGNED_CHAR__ UNSIGNED_SHORT = __UNSIGNED_SHORT__ UNSIGNED = __UNSIGNED__ UNSIGNED_LONG = __UNSIGNED_LONG__ UNSIGNED_LONG_LONG = __UNSIGNED_LONG_LONG__ FLOAT = __FLOAT__ DOUBLE = __DOUBLE__ LONG_DOUBLE = __LONG_DOUBLE__ # C99 datatypes C_BOOL = __C_BOOL__ INT8_T = __INT8_T__ INT16_T = __INT16_T__ INT32_T = __INT32_T__ INT64_T = __INT64_T__ UINT8_T = __UINT8_T__ UINT16_T = __UINT16_T__ UINT32_T = __UINT32_T__ UINT64_T = __UINT64_T__ C_COMPLEX = __C_COMPLEX__ C_FLOAT_COMPLEX = __C_FLOAT_COMPLEX__ C_DOUBLE_COMPLEX = __C_DOUBLE_COMPLEX__ C_LONG_DOUBLE_COMPLEX = __C_LONG_DOUBLE_COMPLEX__ # C Datatypes for reduction operations SHORT_INT = __SHORT_INT__ INT_INT = TWOINT = __TWOINT__ LONG_INT = __LONG_INT__ FLOAT_INT = __FLOAT_INT__ DOUBLE_INT = __DOUBLE_INT__ LONG_DOUBLE_INT = __LONG_DOUBLE_INT__ # Elementary Fortran datatypes CHARACTER = __CHARACTER__ LOGICAL = __LOGICAL__ INTEGER = __INTEGER__ REAL = __REAL__ DOUBLE_PRECISION = __DOUBLE_PRECISION__ COMPLEX = __COMPLEX__ DOUBLE_COMPLEX = __DOUBLE_COMPLEX__ # Size-specific Fortran datatypes LOGICAL1 = __LOGICAL1__ LOGICAL2 = __LOGICAL2__ LOGICAL4 = __LOGICAL4__ LOGICAL8 = __LOGICAL8__ INTEGER1 = __INTEGER1__ INTEGER2 = __INTEGER2__ INTEGER4 = __INTEGER4__ INTEGER8 = __INTEGER8__ INTEGER16 = __INTEGER16__ REAL2 = __REAL2__ REAL4 = __REAL4__ REAL8 = __REAL8__ REAL16 = __REAL16__ COMPLEX4 = __COMPLEX4__ COMPLEX8 = __COMPLEX8__ COMPLEX16 = __COMPLEX16__ COMPLEX32 = __COMPLEX32__ # Convenience aliases UNSIGNED_INT = __UNSIGNED__ SIGNED_SHORT = __SHORT__ SIGNED_INT = __INT__ SIGNED_LONG = __LONG__ SIGNED_LONG_LONG = __LONG_LONG__ BOOL = __C_BOOL__ SINT8_T = __INT8_T__ SINT16_T = __INT16_T__ SINT32_T = __INT32_T__ SINT64_T = __INT64_T__ F_BOOL = __LOGICAL__ F_INT = __INTEGER__ 
F_FLOAT = __REAL__ F_DOUBLE = __DOUBLE_PRECISION__ F_COMPLEX = __COMPLEX__ F_FLOAT_COMPLEX = __COMPLEX__ F_DOUBLE_COMPLEX = __DOUBLE_COMPLEX__ mpi4py_1.3.1+hg20131106.orig/src/MPI/Errhandler.pyx0000644000000000000000000000341612211706251017362 0ustar 00000000000000cdef class Errhandler: """ Error Handler """ def __cinit__(self, Errhandler errhandler=None): self.ob_mpi = MPI_ERRHANDLER_NULL if errhandler is not None: self.ob_mpi = errhandler.ob_mpi def __dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_Errhandler(&self.ob_mpi) ) def __richcmp__(self, other, int op): if not isinstance(self, Errhandler): return NotImplemented if not isinstance(other, Errhandler): return NotImplemented cdef Errhandler s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_ERRHANDLER_NULL def Free(self): """ Free an error handler """ CHKERR( MPI_Errhandler_free(&self.ob_mpi) ) # Fortran Handle # -------------- def py2f(self): """ """ return MPI_Errhandler_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef Errhandler errhandler = cls() errhandler.ob_mpi = MPI_Errhandler_f2c(arg) return errhandler cdef Errhandler __ERRHANDLER_NULL__ = new_Errhandler(MPI_ERRHANDLER_NULL) cdef Errhandler __ERRORS_RETURN__ = new_Errhandler(MPI_ERRORS_RETURN) cdef Errhandler __ERRORS_ARE_FATAL__ = new_Errhandler(MPI_ERRORS_ARE_FATAL) # Predefined errhandler handles # ----------------------------- ERRHANDLER_NULL = __ERRHANDLER_NULL__ #: Null error handler ERRORS_RETURN = __ERRORS_RETURN__ #: Errors return error handler ERRORS_ARE_FATAL = __ERRORS_ARE_FATAL__ #: Errors are fatal error handler mpi4py_1.3.1+hg20131106.orig/src/MPI/Exception.pyx0000644000000000000000000000751012211706251017231 0ustar 00000000000000include "ExceptionP.pyx" #include "ExceptionC.pyx" MPIException = Exception # Actually no errors SUCCESS = MPI_SUCCESS ERR_LASTCODE = MPI_ERR_LASTCODE # MPI-1 Error classes # ------------------- # MPI-1 Objects ERR_COMM = MPI_ERR_COMM ERR_GROUP = MPI_ERR_GROUP ERR_TYPE = MPI_ERR_TYPE ERR_REQUEST = MPI_ERR_REQUEST ERR_OP = MPI_ERR_OP # Communication argument parameters ERR_BUFFER = MPI_ERR_BUFFER ERR_COUNT = MPI_ERR_COUNT ERR_TAG = MPI_ERR_TAG ERR_RANK = MPI_ERR_RANK ERR_ROOT = MPI_ERR_ROOT ERR_TRUNCATE = MPI_ERR_TRUNCATE # Multiple completion ERR_IN_STATUS = MPI_ERR_IN_STATUS ERR_PENDING = MPI_ERR_PENDING # Topology argument parameters ERR_TOPOLOGY = MPI_ERR_TOPOLOGY ERR_DIMS = MPI_ERR_DIMS # Other arguments parameters ERR_ARG = MPI_ERR_ARG # Other errors ERR_OTHER = MPI_ERR_OTHER ERR_UNKNOWN = MPI_ERR_UNKNOWN ERR_INTERN = MPI_ERR_INTERN # MPI-2 Error classes # ------------------- # MPI-2 Objects ERR_INFO = MPI_ERR_INFO ERR_FILE = MPI_ERR_FILE ERR_WIN = MPI_ERR_WIN # Object attributes ERR_KEYVAL = MPI_ERR_KEYVAL # Info Object ERR_INFO_KEY = MPI_ERR_INFO_KEY ERR_INFO_VALUE = MPI_ERR_INFO_VALUE ERR_INFO_NOKEY = MPI_ERR_INFO_NOKEY # Input/Ouput ERR_ACCESS = MPI_ERR_ACCESS ERR_AMODE = MPI_ERR_AMODE ERR_BAD_FILE = MPI_ERR_BAD_FILE ERR_FILE_EXISTS = MPI_ERR_FILE_EXISTS ERR_FILE_IN_USE = MPI_ERR_FILE_IN_USE ERR_NO_SPACE = MPI_ERR_NO_SPACE ERR_NO_SUCH_FILE = MPI_ERR_NO_SUCH_FILE ERR_IO = MPI_ERR_IO ERR_READ_ONLY = MPI_ERR_READ_ONLY ERR_CONVERSION = MPI_ERR_CONVERSION ERR_DUP_DATAREP = MPI_ERR_DUP_DATAREP ERR_UNSUPPORTED_DATAREP = MPI_ERR_UNSUPPORTED_DATAREP ERR_UNSUPPORTED_OPERATION = MPI_ERR_UNSUPPORTED_OPERATION # Dynamic Process Management ERR_NAME = 
MPI_ERR_NAME ERR_NO_MEM = MPI_ERR_NO_MEM ERR_NOT_SAME = MPI_ERR_NOT_SAME ERR_PORT = MPI_ERR_PORT ERR_QUOTA = MPI_ERR_QUOTA ERR_SERVICE = MPI_ERR_SERVICE ERR_SPAWN = MPI_ERR_SPAWN # Windows ERR_BASE = MPI_ERR_BASE ERR_SIZE = MPI_ERR_SIZE ERR_DISP = MPI_ERR_DISP ERR_ASSERT = MPI_ERR_ASSERT ERR_LOCKTYPE = MPI_ERR_LOCKTYPE ERR_RMA_CONFLICT = MPI_ERR_RMA_CONFLICT ERR_RMA_SYNC = MPI_ERR_RMA_SYNC ERR_RMA_RANGE = MPI_ERR_RMA_RANGE ERR_RMA_ATTACH = MPI_ERR_RMA_ATTACH ERR_RMA_SHARED = MPI_ERR_RMA_SHARED ERR_RMA_FLAVOR = MPI_ERR_RMA_FLAVOR def Get_error_class(int errorcode): """ Convert an *error code* into an *error class* """ cdef int errorclass = MPI_SUCCESS CHKERR( MPI_Error_class(errorcode, &errorclass) ) return errorclass def Get_error_string(int errorcode): """ Return the *error string* for a given *error class* or *error code* """ cdef char string[MPI_MAX_ERROR_STRING+1] cdef int resultlen = 0 CHKERR( MPI_Error_string(errorcode, string, &resultlen) ) return tompistr(string, resultlen) def Add_error_class(): """ Add an *error class* to the known error classes """ cdef int errorclass = MPI_SUCCESS CHKERR( MPI_Add_error_class(&errorclass) ) return errorclass def Add_error_code(int errorclass): """ Add an *error code* to an *error class* """ cdef int errorcode = MPI_SUCCESS CHKERR( MPI_Add_error_code(errorclass, &errorcode) ) return errorcode def Add_error_string(int errorcode, string): """ Associate an *error string* with an *error class* or *errorcode* """ cdef char *cstring = NULL string = asmpistr(string, &cstring, NULL) CHKERR( MPI_Add_error_string(errorcode, cstring) ) mpi4py_1.3.1+hg20131106.orig/src/MPI/ExceptionC.pyx0000644000000000000000000000434712211706251017341 0ustar 00000000000000cdef extern from "Python.h": ctypedef class __builtin__.RuntimeError [object PyBaseExceptionObject]: pass cdef class Exception(RuntimeError): """ Exception """ cdef int ob_mpi def __cinit__(self, int ierr=0): if ierr < MPI_SUCCESS: ierr = MPI_ERR_UNKNOWN if ierr > MPI_ERR_LASTCODE: ierr = MPI_ERR_UNKNOWN self.ob_mpi = ierr RuntimeError.__init__(self, ierr) def __richcmp__(Exception self, int error, int op): cdef int ierr = self.ob_mpi if op == Py_LT: return ierr < error if op == Py_LE: return ierr <= error if op == Py_EQ: return ierr == error if op == Py_NE: return ierr != error if op == Py_GT: return ierr > error if op == Py_GE: return ierr >= error def __hash__(self): return RuntimeError.__hash__(self) def __bool__(self): return self.ob_mpi != MPI_SUCCESS def __int__(self): if not mpi_active(): return self.ob_mpi return self.Get_error_code() def __repr__(self): return "MPI.Exception(%d)" % self.ob_mpi def __str__(self): if not mpi_active(): return "error code: %d" % self.ob_mpi return self.Get_error_string() def Get_error_code(self): """ Error code """ cdef int errorcode = MPI_SUCCESS errorcode = self.ob_mpi return errorcode property error_code: """error code""" def __get__(self): return self.Get_error_code() def Get_error_class(self): """ Error class """ cdef int errorclass = MPI_SUCCESS CHKERR( MPI_Error_class(self.ob_mpi, &errorclass) ) return errorclass property error_class: """error class""" def __get__(self): return self.Get_error_class() def Get_error_string(self): """ Error string """ cdef char string[MPI_MAX_ERROR_STRING+1] cdef int resultlen = 0 CHKERR( MPI_Error_string(self.ob_mpi, string, &resultlen) ) return tompistr(string, resultlen) property error_string: """error string""" def __get__(self): return self.Get_error_string() 
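# ----------------------------------------------------------------------
# Usage sketch for the error machinery defined here: with the
# MPI.ERRORS_RETURN handler installed (Set_errhandler is defined on the
# communicator classes elsewhere in this module), failing MPI calls are
# reported as MPI.Exception instances carrying an error class and string:
from mpi4py import MPI

comm = MPI.COMM_WORLD
comm.Set_errhandler(MPI.ERRORS_RETURN)
try:
    comm.send(None, dest=comm.Get_size())    # deliberately invalid rank
except MPI.Exception as err:
    print(err.Get_error_class(), err.Get_error_string())
# ----------------------------------------------------------------------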
mpi4py_1.3.1+hg20131106.orig/src/MPI/ExceptionP.pyx0000644000000000000000000000434012211706251017347 0ustar 00000000000000class Exception(RuntimeError): """ Exception """ def __init__(self, int ierr=0): if ierr < MPI_SUCCESS: ierr = MPI_ERR_UNKNOWN if ierr > MPI_ERR_LASTCODE: ierr = MPI_ERR_UNKNOWN self.ob_mpi = ierr RuntimeError.__init__(self, self.ob_mpi) def __eq__(self, int error): cdef int ierr = self.ob_mpi return (ierr == error) def __ne__(self, int error): cdef int ierr = self.ob_mpi return (ierr != error) def __lt__(self, int error): cdef int ierr = self.ob_mpi return (ierr < error) def __le__(self, int error): cdef int ierr = self.ob_mpi return (ierr <= error) def __gt__(self, int error): cdef int ierr = self.ob_mpi return (ierr > error) def __ge__(self, int error): cdef int ierr = self.ob_mpi return (ierr >= error) def __hash__(self): return RuntimeError.__hash__(self) def __bool__(self): cdef int ierr = self.ob_mpi return ierr != MPI_SUCCESS def __int__(self): if not mpi_active(): return self.ob_mpi return self.Get_error_code() def __repr__(self): return "MPI.Exception(%d)" % self.ob_mpi def __str__(self): if not mpi_active(): return "error code: %d" % self.ob_mpi return self.Get_error_string() def Get_error_code(self): """ Error code """ cdef int errorcode = MPI_SUCCESS errorcode = self.ob_mpi return errorcode error_code = property(Get_error_code, doc="error code") def Get_error_class(self): """ Error class """ cdef int errorclass = MPI_SUCCESS CHKERR( MPI_Error_class(self.ob_mpi, &errorclass) ) return errorclass error_class = property(Get_error_class, doc="error class") def Get_error_string(self): """ Error string """ cdef char string[MPI_MAX_ERROR_STRING+1] cdef int resultlen = 0 CHKERR( MPI_Error_string(self.ob_mpi, string, &resultlen) ) return tompistr(string, resultlen) error_string = property(Get_error_string, doc="error string") mpi4py_1.3.1+hg20131106.orig/src/MPI/File.pyx0000644000000000000000000005225612211706251016161 0ustar 00000000000000# Opening modes # ------------- MODE_RDONLY = MPI_MODE_RDONLY #: Read only MODE_WRONLY = MPI_MODE_WRONLY #: Write only MODE_RDWR = MPI_MODE_RDWR #: Reading and writing MODE_CREATE = MPI_MODE_CREATE #: Create the file if it does not exist MODE_EXCL = MPI_MODE_EXCL #: Error if creating file that already exists MODE_DELETE_ON_CLOSE = MPI_MODE_DELETE_ON_CLOSE #: Delete file on close MODE_UNIQUE_OPEN = MPI_MODE_UNIQUE_OPEN #: File will not be concurrently opened elsewhere MODE_SEQUENTIAL = MPI_MODE_SEQUENTIAL #: File will only be accessed sequentially MODE_APPEND = MPI_MODE_APPEND #: Set initial position of all file pointers to end of file # Positioning # ----------- SEEK_SET = MPI_SEEK_SET #: File pointer is set to offset SEEK_CUR = MPI_SEEK_CUR #: File pointer is set to the current position plus offset SEEK_END = MPI_SEEK_END #: File pointer is set to the end plus offset DISPLACEMENT_CURRENT = MPI_DISPLACEMENT_CURRENT #: Special displacement value for files opened in sequential mode DISP_CUR = MPI_DISPLACEMENT_CURRENT #: Convenience alias for `DISPLACEMENT_CURRENT` cdef class File: """ File """ def __cinit__(self, File file=None): self.ob_mpi = MPI_FILE_NULL if file is not None: self.ob_mpi = file.ob_mpi def __dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_File(&self.ob_mpi) ) def __richcmp__(self, other, int op): if not isinstance(self, File): return NotImplemented if not isinstance(other, File): return NotImplemented cdef File s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: 
return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_FILE_NULL # [9.2] File Manipulation # ----------------------- # [9.2.1] Opening a File # ---------------------- @classmethod def Open(cls, Intracomm comm not None, filename, int amode=MODE_RDONLY, Info info=INFO_NULL): """ Open a file """ cdef char *cfilename = NULL filename = asmpistr(filename, &cfilename, NULL) cdef MPI_Info cinfo = arg_Info(info) cdef File file = cls() with nogil: CHKERR( MPI_File_open( comm.ob_mpi, cfilename, amode, cinfo, &file.ob_mpi) ) return file # [9.2.2] Closing a File # ---------------------- def Close(self): """ Close a file """ with nogil: CHKERR( MPI_File_close(&self.ob_mpi) ) # [9.2.3] Deleting a File # ----------------------- @classmethod def Delete(cls, filename, Info info=INFO_NULL): """ Delete a file """ cdef char *cfilename = NULL filename = asmpistr(filename, &cfilename, NULL) cdef MPI_Info cinfo = arg_Info(info) with nogil: CHKERR( MPI_File_delete(cfilename, cinfo) ) # [9.2.4] Resizing a File # ----------------------- def Set_size(self, Offset size): """ Sets the file size """ with nogil: CHKERR( MPI_File_set_size(self.ob_mpi, size) ) # [9.2.5] Preallocating Space for a File # -------------------------------------- def Preallocate(self, Offset size): """ Preallocate storage space for a file """ with nogil: CHKERR( MPI_File_preallocate(self.ob_mpi, size) ) # [9.2.6] Querying the Size of a File # ----------------------------------- def Get_size(self): """ Return the file size """ cdef MPI_Offset size = 0 with nogil: CHKERR( MPI_File_get_size(self.ob_mpi, &size) ) return size property size: """file size""" def __get__(self): return self.Get_size() # [9.2.7] Querying File Parameters # -------------------------------- def Get_group(self): """ Return the group of processes that opened the file """ cdef Group group = Group.__new__(Group) with nogil: CHKERR( MPI_File_get_group(self.ob_mpi, &group.ob_mpi) ) return group property group: """file group""" def __get__(self): return self.Get_group() def Get_amode(self): """ Return the file access mode """ cdef int amode = 0 with nogil: CHKERR( MPI_File_get_amode(self.ob_mpi, &amode) ) return amode property amode: """file access mode""" def __get__(self): return self.Get_amode() # [9.2.8] File Info # ----------------- def Set_info(self, Info info not None): """ Set new values for the hints associated with a file """ with nogil: CHKERR( MPI_File_set_info(self.ob_mpi, info.ob_mpi) ) def Get_info(self): """ Return the hints for a file that are actually being used by MPI """ cdef Info info = Info.__new__(Info) with nogil: CHKERR( MPI_File_get_info(self.ob_mpi, &info.ob_mpi) ) return info property info: """file info""" def __get__(self): return self.Get_info() def __set__(self, info): self.Set_info(info) # [9.3] File Views # ---------------- def Set_view(self, Offset disp=0, Datatype etype=None, Datatype filetype=None, object datarep=None, Info info=INFO_NULL): """ Set the file view """ cdef char *cdatarep = b"native" if datarep is not None: datarep = asmpistr(datarep, &cdatarep, NULL) cdef MPI_Datatype cetype = MPI_BYTE if etype is not None: cetype = etype.ob_mpi cdef MPI_Datatype cftype = cetype if filetype is not None: cftype = filetype.ob_mpi cdef MPI_Info cinfo = arg_Info(info) with nogil: CHKERR( MPI_File_set_view( self.ob_mpi, disp, cetype, cftype, cdatarep, cinfo) ) def Get_view(self): """ Return the file view """ cdef MPI_Offset disp = 0 cdef Datatype etype = Datatype.__new__(Datatype) cdef 
Datatype ftype = Datatype.__new__(Datatype) cdef char cdatarep[MPI_MAX_DATAREP_STRING+1] with nogil: CHKERR( MPI_File_get_view( self.ob_mpi, &disp, &etype.ob_mpi, &ftype.ob_mpi, cdatarep) ) fix_fileview_Datatype(etype); fix_fileview_Datatype(ftype) cdatarep[MPI_MAX_DATAREP_STRING] = 0 # just in case cdef object datarep = mpistr(cdatarep) return (disp, etype, ftype, datarep) # [9.4] Data Access # ----------------- # [9.4.2] Data Access with Explicit Offsets # ----------------------------------------- def Read_at(self, Offset offset, buf, Status status=None): """ Read using explicit offset """ cdef _p_msg_io m = message_io_read(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_read_at( self.ob_mpi, offset, m.buf, m.count, m.dtype, statusp) ) def Read_at_all(self, Offset offset, buf, Status status=None): """ Collective read using explicit offset """ cdef _p_msg_io m = message_io_read(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_read_at_all( self.ob_mpi, offset, m.buf, m.count, m.dtype, statusp) ) def Write_at(self, Offset offset, buf, Status status=None): """ Write using explicit offset """ cdef _p_msg_io m = message_io_write(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_write_at( self.ob_mpi, offset, m.buf, m.count, m.dtype, statusp) ) def Write_at_all(self, Offset offset, buf, Status status=None): """ Collective write using explicit offset """ cdef _p_msg_io m = message_io_write(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_write_at_all( self.ob_mpi, offset, m.buf, m.count, m.dtype, statusp) ) def Iread_at(self, Offset offset, buf): """ Nonblocking read using explicit offset """ cdef _p_msg_io m = message_io_read(buf) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_File_iread_at( self.ob_mpi, offset, m.buf, m.count, m.dtype, &request.ob_mpi) ) request.ob_buf = m return request def Iwrite_at(self, Offset offset, buf): """ Nonblocking write using explicit offset """ cdef _p_msg_io m = message_io_write(buf) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_File_iwrite_at( self.ob_mpi, offset, m.buf, m.count, m.dtype, &request.ob_mpi) ) request.ob_buf = m return request # [9.4.3] Data Access with Individual File Pointers # ------------------------------------------------- def Read(self, buf, Status status=None): """ Read using individual file pointer """ cdef _p_msg_io m = message_io_read(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_read( self.ob_mpi, m.buf, m.count, m.dtype, statusp) ) def Read_all(self, buf, Status status=None): """ Collective read using individual file pointer """ cdef _p_msg_io m = message_io_read(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_read_all( self.ob_mpi, m.buf, m.count, m.dtype, statusp) ) def Write(self, buf, Status status=None): """ Write using individual file pointer """ cdef _p_msg_io m = message_io_write(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_write( self.ob_mpi, m.buf, m.count, m.dtype, statusp) ) def Write_all(self, buf, Status status=None): """ Collective write using individual file pointer """ cdef _p_msg_io m = message_io_write(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_write_all( self.ob_mpi, m.buf, m.count, m.dtype, statusp) ) def Iread(self, buf): """ Nonblocking read using individual file pointer """ cdef _p_msg_io m = 
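# ----------------------------------------------------------------------
# Usage sketch for the explicit-offset and nonblocking data-access calls
# defined here: each rank writes its own block of doubles at an explicit
# byte offset with the collective Write_at_all(), then reads it back with
# a nonblocking Iread_at().  The file name is hypothetical:
from array import array
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
fh = MPI.File.Open(comm, 'blocks.dat', MPI.MODE_RDWR | MPI.MODE_CREATE)
offset = rank * 4 * MPI.DOUBLE.size
outbuf = array('d', [float(rank)] * 4)
fh.Write_at_all(offset, [outbuf, MPI.DOUBLE])
inbuf = array('d', [0.0] * 4)
request = fh.Iread_at(offset, [inbuf, MPI.DOUBLE])
request.Wait()
fh.Close()
# ----------------------------------------------------------------------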
message_io_read(buf) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_File_iread( self.ob_mpi, m.buf, m.count, m.dtype, &request.ob_mpi) ) request.ob_buf = m return request def Iwrite(self, buf): """ Nonblocking write using individual file pointer """ cdef _p_msg_io m = message_io_write(buf) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_File_iwrite( self.ob_mpi, m.buf, m.count, m.dtype, &request.ob_mpi) ) request.ob_buf = m return request def Seek(self, Offset offset, int whence=SEEK_SET): """ Update the individual file pointer """ with nogil: CHKERR( MPI_File_seek(self.ob_mpi, offset, whence) ) def Get_position(self): """ Return the current position of the individual file pointer in etype units relative to the current view """ cdef MPI_Offset offset = 0 with nogil: CHKERR( MPI_File_get_position(self.ob_mpi, &offset) ) return offset def Get_byte_offset(self, Offset offset): """ Returns the absolute byte position in the file corresponding to 'offset' etypes relative to the current view """ cdef MPI_Offset disp = 0 with nogil: CHKERR( MPI_File_get_byte_offset( self.ob_mpi, offset, &disp) ) return disp # [9.4.4] Data Access with Shared File Pointers # --------------------------------------------- def Read_shared(self, buf, Status status=None): """ Read using shared file pointer """ cdef _p_msg_io m = message_io_read(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_read_shared( self.ob_mpi, m.buf, m.count, m.dtype, statusp) ) def Write_shared(self, buf, Status status=None): """ Write using shared file pointer """ cdef _p_msg_io m = message_io_write(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_write_shared( self.ob_mpi, m.buf, m.count, m.dtype, statusp) ) def Iread_shared(self, buf): """ Nonblocking read using shared file pointer """ cdef _p_msg_io m = message_io_read(buf) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_File_iread_shared( self.ob_mpi, m.buf, m.count, m.dtype, &request.ob_mpi) ) request.ob_buf = m return request def Iwrite_shared(self, buf): """ Nonblocking write using shared file pointer """ cdef _p_msg_io m = message_io_write(buf) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_File_iwrite_shared( self.ob_mpi, m.buf, m.count, m.dtype, &request.ob_mpi) ) request.ob_buf = m return request def Read_ordered(self, buf, Status status=None): """ Collective read using shared file pointer """ cdef _p_msg_io m = message_io_read(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_read_ordered( self.ob_mpi, m.buf, m.count, m.dtype, statusp) ) def Write_ordered(self, buf, Status status=None): """ Collective write using shared file pointer """ cdef _p_msg_io m = message_io_write(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_write_ordered( self.ob_mpi, m.buf, m.count, m.dtype, statusp) ) def Seek_shared(self, Offset offset, int whence=SEEK_SET): """ Update the shared file pointer """ with nogil: CHKERR( MPI_File_seek_shared(self.ob_mpi, offset, whence) ) def Get_position_shared(self): """ Return the current position of the shared file pointer in etype units relative to the current view """ cdef MPI_Offset offset = 0 with nogil: CHKERR( MPI_File_get_position_shared(self.ob_mpi, &offset) ) return offset # [9.4.5] Split Collective Data Access Routines # --------------------------------------------- # explicit offset def Read_at_all_begin(self, Offset offset, buf): """ Start 
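# --- Illustrative usage sketch (not part of the original source). ---
# Shared file pointer access: Write_ordered is collective and serializes
# the writes in rank order, while Write_shared imposes no ordering. The
# file name is a placeholder.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
fh = MPI.File.Open(comm, "log.bin", MPI.MODE_WRONLY | MPI.MODE_CREATE)
record = numpy.array([comm.Get_rank()], dtype='i')
fh.Write_ordered(record)             # rank 0's record first, then rank 1, ...
print(fh.Get_position_shared())      # shared pointer position, in etype units
fh.Close()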
a split collective read using explicit offset """ cdef _p_msg_io m = message_io_read(buf) with nogil: CHKERR( MPI_File_read_at_all_begin( self.ob_mpi, offset, m.buf, m.count, m.dtype) ) def Read_at_all_end(self, buf, Status status=None): """ Complete a split collective read using explicit offset """ cdef _p_msg_io m = message_io_read(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_read_at_all_end( self.ob_mpi, m.buf, statusp) ) def Write_at_all_begin(self, Offset offset, buf): """ Start a split collective write using explicit offset """ cdef _p_msg_io m = message_io_write(buf) with nogil: CHKERR( MPI_File_write_at_all_begin( self.ob_mpi, offset, m.buf, m.count, m.dtype) ) def Write_at_all_end(self, buf, Status status=None): """ Complete a split collective write using explicit offset """ cdef _p_msg_io m = message_io_write(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_write_at_all_end( self.ob_mpi, m.buf, statusp) ) # individual file pointer def Read_all_begin(self, buf): """ Start a split collective read using individual file pointer """ cdef _p_msg_io m = message_io_read(buf) with nogil: CHKERR( MPI_File_read_all_begin( self.ob_mpi, m.buf, m.count, m.dtype) ) def Read_all_end(self, buf, Status status=None): """ Complete a split collective read using individual file pointer """ cdef _p_msg_io m = message_io_read(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_read_all_end( self.ob_mpi, m.buf, statusp) ) def Write_all_begin(self, buf): """ Start a split collective write using individual file pointer """ cdef _p_msg_io m = message_io_write(buf) with nogil: CHKERR( MPI_File_write_all_begin( self.ob_mpi, m.buf, m.count, m.dtype) ) def Write_all_end(self, buf, Status status=None): """ Complete a split collective write using individual file pointer """ cdef _p_msg_io m = message_io_write(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_write_all_end( self.ob_mpi, m.buf, statusp) ) # shared file pointer def Read_ordered_begin(self, buf): """ Start a split collective read using shared file pointer """ cdef _p_msg_io m = message_io_read(buf) with nogil: CHKERR( MPI_File_read_ordered_begin( self.ob_mpi, m.buf, m.count, m.dtype) ) def Read_ordered_end(self, buf, Status status=None): """ Complete a split collective read using shared file pointer """ cdef _p_msg_io m = message_io_read(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_read_ordered_end( self.ob_mpi, m.buf, statusp) ) def Write_ordered_begin(self, buf): """ Start a split collective write using shared file pointer """ cdef _p_msg_io m = message_io_write(buf) with nogil: CHKERR( MPI_File_write_ordered_begin( self.ob_mpi, m.buf, m.count, m.dtype) ) def Write_ordered_end(self, buf, Status status=None): """ Complete a split collective write using shared file pointer """ cdef _p_msg_io m = message_io_write(buf) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_File_write_ordered_end( self.ob_mpi, m.buf, statusp) ) # [9.5] File Interoperability # --------------------------- # [9.5.1] Datatypes for File Interoperability # ------------------------------------------- def Get_type_extent(self, Datatype datatype not None): """ Return the extent of datatype in the file """ cdef MPI_Aint extent = 0 with nogil: CHKERR( MPI_File_get_type_extent( self.ob_mpi, datatype.ob_mpi, &extent) ) return extent # [9.6] Consistency and Semantics # ------------------------------- # [9.6.1] File
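# --- Illustrative usage sketch (not part of the original source). ---
# Split collective access: the *_begin call starts a collective transfer
# and the matching *_end call completes it, so unrelated work can be done
# in between. The same buffer object is passed to both calls; the file
# name is a placeholder.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
fh = MPI.File.Open(comm, "grid.bin", MPI.MODE_RDONLY)
buf = numpy.empty(16, dtype='d')
offset = comm.Get_rank() * buf.nbytes
fh.Read_at_all_begin(offset, buf)
# ... unrelated computation may overlap with the pending transfer ...
fh.Read_at_all_end(buf)
fh.Close()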
Consistency # ------------------------ def Set_atomicity(self, bint flag): """ Set the atomicity mode """ with nogil: CHKERR( MPI_File_set_atomicity(self.ob_mpi, flag) ) def Get_atomicity(self): """ Return the atomicity mode """ cdef int flag = 0 with nogil: CHKERR( MPI_File_get_atomicity(self.ob_mpi, &flag) ) return flag property atomicity: """atomicity""" def __get__(self): return self.Get_atomicity() def __set__(self, value): self.Set_atomicity(value) def Sync(self): """ Causes all previous writes to be transferred to the storage device """ with nogil: CHKERR( MPI_File_sync(self.ob_mpi) ) # [9.7] I/O Error Handling # ------------------------ def Get_errhandler(self): """ Get the error handler for a file """ cdef Errhandler errhandler = Errhandler.__new__(Errhandler) CHKERR( MPI_File_get_errhandler(self.ob_mpi, &errhandler.ob_mpi) ) return errhandler def Set_errhandler(self, Errhandler errhandler not None): """ Set the error handler for a file """ CHKERR( MPI_File_set_errhandler(self.ob_mpi, errhandler.ob_mpi) ) def Call_errhandler(self, int errorcode): """ Call the error handler installed on a file """ CHKERR( MPI_File_call_errhandler(self.ob_mpi, errorcode) ) # Fortran Handle # -------------- def py2f(self): """ """ return MPI_File_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef File file = cls() file.ob_mpi = MPI_File_f2c(arg) return file cdef File __FILE_NULL__ = new_File(MPI_FILE_NULL) # Predefined file handles # ----------------------- FILE_NULL = __FILE_NULL__ #: Null file handle mpi4py_1.3.1+hg20131106.orig/src/MPI/Group.pyx0000644000000000000000000001555512211706251016377 0ustar 00000000000000cdef class Group: """ Group """ def __cinit__(self, Group group=None): self.ob_mpi = MPI_GROUP_NULL if group is not None: self.ob_mpi = group.ob_mpi def __dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_Group(&self.ob_mpi) ) def __richcmp__(self, other, int op): if not isinstance(self, Group): return NotImplemented if not isinstance(other, Group): return NotImplemented cdef Group s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_GROUP_NULL # Group Accessors # --------------- def Get_size(self): """ Return the size of a group """ cdef int size = -1 CHKERR( MPI_Group_size(self.ob_mpi, &size) ) return size property size: """number of processes in group""" def __get__(self): return self.Get_size() def Get_rank(self): """ Return the rank of this process in a group """ cdef int rank = -1 CHKERR( MPI_Group_rank(self.ob_mpi, &rank) ) return rank property rank: """rank of this process in group""" def __get__(self): return self.Get_rank() @classmethod def Translate_ranks(cls, Group group1 not None, ranks1, Group group2=None): """ Translate the ranks of processes in one group to those in another group """ cdef MPI_Group grp1 = MPI_GROUP_NULL cdef MPI_Group grp2 = MPI_GROUP_NULL cdef int i = 0, n = 0, *iranks1 = NULL, *iranks2 = NULL cdef tmp1 = getarray_int(ranks1, &n, &iranks1) cdef tmp2 = newarray_int(n, &iranks2) # grp1 = group1.ob_mpi if group2 is not None: grp2 = group2.ob_mpi else: CHKERR( MPI_Comm_group(MPI_COMM_WORLD, &grp2) ) try: CHKERR( MPI_Group_translate_ranks(grp1, n, iranks1, grp2, iranks2) ) finally: if group2 is None: CHKERR( MPI_Group_free(&grp2) ) # cdef object ranks2 = [iranks2[i] for i from 0 <= i < n] return ranks2 @classmethod def Compare(cls, Group group1 not None, Group group2 not None): 
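# --- Illustrative usage sketch (not part of the original source). ---
# File consistency: a common write-then-read pattern is Sync on the writer,
# a Barrier, and Sync on the readers; atomic mode is enabled here for good
# measure. The file name is a placeholder.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
fh = MPI.File.Open(comm, "shared.bin", MPI.MODE_RDWR | MPI.MODE_CREATE)
fh.Set_atomicity(True)
if comm.Get_rank() == 0:
    fh.Write_at(0, numpy.arange(4, dtype='i'))
fh.Sync()
comm.Barrier()
fh.Sync()
fh.Close()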
""" Compare two groups """ cdef int flag = MPI_UNEQUAL CHKERR( MPI_Group_compare(group1.ob_mpi, group2.ob_mpi, &flag) ) return flag # Group Constructors # ------------------ def Dup(self): """ Duplicate a group """ cdef Group group = type(self)() CHKERR( MPI_Group_union(self.ob_mpi, MPI_GROUP_EMPTY, &group.ob_mpi) ) return group @classmethod def Union(cls, Group group1 not None, Group group2 not None): """ Produce a group by combining two existing groups """ cdef Group group = cls() CHKERR( MPI_Group_union( group1.ob_mpi, group2.ob_mpi, &group.ob_mpi) ) return group @classmethod def Intersect(cls, Group group1 not None, Group group2 not None): """ Produce a group as the intersection of two existing groups """ cdef Group group = cls() CHKERR( MPI_Group_intersection( group1.ob_mpi, group2.ob_mpi, &group.ob_mpi) ) return group @classmethod def Difference(cls, Group group1 not None, Group group2 not None): """ Produce a group from the difference of two existing groups """ cdef Group group = cls() CHKERR( MPI_Group_difference( group1.ob_mpi, group2.ob_mpi, &group.ob_mpi) ) return group def Incl(self, ranks): """ Produce a group by reordering an existing group and taking only listed members """ cdef int n = 0, *iranks = NULL ranks = getarray_int(ranks, &n, &iranks) cdef Group group = type(self)() CHKERR( MPI_Group_incl(self.ob_mpi, n, iranks, &group.ob_mpi) ) return group def Excl(self, ranks): """ Produce a group by reordering an existing group and taking only unlisted members """ cdef int n = 0, *iranks = NULL ranks = getarray_int(ranks, &n, &iranks) cdef Group group = type(self)() CHKERR( MPI_Group_excl(self.ob_mpi, n, iranks, &group.ob_mpi) ) return group def Range_incl(self, ranks): """ Create a new group from ranges of of ranks in an existing group """ cdef int *p = NULL, (*ranges)[3]# = NULL ## XXX cython fails ranges = NULL cdef int i = 0, n = len(ranks) cdef tmp1 = allocate(n, sizeof(int[3]), &ranges) for i from 0 <= i < n: p = ranges[i] p[0], p[1], p[2] = ranks[i] cdef Group group = type(self)() CHKERR( MPI_Group_range_incl(self.ob_mpi, n, ranges, &group.ob_mpi) ) return group def Range_excl(self, ranks): """ Create a new group by excluding ranges of processes from an existing group """ cdef int *p = NULL, (*ranges)[3]# = NULL ## XXX cython fails ranges = NULL cdef int i = 0, n = len(ranks) cdef tmp1 = allocate(n, sizeof(int[3]), &ranges) for i from 0 <= i < n: p = ranges[i] p[0], p[1], p[2] = ranks[i] cdef Group group = type(self)() CHKERR( MPI_Group_range_excl(self.ob_mpi, n, ranges, &group.ob_mpi) ) return group # Group Destructor # ---------------- def Free(self): """ Free a group """ if self.ob_mpi != MPI_GROUP_EMPTY: CHKERR( MPI_Group_free(&self.ob_mpi) ) elif self is not __GROUP_EMPTY__: self.ob_mpi = MPI_GROUP_NULL else: CHKERR( MPI_ERR_GROUP ) # Fortran Handle # -------------- def py2f(self): """ """ return MPI_Group_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef Group group = cls() group.ob_mpi = MPI_Group_f2c(arg) return group cdef Group __GROUP_NULL__ = new_Group ( MPI_GROUP_NULL ) cdef Group __GROUP_EMPTY__ = new_Group ( MPI_GROUP_EMPTY ) # Predefined group handles # ------------------------ GROUP_NULL = __GROUP_NULL__ #: Null group handle GROUP_EMPTY = __GROUP_EMPTY__ #: Empty group handle mpi4py_1.3.1+hg20131106.orig/src/MPI/Info.pyx0000644000000000000000000001476112211706251016174 0ustar 00000000000000cdef class Info: """ Info """ def __cinit__(self, Info info=None): self.ob_mpi = MPI_INFO_NULL if info is not None: self.ob_mpi = info.ob_mpi def 
__dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_Info(&self.ob_mpi) ) def __richcmp__(self, other, int op): if not isinstance(self, Info): return NotImplemented if not isinstance(other, Info): return NotImplemented cdef Info s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_INFO_NULL @classmethod def Create(cls): """ Create a new, empty info object """ cdef Info info = cls() CHKERR( MPI_Info_create(&info.ob_mpi) ) return info def Free(self): """ Free a info object """ CHKERR( MPI_Info_free(&self.ob_mpi) ) def Dup(self): """ Duplicate an existing info object, creating a new object, with the same (key, value) pairs and the same ordering of keys """ cdef Info info = type(self)() CHKERR( MPI_Info_dup(self.ob_mpi, &info.ob_mpi) ) return info def Get(self, object key, int maxlen=-1): """ Retrieve the value associated with a key """ if maxlen < 0: maxlen = MPI_MAX_INFO_VAL if maxlen > MPI_MAX_INFO_VAL: maxlen = MPI_MAX_INFO_VAL cdef char *ckey = NULL cdef char *cvalue = NULL cdef int flag = 0 key = asmpistr(key, &ckey, NULL) cdef tmp = allocate((maxlen+1), sizeof(char), &cvalue) CHKERR( MPI_Info_get(self.ob_mpi, ckey, maxlen, cvalue, &flag) ) cvalue[maxlen] = 0 # just in case if not flag: return None return mpistr(cvalue) def Set(self, object key, object value): """ Add the (key, value) pair to info, and overrides the value if a value for the same key was previously set """ cdef char *ckey = NULL cdef char *cvalue = NULL key = asmpistr(key, &ckey, NULL) value = asmpistr(value, &cvalue, NULL) CHKERR( MPI_Info_set(self.ob_mpi, ckey, cvalue) ) def Delete(self, object key): """ Remove a (key, value) pair from info """ cdef char *ckey = NULL key = asmpistr(key, &ckey, NULL) CHKERR( MPI_Info_delete(self.ob_mpi, ckey) ) def Get_nkeys(self): """ Return the number of currently defined keys in info """ cdef int nkeys = 0 CHKERR( MPI_Info_get_nkeys(self.ob_mpi, &nkeys) ) return nkeys def Get_nthkey(self, int n): """ Return the nth defined key in info. 
Keys are numbered in the range [0, N) where N is the value returned by `Info.Get_nkeys()` """ cdef char ckey[MPI_MAX_INFO_KEY+1] CHKERR( MPI_Info_get_nthkey(self.ob_mpi, n, ckey) ) ckey[MPI_MAX_INFO_KEY] = 0 # just in case return mpistr(ckey) # Fortran Handle # -------------- def py2f(self): """ """ return MPI_Info_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef Info info = cls() info.ob_mpi = MPI_Info_f2c(arg) return info # Python mapping emulation # ------------------------ def __len__(self): if not self: return 0 return self.Get_nkeys() def __contains__(self, object key): if not self: return False cdef char *ckey = NULL cdef int dummy = 0 cdef int haskey = 0 key = asmpistr(key, &ckey, NULL) CHKERR( MPI_Info_get_valuelen(self.ob_mpi, ckey, &dummy, &haskey) ) return haskey def __iter__(self): return iter(self.keys()) def __getitem__(self, object key): if not self: raise KeyError(key) cdef object value = self.Get(key) if value is None: raise KeyError(key) return value def __setitem__(self, object key, object value): if not self: raise KeyError(key) self.Set(key, value) def __delitem__(self, object key): if not self: raise KeyError(key) if key not in self: raise KeyError(key) self.Delete(key) def get(self, object key, object default=None): """info get""" if not self: return default cdef object value = self.Get(key) if value is None: return default return value def keys(self): """info keys""" if not self: return [] cdef list keys = [] cdef int k = 0, nkeys = self.Get_nkeys() cdef object key for k from 0 <= k < nkeys: key = self.Get_nthkey(k) keys.append(key) return keys def values(self): """info values""" if not self: return [] cdef list values = [] cdef int k = 0, nkeys = self.Get_nkeys() cdef object key, val for k from 0 <= k < nkeys: key = self.Get_nthkey(k) val = self.Get(key) values.append(val) return values def items(self): """info items""" if not self: return [] cdef list items = [] cdef int k = 0, nkeys = self.Get_nkeys() cdef object key, value for k from 0 <= k < nkeys: key = self.Get_nthkey(k) value = self.Get(key) items.append((key, value)) return items def update(self, other=(), **kwds): """info update""" if not self: raise KeyError cdef object key, value if hasattr(other, 'keys'): for key in other.keys(): self.Set(key, other[key]) else: for key, value in other: self.Set(key, value) for key, value in kwds.items(): self.Set(key, value) def clear(self): """info clear""" if not self: return None cdef int k = 0, nkeys = self.Get_nkeys() cdef object key for k from 0 <= k < nkeys: key = self.Get_nthkey(0) self.Delete(key) cdef Info __INFO_NULL__ = new_Info(MPI_INFO_NULL) cdef Info __INFO_ENV__ = new_Info(MPI_INFO_ENV) # Predefined info handles # ----------------------- INFO_NULL = __INFO_NULL__ #: Null info handle INFO_ENV = __INFO_ENV__ #: Environment info handle mpi4py_1.3.1+hg20131106.orig/src/MPI/MPI.pyx0000644000000000000000000001541712211706251015725 0ustar 00000000000000__doc__ = """ Message Passing Interface """ from mpi4py.libmpi cimport * include "stdlib.pxi" include "atimport.pxi" initialize() startup() include "asmpistr.pxi" include "asbuffer.pxi" include "asmemory.pxi" include "asarray.pxi" include "helpers.pxi" include "msgbuffer.pxi" include "msgpickle.pxi" include "CAPI.pxi" include "Exception.pyx" include "Errhandler.pyx" include "Datatype.pyx" include "Status.pyx" include "Request.pyx" include "Message.pyx" include "Info.pyx" include "Op.pyx" include "Group.pyx" include "Comm.pyx" include "Win.pyx" include "File.pyx" # Assorted constants # 
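# --- Illustrative usage sketch (not part of the original source). ---
# Info objects carry (key, value) string hints and, through the mapping
# emulation above, behave much like a Python dict. The hint names below
# are implementation-defined examples, not guaranteed to be recognized.
from mpi4py import MPI

info = MPI.Info.Create()
info.Set("romio_cb_read", "enable")
info["striping_factor"] = "4"        # item assignment via __setitem__
for key, value in info.items():
    print(key, value)
# e.g. pass the hints when opening a file:
#   fh = MPI.File.Open(MPI.COMM_WORLD, "data.bin", MPI.MODE_RDONLY, info)
info.Free()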
------------------ UNDEFINED = MPI_UNDEFINED #"""Undefined integer value""" ANY_SOURCE = MPI_ANY_SOURCE #"""Wildcard source value for receives""" ANY_TAG = MPI_ANY_TAG #"""Wildcard tag value for receives""" PROC_NULL = MPI_PROC_NULL #"""Special process rank for send/receive""" ROOT = MPI_ROOT #"""Root process for collective inter-communications""" BOTTOM = __BOTTOM__ #"""Special address for buffers""" IN_PLACE = __IN_PLACE__ #"""*In-place* option for collective communications""" # Predefined Attribute Keyvals # ---------------------------- KEYVAL_INVALID = MPI_KEYVAL_INVALID TAG_UB = MPI_TAG_UB HOST = MPI_HOST IO = MPI_IO WTIME_IS_GLOBAL = MPI_WTIME_IS_GLOBAL UNIVERSE_SIZE = MPI_UNIVERSE_SIZE APPNUM = MPI_APPNUM LASTUSEDCODE = MPI_LASTUSEDCODE WIN_BASE = MPI_WIN_BASE WIN_SIZE = MPI_WIN_SIZE WIN_DISP_UNIT = MPI_WIN_DISP_UNIT WIN_CREATE_FLAVOR = MPI_WIN_CREATE_FLAVOR WIN_MODEL = MPI_WIN_MODEL # Memory Allocation # ----------------- def Alloc_mem(Aint size, Info info=INFO_NULL): """ Allocate memory for message passing and RMA """ cdef void *base = NULL cdef MPI_Info cinfo = arg_Info(info) CHKERR( MPI_Alloc_mem(size, cinfo, &base) ) return tomemory(base, size) def Free_mem(memory): """ Free memory allocated with `Alloc_mem()` """ cdef void *base = NULL asmemory(memory, &base, NULL) CHKERR( MPI_Free_mem(base) ) # Initialization and Exit # ----------------------- def Init(): """ Initialize the MPI execution environment """ CHKERR( MPI_Init(NULL, NULL) ) startup() def Finalize(): """ Terminate the MPI execution environment """ cleanup() CHKERR( MPI_Finalize() ) # Levels of MPI threading support # ------------------------------- THREAD_SINGLE = MPI_THREAD_SINGLE # """Only one thread will execute""" THREAD_FUNNELED = MPI_THREAD_FUNNELED # """MPI calls are *funneled* to the main thread""" THREAD_SERIALIZED = MPI_THREAD_SERIALIZED # """MPI calls are *serialized*""" THREAD_MULTIPLE = MPI_THREAD_MULTIPLE # """Multiple threads may call MPI""" def Init_thread(int required=THREAD_MULTIPLE): """ Initialize the MPI execution environment """ cdef int provided = MPI_THREAD_SINGLE CHKERR( MPI_Init_thread(NULL, NULL, required, &provided) ) startup() return provided def Query_thread(): """ Return the level of thread support provided by the MPI library """ cdef int provided = MPI_THREAD_SINGLE CHKERR( MPI_Query_thread(&provided) ) return provided def Is_thread_main(): """ Indicate whether this thread called ``Init`` or ``Init_thread`` """ cdef int flag = 1 CHKERR( MPI_Is_thread_main(&flag) ) return flag def Is_initialized(): """ Indicates whether ``Init`` has been called """ cdef int flag = 0 CHKERR( MPI_Initialized(&flag) ) return flag def Is_finalized(): """ Indicates whether ``Finalize`` has completed """ cdef int flag = 0 CHKERR( MPI_Finalized(&flag) ) return flag # Implementation Information # -------------------------- # MPI Version Number # ----------------- VERSION = MPI_VERSION SUBVERSION = MPI_SUBVERSION def Get_version(): """ Obtain the version number of the MPI standard supported by the implementation as a tuple ``(version, subversion)`` """ cdef int version = 1 cdef int subversion = 0 CHKERR( MPI_Get_version(&version, &subversion) ) return (version, subversion) def Get_library_version(): """ Obtain the version string of the MPI library """ cdef char name[MPI_MAX_LIBRARY_VERSION_STRING+1] cdef int nlen = 0 CHKERR( MPI_Get_library_version(name, &nlen) ) return tompistr(name, nlen) # Environmental Inquires # ---------------------- def Get_processor_name(): """ Obtain the name of the calling processor 
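# --- Illustrative usage sketch (not part of the original source). ---
# Querying the runtime with the module-level functions above. Importing
# mpi4py normally initializes MPI already, so only query calls are shown.
from mpi4py import MPI

print(MPI.Get_version())                            # e.g. (2, 2)
print(MPI.Query_thread() == MPI.THREAD_MULTIPLE)    # thread support level
print(MPI.Is_initialized(), MPI.Is_finalized())
print(MPI.Get_processor_name())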
""" cdef char name[MPI_MAX_PROCESSOR_NAME+1] cdef int nlen = 0 CHKERR( MPI_Get_processor_name(name, &nlen) ) return tompistr(name, nlen) # Timers and Synchronization # -------------------------- def Wtime(): """ Return an elapsed time on the calling processor """ return MPI_Wtime() def Wtick(): """ Return the resolution of ``Wtime`` """ return MPI_Wtick() # Control of Profiling # -------------------- def Pcontrol(int level): """ Control profiling """ if level < 0 or level > 2: CHKERR( MPI_ERR_ARG ) CHKERR( MPI_Pcontrol(level) ) # Maximum string sizes # -------------------- # MPI-1 MAX_PROCESSOR_NAME = MPI_MAX_PROCESSOR_NAME MAX_ERROR_STRING = MPI_MAX_ERROR_STRING # MPI-2 MAX_PORT_NAME = MPI_MAX_PORT_NAME MAX_INFO_KEY = MPI_MAX_INFO_KEY MAX_INFO_VAL = MPI_MAX_INFO_VAL MAX_OBJECT_NAME = MPI_MAX_OBJECT_NAME MAX_DATAREP_STRING = MPI_MAX_DATAREP_STRING # MPI-3 MAX_LIBRARY_VERSION_STRING = MPI_MAX_LIBRARY_VERSION_STRING # -------------------------------------------------------------------- cdef extern from "vendor.h": int MPI_Get_vendor(const_char**,int*,int*,int*) def get_vendor(): """ Infomation about the underlying MPI implementation :Returns: - a string with the name of the MPI implementation - an integer 3-tuple version ``(major, minor, micro)`` """ cdef const_char *name=NULL cdef int major=0, minor=0, micro=0 CHKERR( MPI_Get_vendor(&name, &major, &minor, µ) ) return (mpistr(name), (major, minor, micro)) # -------------------------------------------------------------------- cdef extern from *: enum: PY_VERSION_HEX if PYPY and PY_VERSION_HEX < 0x02070300: exec """ def _pypy_setup(): for klass in ( Status, Datatype, Request, Prequest, Grequest, Message, Op, Group, Info, Errhandler, Comm, Win, File, ): for name in klass.__dict__: meth = klass.__dict__[name] if (isinstance(meth, classmethod) or isinstance(meth, staticmethod)): hasattr(klass, name) _pypy_setup() del _pypy_setup """ in globals() # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/Message.pyx0000644000000000000000000001321212211706251016653 0ustar 00000000000000cdef class Message: """ Message """ def __cinit__(self, Message message=None): self.ob_mpi = MPI_MESSAGE_NULL if message is not None: self.ob_mpi = message.ob_mpi def __dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_Message(&self.ob_mpi) ) def __richcmp__(self, other, int op): if not isinstance(self, Message): return NotImplemented if not isinstance(other, Message): return NotImplemented cdef Message s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_MESSAGE_NULL # Matching Probe # -------------- @classmethod def Probe(cls, Comm comm not None, int source=0, int tag=0, Status status=None): """ Blocking test for a message """ cdef MPI_Message cmessage = MPI_MESSAGE_NULL cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Mprobe( source, tag, comm.ob_mpi, &cmessage, statusp) ) cdef Message message = Message.__new__(cls) message.ob_mpi = cmessage return message @classmethod def Iprobe(cls, Comm comm not None, int source=0, int tag=0, Status status=None): """ Nonblocking test for a message """ cdef int flag = 0 cdef MPI_Message cmessage = MPI_MESSAGE_NULL cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Improbe( source, tag, comm.ob_mpi, &flag, &cmessage, statusp) ) if flag == 0: return None cdef Message message 
= Message.__new__(cls) message.ob_mpi = cmessage return message # Matched receives # ---------------- def Recv(self, buf, Status status=None): """ Blocking receive of matched message """ cdef MPI_Message message = self.ob_mpi cdef int source = MPI_ANY_SOURCE if message == MPI_MESSAGE_NO_PROC: source = MPI_PROC_NULL cdef _p_msg_p2p rmsg = message_p2p_recv(buf, source) cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Mrecv( rmsg.buf, rmsg.count, rmsg.dtype, &message, statusp) ) if self is not __MESSAGE_NO_PROC__: self.ob_mpi = message def Irecv(self, buf): """ Nonblocking receive of matched message """ cdef MPI_Message message = self.ob_mpi cdef int source = MPI_ANY_SOURCE if message == MPI_MESSAGE_NO_PROC: source = MPI_PROC_NULL cdef _p_msg_p2p rmsg = message_p2p_recv(buf, source) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Imrecv( rmsg.buf, rmsg.count, rmsg.dtype, &message, &request.ob_mpi) ) if self is not __MESSAGE_NO_PROC__: self.ob_mpi = message request.ob_buf = rmsg return request # Python Communication # -------------------- # @classmethod def probe(cls, Comm comm not None, int source=0, int tag=0, Status status=None): cdef Message message = Message.__new__(cls) cdef MPI_Status *statusp = arg_Status(status) message.ob_buf = PyMPI_mprobe(source, tag, comm.ob_mpi, &message.ob_mpi, statusp) return message # @classmethod def iprobe(cls, Comm comm not None, int source=0, int tag=0, Status status=None): cdef int flag = 0 cdef Message message = Message.__new__(cls) cdef MPI_Status *statusp = arg_Status(status) message.ob_buf = PyMPI_improbe(source, tag, comm.ob_mpi, &flag, &message.ob_mpi, statusp) if flag == 0: return None return message # def recv(self, obj=None, Status status=None): """ Blocking receive of matched message """ cdef MPI_Message message = self.ob_mpi cdef object rmsg = self.ob_buf # if obj is None else obj cdef MPI_Status *statusp = arg_Status(status) rmsg = PyMPI_mrecv(rmsg, &message, statusp) if self is not __MESSAGE_NO_PROC__: self.ob_mpi = message if self.ob_mpi == MPI_MESSAGE_NULL: self.ob_buf = None return rmsg # def irecv(self, obj=None, Status status=None): """ Nonblocking receive of matched message """ cdef MPI_Message message = self.ob_mpi cdef object rmsg = self.ob_buf # if obj is None else obj cdef Request request = Request.__new__(Request) request.ob_buf = PyMPI_imrecv(rmsg, &message, &request.ob_mpi) if self is not __MESSAGE_NO_PROC__: self.ob_mpi = message if self.ob_mpi == MPI_MESSAGE_NULL: self.ob_buf = None return request # Fortran Handle # -------------- def py2f(self): """ """ return MPI_Message_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef Message message = cls() message.ob_mpi = MPI_Message_f2c(arg) return message cdef Message __MESSAGE_NULL__ = new_Message ( MPI_MESSAGE_NULL ) cdef Message __MESSAGE_NO_PROC__ = new_Message ( MPI_MESSAGE_NO_PROC ) # Predefined message handles # -------------------------- MESSAGE_NULL = __MESSAGE_NULL__ #: Null message handle MESSAGE_NO_PROC = __MESSAGE_NO_PROC__ #: No-proc message handle mpi4py_1.3.1+hg20131106.orig/src/MPI/Op.pyx0000644000000000000000000001025012211706251015644 0ustar 00000000000000cdef class Op: """ Op """ def __cinit__(self, Op op=None): self.ob_mpi = MPI_OP_NULL if op is not None: self.ob_mpi = op.ob_mpi self.ob_func = op.ob_func self.ob_usrid = 0 # XXX def __dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_Op(&self.ob_mpi) ) op_user_del(&self.ob_usrid) def __richcmp__(self, other, int op): if not 
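# --- Illustrative usage sketch (not part of the original source). ---
# Matched probe/receive (MPI-3): Probe returns a Message handle that is
# later consumed by Recv, so the probed message cannot be matched by any
# other receive. Comm.Send is defined in Comm.pyx; NumPy buffers and at
# least two processes are assumed.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    comm.Send([numpy.arange(4, dtype='i'), MPI.INT], dest=1, tag=7)
elif comm.Get_rank() == 1:
    status = MPI.Status()
    msg = MPI.Message.Probe(comm, source=0, tag=7, status=status)
    buf = numpy.empty(status.Get_count(MPI.INT), dtype='i')
    msg.Recv(buf)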
isinstance(self, Op): return NotImplemented if not isinstance(other, Op): return NotImplemented cdef Op s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_OP_NULL def __call__(self, x, y): if self.ob_func != NULL: return self.ob_func(x, y) else: return op_user_py(self.ob_usrid, x, y, None) @classmethod def Create(cls, function, bint commute=False): """ Create a user-defined operation """ cdef Op op = cls() cdef MPI_User_function *cfunction = NULL op.ob_usrid = op_user_new(function, &cfunction) CHKERR( MPI_Op_create(cfunction, commute, &op.ob_mpi) ) return op def Free(self): """ Free the operation """ CHKERR( MPI_Op_free(&self.ob_mpi) ) op_user_del(&self.ob_usrid) # Process-local reduction # ----------------------- def Is_commutative(self): """ Query reduction operations for their commutativity """ cdef int flag = 0 CHKERR( MPI_Op_commutative(self.ob_mpi, &flag) ) return flag property is_commutative: """is commutative""" def __get__(self): return self.Is_commutative() def Reduce_local(self, inbuf, inoutbuf): """ Apply a reduction operator to local data """ # get *in* and *inout* buffers cdef _p_msg_cco m = message_cco() m.for_cro_send(inbuf, 0) m.for_cro_recv(inoutbuf, 0) # check counts and datatypes if self.scount != self.rcount: raise ValueError( "mismatch in inbuf count %d and inoutbuf count %d" % (self.scount, self.rcount)) if (self.stype != self.rtype): raise ValueError( "mismatch in inbuf and inoutbuf MPI datatypes") # do local reduction with nogil: CHKERR( MPI_Reduce_local( m.sbuf, m.rbuf, m.rcount, m.rtype, self.ob_mpi) ) # Fortran Handle # -------------- def py2f(self): """ """ return MPI_Op_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef Op op = cls() op.ob_mpi = MPI_Op_f2c(arg) return op cdef Op __OP_NULL__ = new_Op( MPI_OP_NULL ) cdef Op __MAX__ = new_Op( MPI_MAX ) cdef Op __MIN__ = new_Op( MPI_MIN ) cdef Op __SUM__ = new_Op( MPI_SUM ) cdef Op __PROD__ = new_Op( MPI_PROD ) cdef Op __LAND__ = new_Op( MPI_LAND ) cdef Op __BAND__ = new_Op( MPI_BAND ) cdef Op __LOR__ = new_Op( MPI_LOR ) cdef Op __BOR__ = new_Op( MPI_BOR ) cdef Op __LXOR__ = new_Op( MPI_LXOR ) cdef Op __BXOR__ = new_Op( MPI_BXOR ) cdef Op __MAXLOC__ = new_Op( MPI_MAXLOC ) cdef Op __MINLOC__ = new_Op( MPI_MINLOC ) cdef Op __REPLACE__ = new_Op( MPI_REPLACE ) cdef Op __NO_OP__ = new_Op( MPI_NO_OP ) # Predefined operation handles # ---------------------------- OP_NULL = __OP_NULL__ #: Null MAX = __MAX__ #: Maximum MIN = __MIN__ #: Minimum SUM = __SUM__ #: Sum PROD = __PROD__ #: Product LAND = __LAND__ #: Logical and BAND = __BAND__ #: Bit-wise and LOR = __LOR__ #: Logical or BOR = __BOR__ #: Bit-wise or LXOR = __LXOR__ #: Logical xor BXOR = __BXOR__ #: Bit-wise xor MAXLOC = __MAXLOC__ #: Maximum and location MINLOC = __MINLOC__ #: Minimum and location REPLACE = __REPLACE__ #: Replace (for RMA) NO_OP = __NO_OP__ #: No-op (for RMA) mpi4py_1.3.1+hg20131106.orig/src/MPI/Request.pyx0000644000000000000000000002536712211706251016735 0ustar 00000000000000cdef class Request: """ Request """ def __cinit__(self, Request request=None): self.ob_mpi = MPI_REQUEST_NULL if request is not None: self.ob_mpi = request.ob_mpi self.ob_buf = request.ob_buf def __dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_Request(&self.ob_mpi) ) def __richcmp__(self, other, int op): if not isinstance(self, Request): return NotImplemented if not isinstance(other, 
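# --- Illustrative usage sketch (not part of the original source). ---
# Process-local reduction with a predefined operation: Reduce_local
# accumulates 'a' into 'b' element-wise without any communication
# (an MPI-2.2 feature).
from mpi4py import MPI
import numpy

a = numpy.arange(5, dtype='d')
b = numpy.ones(5, dtype='d')
MPI.SUM.Reduce_local(a, b)         # b[i] = a[i] + b[i]
print(MPI.SUM.Is_commutative())    # 1 (the predefined sum is commutative)
print(b)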
Request): return NotImplemented cdef Request s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_REQUEST_NULL # Completion Operations # --------------------- def Wait(self, Status status=None): """ Wait for a send or receive to complete """ cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Wait( &self.ob_mpi, statusp) ) if self.ob_mpi == MPI_REQUEST_NULL: self.ob_buf = None def Test(self, Status status=None): """ Test for the completion of a send or receive """ cdef int flag = 0 cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Test( &self.ob_mpi, &flag, statusp) ) if self.ob_mpi == MPI_REQUEST_NULL: self.ob_buf = None return flag def Free(self): """ Free a communication request """ with nogil: CHKERR( MPI_Request_free(&self.ob_mpi) ) def Get_status(self, Status status=None): """ Non-destructive test for the completion of a request """ cdef int flag = 0 cdef MPI_Status *statusp = arg_Status(status) with nogil: CHKERR( MPI_Request_get_status( self.ob_mpi, &flag, statusp) ) return flag # Multiple Completions # -------------------- @classmethod def Waitany(cls, requests, Status status=None): """ Wait for any previously initiated request to complete """ cdef int count = 0 cdef MPI_Request *irequests = NULL cdef int index = MPI_UNDEFINED cdef MPI_Status *statusp = arg_Status(status) # cdef tmp = acquire_rs(requests, None, &count, &irequests, NULL) try: with nogil: CHKERR( MPI_Waitany( count, irequests, &index, statusp) ) finally: release_rs(requests, None, count, irequests, NULL) return index @classmethod def Testany(cls, requests, Status status=None): """ Test for completion of any previously initiated request """ cdef int count = 0 cdef MPI_Request *irequests = NULL cdef int index = MPI_UNDEFINED cdef int flag = 0 cdef MPI_Status *statusp = arg_Status(status) # cdef tmp = acquire_rs(requests, None, &count, &irequests, NULL) try: with nogil: CHKERR( MPI_Testany( count, irequests, &index, &flag, statusp) ) finally: release_rs(requests, None, count, irequests, NULL) # return (index, flag) @classmethod def Waitall(cls, requests, statuses=None): """ Wait for all previously initiated requests to complete """ cdef int count = 0 cdef MPI_Request *irequests = NULL cdef MPI_Status *istatuses = MPI_STATUSES_IGNORE # cdef tmp = acquire_rs(requests, statuses, &count, &irequests, &istatuses) try: with nogil: CHKERR( MPI_Waitall( count, irequests, istatuses) ) finally: release_rs(requests, statuses, count, irequests, istatuses) return None @classmethod def Testall(cls, requests, statuses=None): """ Test for completion of all previously initiated requests """ cdef int count = 0 cdef MPI_Request *irequests = NULL cdef int flag = 0 cdef MPI_Status *istatuses = MPI_STATUSES_IGNORE # cdef tmp = acquire_rs(requests, statuses, &count, &irequests, &istatuses) try: with nogil: CHKERR( MPI_Testall( count, irequests, &flag, istatuses) ) finally: release_rs(requests, statuses, count, irequests, istatuses) return flag @classmethod def Waitsome(cls, requests, statuses=None): """ Wait for some previously initiated requests to complete """ cdef int incount = 0 cdef MPI_Request *irequests = NULL cdef int outcount = MPI_UNDEFINED, *iindices = NULL cdef MPI_Status *istatuses = MPI_STATUSES_IGNORE # cdef tmp1 = acquire_rs(requests, statuses, &incount, &irequests, &istatuses) cdef tmp2 = mkarray_int(incount, &iindices) try: with nogil: 
CHKERR( MPI_Waitsome( incount, irequests, &outcount, iindices, istatuses) ) finally: release_rs(requests, statuses, incount, irequests, istatuses) # cdef int i = 0 cdef object indices if outcount == MPI_UNDEFINED: indices = [] else: indices = [iindices[i] for i from 0 <= i < outcount] return (outcount, indices) @classmethod def Testsome(cls, requests, statuses=None): """ Test for completion of some previously initiated requests """ cdef int incount = 0 cdef MPI_Request *irequests = NULL cdef int outcount = MPI_UNDEFINED, *iindices = NULL cdef MPI_Status *istatuses = MPI_STATUSES_IGNORE # cdef tmp1 = acquire_rs(requests, statuses, &incount, &irequests, &istatuses) cdef tmp2 = mkarray_int(incount, &iindices) try: with nogil: CHKERR( MPI_Testsome( incount, irequests, &outcount, iindices, istatuses) ) finally: release_rs(requests, statuses, incount, irequests, istatuses) # cdef int i = 0 cdef object indices if outcount == MPI_UNDEFINED: indices = [] else: indices = [iindices[i] for i from 0 <= i < outcount] return (outcount, indices) # Cancel # ------ def Cancel(self): """ Cancel a communication request """ with nogil: CHKERR( MPI_Cancel(&self.ob_mpi) ) # Fortran Handle # -------------- def py2f(self): """ """ return MPI_Request_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef Request request = cls() request.ob_mpi = MPI_Request_f2c(arg) return request # Python Communication # -------------------- # def wait(self, Status status=None): """ Wait for a send or receive to complete """ cdef msg = PyMPI_wait(self, status) return msg # def test(self, Status status=None): """ Test for the completion of a send or receive """ cdef int flag = 0 cdef msg = PyMPI_test(self, &flag, status) return (flag, msg) # @classmethod def waitany(cls, requests, Status status=None): """ Wait for any previously initiated request to complete """ cdef int index = MPI_UNDEFINED cdef msg = PyMPI_waitany(requests, &index, status) return (index, msg) # @classmethod def testany(cls, requests, Status status=None): """ Test for completion of any previously initiated request """ cdef int index = MPI_UNDEFINED cdef int flag = 0 cdef msg = PyMPI_testany(requests, &index, &flag, status) return (index, flag, msg) # @classmethod def waitall(cls, requests, statuses=None): """ Wait for all previously initiated requests to complete """ cdef msg = PyMPI_waitall(requests, statuses) return msg # @classmethod def testall(cls, requests, statuses=None): """ Test for completion of all previously initiated requests """ cdef int flag = 0 cdef msg = PyMPI_testall(requests, &flag, statuses) return (flag, msg) cdef class Prequest(Request): """ Persistent request """ def __cinit__(self, Request request=None): if self.ob_mpi != MPI_REQUEST_NULL: (request) def Start(self): """ Initiate a communication with a persistent request """ with nogil: CHKERR( MPI_Start(&self.ob_mpi) ) @classmethod def Startall(cls, requests): """ Start a collection of persistent requests """ cdef int count = 0 cdef MPI_Request *irequests = NULL cdef tmp = acquire_rs(requests, None, &count, &irequests, NULL) # try: with nogil: CHKERR( MPI_Startall(count, irequests) ) finally: release_rs(requests, None, count, irequests, NULL) cdef class Grequest(Request): """ Generalized request """ def __cinit__(self, Request request=None): self.ob_grequest = self.ob_mpi if self.ob_mpi != MPI_REQUEST_NULL: (request) @classmethod def Start(cls, query_fn, free_fn, cancel_fn, args=None, kargs=None): """ Create and return a user-defined request """ cdef Grequest request = cls() cdef 
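# --- Illustrative usage sketch (not part of the original source). ---
# Completing several nonblocking operations with Request.Waitall, then the
# persistent-request variant of the same ring exchange via Prequest.
# Comm.Isend/Irecv and Comm.Send_init/Recv_init are defined in Comm.pyx;
# NumPy buffers are assumed.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
right, left = (rank + 1) % size, (rank - 1) % size
sbuf = numpy.array([rank], dtype='i')
rbuf = numpy.empty(1, dtype='i')
reqs = [comm.Isend(sbuf, dest=right, tag=0),
        comm.Irecv(rbuf, source=left, tag=0)]
MPI.Request.Waitall(reqs)

preq = comm.Send_init(sbuf, dest=right, tag=1)
rreq = comm.Recv_init(rbuf, source=left, tag=1)
for _ in range(2):                   # start/complete the same requests twice
    MPI.Prequest.Startall([preq, rreq])
    MPI.Request.Waitall([preq, rreq])
preq.Free()
rreq.Free()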
_p_greq state = \ _p_greq(query_fn, free_fn, cancel_fn, args, kargs) with nogil: CHKERR( MPI_Grequest_start( greq_query_fn, greq_free_fn, greq_cancel_fn, state, &request.ob_mpi) ) Py_INCREF(state) request.ob_grequest = request.ob_mpi return request def Complete(self): """ Notify that a user-defined request is complete """ if self.ob_mpi != MPI_REQUEST_NULL: if self.ob_mpi != self.ob_grequest: raise MPIException(MPI_ERR_REQUEST) cdef MPI_Request grequest = self.ob_grequest self.ob_grequest = self.ob_mpi ## or MPI_REQUEST_NULL ?? with nogil: CHKERR( MPI_Grequest_complete(grequest) ) self.ob_grequest = self.ob_mpi ## or MPI_REQUEST_NULL ?? cdef Request __REQUEST_NULL__ = new_Request(MPI_REQUEST_NULL) # Predefined request handles # -------------------------- REQUEST_NULL = __REQUEST_NULL__ #: Null request handle mpi4py_1.3.1+hg20131106.orig/src/MPI/Status.pyx0000644000000000000000000000715412211706251016562 0ustar 00000000000000cdef class Status: """ Status """ def __cinit__(self, Status status=None): self.ob_mpi.MPI_SOURCE = MPI_ANY_SOURCE self.ob_mpi.MPI_TAG = MPI_ANY_TAG self.ob_mpi.MPI_ERROR = MPI_SUCCESS if status is not None: copy_Status(&status.ob_mpi, &self.ob_mpi) def __richcmp__(self, other, int op): if not isinstance(self, Status): return NotImplemented if not isinstance(other, Status): return NotImplemented cdef Status s = self, o = other cdef int r = equal_Status(&s.ob_mpi, &o.ob_mpi) if op == Py_EQ: return r == 0 elif op == Py_NE: return r != 0 else: raise TypeError("only '==' and '!='") def Get_source(self): """ Get message source """ return self.ob_mpi.MPI_SOURCE def Set_source(self, int source): """ Set message source """ self.ob_mpi.MPI_SOURCE = source property source: """source""" def __get__(self): return self.Get_source() def __set__(self, value): self.Set_source(value) def Get_tag(self): """ Get message tag """ return self.ob_mpi.MPI_TAG def Set_tag(self, int tag): """ Set message tag """ self.ob_mpi.MPI_TAG = tag property tag: """tag""" def __get__(self): return self.Get_tag() def __set__(self, value): self.Set_tag(value) def Get_error(self): """ Get message error """ return self.ob_mpi.MPI_ERROR def Set_error(self, int error): """ Set message error """ self.ob_mpi.MPI_ERROR = error property error: """error""" def __get__(self): return self.Get_error() def __set__(self, value): self.Set_error(value) def Get_count(self, Datatype datatype not None=BYTE): """ Get the number of *top level* elements """ cdef int count = MPI_UNDEFINED CHKERR( MPI_Get_count(&self.ob_mpi, datatype.ob_mpi, &count) ) return count property count: """byte count""" def __get__(self): return self.Get_count(__BYTE__) def Get_elements(self, Datatype datatype not None): """ Get the number of basic elements in a datatype """ cdef MPI_Count elements = MPI_UNDEFINED CHKERR( MPI_Get_elements_x(&self.ob_mpi, datatype.ob_mpi, &elements) ) return elements def Set_elements(self, Datatype datatype not None, Count count): """ Set the number of elements in a status .. note:: This should be only used when implementing query callback functions for generalized requests """ CHKERR( MPI_Status_set_elements_x(&self.ob_mpi, datatype.ob_mpi, count) ) def Is_cancelled(self): """ Test to see if a request was cancelled """ cdef int flag = 0 CHKERR( MPI_Test_cancelled(&self.ob_mpi, &flag) ) return flag def Set_cancelled(self, bint flag): """ Set the cancelled state associated with a status .. 
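# --- Illustrative usage sketch (not part of the original source). ---
# Inspecting a Status object after a wildcard receive: the source, tag and
# received element count are all recovered from it. Comm.Send/Recv are
# defined in Comm.pyx; NumPy buffers and two processes are assumed.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    comm.Send([numpy.zeros(3, dtype='d'), MPI.DOUBLE], dest=1, tag=42)
elif comm.Get_rank() == 1:
    buf = numpy.empty(10, dtype='d')
    status = MPI.Status()
    comm.Recv(buf, source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
    print(status.source, status.tag, status.Get_count(MPI.DOUBLE))   # 0 42 3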
note:: This should be only used when implementing query callback functions for generalized requests """ CHKERR( MPI_Status_set_cancelled(&self.ob_mpi, flag) ) property cancelled: """ cancelled state """ def __get__(self): return self.Is_cancelled() def __set__(self, value): self.Set_cancelled(value) mpi4py_1.3.1+hg20131106.orig/src/MPI/Win.pyx0000644000000000000000000004613512211706251016036 0ustar 00000000000000# Create flavors # -------------- WIN_FLAVOR_CREATE = MPI_WIN_FLAVOR_CREATE WIN_FLAVOR_ALLOCATE = MPI_WIN_FLAVOR_ALLOCATE WIN_FLAVOR_DYNAMIC = MPI_WIN_FLAVOR_DYNAMIC WIN_FLAVOR_SHARED = MPI_WIN_FLAVOR_SHARED # Memory model # ------------ WIN_SEPARATE = MPI_WIN_SEPARATE WIN_UNIFIED = MPI_WIN_UNIFIED # Assertion modes # --------------- MODE_NOCHECK = MPI_MODE_NOCHECK MODE_NOSTORE = MPI_MODE_NOSTORE MODE_NOPUT = MPI_MODE_NOPUT MODE_NOPRECEDE = MPI_MODE_NOPRECEDE MODE_NOSUCCEED = MPI_MODE_NOSUCCEED # Lock types # ---------- LOCK_EXCLUSIVE = MPI_LOCK_EXCLUSIVE LOCK_SHARED = MPI_LOCK_SHARED cdef class Win: """ Window """ def __cinit__(self, Win win=None): self.ob_mpi = MPI_WIN_NULL if win is not None: self.ob_mpi = win.ob_mpi def __dealloc__(self): if not (self.flags & PyMPI_OWNED): return CHKERR( del_Win(&self.ob_mpi) ) def __richcmp__(self, other, int op): if not isinstance(self, Win): return NotImplemented if not isinstance(other, Win): return NotImplemented cdef Win s = self, o = other if op == Py_EQ: return (s.ob_mpi == o.ob_mpi) elif op == Py_NE: return (s.ob_mpi != o.ob_mpi) else: raise TypeError("only '==' and '!='") def __bool__(self): return self.ob_mpi != MPI_WIN_NULL # [6.2] Initialization # -------------------- # [6.2.1] Window Creation # ----------------------- @classmethod def Create(cls, memory, int disp_unit=1, Info info=INFO_NULL, Intracomm comm not None=COMM_SELF): """ Create an window object for one-sided communication """ cdef void *base = NULL cdef MPI_Aint size = 0 if memory is __BOTTOM__: base = MPI_BOTTOM memory = None elif memory is not None: memory = getbuffer_w(memory, &base, &size) cdef MPI_Info cinfo = arg_Info(info) cdef Win win = cls() with nogil: CHKERR( MPI_Win_create( base, size, disp_unit, cinfo, comm.ob_mpi, &win.ob_mpi) ) CHKERR( MPI_Win_set_errhandler( win.ob_mpi, MPI_ERRORS_RETURN) ) CHKERR( PyMPI_Win_setup(win.ob_mpi, memory) ) return win @classmethod def Allocate(cls, Aint size, int disp_unit=1, Info info=INFO_NULL, Intracomm comm not None=COMM_SELF): """ Create an window object for one-sided communication """ cdef void *base = NULL cdef MPI_Info cinfo = arg_Info(info) cdef Win win = cls() with nogil: CHKERR( MPI_Win_allocate( size, disp_unit, cinfo, comm.ob_mpi, &base, &win.ob_mpi) ) CHKERR( MPI_Win_set_errhandler( win.ob_mpi, MPI_ERRORS_RETURN) ) return win @classmethod def Allocate_shared(cls, Aint size, int disp_unit=1, Info info=INFO_NULL, Intracomm comm not None=COMM_SELF): """ Create an window object for one-sided communication """ cdef void *base = NULL cdef MPI_Info cinfo = arg_Info(info) cdef Win win = cls() with nogil: CHKERR( MPI_Win_allocate_shared( size, disp_unit, cinfo, comm.ob_mpi, &base, &win.ob_mpi) ) CHKERR( MPI_Win_set_errhandler( win.ob_mpi, MPI_ERRORS_RETURN) ) return win def Shared_query(self, int rank): """ Query the process-local address for remote memory segments created with `Win.Allocate_shared()` """ cdef void *base = NULL cdef MPI_Aint size = 0 cdef int disp_unit = 1 CHKERR( MPI_Win_shared_query( self.ob_mpi, rank, &size, &disp_unit, &base) ) return (tomemory(base, size), disp_unit) @classmethod def 
Create_dynamic(cls, Info info=INFO_NULL, Intracomm comm not None=COMM_SELF): """ Create an window object for one-sided communication """ cdef MPI_Info cinfo = arg_Info(info) cdef Win win = cls() with nogil: CHKERR( MPI_Win_create_dynamic( cinfo, comm.ob_mpi, &win.ob_mpi) ) CHKERR( MPI_Win_set_errhandler( win.ob_mpi, MPI_ERRORS_RETURN) ) return win def Attach(self, memory): """ Attach a local memory region """ cdef void *base = NULL cdef MPI_Aint size = 0 memory = getbuffer_w(memory, &base, &size) CHKERR( MPI_Win_attach(self.ob_mpi, base, size) ) def Detach(self, memory): """ Detach a local memory region """ cdef void *base = NULL memory = getbuffer_w(memory, &base, NULL) CHKERR( MPI_Win_detach(self.ob_mpi, base) ) def Free(self): """ Free a window """ with nogil: CHKERR( MPI_Win_free(&self.ob_mpi) ) # [6.2.2] Window Info # ------------------- def Set_info(self, Info info not None): """ Set new values for the hints associated with a window """ with nogil: CHKERR( MPI_Win_set_info( self.ob_mpi, info.ob_mpi) ) def Get_info(self): """ Return the hints for a windows that are currently in use """ cdef Info info = Info.__new__(Info) with nogil: CHKERR( MPI_Win_get_info( self.ob_mpi, &info.ob_mpi) ) return info # [6.2.2] Window Attributes # ------------------------- def Get_group(self): """ Return a duplicate of the group of the communicator used to create the window """ cdef Group group = Group() with nogil: CHKERR( MPI_Win_get_group( self.ob_mpi, &group.ob_mpi) ) return group property group: """window group""" def __get__(self): return self.Get_group() def Get_attr(self, int keyval): """ Retrieve attribute value by key """ cdef void *attrval = NULL cdef int flag = 0 CHKERR( MPI_Win_get_attr(self.ob_mpi, keyval, &attrval, &flag) ) if flag == 0: return None if attrval == NULL: return 0 # handle predefined keyvals if (keyval == MPI_WIN_BASE): return attrval elif (keyval == MPI_WIN_SIZE): return (attrval)[0] elif (keyval == MPI_WIN_DISP_UNIT): return (attrval)[0] elif (keyval == MPI_WIN_CREATE_FLAVOR): return (attrval)[0] elif (keyval == MPI_WIN_MODEL): return (attrval)[0] # likely be a user-defined keyval elif keyval in win_keyval: return attrval else: return PyLong_FromVoidPtr(attrval) def Set_attr(self, int keyval, object attrval): """ Store attribute value associated with a key """ cdef void *ptrval = NULL cdef int incref = 0 if keyval in win_keyval: ptrval = attrval incref = 1 else: ptrval = PyLong_AsVoidPtr(attrval) incref = 0 CHKERR( MPI_Win_set_attr(self.ob_mpi, keyval, ptrval) ) if incref: Py_INCREF(attrval) def Delete_attr(self, int keyval): """ Delete attribute value associated with a key """ CHKERR( MPI_Win_delete_attr(self.ob_mpi, keyval) ) @classmethod def Create_keyval(cls, copy_fn=None, delete_fn=None): """ Create a new attribute key for windows """ cdef int keyval = MPI_KEYVAL_INVALID cdef MPI_Win_copy_attr_function *_copy = win_attr_copy_fn cdef MPI_Win_delete_attr_function *_del = win_attr_delete_fn cdef void *extra_state = NULL CHKERR( MPI_Win_create_keyval(_copy, _del, &keyval, extra_state) ) win_keyval_new(keyval, copy_fn, delete_fn) return keyval @classmethod def Free_keyval(cls, int keyval): """ Free and attribute key for windows """ cdef int keyval_save = keyval CHKERR( MPI_Win_free_keyval (&keyval) ) win_keyval_del(keyval_save) return keyval property attrs: "window attributes" def __get__(self): cdef MPI_Win win = self.ob_mpi cdef void *base = NULL, *pbase = NULL cdef MPI_Aint size = 0, *psize = NULL cdef int disp = 1, *pdisp = NULL cdef int attr = MPI_KEYVAL_INVALID cdef 
int flag = 0 # attr = MPI_WIN_BASE CHKERR( MPI_Win_get_attr(win, attr, &pbase, &flag) ) if flag and pbase != NULL: base = pbase # attr = MPI_WIN_SIZE CHKERR( MPI_Win_get_attr(win, attr, &psize, &flag) ) if flag and psize != NULL: size = psize[0] # attr = MPI_WIN_DISP_UNIT CHKERR( MPI_Win_get_attr(win, attr, &pdisp, &flag) ) if flag and pdisp != NULL: disp = pdisp[0] # return (base, size, disp) property memory: """window memory buffer""" def __get__(self): cdef MPI_Win win = self.ob_mpi cdef void *base = NULL, *pbase = NULL cdef MPI_Aint size = 0, *psize = NULL cdef int attr = MPI_KEYVAL_INVALID cdef int flag = 0 # attr = MPI_WIN_BASE CHKERR( MPI_Win_get_attr(win, attr, &pbase, &flag) ) if flag and pbase != NULL: base = pbase # attr = MPI_WIN_SIZE CHKERR( MPI_Win_get_attr(win, attr, &psize, &flag) ) if flag and psize != NULL: size = psize[0] # return tomemory(base, size) # [6.3] Communication Calls # ------------------------- # [6.3.1] Put # ----------- def Put(self, origin, int target_rank, target=None): """ Put data into a memory window on a remote process. """ cdef _p_msg_rma msg = message_rma() msg.for_put(origin, target_rank, target) with nogil: CHKERR( MPI_Put( msg.oaddr, msg.ocount, msg.otype, target_rank, msg.tdisp, msg.tcount, msg.ttype, self.ob_mpi) ) # [6.3.2] Get # ----------- def Get(self, origin, int target_rank, target=None): """ Get data from a memory window on a remote process. """ cdef _p_msg_rma msg = message_rma() msg.for_get(origin, target_rank, target) with nogil: CHKERR( MPI_Get( msg.oaddr, msg.ocount, msg.otype, target_rank, msg.tdisp, msg.tcount, msg.ttype, self.ob_mpi) ) # [6.3.4] Accumulate Functions # ---------------------------- def Accumulate(self, origin, int target_rank, target=None, Op op not None=SUM): """ Accumulate data into the target process """ cdef _p_msg_rma msg = message_rma() msg.for_acc(origin, target_rank, target) with nogil: CHKERR( MPI_Accumulate( msg.oaddr, msg.ocount, msg.otype, target_rank, msg.tdisp, msg.tcount, msg.ttype, op.ob_mpi, self.ob_mpi) ) # [X.X.X] Get Accumulate Function # ------------------------------- def Get_accumulate(self, origin, result, int target_rank, target=None, Op op not None=SUM): """ Fetch-and-accumulate data into the target process """ cdef _p_msg_rma msg = message_rma() msg.for_get_acc(origin, result, target_rank, target) with nogil: CHKERR( MPI_Get_accumulate( msg.oaddr, msg.ocount, msg.otype, msg.raddr, msg.rcount, msg.rtype, target_rank, msg.tdisp, msg.tcount, msg.ttype, op.ob_mpi, self.ob_mpi) ) # [X.X.X] Request-based RMA Communication Operations # -------------------------------------------------- def Rput(self, origin, int target_rank, target=None): """ Put data into a memory window on a remote process. """ cdef _p_msg_rma msg = message_rma() msg.for_put(origin, target_rank, target) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Rput( msg.oaddr, msg.ocount, msg.otype, target_rank, msg.tdisp, msg.tcount, msg.ttype, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = msg return request def Rget(self, origin, int target_rank, target=None): """ Get data from a memory window on a remote process. 
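# --- Illustrative usage sketch (not part of the original source). ---
# Active-target RMA: each process exposes a NumPy array through Win.Create
# and, inside a fence epoch, puts its rank into the right neighbor's
# window. Note that Win.Create takes the communicator as a keyword after
# disp_unit and info.
from mpi4py import MPI
import numpy

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
mem = numpy.zeros(1, dtype='i')
win = MPI.Win.Create(mem, disp_unit=mem.itemsize, comm=comm)
src = numpy.array([rank], dtype='i')
win.Fence()
win.Put(src, (rank + 1) % size)       # write into the right neighbor's memory
win.Fence()
print(rank, mem[0])                   # now holds the left neighbor's rank
win.Free()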
""" cdef _p_msg_rma msg = message_rma() msg.for_get(origin, target_rank, target) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Rget( msg.oaddr, msg.ocount, msg.otype, target_rank, msg.tdisp, msg.tcount, msg.ttype, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = msg return request def Raccumulate(self, origin, int target_rank, target=None, Op op not None=SUM): """ Fetch-and-accumulate data into the target process """ cdef _p_msg_rma msg = message_rma() msg.for_acc(origin, target_rank, target) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Raccumulate( msg.oaddr, msg.ocount, msg.otype, target_rank, msg.tdisp, msg.tcount, msg.ttype, op.ob_mpi, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = msg return request def Rget_accumulate(self, origin, result, int target_rank, target=None, Op op not None=SUM): """ Accumulate data into the target process using remote memory access. """ cdef _p_msg_rma msg = message_rma() msg.for_get_acc(origin, result, target_rank, target) cdef Request request = Request.__new__(Request) with nogil: CHKERR( MPI_Rget_accumulate( msg.oaddr, msg.ocount, msg.otype, msg.raddr, msg.rcount, msg.rtype, target_rank, msg.tdisp, msg.tcount, msg.ttype, op.ob_mpi, self.ob_mpi, &request.ob_mpi) ) request.ob_buf = msg return request # [6.4] Synchronization Calls # --------------------------- # [6.4.1] Fence # ------------- def Fence(self, int assertion=0): """ Perform an MPI fence synchronization on a window """ with nogil: CHKERR( MPI_Win_fence(assertion, self.ob_mpi) ) # [6.4.2] General Active Target Synchronization # --------------------------------------------- def Start(self, Group group not None, int assertion=0): """ Start an RMA access epoch for MPI """ with nogil: CHKERR( MPI_Win_start( group.ob_mpi, assertion, self.ob_mpi) ) def Complete(self): """ Completes an RMA operations begun after an `Win.Start()` """ with nogil: CHKERR( MPI_Win_complete(self.ob_mpi) ) def Post(self, Group group not None, int assertion=0): """ Start an RMA exposure epoch """ with nogil: CHKERR( MPI_Win_post( group.ob_mpi, assertion, self.ob_mpi) ) def Wait(self): """ Complete an RMA exposure epoch begun with `Win.Post()` """ with nogil: CHKERR( MPI_Win_wait(self.ob_mpi) ) def Test(self): """ Test whether an RMA exposure epoch has completed """ cdef int flag = 0 with nogil: CHKERR( MPI_Win_test(self.ob_mpi, &flag) ) return flag # [6.4.3] Lock # ------------ def Lock(self, int lock_type, int rank, int assertion=0): """ Begin an RMA access epoch at the target process """ with nogil: CHKERR( MPI_Win_lock( lock_type, rank, assertion, self.ob_mpi) ) def Unlock(self, int rank): """ Complete an RMA access epoch at the target process """ with nogil: CHKERR( MPI_Win_unlock(rank, self.ob_mpi) ) def Lock_all(self, int assertion=0): """ Begin an RMA access epoch at all processes """ with nogil: CHKERR( MPI_Win_lock_all( assertion, self.ob_mpi) ) def Unlock_all(self): """ Complete an RMA access epoch at all processes """ with nogil: CHKERR( MPI_Win_unlock_all(self.ob_mpi) ) # [X.X] Flush and Sync # -------------------- def Flush(self, int rank): with nogil: CHKERR( MPI_Win_flush(rank, self.ob_mpi) ) def Flush_all(self): with nogil: CHKERR( MPI_Win_flush_all(self.ob_mpi) ) def Flush_local(self, int rank): with nogil: CHKERR( MPI_Win_flush_local(rank, self.ob_mpi) ) def Flush_local_all(self): with nogil: CHKERR( MPI_Win_flush_local_all(self.ob_mpi) ) def Sync(self): with nogil: CHKERR( MPI_Win_sync(self.ob_mpi) ) # [6.6] Error Handling # -------------------- def 
Get_errhandler(self): """ Get the error handler for a window """ cdef Errhandler errhandler = Errhandler.__new__(Errhandler) CHKERR( MPI_Win_get_errhandler(self.ob_mpi, &errhandler.ob_mpi) ) return errhandler def Set_errhandler(self, Errhandler errhandler not None): """ Set the error handler for a window """ CHKERR( MPI_Win_set_errhandler(self.ob_mpi, errhandler.ob_mpi) ) def Call_errhandler(self, int errorcode): """ Call the error handler installed on a window """ CHKERR( MPI_Win_call_errhandler(self.ob_mpi, errorcode) ) # [8.4] Naming Objects # -------------------- def Get_name(self): """ Get the print name associated with the window """ cdef char name[MPI_MAX_OBJECT_NAME+1] cdef int nlen = 0 CHKERR( MPI_Win_get_name(self.ob_mpi, name, &nlen) ) return tompistr(name, nlen) def Set_name(self, name): """ Set the print name associated with the window """ cdef char *cname = NULL name = asmpistr(name, &cname, NULL) CHKERR( MPI_Win_set_name(self.ob_mpi, cname) ) property name: """window name""" def __get__(self): return self.Get_name() def __set__(self, value): self.Set_name(value) # Fortran Handle # -------------- def py2f(self): """ """ return MPI_Win_c2f(self.ob_mpi) @classmethod def f2py(cls, arg): """ """ cdef Win win = cls() win.ob_mpi = MPI_Win_f2c(arg) return win cdef Win __WIN_NULL__ = new_Win(MPI_WIN_NULL) # Predefined window handles # ------------------------- WIN_NULL = __WIN_NULL__ #: Null window handle mpi4py_1.3.1+hg20131106.orig/src/MPI/asarray.pxi0000644000000000000000000001176712211706251016726 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef inline object newarray_int(Py_ssize_t n, int **p): return allocate(n, sizeof(int), p) cdef inline object getarray_int(object ob, int *n, int **p): cdef int *base = NULL cdef Py_ssize_t i = 0, size = len(ob) cdef object mem = newarray_int(size, &base) for i from 0 <= i < size: base[i] = ob[i] n[0] = size # XXX overflow? 
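# Note on the "XXX overflow?" marker above: 'size' is a Py_ssize_t that is narrowed into the plain C int pointed to by 'n', so a sequence with more than INT_MAX items would be silently truncated; no explicit range check is performed, presumably because callers only pass short integer sequences here.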
p[0] = base return mem cdef inline object chkarray_int(object ob, int n, int **p): cdef int size = 0 cdef object mem = getarray_int(ob, &size, p) if n != size: raise ValueError( "expecting %d items, got %d" % (n, size)) return mem # ----------------------------------------------------------------------------- cdef inline object mkarray_int(Py_ssize_t size, int **p): return allocate(size, sizeof(int), p) cdef inline object asarray_int(object sequence, Py_ssize_t size, int **p): cdef int *array = NULL cdef Py_ssize_t i = 0, n = len(sequence) if size != n: raise ValueError( "expecting %d items, got %d" % (size, n)) cdef object ob = allocate(n, sizeof(int), &array) for i from 0 <= i < n: array[i] = sequence[i] p[0] = array return ob cdef inline object asarray_Aint(object sequence, Py_ssize_t size, MPI_Aint **p): cdef MPI_Aint *array = NULL cdef Py_ssize_t i = 0, n = len(sequence) if size != n: raise ValueError( "expecting %d items, got %d" % (size, n)) cdef object ob = allocate(n, sizeof(MPI_Aint), &array) for i from 0 <= i < n: array[i] = sequence[i] p[0] = array return ob cdef inline object asarray_Datatype(object sequence, Py_ssize_t size, MPI_Datatype **p): cdef MPI_Datatype *array = NULL cdef Py_ssize_t i = 0, n = len(sequence) if size != n: raise ValueError( "expecting %d items, got %d" % (size, n)) cdef object ob = allocate(n, sizeof(MPI_Datatype), &array) for i from 0 <= i < n: array[i] = (sequence[i]).ob_mpi p[0] = array return ob cdef inline object asarray_Info(object sequence, Py_ssize_t size, MPI_Info **p): cdef MPI_Info *array = NULL cdef Py_ssize_t i = 0, n = size cdef MPI_Info info = MPI_INFO_NULL cdef object ob if sequence is None or isinstance(sequence, Info): if sequence is not None: info = (sequence).ob_mpi ob = allocate(size, sizeof(MPI_Info), &array) for i from 0 <= i < size: array[i] = info else: n = len(sequence) if size != n: raise ValueError( "expecting %d items, got %d" % (size, n)) ob = allocate(size, sizeof(MPI_Datatype), &array) for i from 0 <= i < size: array[i] = (sequence[i]).ob_mpi p[0] = array return ob # ----------------------------------------------------------------------------- cdef inline object asarray_str(object sequence, Py_ssize_t size, char ***p): cdef Py_ssize_t i = 0, n = len(sequence) if size != n: raise ValueError( "expecting %d items, got %d" % (size, n)) cdef char** array = NULL cdef object ob = allocate((n+1), sizeof(char*), &array) for i from 0 <= i < n: sequence[i] = asmpistr(sequence[i], &array[i], NULL) array[n] = NULL p[0] = array return (sequence, ob) cdef inline object asarray_argv(object sequence, char ***p): if sequence is None: p[0] = MPI_ARGV_NULL return None cdef Py_ssize_t size = len(sequence) return asarray_str(sequence, size, p) cdef inline object asarray_argvs(object sequence, Py_ssize_t size, char ****p): if sequence is None: p[0] = MPI_ARGVS_NULL return None cdef Py_ssize_t i = 0, n = len(sequence) if size != n: raise ValueError( "expecting %d items, got %d" % (size, n)) cdef char*** array = NULL cdef object ob = allocate((n+1), sizeof(char**), &array) for i from 0 <= i < n: sequence[i] = asarray_argv(sequence[i], &array[i]) array[n] = NULL p[0] = array return (sequence, ob) cdef inline object asarray_nprocs(object sequence, Py_ssize_t size, int **p): cdef Py_ssize_t i = 0 cdef int value = 1 cdef int *array = NULL cdef object ob if sequence is None or is_int(sequence): if sequence is None: value = 1 else: value = sequence ob = newarray_int(size, &array) for i from 0 <= i < size: array[i] = value else: ob = 
asarray_int(sequence, size, &array) p[0] = array return ob # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/asbuffer.pxi0000644000000000000000000001565512211706251017061 0ustar 00000000000000#------------------------------------------------------------------------------ # Python 3 buffer interface (PEP 3118) cdef extern from "Python.h": ctypedef struct Py_buffer: void *obj void *buf Py_ssize_t len Py_ssize_t itemsize bint readonly char *format #int ndim #Py_ssize_t *shape #Py_ssize_t *strides #Py_ssize_t *suboffsets cdef enum: PyBUF_SIMPLE PyBUF_WRITABLE PyBUF_FORMAT PyBUF_ANY_CONTIGUOUS PyBUF_ND PyBUF_STRIDES int PyObject_CheckBuffer(object) int PyObject_GetBuffer(object, Py_buffer *, int) except -1 void PyBuffer_Release(Py_buffer *) int PyBuffer_FillInfo(Py_buffer *, object, void *, Py_ssize_t, bint, int) except -1 # Python 2 buffer interface (legacy) cdef extern from "Python.h": ctypedef void const_void "const void" int PyObject_CheckReadBuffer(object) int PyObject_AsReadBuffer (object, const_void **, Py_ssize_t *) except -1 int PyObject_AsWriteBuffer(object, void **, Py_ssize_t *) except -1 object PyBuffer_FromObject(object, Py_ssize_t, Py_ssize_t) object PyBuffer_FromReadWriteObject(object, Py_ssize_t, Py_ssize_t) #------------------------------------------------------------------------------ cdef extern from *: enum: PYPY "PyMPI_RUNTIME_PYPY" cdef type array_array cdef type numpy_array if PYPY: from array import array as array_array if PYPY: from numpypy import ndarray as numpy_array cdef int \ PyPy_GetBuffer(object obj, Py_buffer *view, int flags) \ except -1: cdef object addr cdef void *buf = NULL cdef Py_ssize_t size = 0 cdef bint readonly = 0 if isinstance(obj, bytes): buf = PyBytes_AsString(obj) size = PyBytes_Size(obj) readonly = 1 #elif isinstance(obj, bytearray): # buf = PyByteArray_AsString(obj) # size = PyByteArray_Size(obj) # readonly = 0 elif isinstance(obj, array_array): addr, size = obj.buffer_info() buf = PyLong_AsVoidPtr(addr) size *= obj.itemsize readonly = 0 elif isinstance(obj, numpy_array): addr, readonly = obj.__array_interface__['data'] buf = PyLong_AsVoidPtr(addr) size = obj.nbytes else: if (flags & PyBUF_WRITABLE) == PyBUF_WRITABLE: readonly = 0 PyObject_AsWriteBuffer(obj, &buf, &size) else: readonly = 1 PyObject_AsReadBuffer(obj, &buf, &size) PyBuffer_FillInfo(view, obj, buf, size, readonly, flags) if (flags & PyBUF_FORMAT) == PyBUF_FORMAT: view.format = b"B" return 0 #--------------------------------------------------------------------- cdef int \ PyObject_GetBufferEx(object obj, Py_buffer *view, int flags) \ except -1: if view == NULL: return 0 if PYPY: # special-case PyPy runtime return PyPy_GetBuffer(obj, view, flags) # Python 3 buffer interface (PEP 3118) if PyObject_CheckBuffer(obj): return PyObject_GetBuffer(obj, view, flags) # Python 2 buffer interface (legacy) if (flags & PyBUF_WRITABLE) == PyBUF_WRITABLE: view.readonly = 0 PyObject_AsWriteBuffer(obj, &view.buf, &view.len) else: view.readonly = 1 PyObject_AsReadBuffer(obj, &view.buf, &view.len) PyBuffer_FillInfo(view, obj, view.buf, view.len, view.readonly, flags) if (flags & PyBUF_FORMAT) == PyBUF_FORMAT: view.format = b"B" return 0 #--------------------------------------------------------------------- #@cython.final #@cython.internal cdef class _p_buffer: cdef Py_buffer view def __dealloc__(self): PyBuffer_Release(&self.view) # buffer interface (PEP 3118) def __getbuffer__(self, Py_buffer *view, int flags): if view == NULL: return 
if view.obj == None: Py_CLEAR(view.obj) if self.view.obj != NULL: PyObject_GetBufferEx(self.view.obj, view, flags) else: PyBuffer_FillInfo(view, NULL, self.view.buf, self.view.len, self.view.readonly, flags) def __releasebuffer__(self, Py_buffer *view): if view == NULL: return PyBuffer_Release(view) # buffer interface (legacy) def __getsegcount__(self, Py_ssize_t *lenp): if lenp != NULL: lenp[0] = self.view.len return 1 def __getreadbuffer__(self, Py_ssize_t idx, void **p): if idx != 0: raise SystemError( "accessing non-existent buffer segment") p[0] = self.view.buf return self.view.len def __getwritebuffer__(self, Py_ssize_t idx, void **p): if idx != 0: raise SystemError( "accessing non-existent buffer segment") if self.view.readonly: raise TypeError("object is not writeable") p[0] = self.view.buf return self.view.len cdef inline _p_buffer newbuffer(): return <_p_buffer>_p_buffer.__new__(_p_buffer) cdef inline _p_buffer getbuffer(object ob, bint readonly, bint format): cdef _p_buffer buf = newbuffer() cdef int flags = PyBUF_ANY_CONTIGUOUS if not readonly: flags |= PyBUF_WRITABLE if format: flags |= PyBUF_FORMAT PyObject_GetBufferEx(ob, &buf.view, flags) return buf cdef inline object getformat(_p_buffer buf): cdef Py_buffer *view = &buf.view # if buf.view.obj == NULL: if view.format != NULL: return mpistr(view.format) else: return "B" elif view.format != NULL: # XXX this is a hack if view.format != b"B": return mpistr(view.format) # cdef object ob = buf.view.obj cdef str format = None try: # numpy.ndarray format = ob.dtype.char except (AttributeError, TypeError): try: # array.array format = ob.typecode except (AttributeError, TypeError): if view.format != NULL: format = mpistr(view.format) return format cdef inline _p_buffer tobuffer(void *p, Py_ssize_t n, bint ro): cdef _p_buffer buf = newbuffer() cdef Py_buffer *view = &buf.view PyBuffer_FillInfo(view, NULL, p, n, ro, PyBUF_FORMAT|PyBUF_STRIDES) return buf #------------------------------------------------------------------------------ cdef inline _p_buffer getbuffer_r(object ob, void **base, MPI_Aint *size): cdef _p_buffer buf = getbuffer(ob, 1, 0) if base != NULL: base[0] = buf.view.buf if size != NULL: size[0] = buf.view.len return buf cdef inline _p_buffer getbuffer_w(object ob, void **base, MPI_Aint *size): cdef _p_buffer buf = getbuffer(ob, 0, 0) if base != NULL: base[0] = buf.view.buf if size != NULL: size[0] = buf.view.len return buf #------------------------------------------------------------------------------ mpi4py_1.3.1+hg20131106.orig/src/MPI/asmemory.pxi0000644000000000000000000000325012211706251017104 0ustar 00000000000000#------------------------------------------------------------------------------ cdef extern from "Python.h": enum: PY_SSIZE_T_MAX void *PyMem_Malloc(size_t) void *PyMem_Realloc(void*, size_t) void PyMem_Free(void*) cdef extern from "Python.h": object PyLong_FromVoidPtr(void*) void* PyLong_AsVoidPtr(object) #------------------------------------------------------------------------------ cdef extern from "Python.h": object PyMemoryView_FromBuffer(Py_buffer *) cdef inline object asmemory(object ob, void **base, MPI_Aint *size): cdef _p_buffer buf = getbuffer_w(ob, base, size) return buf cdef inline object tomemory(void *base, MPI_Aint size): cdef _p_buffer buf = tobuffer(base, size, 0) return PyMemoryView_FromBuffer(&buf.view) #------------------------------------------------------------------------------ #@cython.final #@cython.internal cdef class _p_mem: cdef void *buf def __cinit__(self): self.buf = NULL 
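# _p_mem is a small RAII-style holder: allocate() below stores a raw PyMem_Malloc'ed buffer in self.buf, and __dealloc__ releases it with PyMem_Free, so the C arrays handed out through allocate()/allocate_int() stay valid exactly as long as the owning Python object is referenced.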
def __dealloc__(self): PyMem_Free(self.buf) cdef inline _p_mem allocate(Py_ssize_t m, size_t b, void **buf): cdef Py_ssize_t n = m * b if n > PY_SSIZE_T_MAX: raise MemoryError("memory allocation size too large") if n < 0: raise RuntimeError("memory allocation with negative size") cdef _p_mem ob = <_p_mem>_p_mem.__new__(_p_mem) ob.buf = PyMem_Malloc(n) if ob.buf == NULL: raise MemoryError if buf != NULL: buf[0] = ob.buf return ob cdef inline _p_mem allocate_int(int n, int **p): cdef _p_mem ob = allocate(n, sizeof(int), NULL) p[0] = ob.buf return ob #------------------------------------------------------------------------------ mpi4py_1.3.1+hg20131106.orig/src/MPI/asmpistr.pxi0000644000000000000000000000167212211706251017120 0ustar 00000000000000#--------------------------------------------------------------------- cdef extern from *: ctypedef char const_char "const char" object PyMPIString_AsStringAndSize(object,const_char**,Py_ssize_t*) object PyMPIString_FromString(const_char*) object PyMPIString_FromStringAndSize(const_char*,Py_ssize_t) #--------------------------------------------------------------------- cdef inline object asmpistr(object ob, char **s, int *n): cdef const_char *sbuf = NULL cdef Py_ssize_t slen = 0, *slenp = NULL if n != NULL: slenp = &slen ob = PyMPIString_AsStringAndSize(ob, &sbuf, slenp) if s != NULL: s[0] = sbuf if n != NULL: n[0] = slen return ob cdef inline object tompistr(const_char *s, int n): return PyMPIString_FromStringAndSize(s, n) cdef inline object mpistr(const_char *s): return PyMPIString_FromString(s) #--------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/atimport.pxi0000644000000000000000000001461312211706251017114 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef extern from "Python.h": int Py_IsInitialized() nogil void PySys_WriteStderr(char*,...) 
int Py_AtExit(void (*)()) void Py_INCREF(object) void Py_DECREF(object) void Py_CLEAR(void*) # ----------------------------------------------------------------------------- cdef extern from "atimport.h": pass # ----------------------------------------------------------------------------- ctypedef struct RCParams: int initialize int threaded int thread_level int finalize cdef int warnRC(object attr, object value) except -1: cdef object warn from warnings import warn warn("mpi4py.rc: '%s': unexpected value '%r'" % (attr, value)) cdef int getRCParams(RCParams* rc) except -1: # rc.initialize = 1 rc.threaded = 1 rc.thread_level = MPI_THREAD_MULTIPLE rc.finalize = 1 # cdef object rcmod try: from mpi4py import rc as rcmod except: return 0 # cdef object initialize = True cdef object threaded = True cdef object thread_level = 'multiple' cdef object finalize = None try: initialize = rcmod.initialize except: pass try: threaded = rcmod.threaded except: pass try: thread_level = rcmod.thread_level except: pass try: finalize = rcmod.finalize except: pass # if initialize in (True, 'yes'): rc.initialize = 1 elif initialize in (False, 'no'): rc.initialize = 0 else: warnRC("initialize", initialize) # if threaded in (True, 'yes'): rc.threaded = 1 elif threaded in (False, 'no'): rc.threaded = 0 else: warnRC("threaded", threaded) # if thread_level == 'single': rc.thread_level = MPI_THREAD_SINGLE elif thread_level == 'funneled': rc.thread_level = MPI_THREAD_FUNNELED elif thread_level == 'serialized': rc.thread_level = MPI_THREAD_SERIALIZED elif thread_level == 'multiple': rc.thread_level = MPI_THREAD_MULTIPLE else: warnRC("thread_level", thread_level) # if finalize is None: rc.finalize = rc.initialize elif finalize in (True, 'yes'): rc.finalize = 1 elif finalize in (False, 'no'): rc.finalize = 0 else: warnRC("finalize", finalize) # return 0 # ----------------------------------------------------------------------------- cdef extern from *: # int PyMPI_STARTUP_DONE int PyMPI_StartUp() nogil # int PyMPI_CLEANUP_DONE int PyMPI_CleanUp() nogil cdef int inited_atimport = 0 cdef int finalize_atexit = 0 PyMPI_STARTUP_DONE = 0 PyMPI_CLEANUP_DONE = 0 cdef int initialize() except -1: global inited_atimport global finalize_atexit cdef int ierr = MPI_SUCCESS # MPI initialized ? cdef int initialized = 1 ierr = MPI_Initialized(&initialized) # MPI finalized ? cdef int finalized = 1 ierr = MPI_Finalized(&finalized) # Do we have to initialize MPI? if initialized: if not finalized: # Cleanup at (the very end of) Python exit if Py_AtExit(atexit) < 0: PySys_WriteStderr(b"warning: could not register " b"cleanup with Py_AtExit()\n", 0) return 0 # Use user parameters from 'mpi4py.rc' module cdef RCParams rc getRCParams(&rc) cdef int required = MPI_THREAD_SINGLE cdef int provided = MPI_THREAD_SINGLE if rc.initialize: # We have to initialize MPI if rc.threaded: required = rc.thread_level ierr = MPI_Init_thread(NULL, NULL, required, &provided) if ierr != MPI_SUCCESS: raise RuntimeError( "MPI_Init_thread() failed [error code: %d]" % ierr) else: ierr = MPI_Init(NULL, NULL) if ierr != MPI_SUCCESS: raise RuntimeError( "MPI_Init() failed [error code: %d]" % ierr) inited_atimport = 1 # We initialized MPI if rc.finalize: # We have to finalize MPI finalize_atexit = 1 # Cleanup at (the very end of) Python exit if Py_AtExit(atexit) < 0: PySys_WriteStderr(b"warning: could not register " b"cleanup with Py_AtExit()\n", 0) return 0 cdef inline int mpi_active() nogil: cdef int ierr = MPI_SUCCESS # MPI initialized ? 
cdef int initialized = 0 ierr = MPI_Initialized(&initialized) if not initialized or ierr: return 0 # MPI finalized ? cdef int finalized = 1 ierr = MPI_Finalized(&finalized) if finalized or ierr: return 0 # MPI should be active ... return 1 cdef void startup() nogil: cdef int ierr = MPI_SUCCESS if not mpi_active(): return #DBG# fprintf(stderr, b"statup: BEGIN\n"); fflush(stderr) ierr = PyMPI_StartUp(); if ierr: pass #DBG# fprintf(stderr, b"statup: END\n"); fflush(stderr) cdef void cleanup() nogil: cdef int ierr = MPI_SUCCESS if not mpi_active(): return #DBG# fprintf(stderr, b"cleanup: BEGIN\n"); fflush(stderr) ierr = PyMPI_CleanUp() if ierr: pass #DBG# fprintf(stderr, b"cleanup: END\n"); fflush(stderr) cdef void atexit() nogil: cdef int ierr = MPI_SUCCESS if not mpi_active(): return #DBG# fprintf(stderr, b"atexit: BEGIN\n"); fflush(stderr) cleanup() if not finalize_atexit: return ierr = MPI_Finalize() if ierr: pass #DBG# fprintf(stderr, b"atexit: END\n"); fflush(stderr) # ----------------------------------------------------------------------------- # Vile hack for raising a exception and not contaminate the traceback cdef extern from *: void PyErr_SetObject(object, object) void *PyExc_RuntimeError void *PyExc_NotImplementedError cdef object MPIException = PyExc_RuntimeError cdef int PyMPI_Raise(int ierr) except -1 with gil: if ierr == -1: PyErr_SetObject(PyExc_NotImplementedError, None) return 0 if (MPIException) != NULL: PyErr_SetObject(MPIException, ierr) else: PyErr_SetObject(PyExc_RuntimeError, ierr) return 0 cdef inline int CHKERR(int ierr) nogil except -1: if ierr == 0: return 0 PyMPI_Raise(ierr) return -1 cdef inline void print_traceback(): cdef object sys, traceback import sys, traceback traceback.print_exc() try: sys.stderr.flush() except: pass # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/commimpl.pxi0000644000000000000000000001356112211706251017073 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef dict comm_keyval = {} cdef int comm_keyval_new(int keyval, object copy_fn,object delete_fn) except -1: comm_keyval[keyval] = (copy_fn, delete_fn) return 0 cdef int comm_keyval_del(int keyval) except -1: try: del comm_keyval[keyval] except KeyError: pass return 0 cdef int comm_attr_copy( MPI_Comm comm, int keyval, void *extra_state, void *attrval_in, void *attrval_out, int *flag) except -1: cdef tuple entry = comm_keyval.get(keyval) cdef object copy_fn = None if entry is not None: copy_fn = entry[0] if copy_fn is None or copy_fn is False: flag[0] = 0 return 0 cdef object attrval = attrval_in cdef void **aptr = attrval_out if copy_fn is not True: attrval = copy_fn(attrval) Py_INCREF(attrval) aptr[0] = attrval flag[0] = 1 return 0 cdef int comm_attr_copy_cb( MPI_Comm comm, int keyval, void *extra_state, void *attrval_in, void *attrval_out, int *flag) with gil: cdef object exc try: comm_attr_copy(comm, keyval, extra_state, attrval_in, attrval_out, flag) except MPIException, exc: print_traceback() return exc.Get_error_code() except: print_traceback() return MPI_ERR_OTHER return MPI_SUCCESS cdef int comm_attr_delete( MPI_Comm comm, int keyval, void *attrval, void *extra_state) except -1: cdef tuple entry = comm_keyval.get(keyval) cdef object delete_fn = None if entry is not None: delete_fn = entry[1] if delete_fn is not None: delete_fn(attrval) Py_DECREF(attrval) return 0 cdef int comm_attr_delete_cb( MPI_Comm comm, int keyval, void *attrval, void *extra_state) with 
gil: cdef object exc try: comm_attr_delete(comm, keyval, attrval, extra_state) except MPIException, exc: print_traceback() return exc.Get_error_code() except: print_traceback() return MPI_ERR_OTHER return MPI_SUCCESS @cython.callspec("MPIAPI") cdef int comm_attr_copy_fn(MPI_Comm comm, int keyval, void *extra_state, void *attrval_in, void *attrval_out, int *flag) nogil: if attrval_in == NULL: return MPI_ERR_INTERN if attrval_out == NULL: return MPI_ERR_INTERN if not Py_IsInitialized(): return MPI_SUCCESS return comm_attr_copy_cb(comm, keyval, extra_state, attrval_in, attrval_out, flag) @cython.callspec("MPIAPI") cdef int comm_attr_delete_fn(MPI_Comm comm, int keyval, void *attrval, void *extra_state) nogil: if attrval == NULL: return MPI_ERR_INTERN if not Py_IsInitialized(): return MPI_SUCCESS return comm_attr_delete_cb(comm, keyval, attrval, extra_state) # ----------------------------------------------------------------------------- cdef _p_buffer _buffer = None cdef inline int attach_buffer(ob, void **p, int *n) except -1: global _buffer cdef void *bptr = NULL cdef MPI_Aint blen = 0 _buffer = getbuffer_w(ob, &bptr, &blen) p[0] = bptr n[0] = blen # XXX Overflow ? return 0 cdef inline object detach_buffer(void *p, int n): global _buffer cdef object ob = None try: if (_buffer is not None and _buffer.view.buf == p and _buffer.view.len == n and _buffer.view.obj != NULL): ob = _buffer.view.obj else: ob = tomemory(p, n) finally: _buffer = None return ob # ----------------------------------------------------------------------------- cdef object __UNWEIGHTED__ = MPI_UNWEIGHTED cdef object __WEIGHTS_EMPTY__ = MPI_WEIGHTS_EMPTY cdef object asarray_weights(object weights, int nweight, int **iweight): # if weights is None: iweight[0] = MPI_UNWEIGHTED return None # cdef int i = 0 if weights is __WEIGHTS_EMPTY__: if MPI_WEIGHTS_EMPTY != MPI_UNWEIGHTED: iweight[0] = MPI_WEIGHTS_EMPTY return None else: weights = mkarray_int(nweight, iweight) for i from 0 <= i < nweight: iweight[0][i] = 0 return weights # if weights is __UNWEIGHTED__: iweight[0] = MPI_UNWEIGHTED return None # return asarray_int(weights, nweight, iweight) # ----------------------------------------------------------------------------- cdef inline int comm_neighbors_count(MPI_Comm comm, int *incoming, int *outgoing, ) except -1: cdef int topo = MPI_UNDEFINED cdef int size=0, ndims=0, rank=0, nneighbors=0 cdef int indegree=0, outdegree=0, weighted=0 CHKERR( MPI_Topo_test(comm, &topo) ) if topo == MPI_UNDEFINED: # XXX CHKERR( MPI_Comm_size(comm, &size) ) indegree = outdegree = size elif topo == MPI_CART: CHKERR( MPI_Cartdim_get(comm, &ndims) ) indegree = outdegree = 2*ndims elif topo == MPI_GRAPH: CHKERR( MPI_Comm_rank(comm, &rank) ) CHKERR( MPI_Graph_neighbors_count( comm, rank, &nneighbors) ) indegree = outdegree = nneighbors elif topo == MPI_DIST_GRAPH: CHKERR( MPI_Dist_graph_neighbors_count( comm, &indegree, &outdegree, &weighted) ) if incoming != NULL: incoming[0] = indegree if outgoing != NULL: outgoing[0] = outdegree return 0 # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/helpers.pxi0000644000000000000000000001767712211706251016734 0ustar 00000000000000#------------------------------------------------------------------------------ cdef extern from "Python.h": enum: Py_LT enum: Py_LE enum: Py_EQ enum: Py_NE enum: Py_GT enum: Py_GE #------------------------------------------------------------------------------ cdef enum PyMPI_OBJECT_FLAGS: PyMPI_OWNED = 1<<1 
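# The arg_Status() helper below maps a Python-level 'status=None' argument to
# MPI_STATUS_IGNORE, so callers that do not care about the message envelope
# never allocate a Status object.  Illustrative Python-side sketch (not part
# of this file):
#   status = MPI.Status()
#   obj = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)  # envelope recorded
#   obj = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)                 # status=None -> MPI_STATUS_IGNORE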
#------------------------------------------------------------------------------ # Status cdef inline MPI_Status *arg_Status(object status): if status is None: return MPI_STATUS_IGNORE return &((status).ob_mpi) cdef inline int equal_Status(MPI_Status* s1, MPI_Status* s2) nogil: cdef size_t i=0, n=sizeof(MPI_Status) cdef unsigned char* a = s1 cdef unsigned char* b = s2 for i from 0 <= i < n: if a[i] != b[i]: return 0 return 1 cdef inline void copy_Status(MPI_Status* si, MPI_Status* so) nogil: cdef size_t i=0, n=sizeof(MPI_Status) cdef unsigned char* a = si cdef unsigned char* b = so for i from 0 <= i < n: b[i] = a[i] #------------------------------------------------------------------------------ # Datatype include "typeimpl.pxi" cdef inline Datatype new_Datatype(MPI_Datatype ob): cdef Datatype datatype = Datatype.__new__(Datatype) datatype.ob_mpi = ob return datatype cdef inline int del_Datatype(MPI_Datatype* ob): # if ob == NULL : return 0 if ob[0] == MPI_DATATYPE_NULL : return 0 if not mpi_active() : return 0 # return MPI_Type_free(ob) cdef inline int named_Datatype(MPI_Datatype ob): if ob == MPI_DATATYPE_NULL: return 0 cdef int ni = 0, na = 0, nt = 0, combiner = MPI_UNDEFINED cdef int ierr = MPI_SUCCESS ierr = MPI_Type_get_envelope(ob, &ni, &na, &nt, &combiner) if ierr: return 0 # XXX return combiner == MPI_COMBINER_NAMED cdef void fix_fileview_Datatype(Datatype datatype): cdef MPI_Datatype ob = datatype.ob_mpi if ob == MPI_DATATYPE_NULL: return if named_Datatype(ob): pass #------------------------------------------------------------------------------ # Request include "reqimpl.pxi" cdef inline Request new_Request(MPI_Request ob): cdef Request request = Request.__new__(Request) request.ob_mpi = ob return request cdef inline int del_Request(MPI_Request* ob): # if ob == NULL : return 0 if ob[0] == MPI_REQUEST_NULL : return 0 if not mpi_active() : return 0 # return MPI_Request_free(ob) #------------------------------------------------------------------------------ # Message cdef inline Message new_Message(MPI_Message ob): cdef Message message = Message.__new__(Message) message.ob_mpi = ob return message cdef inline int del_Message(MPI_Message* ob): # if ob == NULL : return 0 if ob[0] == MPI_MESSAGE_NULL : return 0 if ob[0] == MPI_MESSAGE_NO_PROC : return 0 if not mpi_active() : return 0 # # ob[0] = MPI_MESSAGE_NULL return 0 #------------------------------------------------------------------------------ # Op include "opimpl.pxi" cdef inline Op new_Op(MPI_Op ob): cdef Op op = Op.__new__(Op) op.ob_mpi = ob if ob == MPI_OP_NULL : op.ob_func = NULL elif ob == MPI_MAX : op.ob_func = _op_MAX elif ob == MPI_MIN : op.ob_func = _op_MIN elif ob == MPI_SUM : op.ob_func = _op_SUM elif ob == MPI_PROD : op.ob_func = _op_PROD elif ob == MPI_LAND : op.ob_func = _op_LAND elif ob == MPI_BAND : op.ob_func = _op_BAND elif ob == MPI_LOR : op.ob_func = _op_LOR elif ob == MPI_BOR : op.ob_func = _op_BOR elif ob == MPI_LXOR : op.ob_func = _op_LXOR elif ob == MPI_BXOR : op.ob_func = _op_BXOR elif ob == MPI_MAXLOC : op.ob_func = _op_MAXLOC elif ob == MPI_MINLOC : op.ob_func = _op_MINLOC elif ob == MPI_REPLACE : op.ob_func = _op_REPLACE elif ob == MPI_NO_OP : op.ob_func = _op_NO_OP return op cdef inline int del_Op(MPI_Op* ob): # if ob == NULL : return 0 if ob[0] == MPI_OP_NULL : return 0 if ob[0] == MPI_MAX : return 0 if ob[0] == MPI_MIN : return 0 if ob[0] == MPI_SUM : return 0 if ob[0] == MPI_PROD : return 0 if ob[0] == MPI_LAND : return 0 if ob[0] == MPI_BAND : return 0 if ob[0] == MPI_LOR : return 0 if ob[0] == 
MPI_BOR : return 0 if ob[0] == MPI_LXOR : return 0 if ob[0] == MPI_BXOR : return 0 if ob[0] == MPI_MAXLOC : return 0 if ob[0] == MPI_MINLOC : return 0 if ob[0] == MPI_REPLACE : return 0 if not mpi_active() : return 0 # return MPI_Op_free(ob) #------------------------------------------------------------------------------ # Info cdef inline Info new_Info(MPI_Info ob): cdef Info info = Info.__new__(Info) info.ob_mpi = ob return info cdef inline int del_Info(MPI_Info* ob): # if ob == NULL : return 0 if ob[0] == MPI_INFO_NULL : return 0 if ob[0] == MPI_INFO_ENV : return 0 if not mpi_active() : return 0 # return MPI_Info_free(ob) cdef inline MPI_Info arg_Info(object info): if info is None: return MPI_INFO_NULL return (info).ob_mpi #------------------------------------------------------------------------------ # Group cdef inline Group new_Group(MPI_Group ob): cdef Group group = Group.__new__(Group) group.ob_mpi = ob return group cdef inline int del_Group(MPI_Group* ob): # if ob == NULL : return 0 if ob[0] == MPI_GROUP_NULL : return 0 if ob[0] == MPI_GROUP_EMPTY : return 0 if not mpi_active() : return 0 # return MPI_Group_free(ob) #------------------------------------------------------------------------------ # Comm include "commimpl.pxi" cdef inline Comm new_Comm(MPI_Comm ob): cdef Comm comm = Comm.__new__(Comm) comm.ob_mpi = ob return comm cdef inline Intracomm new_Intracomm(MPI_Comm ob): cdef Intracomm comm = Intracomm.__new__(Intracomm) comm.ob_mpi = ob return comm cdef inline Intercomm new_Intercomm(MPI_Comm ob): cdef Intercomm comm = Intercomm.__new__(Intercomm) comm.ob_mpi = ob return comm cdef inline int del_Comm(MPI_Comm* ob): # if ob == NULL : return 0 if ob[0] == MPI_COMM_NULL : return 0 if ob[0] == MPI_COMM_SELF : return 0 if ob[0] == MPI_COMM_WORLD : return 0 if not mpi_active() : return 0 # return MPI_Comm_free(ob) #------------------------------------------------------------------------------ # Win include "winimpl.pxi" cdef inline Win new_Win(MPI_Win ob): cdef Win win = Win.__new__(Win) win.ob_mpi = ob return win cdef inline int del_Win(MPI_Win* ob): # if ob == NULL : return 0 if ob[0] == MPI_WIN_NULL : return 0 if not mpi_active() : return 0 # return MPI_Win_free(ob) #------------------------------------------------------------------------------ # File cdef inline File new_File(MPI_File ob): cdef File file = File.__new__(File) file.ob_mpi = ob return file cdef inline int del_File(MPI_File* ob): # if ob == NULL : return 0 if ob[0] == MPI_FILE_NULL : return 0 if not mpi_active() : return 0 # return MPI_File_close(ob) #------------------------------------------------------------------------------ # Errhandler cdef inline Errhandler new_Errhandler(MPI_Errhandler ob): cdef Errhandler errhandler = Errhandler.__new__(Errhandler) errhandler.ob_mpi = ob return errhandler cdef inline int del_Errhandler(MPI_Errhandler* ob): # if ob == NULL : return 0 if ob[0] == MPI_ERRHANDLER_NULL : return 0 if not mpi_active() : return 0 # return MPI_Errhandler_free(ob) #------------------------------------------------------------------------------ mpi4py_1.3.1+hg20131106.orig/src/MPI/msgbuffer.pxi0000644000000000000000000011062512211706251017235 0ustar 00000000000000#------------------------------------------------------------------------------ cdef extern from "Python.h": int is_int "PyInt_Check" (object) int is_list "PyList_Check" (object) int is_tuple "PyTuple_Check" (object) cdef inline bint is_buffer(object ob): return (PyObject_CheckBuffer(ob) or PyObject_CheckReadBuffer(ob)) 
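# The message helpers that follow (_p_message, message_simple, message_vector)
# accept several buffer-message layouts from the Python side.  Illustrative
# sketch of the accepted forms (the NumPy array and the communicator are only
# assumptions for the example):
#   from mpi4py import MPI
#   import numpy
#   comm = MPI.COMM_WORLD
#   buf = numpy.zeros(10, dtype='d')
#   comm.Send(buf, dest=1)                        # bare buffer, datatype inferred from its format
#   comm.Send([buf, MPI.DOUBLE], dest=1)          # [data, type]
#   comm.Send([buf, 10, MPI.DOUBLE], dest=1)      # [data, count, type]
#   comm.Send([buf, 5, 5, MPI.DOUBLE], dest=1)    # [data, count, displacement, type]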
#------------------------------------------------------------------------------ cdef object __BOTTOM__ = MPI_BOTTOM cdef object __IN_PLACE__ = MPI_IN_PLACE #------------------------------------------------------------------------------ #@cython.final #@cython.internal cdef class _p_message: cdef _p_buffer buf cdef object count cdef object displ cdef Datatype type cdef _p_message message_basic(object o_buf, object o_type, bint readonly, # void **baddr, MPI_Aint *bsize, MPI_Datatype *btype, ): global TypeDict cdef _p_message m = <_p_message>_p_message.__new__(_p_message) # special-case for BOTTOM or None, # an explicit MPI datatype is required if o_buf is __BOTTOM__ or o_buf is None: if isinstance(o_type, Datatype): m.type = o_type else: m.type = TypeDict[o_type] m.buf = newbuffer() baddr[0] = MPI_BOTTOM bsize[0] = 0 btype[0] = m.type.ob_mpi return m # get buffer base address and length cdef bint fmt = (o_type is None) m.buf = getbuffer(o_buf, readonly, fmt) baddr[0] = m.buf.view.buf bsize[0] = m.buf.view.len # lookup datatype if not provided or not a Datatype if isinstance(o_type, Datatype): m.type = o_type elif o_type is None: m.type = TypeDict[getformat(m.buf)] else: m.type = TypeDict[o_type] btype[0] = m.type.ob_mpi # and we are done ... return m cdef _p_message message_simple(object msg, int readonly, int rank, int blocks, # void **_addr, int *_count, MPI_Datatype *_type, ): # special-case PROC_NULL target rank if rank == MPI_PROC_NULL: _addr[0] = NULL _count[0] = 0 _type[0] = MPI_BYTE return None # unpack message list/tuple cdef Py_ssize_t nargs = 0 cdef object o_buf = None cdef object o_count = None cdef object o_displ = None cdef object o_type = None if is_buffer(msg): o_buf = msg elif is_list(msg) or is_tuple(msg): nargs = len(msg) if nargs == 2: (o_buf, o_count) = msg if (isinstance(o_count, Datatype) or isinstance(o_count, str)): (o_count, o_type) = None, o_count elif is_tuple(o_count) or is_list(o_count): (o_count, o_displ) = o_count elif nargs == 3: (o_buf, o_count, o_type) = msg if is_tuple(o_count) or is_list(o_count): (o_count, o_displ) = o_count elif nargs == 4: (o_buf, o_count, o_displ, o_type) = msg else: raise ValueError("message: expecting 2 to 4 items") elif PYPY: o_buf = msg else: raise TypeError("message: expecting buffer or list/tuple") # buffer: address, length, and datatype cdef void *baddr = NULL cdef MPI_Aint bsize = 0 cdef MPI_Datatype btype = MPI_DATATYPE_NULL cdef _p_message m = message_basic(o_buf, o_type, readonly, &baddr, &bsize, &btype) # buffer: count and displacement cdef int count = 0 # number of datatype entries cdef int displ = 0 # from base buffer, in datatype entries cdef MPI_Aint offset = 0 # from base buffer, in bytes cdef MPI_Aint extent = 0, lb = 0 if o_displ is not None: if o_count is None: raise ValueError( "message: cannot handle displacement, " "explicit count required") count = o_count if count < 0: raise ValueError( "message: negative count %d" % count) displ = o_displ if displ < 0: raise ValueError( "message: negative diplacement %d" % displ) if displ != 0: if btype == MPI_DATATYPE_NULL: raise ValueError( "message: cannot handle diplacement, " "datatype is null") CHKERR( MPI_Type_get_extent(btype, &lb, &extent) ) offset = displ*extent # XXX overflow? 
elif o_count is not None: count = o_count if count < 0: raise ValueError( "message: negative count %d" % count) elif bsize > 0: if btype == MPI_DATATYPE_NULL: raise ValueError( "message: cannot guess count, " "datatype is null") CHKERR( MPI_Type_get_extent(btype, &lb, &extent) ) if extent <= 0: raise ValueError( ("message: cannot guess count, " "datatype extent %d (lb:%d, ub:%d)" ) % (extent, lb, lb+extent)) if (bsize % extent) != 0: raise ValueError( ("message: cannot guess count, " "buffer length %d is not a multiple of " "datatype extent %d (lb:%d, ub:%d)" ) % (bsize, extent, lb, lb+extent)) if blocks < 1: blocks = 1 if ((bsize // extent) % blocks) != 0: raise ValueError( ("message: cannot guess count, " "number of datatype items %d is not a multiple of " "the required number of blocks %d" ) % (bsize//extent, blocks)) count = ((bsize // extent) // blocks) # XXX overflow? if o_count is None: o_count = count if o_displ is None: o_displ = displ m.count = o_count m.displ = o_displ # sanity-check zero-sized messages if o_buf is None: if count != 0: raise ValueError( "message: buffer is None but count is %d" % count) if displ != 0: raise ValueError( "message: buffer is None but displacement is %d" % displ) # return collected message data _addr[0] = (baddr + offset) _count[0] = count _type[0] = btype return m cdef _p_message message_vector(object msg, int readonly, int rank, int blocks, # void **_addr, int **_counts, int **_displs, MPI_Datatype *_type, ): # special-case PROC_NULL target rank if rank == MPI_PROC_NULL: _addr[0] = NULL _counts[0] = NULL _displs[0] = NULL _type[0] = MPI_BYTE return None # unpack message list/tuple cdef Py_ssize_t nargs = 0 cdef object o_buf = None cdef object o_counts = None cdef object o_displs = None cdef object o_type = None if is_buffer(msg): o_buf = msg elif is_list(msg) or is_tuple(msg): nargs = len(msg) if nargs == 2: (o_buf, o_counts) = msg if (isinstance(o_counts, Datatype) or isinstance(o_counts, str)): (o_counts, o_type) = None, o_counts elif is_tuple(o_counts): (o_counts, o_displs) = o_counts elif nargs == 3: (o_buf, o_counts, o_type) = msg if is_tuple(o_counts): (o_counts, o_displs) = o_counts elif nargs == 4: (o_buf, o_counts, o_displs, o_type) = msg else: raise ValueError("message: expecting 2 to 4 items") elif PYPY: o_buf = msg else: raise TypeError("message: expecting buffer or list/tuple") # buffer: address, length, and datatype cdef void *baddr = NULL cdef MPI_Aint bsize = 0 cdef MPI_Datatype btype = MPI_DATATYPE_NULL cdef _p_message m = message_basic(o_buf, o_type, readonly, &baddr, &bsize, &btype) # counts and displacements cdef int *counts = NULL cdef int *displs = NULL cdef int i=0, val=0 cdef MPI_Aint extent=0, lb=0 cdef MPI_Aint asize=0, aval=0 if o_counts is None: if bsize > 0: if btype == MPI_DATATYPE_NULL: raise ValueError( "message: cannot guess count, " "datatype is null") CHKERR( MPI_Type_get_extent(btype, &lb, &extent) ) if extent <= 0: raise ValueError( ("message: cannot guess count, " "datatype extent %d (lb:%d, ub:%d)" ) % (extent, lb, lb+extent)) if (bsize % extent) != 0: raise ValueError( ("message: cannot guess count, " "buffer length %d is not a multiple of " "datatype extent %d (lb:%d, ub:%d)" ) % (bsize, extent, lb, lb+extent)) asize = bsize // extent o_counts = mkarray_int(blocks, &counts) for i from 0 <= i < blocks: aval = (asize // blocks) + (asize % blocks > i) counts[i] = aval # XXX overflow? 
elif is_int(o_counts): val = o_counts o_counts = mkarray_int(blocks, &counts) for i from 0 <= i < blocks: counts[i] = val else: o_counts = asarray_int(o_counts, blocks, &counts) if o_displs is None: # contiguous val = 0 o_displs = mkarray_int(blocks, &displs) for i from 0 <= i < blocks: displs[i] = val val += counts[i] elif is_int(o_displs): # strided val = o_displs o_displs = mkarray_int(blocks, &displs) for i from 0 <= i < blocks: displs[i] = val * i else: # general o_displs = asarray_int(o_displs, blocks, &displs) m.count = o_counts m.displ = o_displs # return collected message data _addr[0] = baddr _counts[0] = counts _displs[0] = displs _type[0] = btype return m cdef tuple message_vecw_I(object msg, int readonly, int blocks, # void **_addr, int **_counts, int **_displs, MPI_Datatype **_types, ): cdef Py_ssize_t nargs = len(msg) if nargs == 3: o_buffer, (o_counts, o_displs), o_types = msg elif nargs == 4: o_buffer, o_counts, o_displs, o_types = msg else: raise ValueError("message: expecting 3 to 4 items") if readonly: o_buffer = getbuffer_r(o_buffer, _addr, NULL) else: o_buffer = getbuffer_w(o_buffer, _addr, NULL) o_counts = asarray_int(o_counts, blocks, _counts) o_displs = asarray_int(o_displs, blocks, _displs) o_types = asarray_Datatype(o_types, blocks, _types) return (o_buffer, o_counts, o_displs, o_types) cdef tuple message_vecw_A(object msg, int readonly, int blocks, # void **_addr, int **_counts, MPI_Aint **_displs, MPI_Datatype **_types, ): cdef Py_ssize_t nargs = len(msg) if nargs == 3: o_buffer, (o_counts, o_displs), o_types = msg elif nargs == 4: o_buffer, o_counts, o_displs, o_types = msg else: raise ValueError("message: expecting 3 to 4 items") if readonly: o_buffer = getbuffer_r(o_buffer, _addr, NULL) else: o_buffer = getbuffer_w(o_buffer, _addr, NULL) o_counts = asarray_int(o_counts, blocks, _counts) o_displs = asarray_Aint(o_displs, blocks, _displs) o_types = asarray_Datatype(o_types, blocks, _types) return (o_buffer, o_counts, o_displs, o_types) #------------------------------------------------------------------------------ #@cython.final #@cython.internal cdef class _p_msg_p2p: # raw C-side arguments cdef void *buf cdef int count cdef MPI_Datatype dtype # python-side argument cdef object _msg def __cinit__(self): self.buf = NULL self.count = 0 self.dtype = MPI_DATATYPE_NULL cdef int for_send(self, object msg, int rank) except -1: self._msg = message_simple(msg, 1, # readonly rank, 0, &self.buf, &self.count, &self.dtype) return 0 cdef int for_recv(self, object msg, int rank) except -1: self._msg = message_simple(msg, 0, # writable rank, 0, &self.buf, &self.count, &self.dtype) return 0 cdef inline _p_msg_p2p message_p2p_send(object sendbuf, int dest): cdef _p_msg_p2p msg = <_p_msg_p2p>_p_msg_p2p.__new__(_p_msg_p2p) msg.for_send(sendbuf, dest) return msg cdef inline _p_msg_p2p message_p2p_recv(object recvbuf, int source): cdef _p_msg_p2p msg = <_p_msg_p2p>_p_msg_p2p.__new__(_p_msg_p2p) msg.for_recv(recvbuf, source) return msg #------------------------------------------------------------------------------ #@cython.final #@cython.internal cdef class _p_msg_cco: # raw C-side arguments cdef void *sbuf, *rbuf cdef int scount, rcount cdef int *scounts, *rcounts cdef int *sdispls, *rdispls cdef MPI_Datatype stype, rtype # python-side arguments cdef object _smsg, _rmsg cdef object _rcnt def __cinit__(self): self.sbuf = self.rbuf = NULL self.scount = self.rcount = 0 self.scounts = self.rcounts = NULL self.sdispls = self.rdispls = NULL self.stype = self.rtype = MPI_DATATYPE_NULL # 
Collective Communication Operations # ----------------------------------- # sendbuf arguments cdef int for_cco_send(self, bint VECTOR, object amsg, int rank, int blocks) except -1: cdef bint readonly = 1 if not VECTOR: # block variant self._smsg = message_simple( amsg, readonly, rank, blocks, &self.sbuf, &self.scount, &self.stype) else: # vector variant self._smsg = message_vector( amsg, readonly, rank, blocks, &self.sbuf, &self.scounts, &self.sdispls, &self.stype) return 0 # recvbuf arguments cdef int for_cco_recv(self, bint VECTOR, object amsg, int rank, int blocks) except -1: cdef bint readonly = 0 if not VECTOR: # block variant self._rmsg = message_simple( amsg, readonly, rank, blocks, &self.rbuf, &self.rcount, &self.rtype) else: # vector variant self._rmsg = message_vector( amsg, readonly, rank, blocks, &self.rbuf, &self.rcounts, &self.rdispls, &self.rtype) return 0 # bcast cdef int for_bcast(self, object msg, int root, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0, rank=0, sending=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) if not inter: # intra-communication CHKERR( MPI_Comm_rank(comm, &rank) ) if root == rank: self.for_cco_send(0, msg, root, 0) sending = 1 else: self.for_cco_recv(0, msg, root, 0) sending = 0 else: # inter-communication if ((root == MPI_ROOT) or (root == MPI_PROC_NULL)): self.for_cco_send(0, msg, root, 0) sending = 1 else: self.for_cco_recv(0, msg, root, 0) sending = 0 if sending: self.rbuf = self.sbuf self.rcount = self.scount self.rtype = self.stype else: self.sbuf = self.rbuf self.scount = self.rcount self.stype = self.rtype return 0 # gather/gatherv cdef int for_gather(self, int v, object smsg, object rmsg, int root, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0, size=0, rank=0, null=MPI_PROC_NULL CHKERR( MPI_Comm_test_inter(comm, &inter) ) if not inter: # intra-communication CHKERR( MPI_Comm_size(comm, &size) ) CHKERR( MPI_Comm_rank(comm, &rank) ) if root == rank: self.for_cco_recv(v, rmsg, root, size) if smsg is __IN_PLACE__: self.sbuf = MPI_IN_PLACE self.scount = self.rcount self.stype = self.rtype else: self.for_cco_send(0, smsg, 0, 0) else: self.for_cco_recv(v, rmsg, null, size) self.for_cco_send(0, smsg, root, 0) else: # inter-communication CHKERR( MPI_Comm_remote_size(comm, &size) ) if ((root == MPI_ROOT) or (root == MPI_PROC_NULL)): self.for_cco_recv(v, rmsg, root, size) self.for_cco_send(0, smsg, null, 0) else: self.for_cco_recv(v, rmsg, null, size) self.for_cco_send(0, smsg, root, 0) return 0 # scatter/scatterv cdef int for_scatter(self, int v, object smsg, object rmsg, int root, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0, size=0, rank=0, null=MPI_PROC_NULL CHKERR( MPI_Comm_test_inter(comm, &inter) ) if not inter: # intra-communication CHKERR( MPI_Comm_size(comm, &size) ) CHKERR( MPI_Comm_rank(comm, &rank) ) if root == rank: self.for_cco_send(v, smsg, root, size) if rmsg is __IN_PLACE__: self.rbuf = MPI_IN_PLACE self.rcount = self.scount self.rtype = self.stype else: self.for_cco_recv(0, rmsg, root, 0) else: self.for_cco_send(v, smsg, null, size) self.for_cco_recv(0, rmsg, root, 0) else: # inter-communication CHKERR( MPI_Comm_remote_size(comm, &size) ) if ((root == MPI_ROOT) or (root == MPI_PROC_NULL)): self.for_cco_send(v, smsg, root, size) self.for_cco_recv(0, rmsg, null, 0) else: self.for_cco_send(v, smsg, null, size) self.for_cco_recv(0, rmsg, root, 0) return 0 # allgather/allgatherv cdef int for_allgather(self, int v, object smsg, object rmsg, 
MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0, size=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) if not inter: # intra-communication CHKERR( MPI_Comm_size(comm, &size) ) else: # inter-communication CHKERR( MPI_Comm_remote_size(comm, &size) ) # self.for_cco_recv(v, rmsg, 0, size) if not inter and smsg is __IN_PLACE__: self.sbuf = MPI_IN_PLACE self.scount = self.rcount self.stype = self.rtype else: self.for_cco_send(0, smsg, 0, 0) return 0 # alltoall/alltoallv cdef int for_alltoall(self, int v, object smsg, object rmsg, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0, size=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) if not inter: # intra-communication CHKERR( MPI_Comm_size(comm, &size) ) else: # inter-communication CHKERR( MPI_Comm_remote_size(comm, &size) ) # self.for_cco_recv(v, rmsg, 0, size) if not inter and smsg is __IN_PLACE__: self.sbuf = MPI_IN_PLACE self.scount = self.rcount self.scounts = self.rcounts self.sdispls = self.rdispls self.stype = self.rtype else: self.for_cco_send(v, smsg, 0, size) return 0 # Neighbor Collectives # -------------------- # neighbor allgather/allgatherv cdef int for_neighbor_allgather(self, int v, object smsg, object rmsg, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int recvsize=0 comm_neighbors_count(comm, &recvsize, NULL) self.for_cco_send(0, smsg, 0, 0) self.for_cco_recv(v, rmsg, 0, recvsize) return 0 # neighbor alltoall/alltoallv cdef int for_neighbor_alltoall(self, int v, object smsg, object rmsg, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int sendsize=0, recvsize=0 comm_neighbors_count(comm, &recvsize, &sendsize) self.for_cco_send(v, smsg, 0, sendsize) self.for_cco_recv(v, rmsg, 0, recvsize) return 0 # Collective Reductions Operations # -------------------------------- # sendbuf cdef int for_cro_send(self, object amsg, int root) except -1: self._smsg = message_simple(amsg, 1, # readonly root, 0, &self.sbuf, &self.scount, &self.stype) return 0 # recvbuf cdef int for_cro_recv(self, object amsg, int root) except -1: self._rmsg = message_simple(amsg, 0, # writable root, 0, &self.rbuf, &self.rcount, &self.rtype) return 0 cdef int for_reduce(self, object smsg, object rmsg, int root, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0, rank=0, null=MPI_PROC_NULL CHKERR( MPI_Comm_test_inter(comm, &inter) ) if not inter: # intra-communication CHKERR( MPI_Comm_rank(comm, &rank) ) if root == rank: self.for_cro_recv(rmsg, root) if smsg is __IN_PLACE__: self.sbuf = MPI_IN_PLACE self.scount = self.rcount self.stype = self.rtype else: self.for_cro_send(smsg, root) else: self.for_cro_recv(rmsg, null) self.for_cro_send(smsg, root) self.rcount = self.scount self.rtype = self.stype else: # inter-communication if ((root == MPI_ROOT) or (root == MPI_PROC_NULL)): self.for_cro_recv(rmsg, root) self.scount = self.rcount self.stype = self.rtype else: self.for_cro_send(smsg, root) self.rcount = self.scount self.rtype = self.stype return 0 cdef int for_allreduce(self, object smsg, object rmsg, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) # get send and recv buffers self.for_cro_recv(rmsg, 0) if not inter and smsg is __IN_PLACE__: self.sbuf = MPI_IN_PLACE self.scount = self.rcount self.stype = self.rtype else: self.for_cro_send(smsg, 0) # check counts and datatypes if (self.sbuf != MPI_IN_PLACE and self.scount != self.rcount): raise ValueError( "mismatch in send count 
%d and receive count %d" % (self.scount, self.rcount)) if (self.sbuf != MPI_IN_PLACE and self.stype != self.rtype): raise ValueError( "mismatch in send and receive MPI datatypes") return 0 cdef int for_reduce_scatter_block(self, object smsg, object rmsg, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0, size=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) CHKERR( MPI_Comm_size(comm, &size) ) # get send and recv buffers if not inter and smsg is __IN_PLACE__: self.for_cco_recv(0, rmsg, 0, size) self.sbuf = MPI_IN_PLACE else: self.for_cco_recv(0, rmsg, 0, 0) self.for_cco_send(0, smsg, 0, size) # check counts and datatypes if self.sbuf != MPI_IN_PLACE: if self.stype != self.rtype: raise ValueError( "mismatch in send and receive MPI datatypes") if self.scount != self.rcount: raise ValueError( "mismatch in send count %d receive count %d" % (self.scount, self.rcount*size)) return 0 cdef int for_reduce_scatter(self, object smsg, object rmsg, object rcnt, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0, size=0, rank=MPI_PROC_NULL CHKERR( MPI_Comm_test_inter(comm, &inter) ) CHKERR( MPI_Comm_size(comm, &size) ) CHKERR( MPI_Comm_rank(comm, &rank) ) # get send and recv buffers self.for_cro_recv(rmsg, 0) if not inter and smsg is __IN_PLACE__: self.sbuf = MPI_IN_PLACE else: self.for_cro_send(smsg, 0) # get receive counts if rcnt is None and not inter and self.sbuf != MPI_IN_PLACE: self._rcnt = mkarray_int(size, &self.rcounts) CHKERR( MPI_Allgather(&self.rcount, 1, MPI_INT, self.rcounts, 1, MPI_INT, comm) ) else: self._rcnt = asarray_int(rcnt, size, &self.rcounts) # total sum or receive counts cdef int i=0, sumrcounts=0 for i from 0 <= i < size: sumrcounts += self.rcounts[i] # check counts and datatypes if self.sbuf != MPI_IN_PLACE: if self.stype != self.rtype: raise ValueError( "mismatch in send and receive MPI datatypes") if self.scount != sumrcounts: raise ValueError( "mismatch in send count %d and sum(counts) %d" % (self.scount, sumrcounts)) if self.rcount != self.rcounts[rank]: raise ValueError( "mismatch in receive count %d and counts[%d] %d" % (self.rcount, rank, self.rcounts[rank])) else: if self.rcount != sumrcounts: raise ValueError( "mismatch in receive count %d and sum(counts) %d" % (self.rcount, sumrcounts)) return 0 cdef int for_scan(self, object smsg, object rmsg, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 # get send and recv buffers self.for_cro_recv(rmsg, 0) if smsg is __IN_PLACE__: self.sbuf = MPI_IN_PLACE self.scount = self.rcount self.stype = self.rtype else: self.for_cro_send(smsg, 0) # check counts and datatypes if (self.sbuf != MPI_IN_PLACE and self.scount != self.rcount): raise ValueError( "mismatch in send count %d and receive count %d" % (self.scount, self.rcount)) if (self.sbuf != MPI_IN_PLACE and self.stype != self.rtype): raise ValueError( "mismatch in send and receive MPI datatypes") return 0 cdef int for_exscan(self, object smsg, object rmsg, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 # get send and recv buffers self.for_cro_recv(rmsg, 0) if smsg is __IN_PLACE__: self.sbuf = MPI_IN_PLACE self.scount = self.rcount self.stype = self.rtype else: self.for_cro_send(smsg, 0) # check counts and datatypes if self.scount != self.rcount: raise ValueError( "mismatch in send count %d and receive count %d" % (self.scount, self.rcount)) if self.stype != self.rtype: raise ValueError( "mismatch in send and receive MPI datatypes") return 0 cdef inline _p_msg_cco message_cco(): cdef _p_msg_cco msg = 
<_p_msg_cco>_p_msg_cco.__new__(_p_msg_cco) return msg #------------------------------------------------------------------------------ #@cython.final #@cython.internal cdef class _p_msg_ccow: # raw C-side arguments cdef void *sbuf, *rbuf cdef int *scounts, *rcounts cdef int *sdispls, *rdispls cdef MPI_Aint *sdisplsA, *rdisplsA cdef MPI_Datatype *stypes, *rtypes # python-side arguments cdef object _smsg, _rmsg def __cinit__(self): self.sbuf = self.rbuf = NULL self.scounts = self.rcounts = NULL self.sdispls = self.rdispls = NULL self.sdisplsA = self.rdisplsA = NULL self.stypes = self.rtypes = NULL # alltoallw cdef int for_alltoallw(self, object smsg, object rmsg, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int inter=0, size=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) if not inter: # intra-communication CHKERR( MPI_Comm_size(comm, &size) ) else: # inter-communication CHKERR( MPI_Comm_remote_size(comm, &size) ) # self._rmsg = message_vecw_I( rmsg, 0, size, &self.rbuf, &self.rcounts, &self.rdispls, &self.rtypes) if not inter and smsg is __IN_PLACE__: self.sbuf = MPI_IN_PLACE self.scount = self.rcount self.scounts = self.rcounts self.sdispls = self.rdispls self.stypes = self.rtypes return 0 self._smsg = message_vecw_I( smsg, 1, size, &self.sbuf, &self.scounts, &self.sdispls, &self.stypes) return 0 # neighbor alltoallw cdef int for_neighbor_alltoallw(self, object smsg, object rmsg, MPI_Comm comm) except -1: if comm == MPI_COMM_NULL: return 0 cdef int sendsize=0, recvsize=0 comm_neighbors_count(comm, &recvsize, &sendsize) self._rmsg = message_vecw_A( rmsg, 0, recvsize, &self.rbuf, &self.rcounts, &self.rdisplsA, &self.rtypes) self._smsg = message_vecw_A( smsg, 1, sendsize, &self.sbuf, &self.scounts, &self.sdisplsA, &self.stypes) return 0 cdef inline _p_msg_ccow message_ccow(): cdef _p_msg_ccow msg = <_p_msg_ccow>_p_msg_ccow.__new__(_p_msg_ccow) return msg #------------------------------------------------------------------------------ #@cython.final #@cython.internal cdef class _p_msg_rma: # raw origin arguments cdef void* oaddr cdef int ocount cdef MPI_Datatype otype # raw result arguments cdef void* raddr cdef int rcount cdef MPI_Datatype rtype # raw target arguments cdef MPI_Aint tdisp cdef int tcount cdef MPI_Datatype ttype # python-side arguments cdef object _origin cdef object _result cdef object _target def __cinit__(self): self.oaddr = NULL self.ocount = 0 self.otype = MPI_DATATYPE_NULL self.raddr = NULL self.rcount = 0 self.rtype = MPI_DATATYPE_NULL self.tdisp = 0 self.tcount = 0 self.ttype = MPI_DATATYPE_NULL cdef int for_rma(self, int readonly, object origin, int rank, object target) except -1: # ORIGIN self._origin = message_simple( origin, readonly, rank, 0, &self.oaddr, &self.ocount, &self.otype) if ((rank == MPI_PROC_NULL) and (origin is not None) and (is_list(origin) or is_tuple(origin)) and (len(origin) > 0 and isinstance(origin[-1], Datatype))): self.otype = (origin[-1]).ob_mpi self._origin = origin # TARGET if target is None: self.tdisp = 0 self.tcount = self.ocount self.ttype = self.otype elif is_int(target): self.tdisp = target self.tcount = self.ocount self.ttype = self.otype elif is_list(target) or is_tuple(target): if len(target) != 3: raise ValueError("target: expecting 3 items") self.tdisp = target[0] self.tcount = target[1] self.ttype = (target[2]).ob_mpi else: raise ValueError("target: expecting integral or list/tuple") self._target = target return 0 cdef int for_put(self, object origin, int rank, object target) except -1: self.for_rma(1, origin, 
rank, target) return 0 cdef int for_get(self, object origin, int rank, object target) except -1: self.for_rma(0, origin, rank, target) return 0 cdef int for_acc(self, object origin, int rank, object target) except -1: self.for_rma(1, origin, rank, target) return 0 cdef int for_get_acc(self, object origin, object result, int rank, object target) except -1: # ORIGIN & TARGET self.for_rma(0, origin, rank, target) # RESULT self._result = message_simple( result, 0, rank, 0, &self.raddr, &self.rcount, &self.rtype) return 0 cdef inline _p_msg_rma message_rma(): cdef _p_msg_rma msg = <_p_msg_rma>_p_msg_rma.__new__(_p_msg_rma) return msg #------------------------------------------------------------------------------ #@cython.final #@cython.internal cdef class _p_msg_io: # raw C-side data cdef void *buf cdef int count cdef MPI_Datatype dtype # python-side data cdef object _msg def __cinit__(self): self.buf = NULL self.count = 0 self.dtype = MPI_DATATYPE_NULL cdef int for_read(self, object msg) except -1: self._msg = message_simple(msg, 0, # writable 0, 0, &self.buf, &self.count, &self.dtype) return 0 cdef int for_write(self, object msg) except -1: self._msg = message_simple(msg, 1, # readonly 0, 0, &self.buf, &self.count, &self.dtype) return 0 cdef inline _p_msg_io message_io_read(object buf): cdef _p_msg_io msg = <_p_msg_io>_p_msg_io.__new__(_p_msg_io) msg.for_read(buf) return msg cdef inline _p_msg_io message_io_write(object buf): cdef _p_msg_io msg = <_p_msg_io>_p_msg_io.__new__(_p_msg_io) msg.for_write(buf) return msg #------------------------------------------------------------------------------ mpi4py_1.3.1+hg20131106.orig/src/MPI/msgpickle.pxi0000644000000000000000000007663612211706251017250 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef extern from "Python.h": enum: PY_MAJOR_VERSION bint PyBytes_CheckExact(object) char* PyBytes_AsString(object) except NULL Py_ssize_t PyBytes_Size(object) except -1 object PyBytes_FromStringAndSize(char*,Py_ssize_t) cdef extern from *: enum: USE_MATCHED_RECV "PyMPI_USE_MATCHED_RECV" # ----------------------------------------------------------------------------- cdef object PyPickle_dumps = None cdef object PyPickle_loads = None cdef object PyPickle_PROTOCOL = -1 if PY_MAJOR_VERSION >= 3: from pickle import dumps as PyPickle_dumps from pickle import loads as PyPickle_loads else: try: from cPickle import dumps as PyPickle_dumps from cPickle import loads as PyPickle_loads except ImportError: from pickle import dumps as PyPickle_dumps from pickle import loads as PyPickle_loads cdef object PyStringIO_New = None cdef object PyPickle_loadf = None if PY_MAJOR_VERSION == 2: try: from cStringIO import StringIO as PyStringIO_New from cPickle import load as PyPickle_loadf except ImportError: pass #@cython.final #@cython.internal cdef class _p_Pickle: cdef object ob_dumps cdef object ob_loads cdef object ob_PROTOCOL def __cinit__(self): self.ob_dumps = None self.ob_loads = None self.ob_PROTOCOL = PyPickle_PROTOCOL property dumps: def __get__(self): if self.ob_dumps is None: return PyPickle_dumps else: return self.ob_dumps def __set__(self, dumps): if dumps is PyPickle_dumps: self.ob_dumps = None else: self.ob_dumps = dumps property loads: def __get__(self): if self.ob_loads is None: return PyPickle_loads else: return self.ob_loads def __set__(self, loads): if loads is PyPickle_loads: self.ob_loads = None else: self.ob_loads = loads property PROTOCOL: def __get__(self): return self.ob_PROTOCOL def __set__(self, 
PROTOCOL): self.ob_PROTOCOL = PROTOCOL cdef object dump(self, object obj, void **p, int *n): if obj is None: p[0] = NULL n[0] = 0 return None cdef object buf if self.ob_dumps is None: buf = PyPickle_dumps(obj, self.ob_PROTOCOL) else: buf = self.ob_dumps(obj, self.ob_PROTOCOL) p[0] = PyBytes_AsString(buf) n[0] = PyBytes_Size(buf) # XXX overflow? return buf cdef object alloc(self, void **p, int n): if n == 0: p[0] = NULL return None cdef object buf buf = PyBytes_FromStringAndSize(NULL, n) p[0] = PyBytes_AsString(buf) return buf cdef object load(self, object buf): if buf is None: return None cdef bint use_StringIO = \ (PY_MAJOR_VERSION == 2 and not PyBytes_CheckExact(buf) and PyStringIO_New is not None) if self.ob_loads is None: if use_StringIO: buf = PyStringIO_New(buf) if PyPickle_loadf is not None: return PyPickle_loadf(buf) buf = buf.read() return PyPickle_loads(buf) else: if use_StringIO: buf = PyStringIO_New(buf) buf = buf.read() return self.ob_loads(buf) cdef object dumpv(self, object obj, void **p, int n, int cnt[], int dsp[]): cdef Py_ssize_t i=0, m=n if obj is None: p[0] = NULL for i from 0 <= i < m: cnt[i] = 0 dsp[i] = 0 return None cdef object items = list(obj) m = len(items) if m != n: raise ValueError( "expecting %d items, got %d" % (n, m)) cdef int d=0, c=0 for i from 0 <= i < m: items[i] = self.dump(items[i], p, &c) if c == 0: items[i] = b'' cnt[i] = c; dsp[i] = d; d += c cdef object buf = b''.join(items) # XXX use _PyBytes_Join() ? p[0] = PyBytes_AsString(buf) return buf cdef object allocv(self, void **p, int n, int cnt[], int dsp[]): cdef int i=0, d=0 for i from 0 <= i < n: dsp[i] = d d += cnt[i] return self.alloc(p, d) cdef object loadv(self, object obj, int n, int cnt[], int dsp[]): cdef Py_ssize_t i=0, m=n cdef object items = [None] * m if obj is None: return items cdef char *p = PyBytes_AsString(obj) cdef object buf = None for i from 0 <= i < m: if cnt[i] == 0: continue buf = PyBytes_FromStringAndSize(p+dsp[i], cnt[i]) items[i] = self.load(buf) return items cdef _p_Pickle PyMPI_PICKLE = _p_Pickle() cdef inline _p_Pickle PyMPI_pickle(): return PyMPI_PICKLE _p_pickle = PyMPI_PICKLE # ----------------------------------------------------------------------------- cdef object PyMPI_send(object obj, int dest, int tag, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE # cdef int dosend = (dest != MPI_PROC_NULL) # cdef object smsg = None if dosend: smsg = pickle.dump(obj, &sbuf, &scount) with nogil: CHKERR( MPI_Send(sbuf, scount, stype, dest, tag, comm) ) return None cdef object PyMPI_bsend(object obj, int dest, int tag, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE # cdef int dosend = (dest != MPI_PROC_NULL) # cdef object smsg = None if dosend: smsg = pickle.dump(obj, &sbuf, &scount) with nogil: CHKERR( MPI_Bsend(sbuf, scount, stype, dest, tag, comm) ) return None cdef object PyMPI_ssend(object obj, int dest, int tag, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE # cdef int dosend = (dest != MPI_PROC_NULL) # cdef object smsg = None if dosend: smsg = pickle.dump(obj, &sbuf, &scount) with nogil: CHKERR( MPI_Ssend(sbuf, scount, stype, dest, tag, comm) ) return None cdef object PyMPI_recv(object obj, int source, int tag, MPI_Comm comm, MPI_Status *status): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *rbuf = NULL cdef 
int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE # cdef int dorecv = (source != MPI_PROC_NULL) # cdef object rmsg = None cdef MPI_Message match = MPI_MESSAGE_NULL cdef MPI_Status rsts cdef MPI_Aint rlen = 0 if dorecv: if obj is None: with nogil: if USE_MATCHED_RECV: CHKERR( MPI_Mprobe(source, tag, comm, &match, &rsts) ) else: CHKERR( MPI_Probe(source, tag, comm, &rsts) ) CHKERR( MPI_Get_count(&rsts, rtype, &rcount) ) rmsg = pickle.alloc(&rbuf, rcount) source = rsts.MPI_SOURCE tag = rsts.MPI_TAG else: rmsg = getbuffer_w(obj, &rbuf, &rlen) rcount = rlen # XXX overflow? # with nogil: if match != MPI_MESSAGE_NULL: CHKERR( MPI_Mrecv(rbuf, rcount, rtype, &match, status) ) else: CHKERR( MPI_Recv(rbuf, rcount, rtype, source, tag, comm, status) ) if dorecv: rmsg = pickle.load(rmsg) return rmsg cdef object PyMPI_sendrecv(object sobj, int dest, int sendtag, object robj, int source, int recvtag, MPI_Comm comm, MPI_Status *status): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE cdef void *rbuf = NULL cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE # cdef int dosend = (dest != MPI_PROC_NULL) cdef int dorecv = (source != MPI_PROC_NULL) # cdef object smsg = None if dosend: smsg = pickle.dump(sobj, &sbuf, &scount) cdef MPI_Request sreq = MPI_REQUEST_NULL with nogil: CHKERR( MPI_Isend(sbuf, scount, stype, dest, sendtag, comm, &sreq) ) # cdef object rmsg = None cdef MPI_Message match = MPI_MESSAGE_NULL cdef MPI_Status rsts cdef MPI_Aint rlen = 0 if dorecv: if robj is None: with nogil: if USE_MATCHED_RECV: CHKERR( MPI_Mprobe(source, recvtag, comm, &match, &rsts) ) else: CHKERR( MPI_Probe(source, recvtag, comm, &rsts) ) CHKERR( MPI_Get_count(&rsts, rtype, &rcount) ) rmsg = pickle.alloc(&rbuf, rcount) source = rsts.MPI_SOURCE recvtag = rsts.MPI_TAG else: rmsg = getbuffer_w(robj, &rbuf, &rlen) rcount = rlen # XXX overflow? 
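# The pickled sendrecv above relies on ordering, not buffering: the payload
# was already posted with MPI_Isend, the blocking receive happens next, and
# only then is the send request waited on, so two ranks can exchange objects
# with each other without deadlocking.  A minimal usage sketch of the
# lowercase object API this helper backs (keyword names assumed from this
# snapshot and may differ slightly):
#
#     from mpi4py import MPI
#     comm = MPI.COMM_WORLD
#     other = 1 - comm.rank                  # assumes exactly 2 processes
#     peer = comm.sendrecv({'rank': comm.rank}, dest=other, source=other)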
# with nogil: if match != MPI_MESSAGE_NULL: CHKERR( MPI_Mrecv(rbuf, rcount, rtype, &match, status) ) else: CHKERR( MPI_Recv(rbuf, rcount, rtype, source, recvtag, comm, status) ) CHKERR( MPI_Wait(&sreq, MPI_STATUS_IGNORE) ) if dorecv: rmsg = pickle.load(rmsg) return rmsg # ----------------------------------------------------------------------------- cdef object PyMPI_isend(object obj, int dest, int tag, MPI_Comm comm, MPI_Request *request): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE # cdef object smsg = None cdef int dosend = (dest != MPI_PROC_NULL) if dosend: smsg = pickle.dump(obj, &sbuf, &scount) with nogil: CHKERR( MPI_Isend(sbuf, scount, stype, dest, tag, comm, request) ) return smsg cdef object PyMPI_ibsend(object obj, int dest, int tag, MPI_Comm comm, MPI_Request *request): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE # cdef object smsg = None cdef int dosend = (dest != MPI_PROC_NULL) if dosend: smsg = pickle.dump(obj, &sbuf, &scount) with nogil: CHKERR( MPI_Ibsend(sbuf, scount, stype, dest, tag, comm, request) ) return smsg cdef object PyMPI_issend(object obj, int dest, int tag, MPI_Comm comm, MPI_Request *request): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE # cdef object smsg = None cdef int dosend = (dest != MPI_PROC_NULL) if dosend: smsg = pickle.dump(obj, &sbuf, &scount) with nogil: CHKERR( MPI_Issend(sbuf, scount, stype, dest, tag, comm, request) ) return smsg cdef object PyMPI_irecv(object obj, int dest, int tag, MPI_Comm comm, MPI_Request *request): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *rbuf = NULL cdef MPI_Aint rlen = 0 cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE # cdef object rmsg = None cdef int dorecv = (dest != MPI_PROC_NULL) if dorecv: if obj is None: rcount = (1<<15) obj = pickle.alloc(&rbuf, rcount) rmsg = getbuffer_r(obj, NULL, NULL) #elif is_int(obj): # rcount = obj # obj = pickle.alloc(&rbuf, rcount) # rmsg = getbuffer_r(obj, NULL, NULL) else: rmsg = getbuffer_w(obj, &rbuf, &rlen) rcount = rlen # XXX overflow? 
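# Note on the branch above: when irecv() is given no receive object, only a
# default (1<<15)-byte (32 KiB) pickle buffer is allocated before MPI_Irecv
# is posted, so larger incoming messages fail with a truncation error.  A
# minimal sketch, assuming the lowercase Comm.isend()/Comm.irecv() wrappers
# built on these helpers:
#
#     from mpi4py import MPI
#     comm = MPI.COMM_WORLD
#     if comm.rank == 0:
#         comm.isend(list(range(100000)), dest=1, tag=7).wait()
#     elif comm.rank == 1:
#         buf = bytearray(1 << 20)           # room for the pickle stream
#         data = comm.irecv(buf, source=0, tag=7).wait()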
with nogil: CHKERR( MPI_Irecv(rbuf, rcount, rtype, dest, tag, comm, request) ) return rmsg cdef object PyMPI_wait(Request request, Status status): cdef _p_Pickle pickle = PyMPI_pickle() cdef object buf # cdef MPI_Status rsts with nogil: CHKERR( MPI_Wait(&request.ob_mpi, &rsts) ) buf = request.ob_buf if status is not None: status.ob_mpi = rsts if request.ob_mpi == MPI_REQUEST_NULL: request.ob_buf = None # cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE if type(buf) is not _p_buffer: return None CHKERR( MPI_Get_count(&rsts, rtype, &rcount) ) if rcount <= 0: return None return pickle.load(buf) cdef object PyMPI_test(Request request, int *flag, Status status): cdef _p_Pickle pickle = PyMPI_pickle() cdef object buf # cdef MPI_Status rsts with nogil: CHKERR( MPI_Test(&request.ob_mpi, flag, &rsts) ) if flag[0]: buf = request.ob_buf if status is not None: status.ob_mpi = rsts if request.ob_mpi == MPI_REQUEST_NULL: request.ob_buf = None # if not flag[0]: return None cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE if type(buf) is not _p_buffer: return None CHKERR( MPI_Get_count(&rsts, rtype, &rcount) ) if rcount <= 0: return None return pickle.load(buf) cdef object PyMPI_waitany(requests, int *index, Status status): cdef _p_Pickle pickle = PyMPI_pickle() cdef object buf # cdef int count = 0 cdef MPI_Request *irequests = NULL cdef MPI_Status rsts # cdef tmp = acquire_rs(requests, None, &count, &irequests, NULL) try: with nogil: CHKERR( MPI_Waitany(count, irequests, index, &rsts) ) if index[0] != MPI_UNDEFINED: buf = (requests[index[0]]).ob_buf if status is not None: status.ob_mpi = rsts finally: release_rs(requests, None, count, irequests, NULL) # if index[0] == MPI_UNDEFINED: return None cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE if type(buf) is not _p_buffer: return None CHKERR( MPI_Get_count(&rsts, rtype, &rcount) ) if rcount <= 0: return None return pickle.load(buf) cdef object PyMPI_testany(requests, int *index, int *flag, Status status): cdef _p_Pickle pickle = PyMPI_pickle() cdef object buf # cdef int count = 0 cdef MPI_Request *irequests = NULL cdef MPI_Status rsts # cdef tmp = acquire_rs(requests, None, &count, &irequests, NULL) try: with nogil: CHKERR( MPI_Testany(count, irequests, index, flag, &rsts) ) if index[0] != MPI_UNDEFINED: buf = (requests[index[0]]).ob_buf if status is not None: status.ob_mpi = rsts finally: release_rs(requests, None, count, irequests, NULL) # if index[0] == MPI_UNDEFINED: return None if not flag[0]: return None cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE if type(buf) is not _p_buffer: return None CHKERR( MPI_Get_count(&rsts, rtype, &rcount) ) if rcount <= 0: return None return pickle.load(buf) cdef object PyMPI_waitall(requests, statuses): cdef _p_Pickle pickle = PyMPI_pickle() cdef object buf, bufs # cdef Py_ssize_t i = 0 cdef int count = 0 cdef MPI_Request *irequests = NULL cdef MPI_Status *istatuses = MPI_STATUSES_IGNORE # cdef tmp = acquire_rs(requests, True, &count, &irequests, &istatuses) try: with nogil: CHKERR( MPI_Waitall(count, irequests, istatuses) ) bufs = [(requests[i]).ob_buf for i from 0 <= i < count] finally: release_rs(requests, statuses, count, irequests, NULL) # cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE for i from 0 <= i < count: if type(bufs[i]) is not _p_buffer: bufs[i] = None; continue rcount = 0 CHKERR( MPI_Get_count(&istatuses[i], rtype, &rcount) ) if rcount <= 0: bufs[i] = None return [pickle.load(buf) for buf in bufs] cdef object PyMPI_testall(requests, int *flag, statuses): cdef 
_p_Pickle pickle = PyMPI_pickle() cdef object buf, bufs # cdef Py_ssize_t i = 0 cdef int count = 0 cdef MPI_Request *irequests = NULL cdef MPI_Status *istatuses = MPI_STATUSES_IGNORE # cdef tmp = acquire_rs(requests, True, &count, &irequests, &istatuses) try: with nogil: CHKERR( MPI_Testall(count, irequests, flag, istatuses) ) if flag[0]: bufs = [(requests[i]).ob_buf for i from 0 <= i < count] finally: release_rs(requests, statuses, count, irequests, NULL) # if not flag[0]: return None cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE for i from 0 <= i < count: if type(bufs[i]) is not _p_buffer: bufs[i] = None; continue rcount = 0 CHKERR( MPI_Get_count(&istatuses[i], rtype, &rcount) ) if rcount <= 0: bufs[i] = None return [pickle.load(buf) for buf in bufs] # ----------------------------------------------------------------------------- cdef object PyMPI_mprobe(int source, int tag, MPI_Comm comm, MPI_Message *message, MPI_Status *status): cdef _p_Pickle pickle = PyMPI_pickle() cdef void* rbuf = NULL cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE cdef MPI_Status rsts if (status == MPI_STATUS_IGNORE): status = &rsts with nogil: CHKERR( MPI_Mprobe(source, tag, comm, message, status) ) if message[0] == MPI_MESSAGE_NO_PROC: return None CHKERR( MPI_Get_count(status, rtype, &rcount) ) cdef object rmsg = pickle.alloc(&rbuf, rcount) return rmsg cdef object PyMPI_improbe(int source, int tag, MPI_Comm comm, int *flag, MPI_Message *message, MPI_Status *status): cdef _p_Pickle pickle = PyMPI_pickle() cdef void* rbuf = NULL cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE cdef MPI_Status rsts if (status == MPI_STATUS_IGNORE): status = &rsts with nogil: CHKERR( MPI_Improbe(source, tag, comm, flag, message, status) ) if flag[0] == 0 or message[0] == MPI_MESSAGE_NO_PROC: return None CHKERR( MPI_Get_count(status, rtype, &rcount) ) cdef object rmsg = pickle.alloc(&rbuf, rcount) return rmsg cdef object PyMPI_mrecv(object rmsg, MPI_Message *message, MPI_Status *status): cdef _p_Pickle pickle = PyMPI_pickle() cdef void* rbuf = NULL cdef MPI_Aint rlen = 0 cdef MPI_Datatype rtype = MPI_BYTE if message[0] == MPI_MESSAGE_NO_PROC: rmsg = None elif rmsg is None: pass elif PyBytes_CheckExact(rmsg): rmsg = getbuffer_r(rmsg, &rbuf, &rlen) else: rmsg = getbuffer_w(rmsg, &rbuf, &rlen) cdef int rcount = rlen # XXX overflow? with nogil: CHKERR( MPI_Mrecv(rbuf, rcount, rtype, message, status) ) rmsg = pickle.load(rmsg) return rmsg cdef object PyMPI_imrecv(object rmsg, MPI_Message *message, MPI_Request *request): cdef _p_Pickle pickle = PyMPI_pickle() cdef void* rbuf = NULL cdef MPI_Aint rlen = 0 cdef MPI_Datatype rtype = MPI_BYTE if message[0] == MPI_MESSAGE_NO_PROC: rmsg = None elif rmsg is None: pass elif PyBytes_CheckExact(rmsg): rmsg = getbuffer_r(rmsg, &rbuf, &rlen) else: rmsg = getbuffer_w(rmsg, &rbuf, &rlen) cdef int rcount = rlen # XXX overflow? 
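# Matched receive: the MPI_Message handle produced by PyMPI_mprobe() or
# PyMPI_improbe() above is consumed here by MPI_(I)Mrecv, so the probed
# message cannot be intercepted by another receive in a multithreaded run.
# A hedged sketch of the intended call pattern (wrapper method names are an
# assumption about this development snapshot and are not guaranteed):
#
#     msg = comm.mprobe(source=0, tag=7)     # matched probe, returns Message
#     obj = msg.recv()                       # consumes exactly that message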
with nogil: CHKERR( MPI_Imrecv(rbuf, rcount, rtype, message, request) ) return rmsg # ----------------------------------------------------------------------------- cdef object PyMPI_barrier(MPI_Comm comm): with nogil: CHKERR( MPI_Barrier(comm) ) return None cdef object PyMPI_bcast(object obj, int root, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *buf = NULL cdef int count = 0 cdef MPI_Datatype dtype = MPI_BYTE # cdef int dosend=0, dorecv=0 cdef int inter=0, rank=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) if inter: if root == MPI_PROC_NULL: dosend=0; dorecv=0; elif root == MPI_ROOT: dosend=1; dorecv=0; else: dosend=0; dorecv=1; else: CHKERR( MPI_Comm_rank(comm, &rank) ) if root == rank: dosend=1; dorecv=1; else: dosend=0; dorecv=1; # cdef object smsg = None if dosend: smsg = pickle.dump(obj, &buf, &count) with nogil: CHKERR( MPI_Bcast(&count, 1, MPI_INT, root, comm) ) cdef object rmsg = None if dorecv and dosend: rmsg = smsg elif dorecv: rmsg = pickle.alloc(&buf, count) with nogil: CHKERR( MPI_Bcast(buf, count, dtype, root, comm) ) if dorecv: rmsg = pickle.load(rmsg) return rmsg cdef object PyMPI_gather(object sendobj, object recvobj, int root, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE cdef void *rbuf = NULL cdef int *rcounts = NULL cdef int *rdispls = NULL cdef MPI_Datatype rtype = MPI_BYTE # cdef int dosend=0, dorecv=0 cdef int inter=0, size=0, rank=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) if inter: CHKERR( MPI_Comm_remote_size(comm, &size) ) if root == MPI_PROC_NULL: dosend=0; dorecv=0; elif root == MPI_ROOT: dosend=0; dorecv=1; else: dosend=1; dorecv=0; else: CHKERR( MPI_Comm_size(comm, &size) ) CHKERR( MPI_Comm_rank(comm, &rank) ) if root == rank: dosend=1; dorecv=1; else: dosend=1; dorecv=0; # cdef object tmp1=None, tmp2=None if dorecv: tmp1 = allocate_int(size, &rcounts) if dorecv: tmp2 = allocate_int(size, &rdispls) # cdef object smsg = None if dosend: smsg = pickle.dump(sendobj, &sbuf, &scount) with nogil: CHKERR( MPI_Gather(&scount, 1, MPI_INT, rcounts, 1, MPI_INT, root, comm) ) cdef object rmsg = None if dorecv: rmsg = pickle.allocv(&rbuf, size, rcounts, rdispls) with nogil: CHKERR( MPI_Gatherv(sbuf, scount, stype, rbuf, rcounts, rdispls, rtype, root, comm) ) if dorecv: rmsg = pickle.loadv(rmsg, size, rcounts, rdispls) return rmsg cdef object PyMPI_scatter(object sendobj, object recvobj, int root, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int *scounts = NULL cdef int *sdispls = NULL cdef MPI_Datatype stype = MPI_BYTE cdef void *rbuf = NULL cdef int rcount = 0 cdef MPI_Datatype rtype = MPI_BYTE # cdef int dosend=0, dorecv=0 cdef int inter=0, size=0, rank=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) if inter: CHKERR( MPI_Comm_remote_size(comm, &size) ) if root == MPI_PROC_NULL: dosend=1; dorecv=0; elif root == MPI_ROOT: dosend=1; dorecv=0; else: dosend=0; dorecv=1; else: CHKERR( MPI_Comm_size(comm, &size) ) CHKERR( MPI_Comm_rank(comm, &rank) ) if root == rank: dosend=1; dorecv=1; else: dosend=0; dorecv=1; # cdef object tmp1=None, tmp2=None if dosend: tmp1 = allocate_int(size, &scounts) if dosend: tmp2 = allocate_int(size, &sdispls) # cdef object smsg = None if dosend: smsg = pickle.dumpv(sendobj, &sbuf, size, scounts, sdispls) with nogil: CHKERR( MPI_Scatter(scounts, 1, MPI_INT, &rcount, 1, MPI_INT, root, comm) ) cdef object rmsg = None if dorecv: rmsg = pickle.alloc(&rbuf, rcount) with nogil: CHKERR( 
MPI_Scatterv(sbuf, scounts, sdispls, stype, rbuf, rcount, rtype, root, comm) ) if dorecv: rmsg = pickle.load(rmsg) return rmsg cdef object PyMPI_allgather(object sendobj, object recvobj, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE cdef void *rbuf = NULL cdef int *rcounts = NULL cdef int *rdispls = NULL cdef MPI_Datatype rtype = MPI_BYTE # cdef int inter=0, size=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) if inter: CHKERR( MPI_Comm_remote_size(comm, &size) ) else: CHKERR( MPI_Comm_size(comm, &size) ) # cdef object tmp1 = allocate_int(size, &rcounts) cdef object tmp2 = allocate_int(size, &rdispls) # cdef object smsg = pickle.dump(sendobj, &sbuf, &scount) with nogil: CHKERR( MPI_Allgather(&scount, 1, MPI_INT, rcounts, 1, MPI_INT, comm) ) cdef object rmsg = pickle.allocv(&rbuf, size, rcounts, rdispls) with nogil: CHKERR( MPI_Allgatherv(sbuf, scount, stype, rbuf, rcounts, rdispls, rtype, comm) ) rmsg = pickle.loadv(rmsg, size, rcounts, rdispls) return rmsg cdef object PyMPI_alltoall(object sendobj, object recvobj, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int *scounts = NULL cdef int *sdispls = NULL cdef MPI_Datatype stype = MPI_BYTE cdef void *rbuf = NULL cdef int *rcounts = NULL cdef int *rdispls = NULL cdef MPI_Datatype rtype = MPI_BYTE # cdef int inter=0, size=0 CHKERR( MPI_Comm_test_inter(comm, &inter) ) if inter: CHKERR( MPI_Comm_remote_size(comm, &size) ) else: CHKERR( MPI_Comm_size(comm, &size) ) # cdef object stmp1 = allocate_int(size, &scounts) cdef object stmp2 = allocate_int(size, &sdispls) cdef object rtmp1 = allocate_int(size, &rcounts) cdef object rtmp2 = allocate_int(size, &rdispls) # cdef object smsg = pickle.dumpv(sendobj, &sbuf, size, scounts, sdispls) with nogil: CHKERR( MPI_Alltoall(scounts, 1, MPI_INT, rcounts, 1, MPI_INT, comm) ) cdef object rmsg = pickle.allocv(&rbuf, size, rcounts, rdispls) with nogil: CHKERR( MPI_Alltoallv(sbuf, scounts, sdispls, stype, rbuf, rcounts, rdispls, rtype, comm) ) rmsg = pickle.loadv(rmsg, size, rcounts, rdispls) return rmsg cdef object PyMPI_neighbor_allgather(object sendobj, object recvobj, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int scount = 0 cdef MPI_Datatype stype = MPI_BYTE cdef void *rbuf = NULL cdef int *rcounts = NULL cdef int *rdispls = NULL cdef MPI_Datatype rtype = MPI_BYTE # cdef int rsize=0 comm_neighbors_count(comm, &rsize, NULL) # cdef object tmp1 = allocate_int(rsize, &rcounts) cdef object tmp2 = allocate_int(rsize, &rdispls) # cdef object smsg = pickle.dump(sendobj, &sbuf, &scount) with nogil: CHKERR( MPI_Neighbor_allgather(&scount, 1, MPI_INT, rcounts, 1, MPI_INT, comm) ) cdef object rmsg = pickle.allocv(&rbuf, rsize, rcounts, rdispls) with nogil: CHKERR( MPI_Neighbor_allgatherv(sbuf, scount, stype, rbuf, rcounts, rdispls, rtype, comm) ) rmsg = pickle.loadv(rmsg, rsize, rcounts, rdispls) return rmsg cdef object PyMPI_neighbor_alltoall(object sendobj, object recvobj, MPI_Comm comm): cdef _p_Pickle pickle = PyMPI_pickle() # cdef void *sbuf = NULL cdef int *scounts = NULL cdef int *sdispls = NULL cdef MPI_Datatype stype = MPI_BYTE cdef void *rbuf = NULL cdef int *rcounts = NULL cdef int *rdispls = NULL cdef MPI_Datatype rtype = MPI_BYTE # cdef int ssize=0, rsize=0 comm_neighbors_count(comm, &rsize, &ssize) # cdef object stmp1 = allocate_int(ssize, &scounts) cdef object stmp2 = allocate_int(ssize, &sdispls) cdef object rtmp1 = allocate_int(rsize, 
&rcounts) cdef object rtmp2 = allocate_int(rsize, &rdispls) # cdef object smsg = pickle.dumpv(sendobj, &sbuf, ssize, scounts, sdispls) with nogil: CHKERR( MPI_Neighbor_alltoall(scounts, 1, MPI_INT, rcounts, 1, MPI_INT, comm) ) cdef object rmsg = pickle.allocv(&rbuf, rsize, rcounts, rdispls) with nogil: CHKERR( MPI_Neighbor_alltoallv(sbuf, scounts, sdispls, stype, rbuf, rcounts, rdispls, rtype, comm) ) rmsg = pickle.loadv(rmsg, rsize, rcounts, rdispls) return rmsg # ----------------------------------------------------------------------------- cdef inline object _py_reduce(object seq, object op): if seq is None: return None cdef object res cdef Py_ssize_t i=0, n=len(seq) if op is __MAXLOC__ or op is __MINLOC__: res = (seq[0], 0) for i from 1 <= i < n: res = op(res, (seq[i], i)) else: res = seq[0] for i from 1 <= i < n: res = op(res, seq[i]) return res cdef inline object _py_scan(object seq, object op): if seq is None: return None cdef Py_ssize_t i=0, n=len(seq) if op is __MAXLOC__ or op is __MINLOC__: seq[0] = (seq[0], 0) for i from 1 <= i < n: seq[i] = op((seq[i], i), seq[i-1]) else: for i from 1 <= i < n: seq[i] = op(seq[i], seq[i-1]) return seq cdef inline object _py_exscan(object seq, object op): if seq is None: return None seq = _py_scan(seq, op) seq.pop(-1) seq.insert(0, None) return seq cdef object PyMPI_reduce(object sendobj, object recvobj, object op, int root, MPI_Comm comm): cdef object items = PyMPI_gather(sendobj, recvobj, root, comm) return _py_reduce(items, op) cdef object PyMPI_allreduce(object sendobj, object recvobj, object op, MPI_Comm comm): cdef object items = PyMPI_allgather(sendobj, recvobj, comm) return _py_reduce(items, op) cdef object PyMPI_scan(object sendobj, object recvobj, object op, MPI_Comm comm): cdef object items = PyMPI_gather(sendobj, None, 0, comm) items = _py_scan(items, op) return PyMPI_scatter(items, None, 0, comm) cdef object PyMPI_exscan(object sendobj, object recvobj, object op, MPI_Comm comm): cdef object items = PyMPI_gather(sendobj, None, 0, comm) items = _py_exscan(items, op) return PyMPI_scatter(items, None, 0, comm) # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/opimpl.pxi0000644000000000000000000001533012211706251016552 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef object _op_MAX(object x, object y): """maximum""" return max(x, y) cdef object _op_MIN(object x, object y): """minimum""" return min(x, y) cdef object _op_SUM(object x, object y): """sum""" return x + y cdef object _op_PROD(object x, object y): """product""" return x * y cdef object _op_BAND(object x, object y): """bit-wise and""" return x & y cdef object _op_BOR(object x, object y): """bit-wise or""" return x | y cdef object _op_BXOR(object x, object y): """bit-wise xor""" return x ^ y cdef object _op_LAND(object x, object y): """logical and""" return bool(x) & bool(y) cdef object _op_LOR(object x, object y): """logical or""" return bool(x) | bool(y) cdef object _op_LXOR(object x, object y): """logical xor""" return bool(x) ^ bool(y) cdef object _op_MAXLOC(object x, object y): """maximum and location""" cdef object i, j, u, v, w, k u, i = x v, j = y w = max(u, v) k = None if u == v: k = min(i, j) elif u < v: k = j else: k = i return (w, k) cdef object _op_MINLOC(object x, object y): """minimum and location""" cdef object i, j, u, v, w, k u, i = x v, j = y w = min(u, v) k = None if u == v: k = min(i, j) elif u < v: k = i else: k = j return (w, k) cdef 
object _op_REPLACE(object x, object y): """replace, (x, y) -> y""" return y cdef object _op_NO_OP(object x, object y): """no-op, (x, y) -> x""" return x # ----------------------------------------------------------------------------- cdef object op_user_registry = [None]*(1+16) cdef inline object op_user_py(int index, object x, object y, object dt): return op_user_registry[index](x, y, dt) cdef inline void op_user_mpi( int index, void *a, void *b, MPI_Aint n, MPI_Datatype *t) with gil: cdef Datatype datatype # errors in user-defined reduction operations are unrecoverable try: datatype = Datatype.__new__(Datatype) datatype.ob_mpi = t[0] op_user_py(index, tomemory(a, n), tomemory(b, n), datatype) except: # print the full exception traceback and abort. PySys_WriteStderr(b"Fatal Python error: exception in " b"user-defined reduction operation\n", 0) try: print_traceback() finally: MPI_Abort(MPI_COMM_WORLD, 1) cdef inline void op_user_call( int index, void *a, void *b, int *plen, MPI_Datatype *t) nogil: # make it abort if Python has finalized if not Py_IsInitialized(): MPI_Abort(MPI_COMM_WORLD, 1) # make it abort if module clenaup has been done if (op_user_registry) == NULL: MPI_Abort(MPI_COMM_WORLD, 1) # compute the byte-size of memory buffers cdef MPI_Aint lb=0, extent=0 MPI_Type_get_extent(t[0], &lb, &extent) cdef MPI_Aint n = plen[0] * extent # make the actual GIL-safe Python call op_user_mpi(index, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_01(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call( 1, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_02(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call( 2, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_03(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call( 3, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_04(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call( 4, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_05(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call( 5, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_06(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call( 6, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_07(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call( 7, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_08(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call( 8, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_09(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call( 9, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_10(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call(10, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_11(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call(11, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_12(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call(12, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_13(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call(13, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_14(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call(14, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_15(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call(15, a, b, n, t) @cython.callspec("MPIAPI") cdef void op_user_16(void *a, void *b, int *n, MPI_Datatype *t) nogil: op_user_call(15, a, b, n, t) cdef MPI_User_function *op_user_map(int index) nogil: if index == 1: return op_user_01 
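# The op_user_NN trampolines above (and this dispatch table) are why at most
# 16 user-defined reduction operations can exist at a time: each C callback
# looks up the registered Python callable by slot index and invokes it on
# raw memory views of the two reduction buffers.  A minimal usage sketch,
# assuming NumPy is available; per MPI convention the callable must fold the
# first buffer into the second:
#
#     import numpy as np
#     from mpi4py import MPI
#
#     def py_sum(inmem, inoutmem, dt):
#         a = np.frombuffer(inmem, dtype='d')
#         b = np.frombuffer(inoutmem, dtype='d')
#         b += a                             # accumulate into inout buffer
#
#     op = MPI.Op.Create(py_sum, commute=True)
#     # comm.Allreduce(MPI.IN_PLACE, buf, op=op) would now call py_sum
#     op.Free()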
elif index == 2: return op_user_02 elif index == 3: return op_user_03 elif index == 4: return op_user_04 elif index == 5: return op_user_05 elif index == 6: return op_user_06 elif index == 7: return op_user_07 elif index == 8: return op_user_08 elif index == 9: return op_user_09 elif index == 10: return op_user_10 elif index == 11: return op_user_11 elif index == 12: return op_user_12 elif index == 13: return op_user_13 elif index == 14: return op_user_14 elif index == 15: return op_user_15 elif index == 16: return op_user_16 else: return NULL cdef int op_user_new(object function, MPI_User_function **cfunction) except -1: # find a free slot in the registry cdef int index = 0 try: index = op_user_registry.index(None, 1) except ValueError: raise RuntimeError("cannot create too many " "user-defined reduction operations") # the line below will fail # if the function is not callable function.__call__ # register the Python function, # map it to the associated C function, # and return the slot index in registry op_user_registry[index] = function cfunction[0] = op_user_map(index) return index cdef int op_user_del(int *index) except -1: # free slot in the registry cdef Py_ssize_t idx = index[0] index[0] = 0 # clear the value if idx > 0: op_user_registry[idx] = None return 0 # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/reqimpl.pxi0000644000000000000000000001230412211706251016721 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef MPI_Status empty_status empty_status.MPI_SOURCE = MPI_ANY_SOURCE empty_status.MPI_TAG = MPI_ANY_TAG empty_status.MPI_ERROR = MPI_SUCCESS cdef object acquire_rs(object requests, object statuses, int *count, MPI_Request *rp[], MPI_Status *sp[]): cdef MPI_Request *array_r = NULL cdef MPI_Status *array_s = NULL cdef object ob_r = None, ob_s = None cdef Py_ssize_t i = 0, n = len(requests) count[0] = n # XXX overflow ? 
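# acquire_rs()/release_rs() implement the marshalling pattern used by the
# waitall/testall helpers earlier in this file set: the Python Request
# objects are copied into the temporary C arrays allocated just below, the
# blocking MPI_Wait*/Test* call then runs with the GIL released, and
# release_rs() writes the handles back, drops the pickle buffer of any
# request that completed (handle now MPI_REQUEST_NULL), and pads the
# statuses list when it is shorter than the request list.  An illustrative
# ring exchange, assuming the lowercase isend/irecv/waitall wrappers:
#
#     right = (comm.rank + 1) % comm.size
#     left  = (comm.rank - 1) % comm.size
#     reqs = [comm.isend(comm.rank, dest=right, tag=0),
#             comm.irecv(source=left, tag=0)]
#     results = MPI.Request.waitall(reqs)    # results[1] holds the object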
ob_r = allocate(n, sizeof(MPI_Request), &array_r) for i from 0 <= i < n: array_r[i] = (requests[i]).ob_mpi rp[0] = array_r if statuses is not None: ob_s = allocate(n, sizeof(MPI_Status), &array_s) for i from 0 <= i < n: array_s[i] = empty_status sp[0] = array_s return (ob_r, ob_s) cdef int release_rs(object requests, object statuses, int count, MPI_Request rp[], MPI_Status sp[]) except -1: cdef Py_ssize_t i = 0, nr = count, ns = 0 cdef Request req = None for i from 0 <= i < nr: req = requests[i] req.ob_mpi = rp[i] if rp[i] == MPI_REQUEST_NULL: req.ob_buf = None if statuses is not None: ns = len(statuses) if nr > ns : if isinstance(statuses, list): statuses += [Status.__new__(Status) for i from ns <= i < nr] ns = nr for i from 0 <= i < ns: (statuses[i]).ob_mpi = sp[i] return 0 # ----------------------------------------------------------------------------- #@cython.final #@cython.internal cdef class _p_greq: cdef object query_fn cdef object free_fn cdef object cancel_fn cdef tuple args cdef dict kargs def __cinit__(self, query_fn, free_fn, cancel_fn, args, kargs): self.query_fn = query_fn self.free_fn = free_fn self.cancel_fn = cancel_fn self.args = tuple(args) if args is not None else () self.kargs = dict(kargs) if kargs is not None else {} cdef int query(self, MPI_Status *status) except -1: status.MPI_SOURCE = MPI_ANY_SOURCE status.MPI_TAG = MPI_ANY_TAG MPI_Status_set_elements(status, MPI_BYTE, 0) MPI_Status_set_cancelled(status, 0) cdef Status sts = Status.__new__(Status) if self.query_fn is not None: sts.ob_mpi = status[0] self.query_fn(sts, *self.args, **self.kargs) status[0] = sts.ob_mpi if self.cancel_fn is None: MPI_Status_set_cancelled(status, 0) return MPI_SUCCESS cdef int free(self) except -1: if self.free_fn is not None: self.free_fn(*self.args, **self.kargs) return MPI_SUCCESS cdef int cancel(self, bint completed) except -1: if self.cancel_fn is not None: self.cancel_fn(completed, *self.args, **self.kargs) return MPI_SUCCESS # --- cdef int greq_query(void *extra_state, MPI_Status *status) with gil: cdef _p_greq state = <_p_greq>extra_state cdef int ierr = MPI_SUCCESS cdef object exc try: ierr = state.query(status) except MPIException, exc: print_traceback() ierr = exc.Get_error_code() except: print_traceback() ierr = MPI_ERR_OTHER return ierr cdef int greq_free(void *extra_state) with gil: cdef _p_greq state = <_p_greq>extra_state cdef int ierr = MPI_SUCCESS cdef object exc try: ierr = state.free() except MPIException, exc: print_traceback() ierr = exc.Get_error_code() except: print_traceback() ierr = MPI_ERR_OTHER Py_DECREF(extra_state) return ierr cdef int greq_cancel(void *extra_state, int completed) with gil: cdef _p_greq state = <_p_greq>extra_state cdef int ierr = MPI_SUCCESS cdef object exc try: ierr = state.cancel(completed) except MPIException, exc: print_traceback() ierr = exc.Get_error_code() except: print_traceback() ierr = MPI_ERR_OTHER return ierr # --- @cython.callspec("MPIAPI") cdef int greq_query_fn(void *extra_state, MPI_Status *status) nogil: if extra_state == NULL: return MPI_ERR_INTERN if status == NULL: return MPI_ERR_INTERN if not Py_IsInitialized(): return MPI_ERR_INTERN return greq_query(extra_state, status) @cython.callspec("MPIAPI") cdef int greq_free_fn(void *extra_state) nogil: if extra_state == NULL: return MPI_ERR_INTERN if not Py_IsInitialized(): return MPI_ERR_INTERN return greq_free(extra_state) @cython.callspec("MPIAPI") cdef int greq_cancel_fn(void *extra_state, int completed) nogil: if extra_state == NULL: return MPI_ERR_INTERN if not 
Py_IsInitialized(): return MPI_ERR_INTERN return greq_cancel(extra_state, completed) # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/stdlib.pxi0000644000000000000000000000045512211706251016535 0ustar 00000000000000#--------------------------------------------------------------------- cdef extern from * nogil: # "stdio.h" ctypedef struct FILE FILE *stdin, *stdout, *stderr int fprintf(FILE *, char *, ...) int fflush(FILE *) #--------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/typeimpl.pxi0000644000000000000000000000636212211706251017122 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef dict type_keyval = {} cdef inline int type_keyval_new(int keyval, object copy_fn,object delete_fn) except -1: type_keyval[keyval] = (copy_fn, delete_fn) return 0 cdef inline int type_keyval_del(int keyval) except -1: try: del type_keyval[keyval] except KeyError: pass return 0 cdef inline int type_attr_copy( MPI_Datatype datatype, int keyval, void *extra_state, void *attrval_in, void *attrval_out, int *flag) except -1: cdef tuple entry = type_keyval.get(keyval) cdef object copy_fn = None if entry is not None: copy_fn = entry[0] if copy_fn is None or copy_fn is False: flag[0] = 0 return 0 cdef object attrval = attrval_in cdef void **aptr = attrval_out if copy_fn is not True: attrval = copy_fn(attrval) Py_INCREF(attrval) aptr[0] = attrval flag[0] = 1 return 0 cdef int type_attr_copy_cb( MPI_Datatype datatype, int keyval, void *extra_state, void *attrval_in, void *attrval_out, int *flag) with gil: cdef object exc try: type_attr_copy(datatype, keyval, extra_state, attrval_in, attrval_out, flag) except MPIException, exc: print_traceback() return exc.Get_error_code() except: print_traceback() return MPI_ERR_OTHER return MPI_SUCCESS cdef inline int type_attr_delete( MPI_Datatype datatype, int keyval, void *attrval, void *extra_state) except -1: cdef tuple entry = type_keyval.get(keyval) cdef object delete_fn = None if entry is not None: delete_fn = entry[1] if delete_fn is not None: delete_fn(attrval) Py_DECREF(attrval) return 0 cdef int type_attr_delete_cb( MPI_Datatype datatype, int keyval, void *attrval, void *extra_state) with gil: cdef object exc try: type_attr_delete(datatype, keyval, attrval, extra_state) except MPIException, exc: print_traceback() return exc.Get_error_code() except: print_traceback() return MPI_ERR_OTHER return MPI_SUCCESS @cython.callspec("MPIAPI") cdef int type_attr_copy_fn(MPI_Datatype datatype, int keyval, void *extra_state, void *attrval_in, void *attrval_out, int *flag) nogil: if attrval_in == NULL: return MPI_ERR_INTERN if attrval_out == NULL: return MPI_ERR_INTERN if not Py_IsInitialized(): return MPI_SUCCESS return type_attr_copy_cb(datatype, keyval, extra_state, attrval_in, attrval_out, flag) @cython.callspec("MPIAPI") cdef int type_attr_delete_fn(MPI_Datatype datatype, int keyval, void *attrval, void *extra_state) nogil: if attrval == NULL: return MPI_ERR_INTERN if not Py_IsInitialized(): return MPI_SUCCESS return type_attr_delete_cb(datatype, keyval, attrval, extra_state) # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/typemap.pxi0000644000000000000000000001145512211706251016735 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef inline int AddTypeMap(dict TD, object key, Datatype 
dataype) except -1: global TypeDict if dataype.ob_mpi != MPI_DATATYPE_NULL: TD[key] = dataype return 1 return 0 # ----------------------------------------------------------------------------- cdef dict TypeDict = { } __TypeDict__ = TypeDict # boolean (C99) AddTypeMap(TypeDict, "?" , __C_BOOL__ ) # PEP-3118 & NumPy # character AddTypeMap(TypeDict, "c" , __CHAR__ ) # PEP-3118 & NumPy ## XXX this requires special handling ## AddTypeMap(TypeDict, "u" , __????__ ) # PEP-3118 ## AddTypeMap(TypeDict, "w" , __????__ ) # PEP-3118 # (signed) integer AddTypeMap(TypeDict, "b" , __SIGNED_CHAR__ ) # MPI-2 AddTypeMap(TypeDict, "h" , __SHORT__ ) AddTypeMap(TypeDict, "i" , __INT__ ) AddTypeMap(TypeDict, "l" , __LONG__ ) AddTypeMap(TypeDict, "q" , __LONG_LONG__ ) # unsigned integer AddTypeMap(TypeDict, "B" , __UNSIGNED_CHAR__ ) AddTypeMap(TypeDict, "H" , __UNSIGNED_SHORT__ ) AddTypeMap(TypeDict, "I" , __UNSIGNED__ ) AddTypeMap(TypeDict, "L" , __UNSIGNED_LONG__ ) AddTypeMap(TypeDict, "Q" , __UNSIGNED_LONG_LONG__) # (real) floating AddTypeMap(TypeDict, "f" , __FLOAT__ ) AddTypeMap(TypeDict, "d" , __DOUBLE__ ) AddTypeMap(TypeDict, "g" , __LONG_DOUBLE__ ) # PEP-3118 & NumPy # complex floating (F77) AddTypeMap(TypeDict, "Zf" , __COMPLEX__ ) # PEP-3118 AddTypeMap(TypeDict, "Zd" , __DOUBLE_COMPLEX__ ) # PEP-3118 AddTypeMap(TypeDict, "F" , __COMPLEX__ ) # NumPy AddTypeMap(TypeDict, "D" , __DOUBLE_COMPLEX__ ) # NumPy # complex floating (F90) AddTypeMap(TypeDict, "Zf" , __COMPLEX8__ ) # PEP-3118 AddTypeMap(TypeDict, "Zd" , __COMPLEX16__ ) # PEP-3118 AddTypeMap(TypeDict, "F" , __COMPLEX8__ ) # NumPy AddTypeMap(TypeDict, "D" , __COMPLEX16__ ) # NumPy # complex floating (C99) AddTypeMap(TypeDict, "Zf" , __C_FLOAT_COMPLEX__ ) # PEP-3118 AddTypeMap(TypeDict, "Zd" , __C_DOUBLE_COMPLEX__ ) # PEP-3118 AddTypeMap(TypeDict, "Zg" , __C_LONG_DOUBLE_COMPLEX__ ) # PEP-3118 AddTypeMap(TypeDict, "F" , __C_FLOAT_COMPLEX__ ) # NumPy AddTypeMap(TypeDict, "D" , __C_DOUBLE_COMPLEX__ ) # NumPy AddTypeMap(TypeDict, "G" , __C_LONG_DOUBLE_COMPLEX__ ) # NumPy # ----------------------------------------------------------------------------- cdef dict CTypeDict = { } __CTypeDict__ = CTypeDict AddTypeMap(CTypeDict, "?" , __C_BOOL__ ) AddTypeMap(CTypeDict, "c" , __CHAR__ ) AddTypeMap(CTypeDict, "b" , __SIGNED_CHAR__ ) AddTypeMap(CTypeDict, "h" , __SHORT__ ) AddTypeMap(CTypeDict, "i" , __INT__ ) AddTypeMap(CTypeDict, "l" , __LONG__ ) AddTypeMap(CTypeDict, "q" , __LONG_LONG__ ) AddTypeMap(CTypeDict, "B" , __UNSIGNED_CHAR__ ) AddTypeMap(CTypeDict, "H" , __UNSIGNED_SHORT__ ) AddTypeMap(CTypeDict, "I" , __UNSIGNED__ ) AddTypeMap(CTypeDict, "L" , __UNSIGNED_LONG__ ) AddTypeMap(CTypeDict, "Q" , __UNSIGNED_LONG_LONG__ ) AddTypeMap(CTypeDict, "f" , __FLOAT__ ) AddTypeMap(CTypeDict, "d" , __DOUBLE__ ) AddTypeMap(CTypeDict, "g" , __LONG_DOUBLE__ ) AddTypeMap(CTypeDict, "F" , __C_FLOAT_COMPLEX__ ) AddTypeMap(CTypeDict, "D" , __C_DOUBLE_COMPLEX__ ) AddTypeMap(CTypeDict, "G" , __C_LONG_DOUBLE_COMPLEX__ ) AddTypeMap(CTypeDict, "i1" , __INT8_T__ ) AddTypeMap(CTypeDict, "i2" , __INT16_T__ ) AddTypeMap(CTypeDict, "i4" , __INT32_T__ ) AddTypeMap(CTypeDict, "i8" , __INT64_T__ ) AddTypeMap(CTypeDict, "u1" , __UINT8_T__ ) AddTypeMap(CTypeDict, "u2" , __UINT16_T__ ) AddTypeMap(CTypeDict, "u4" , __UINT32_T__ ) AddTypeMap(CTypeDict, "u8" , __UINT64_T__ ) # ----------------------------------------------------------------------------- cdef dict FTypeDict = { } __FTypeDict__ = FTypeDict AddTypeMap(FTypeDict, "?" 
, __LOGICAL__ ) AddTypeMap(FTypeDict, "c" , __CHARACTER__ ) AddTypeMap(FTypeDict, "i" , __INTEGER__ ) AddTypeMap(FTypeDict, "f" , __REAL__ ) AddTypeMap(FTypeDict, "d" , __DOUBLE_PRECISION__ ) AddTypeMap(FTypeDict, "F" , __COMPLEX__ ) AddTypeMap(FTypeDict, "D" , __DOUBLE_COMPLEX__ ) AddTypeMap(FTypeDict, "i1" , __INTEGER1__ ) AddTypeMap(FTypeDict, "i2" , __INTEGER2__ ) AddTypeMap(FTypeDict, "i4" , __INTEGER4__ ) AddTypeMap(FTypeDict, "i8" , __INTEGER8__ ) AddTypeMap(FTypeDict, "i16" , __INTEGER16__ ) AddTypeMap(FTypeDict, "f2" , __REAL2__ ) AddTypeMap(FTypeDict, "f4" , __REAL4__ ) AddTypeMap(FTypeDict, "f8" , __REAL8__ ) AddTypeMap(FTypeDict, "f16" , __REAL16__ ) AddTypeMap(FTypeDict, "c4" , __COMPLEX4__ ) AddTypeMap(FTypeDict, "c8" , __COMPLEX8__ ) AddTypeMap(FTypeDict, "c16" , __COMPLEX16__ ) AddTypeMap(FTypeDict, "c32" , __COMPLEX32__ ) # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/typestr.pxi0000644000000000000000000000736112211706251016771 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef inline char* Datatype2Str(MPI_Datatype datatype) nogil: if datatype == MPI_DATATYPE_NULL: return NULL # MPI elif datatype == MPI_LB : return "" elif datatype == MPI_UB : return "" elif datatype == MPI_PACKED : return "B" elif datatype == MPI_BYTE : return "B" elif datatype == MPI_AINT : return "p"# XXX NumPy-specific elif datatype == MPI_OFFSET : if sizeof(MPI_Offset) == sizeof(MPI_Aint) : return "p" elif sizeof(MPI_Offset) == sizeof(long long) : return "q" elif sizeof(MPI_Offset) == sizeof(long) : return "l" elif sizeof(MPI_Offset) == sizeof(int) : return "i" else : return "" # C - character elif datatype == MPI_CHAR : return "c" elif datatype == MPI_WCHAR : return ""#"U"#XXX # C - (signed) integral elif datatype == MPI_SIGNED_CHAR : return "b" elif datatype == MPI_SHORT : return "h" elif datatype == MPI_INT : return "i" elif datatype == MPI_LONG : return "l" elif datatype == MPI_LONG_LONG : return "q" # C - unsigned integral elif datatype == MPI_UNSIGNED_CHAR : return "B" elif datatype == MPI_UNSIGNED_SHORT : return "H" elif datatype == MPI_UNSIGNED : return "I" elif datatype == MPI_UNSIGNED_LONG : return "L" elif datatype == MPI_UNSIGNED_LONG_LONG : return "Q" # C - (real) floating elif datatype == MPI_FLOAT : return "f" elif datatype == MPI_DOUBLE : return "d" elif datatype == MPI_LONG_DOUBLE : return "g" # C99 - boolean elif datatype == MPI_C_BOOL : return "?" 
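# Datatype2Str() maps MPI datatypes back to the single-character and sized
# format codes used by PEP 3118 buffers and the NumPy array interface, the
# reverse direction of the TypeDict/CTypeDict/FTypeDict tables above.  A
# quick sanity sketch of the correspondence, assuming NumPy:
#
#     import numpy as np
#     assert np.dtype('d').char == 'd'            # MPI_DOUBLE entry
#     assert np.dtype(np.int32).str[1:] == 'i4'   # MPI_INT32_T entry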
# C99 - integral elif datatype == MPI_INT8_T : return "i1" elif datatype == MPI_INT16_T : return "i2" elif datatype == MPI_INT32_T : return "i4" elif datatype == MPI_INT64_T : return "i8" elif datatype == MPI_UINT8_T : return "u1" elif datatype == MPI_UINT16_T : return "u2" elif datatype == MPI_UINT32_T : return "u4" elif datatype == MPI_UINT64_T : return "u8" # C99 - complex floating elif datatype == MPI_C_COMPLEX : return "F" elif datatype == MPI_C_FLOAT_COMPLEX : return "F" elif datatype == MPI_C_DOUBLE_COMPLEX : return "D" elif datatype == MPI_C_LONG_DOUBLE_COMPLEX : return "G" # Fortran elif datatype == MPI_CHARACTER : return "c" elif datatype == MPI_LOGICAL : return ""#"?"# XXX elif datatype == MPI_INTEGER : return "i" elif datatype == MPI_REAL : return "f" elif datatype == MPI_DOUBLE_PRECISION : return "d" elif datatype == MPI_COMPLEX : return "F" elif datatype == MPI_DOUBLE_COMPLEX : return "D" # Fortran 90 elif datatype == MPI_LOGICAL1 : return ""#"?1"# XXX elif datatype == MPI_LOGICAL2 : return ""#"?2"# XXX elif datatype == MPI_LOGICAL4 : return ""#"?4"# XXX elif datatype == MPI_LOGICAL8 : return ""#"?8"# XXX elif datatype == MPI_INTEGER1 : return "i1" elif datatype == MPI_INTEGER2 : return "i2" elif datatype == MPI_INTEGER4 : return "i4" elif datatype == MPI_INTEGER8 : return "i8" elif datatype == MPI_INTEGER16 : return "i16" elif datatype == MPI_REAL2 : return "f2" elif datatype == MPI_REAL4 : return "f4" elif datatype == MPI_REAL8 : return "f8" elif datatype == MPI_REAL16 : return "f16" elif datatype == MPI_COMPLEX4 : return "c4" elif datatype == MPI_COMPLEX8 : return "c8" elif datatype == MPI_COMPLEX16 : return "c16" elif datatype == MPI_COMPLEX32 : return "c32" else : return "" # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/MPI/winimpl.pxi0000644000000000000000000001014212211706251016725 0ustar 00000000000000# ----------------------------------------------------------------------------- cdef extern from *: int PyMPI_KEYVAL_WIN_MEMORY cdef void win_memory_decref(void *ob) with gil: Py_DECREF(ob) @cython.callspec("MPIAPI") cdef int win_memory_del(MPI_Win w, int k, void *v, void *xs) nogil: if v != NULL: if Py_IsInitialized(): win_memory_decref(v) return MPI_SUCCESS cdef int PyMPI_Win_setup(MPI_Win win, object memory): cdef int ierr = MPI_SUCCESS # hold a reference to memory global PyMPI_KEYVAL_WIN_MEMORY if memory is not None: if PyMPI_KEYVAL_WIN_MEMORY == MPI_KEYVAL_INVALID: ierr = MPI_Win_create_keyval(MPI_WIN_NULL_COPY_FN, win_memory_del, &PyMPI_KEYVAL_WIN_MEMORY, NULL) if ierr: return ierr ierr = MPI_Win_set_attr(win, PyMPI_KEYVAL_WIN_MEMORY, memory) if ierr: return ierr Py_INCREF(memory) # return MPI_SUCCESS # ----------------------------------------------------------------------------- cdef dict win_keyval = {} cdef inline int win_keyval_new(int keyval, object copy_fn,object delete_fn) except -1: win_keyval[keyval] = (copy_fn, delete_fn) return 0 cdef inline int win_keyval_del(int keyval) except -1: try: del win_keyval[keyval] except KeyError: pass return 0 cdef int win_attr_copy( MPI_Win win, int keyval, void *extra_state, void *attrval_in, void *attrval_out, int *flag) except -1: cdef tuple entry = win_keyval.get(keyval) cdef object copy_fn = None if entry is not None: copy_fn = entry[0] if copy_fn is None or copy_fn is False: flag[0] = 0 return 0 cdef object attrval = attrval_in cdef void **aptr = attrval_out if copy_fn is not True: attrval = copy_fn(attrval) Py_INCREF(attrval) aptr[0] = attrval flag[0] 
= 1 return 0 cdef int win_attr_copy_cb( MPI_Win win, int keyval, void *extra_state, void *attrval_in, void *attrval_out, int *flag) with gil: cdef object exc try: win_attr_copy(win, keyval, extra_state, attrval_in, attrval_out, flag) except MPIException as exc: print_traceback() return exc.Get_error_code() except: print_traceback() return MPI_ERR_OTHER return MPI_SUCCESS cdef int win_attr_delete( MPI_Win win, int keyval, void *attrval, void *extra_state) except -1: cdef tuple entry = win_keyval.get(keyval) cdef object delete_fn = None if entry is not None: delete_fn = entry[1] if delete_fn is not None: delete_fn(attrval) Py_DECREF(attrval) return 0 cdef int win_attr_delete_cb( MPI_Win win, int keyval, void *attrval, void *extra_state) with gil: cdef object exc try: win_attr_delete(win, keyval, attrval, extra_state) except MPIException as exc: print_traceback() return exc.Get_error_code() except: print_traceback() return MPI_ERR_OTHER return MPI_SUCCESS @cython.callspec("MPIAPI") cdef int win_attr_copy_fn(MPI_Win win, int keyval, void *extra_state, void *attrval_in, void *attrval_out, int *flag) nogil: if attrval_in == NULL: return MPI_ERR_INTERN if attrval_out == NULL: return MPI_ERR_INTERN if not Py_IsInitialized(): return MPI_SUCCESS return win_attr_copy_cb(win, keyval, extra_state, attrval_in, attrval_out, flag) @cython.callspec("MPIAPI") cdef int win_attr_delete_fn(MPI_Win win, int keyval, void *attrval, void *extra_state) nogil: if attrval == NULL: return MPI_ERR_INTERN if not Py_IsInitialized(): return MPI_SUCCESS return win_attr_delete_cb(win, keyval, attrval, extra_state) # ----------------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/__init__.py0000644000000000000000000001266612211706251016225 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com """ This is the **MPI for Python** package. What is *MPI*? ============== The *Message Passing Interface*, is a standardized and portable message-passing system designed to function on a wide variety of parallel computers. The standard defines the syntax and semantics of library routines and allows users to write portable programs in the main scientific programming languages (Fortran, C, or C++). Since its release, the MPI specification has become the leading standard for message-passing libraries for parallel computers. What is *MPI for Python*? ========================= *MPI for Python* provides MPI bindings for the Python programming language, allowing any Python program to exploit multiple processors. This package is constructed on top of the MPI-1/2 specifications and provides an object oriented interface which closely follows MPI-2 C++ bindings. """ __version__ = '1.3.1' __author__ = 'Lisandro Dalcin' __credits__ = 'MPI Forum, MPICH Team, Open MPI Team.' # -------------------------------------------------------------------- __all__ = ['MPI'] # -------------------------------------------------------------------- def get_include(): """ Return the directory in the package that contains header files. Extension modules that need to compile against mpi4py should use this function to locate the appropriate include directory. Using Python distutils (or perhaps NumPy distutils):: import mpi4py Extension('extension_name', ... 
include_dirs=[..., mpi4py.get_include()]) """ from os.path import dirname, join return join(dirname(__file__), 'include') # -------------------------------------------------------------------- def get_config(): """ Return a dictionary with information about MPI. """ from os.path import dirname, join try: from configparser import ConfigParser except ImportError: from ConfigParser import ConfigParser parser = ConfigParser() parser.read(join(dirname(__file__), 'mpi.cfg')) return dict(parser.items('mpi')) # -------------------------------------------------------------------- def profile(name='MPE', **kargs): """ Support for the MPI profiling interface. Parameters ---------- name : str, optional Name of the profiler to load. path : list of str, optional Additional paths to search for the profiler. logfile : str, optional Filename prefix for dumping profiler output. """ import sys, os, imp try: from mpi4py.dl import dlopen, RTLD_NOW, RTLD_GLOBAL from mpi4py.dl import dlerror except ImportError: from ctypes import CDLL as dlopen, RTLD_GLOBAL try: from DLFCN import RTLD_NOW except ImportError: RTLD_NOW = 2 dlerror = None # def lookup_pymod(name, path): for pth in path: for suffix, _, kind in imp.get_suffixes(): if kind == imp.C_EXTENSION: filename = os.path.join(pth, name + suffix) if os.path.isfile(filename): return filename return None # def lookup_dylib(name, path): format = [] for suffix, _, kind in imp.get_suffixes(): if kind == imp.C_EXTENSION: format.append(('', suffix)) if sys.platform.startswith('win'): format.append(('', '.dll')) elif sys.platform == 'darwin': format.append(('lib', '.dylib')) elif os.name == 'posix': format.append(('lib', '.so')) format.append(('', '')) for pth in path: for (lib, so) in format: filename = os.path.join(pth, lib + name + so) if os.path.isfile(filename): return filename return None # logfile = kargs.pop('logfile', None) if logfile: if name in ('mpe', 'MPE'): if 'MPE_LOGFILE_PREFIX' not in os.environ: os.environ['MPE_LOGFILE_PREFIX'] = logfile if name in ('vt', 'vt-mpi', 'vt-hyb'): if 'VT_FILE_PREFIX' not in os.environ: os.environ['VT_FILE_PREFIX'] = logfile path = kargs.pop('path', None) if path is None: path = [] elif isinstance(path, str): path = [path] else: path = list(path) # if name in ('MPE',): path.append(os.path.dirname(__file__)) filename = lookup_pymod(name, path) else: prefix = os.path.dirname(__file__) path.append(os.path.join(prefix, 'lib-pmpi')) filename = lookup_dylib(name, path) if filename is None: raise ValueError("profiler '%s' not found" % name) else: filename = os.path.abspath(filename) # handle = dlopen(filename, RTLD_NOW|RTLD_GLOBAL) if handle: profile._registry.append((name, (handle, filename))) else: from warnings import warn if dlerror: message = dlerror() else: message = "error loading '%s'" % filename warn(message) profile._registry = [] # -------------------------------------------------------------------- from mpi4py import rc rc.profile = profile # -------------------------------------------------------------------- if __name__ == '__main__': from mpi4py.__main__ import main main() # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/__main__.py0000644000000000000000000001330112211706251016171 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com """ Run some benchmarks and tests """ def helloworld(comm, args=None, verbose=True): """ Hello, World! 
using MPI """ from mpi4py import MPI from optparse import OptionParser parser = OptionParser(prog="mpi4py helloworld") parser.add_option("-q","--quiet", action="store_false", dest="verbose", default=verbose) (options, args) = parser.parse_args(args) size = comm.Get_size() rank = comm.Get_rank() name = MPI.Get_processor_name() message = ("Hello, World! I am process %*d of %d on %s." % (len(str(size-1)), rank, size, name) ) _seq_begin(comm) if options.verbose: _println(message, stream=_stdout) _seq_end(comm) return message def ringtest(comm, args=None, verbose=True): """ Time a message going around the ring of processes """ from array import array from mpi4py import MPI from optparse import OptionParser parser = OptionParser(prog="mpi4py ringtest") parser.add_option("-q","--quiet", action="store_false", dest="verbose", default=verbose) parser.add_option("-n", "--size", type="int", default=1, dest="size", help="message size") parser.add_option("-s", "--skip", type="int", default=0, dest="skip", help="number of warm-up iterations") parser.add_option("-l", "--loop", type="int", default=1, dest="loop", help="number of iterations") (options, args) = parser.parse_args(args) def ring(comm, n=1, loop=1, skip=0): iterations = list(range((loop+skip))) size = comm.Get_size() rank = comm.Get_rank() source = (rank - 1) % size dest = (rank + 1) % size Sendrecv = comm.Sendrecv Send = comm.Send Recv = comm.Recv Wtime = MPI.Wtime sendmsg = array('B', [42])*n recvmsg = array('B', [ 0])*n if size == 1: for i in iterations: if i == skip: tic = Wtime() Sendrecv(sendmsg, dest, 0, recvmsg, source, 0) else: if rank == 0: for i in iterations: if i == skip: tic = Wtime() Send(sendmsg, dest, 0) Recv(recvmsg, source, 0) else: sendmsg = recvmsg for i in iterations: if i == skip: tic = Wtime() Recv(recvmsg, source, 0) Send(sendmsg, dest, 0) toc = Wtime() if comm.rank == 0 and sendmsg != recvmsg: import warnings, traceback try: warnings.warn("received message does not match!") except UserWarning: traceback.print_exc() comm.Abort(2) return toc - tic size = getattr(options, 'size', 1) loop = getattr(options, 'loop', 1) skip = getattr(options, 'skip', 0) comm.Barrier() elapsed = ring(comm, size, loop, skip) if options.verbose and comm.rank == 0: _println("time for %d loops = %g seconds (%d processes, %d bytes)" % (loop, elapsed, comm.size, size), stream=_stdout) return elapsed from sys import stdout as _stdout from sys import stderr as _stderr def _println(message, stream): stream.write(message+'\n') stream.flush() def _seq_begin(comm): comm.Barrier() size = comm.Get_size() rank = comm.Get_rank() if rank > 0: comm.Recv([None, 'B'], rank - 1) def _seq_end(comm): size = comm.Get_size() rank = comm.Get_rank() if rank < size - 1: comm.Send([None, 'B'], rank + 1) comm.Barrier() _commands = { 'helloworld' : helloworld, 'ringtest' : ringtest, } def main(args=None): from optparse import OptionParser from mpi4py import __name__ as prog from mpi4py import __version__ as version parser = OptionParser(prog=prog, version='%prog ' + version, usage="%prog [options] [args]") parser.add_option("--no-threads", action="store_false", dest="threaded", default=True, help="initialize MPI without thread support") parser.add_option("--thread-level", type="choice", metavar="LEVEL", choices=["single", "funneled", "serialized", "multiple"], action="store", dest="thread_level", default="multiple", help="initialize MPI with required thread support") parser.add_option("--mpe", action="store_true", dest="mpe", default=False, help="use MPE for MPI 
profiling") parser.add_option("--vt", action="store_true", dest="vt", default=False, help="use VampirTrace for MPI profiling") parser.disable_interspersed_args() (options, args) = parser.parse_args(args) # import mpi4py mpi4py.rc.threaded = options.threaded mpi4py.rc.thread_level = options.thread_level if options.mpe: mpi4py.profile('mpe', logfile='mpi4py') if options.vt: mpi4py.profile('vt', logfile='mpi4py') # from mpi4py import MPI comm = MPI.COMM_WORLD if not args: if comm.rank == 0: parser.print_usage() parser.exit() command = args.pop(0) if command not in _commands: if comm.rank == 0: parser.error("unknown command '%s'" % command) parser.exit(2) command = _commands[command] command(comm, args=args) parser.exit() if __name__ == '__main__': main() mpi4py_1.3.1+hg20131106.orig/src/atimport.h0000644000000000000000000002412612211706251016116 0ustar 00000000000000/* ------------------------------------------------------------------------- */ #include "Python.h" #include "mpi.h" /* ------------------------------------------------------------------------- */ #include "config.h" #include "missing.h" #include "fallback.h" #include "compat.h" /* ------------------------------------------------------------------------- */ /* It could be a good idea to implement the startup and cleanup phases employing PMPI_Xxx calls, thus MPI profilers would not notice. 1) The MPI calls at the startup phase could be (a bit of initial) junk for users trying to profile the calls for their own code. 2) Some (naive?) MPI profilers could get confused if MPI_Xxx routines are called inside MPI_Finalize during the cleanup phase. If for whatever reason you need it, just change the values of the defines below to the corresponding PMPI_Xxx symbols. */ #define P_MPI_Comm_get_errhandler MPI_Comm_get_errhandler #define P_MPI_Comm_set_errhandler MPI_Comm_set_errhandler #define P_MPI_Errhandler_free MPI_Errhandler_free #define P_MPI_Comm_create_keyval MPI_Comm_create_keyval #define P_MPI_Comm_free_keyval MPI_Comm_free_keyval #define P_MPI_Comm_set_attr MPI_Comm_set_attr #define P_MPI_Win_free_keyval MPI_Win_free_keyval static MPI_Errhandler PyMPI_ERRHDL_COMM_WORLD = (MPI_Errhandler)0; static MPI_Errhandler PyMPI_ERRHDL_COMM_SELF = (MPI_Errhandler)0; static int PyMPI_KEYVAL_MPI_ATEXIT = MPI_KEYVAL_INVALID; static int PyMPI_KEYVAL_WIN_MEMORY = MPI_KEYVAL_INVALID; static int PyMPI_StartUp(void); static int PyMPI_CleanUp(void); static int MPIAPI PyMPI_AtExitMPI(MPI_Comm,int,void*,void*); static int PyMPI_STARTUP_DONE = 0; static int PyMPI_StartUp(void) { if (PyMPI_STARTUP_DONE) return MPI_SUCCESS; PyMPI_STARTUP_DONE = 1; /* change error handlers for predefined communicators */ if (PyMPI_ERRHDL_COMM_WORLD == (MPI_Errhandler)0) PyMPI_ERRHDL_COMM_WORLD = MPI_ERRHANDLER_NULL; if (PyMPI_ERRHDL_COMM_WORLD == MPI_ERRHANDLER_NULL) { (void)P_MPI_Comm_get_errhandler(MPI_COMM_WORLD, &PyMPI_ERRHDL_COMM_WORLD); (void)P_MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN); } if (PyMPI_ERRHDL_COMM_SELF == (MPI_Errhandler)0) PyMPI_ERRHDL_COMM_SELF = MPI_ERRHANDLER_NULL; if (PyMPI_ERRHDL_COMM_SELF == MPI_ERRHANDLER_NULL) { (void)P_MPI_Comm_get_errhandler(MPI_COMM_SELF, &PyMPI_ERRHDL_COMM_SELF); (void)P_MPI_Comm_set_errhandler(MPI_COMM_SELF, MPI_ERRORS_RETURN); } /* make the call to MPI_Finalize() run a cleanup function */ if (PyMPI_KEYVAL_MPI_ATEXIT == MPI_KEYVAL_INVALID) { int keyval = MPI_KEYVAL_INVALID; (void)P_MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, PyMPI_AtExitMPI, &keyval, 0); (void)P_MPI_Comm_set_attr(MPI_COMM_SELF, keyval, 0); 
PyMPI_KEYVAL_MPI_ATEXIT = keyval; } return MPI_SUCCESS; } static int PyMPI_CLEANUP_DONE = 0; static int PyMPI_CleanUp(void) { if (PyMPI_CLEANUP_DONE) return MPI_SUCCESS; PyMPI_CLEANUP_DONE = 1; /* free atexit keyval */ if (PyMPI_KEYVAL_MPI_ATEXIT != MPI_KEYVAL_INVALID) { (void)P_MPI_Comm_free_keyval(&PyMPI_KEYVAL_MPI_ATEXIT); PyMPI_KEYVAL_MPI_ATEXIT = MPI_KEYVAL_INVALID; } /* free windows keyval */ if (PyMPI_KEYVAL_WIN_MEMORY != MPI_KEYVAL_INVALID) { (void)P_MPI_Win_free_keyval(&PyMPI_KEYVAL_WIN_MEMORY); PyMPI_KEYVAL_WIN_MEMORY = MPI_KEYVAL_INVALID; } /* restore default error handlers for predefined communicators */ if (PyMPI_ERRHDL_COMM_SELF != MPI_ERRHANDLER_NULL) { (void)P_MPI_Comm_set_errhandler(MPI_COMM_SELF, PyMPI_ERRHDL_COMM_SELF); (void)P_MPI_Errhandler_free(&PyMPI_ERRHDL_COMM_SELF); PyMPI_ERRHDL_COMM_SELF = MPI_ERRHANDLER_NULL; } if (PyMPI_ERRHDL_COMM_WORLD != MPI_ERRHANDLER_NULL) { (void)P_MPI_Comm_set_errhandler(MPI_COMM_WORLD, PyMPI_ERRHDL_COMM_WORLD); (void)P_MPI_Errhandler_free(&PyMPI_ERRHDL_COMM_WORLD); PyMPI_ERRHDL_COMM_WORLD = MPI_ERRHANDLER_NULL; } return MPI_SUCCESS; } static int MPIAPI PyMPI_AtExitMPI(PyMPI_UNUSED MPI_Comm comm, PyMPI_UNUSED int k, PyMPI_UNUSED void *v, PyMPI_UNUSED void *xs) { return PyMPI_CleanUp(); } /* ------------------------------------------------------------------------- */ #if !defined(PyMPI_USE_MATCHED_RECV) #if defined(PyMPI_HAVE_MPI_Mprobe) && \ defined(PyMPI_HAVE_MPI_Mrecv) #define PyMPI_USE_MATCHED_RECV 1 #else #define PyMPI_USE_MATCHED_RECV 0 #endif #endif #if !defined(PyMPI_USE_MATCHED_RECV) #define PyMPI_USE_MATCHED_RECV 0 #endif /* ------------------------------------------------------------------------- */ static PyObject * PyMPIString_AsStringAndSize(PyObject *ob, const char **s, Py_ssize_t *n) { PyObject *b = NULL; if (PyUnicode_Check(ob)) { #if PY_MAJOR_VERSION >= 3 b = PyUnicode_AsUTF8String(ob); #else b = PyUnicode_AsASCIIString(ob); #endif if (!b) return NULL; } else { b = ob; Py_INCREF(ob); } #if PY_MAJOR_VERSION >= 3 if (PyBytes_AsStringAndSize(b, (char **)s, n) < 0) { #else if (PyString_AsStringAndSize(b, (char **)s, n) < 0) { #endif Py_DECREF(b); return NULL; } return b; } #if PY_MAJOR_VERSION >= 3 #define PyMPIString_FromString PyUnicode_FromString #define PyMPIString_FromStringAndSize PyUnicode_FromStringAndSize #else #define PyMPIString_FromString PyString_FromString #define PyMPIString_FromStringAndSize PyString_FromStringAndSize #endif /* ------------------------------------------------------------------------- */ #if PY_VERSION_HEX < 0x02040000 #ifndef Py_CLEAR #define Py_CLEAR(op) \ do { \ if (op) { \ PyObject *_py_tmp = (PyObject *)(op); \ (op) = NULL; \ Py_DECREF(_py_tmp); \ } \ } while (0) #endif #endif #if PY_VERSION_HEX < 0x02060000 #ifndef PyExc_BufferError #define PyExc_BufferError PyExc_TypeError #endif/*PyExc_BufferError*/ #ifndef PyBuffer_FillInfo static int PyBuffer_FillInfo(Py_buffer *view, PyObject *obj, void *buf, Py_ssize_t len, int readonly, int flags) { if (view == NULL) return 0; if (((flags & PyBUF_WRITABLE) == PyBUF_WRITABLE) && (readonly == 1)) { PyErr_SetString(PyExc_BufferError, "Object is not writable."); return -1; } view->obj = obj; if (obj) Py_INCREF(obj); view->buf = buf; view->len = len; view->itemsize = 1; view->readonly = readonly; view->format = NULL; if ((flags & PyBUF_FORMAT) == PyBUF_FORMAT) view->format = "B"; view->ndim = 1; view->shape = NULL; if ((flags & PyBUF_ND) == PyBUF_ND) view->shape = &(view->len); view->strides = NULL; if ((flags & PyBUF_STRIDES) == PyBUF_STRIDES) 
view->strides = &(view->itemsize); view->suboffsets = NULL; view->internal = NULL; return 0; } #endif/*PyBuffer_FillInfo*/ #ifndef PyBuffer_Release static void PyBuffer_Release(Py_buffer *view) { PyObject *obj = view->obj; Py_XDECREF(obj); view->obj = NULL; } #endif/*PyBuffer_Release*/ #ifndef PyObject_CheckBuffer #define PyObject_CheckBuffer(ob) (0) #endif/*PyObject_CheckBuffer*/ #ifndef PyObject_GetBuffer #define PyObject_GetBuffer(ob,view,flags) \ (PyErr_SetString(PyExc_NotImplementedError, \ "new buffer interface is not available"), -1) #endif/*PyObject_GetBuffer*/ #endif #if PY_VERSION_HEX < 0x02070000 static PyObject * PyMemoryView_FromBuffer(Py_buffer *view) { if (view->obj) { if (view->readonly) return PyBuffer_FromObject(view->obj, 0, view->len); else return PyBuffer_FromReadWriteObject(view->obj, 0, view->len); } else { if (view->readonly) return PyBuffer_FromMemory(view->buf,view->len); else return PyBuffer_FromReadWriteMemory(view->buf,view->len); } } #endif /* ------------------------------------------------------------------------- */ #ifdef PYPY_VERSION #define PyMPI_RUNTIME_PYPY 1 #define PyMPI_RUNTIME_CPYTHON 0 #else #define PyMPI_RUNTIME_PYPY 0 #define PyMPI_RUNTIME_CPYTHON 1 #endif #ifdef PYPY_VERSION static int PyMPI_UNUSED _PyLong_AsByteArray(PyLongObject* v, unsigned char* bytes, size_t n, int little_endian, int is_signed) { PyErr_SetString(PyExc_RuntimeError, "PyPy: _PyLong_AsByteArray() not available"); return -1; } #if PY_VERSION_HEX < 0x02070300 /* PyPy < 2.0 */ #define PyCode_GetNumFree(o) PyCode_GetNumFree((PyObject *)(o)) #endif #if PY_VERSION_HEX < 0x02070300 /* PyPy < 2.0 */ static int PyBuffer_FillInfo_PyPy(Py_buffer *view, PyObject *obj, void *buf, Py_ssize_t len, int readonly, int flags) { if (view == NULL) return 0; if ((flags & PyBUF_WRITABLE) && readonly) { PyErr_SetString(PyExc_BufferError, "Object is not writable."); return -1; } if (PyBuffer_FillInfo(view, obj, buf, len, readonly, flags) < 0) return -1; view->readonly = readonly; return 0; } #define PyBuffer_FillInfo PyBuffer_FillInfo_PyPy #endif static PyObject * PyMemoryView_FromBuffer_PyPy(Py_buffer *view) { if (view->obj) { if (view->readonly) return PyBuffer_FromObject(view->obj, 0, view->len); else return PyBuffer_FromReadWriteObject(view->obj, 0, view->len); } else { if (view->readonly) return PyBuffer_FromMemory(view->buf,view->len); else return PyBuffer_FromReadWriteMemory(view->buf,view->len); } } #define PyMemoryView_FromBuffer PyMemoryView_FromBuffer_PyPy #endif/*PYPY_VERSION*/ /* ------------------------------------------------------------------------- */ #if !defined(WITH_THREAD) #undef PyGILState_Ensure #define PyGILState_Ensure() ((PyGILState_STATE)0) #undef PyGILState_Release #define PyGILState_Release(state) (state)=((PyGILState_STATE)0) #undef Py_BLOCK_THREADS #define Py_BLOCK_THREADS (_save)=(PyThreadState*)0; #undef Py_UNBLOCK_THREADS #define Py_UNBLOCK_THREADS (_save)=(PyThreadState*)0; #endif /* ------------------------------------------------------------------------- */ /* Local variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/src/compat.h0000644000000000000000000000046012211706251015535 0ustar 00000000000000#if defined(MPICH3) #include "compat/mpich3.h" #elif defined(MPICH2) #include "compat/mpich2.h" #elif defined(OPEN_MPI) #include "compat/openmpi.h" #elif defined(HP_MPI) #include "compat/hpmpi.h" #elif defined(MPICH1) #include "compat/mpich1.h" #elif defined(LAM_MPI) #include "compat/lammpi.h" #endif 
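/* The macros tested above select a per-implementation compatibility
   shim.  MPICH1/MPICH2/MPICH3 are normalized in config.h from macros
   exposed by mpi.h (MPICH_NAME, MSMPI_VER, DEINO_MPI, ...); OPEN_MPI,
   HP_MPI and LAM_MPI are presumably defined by the corresponding
   vendor mpi.h headers themselves. */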
mpi4py_1.3.1+hg20131106.orig/src/compat/hpmpi.h0000644000000000000000000000426312211706251016657 0ustar 00000000000000#ifndef PyMPI_COMPAT_HPMPI_H #define PyMPI_COMPAT_HPMPI_H /* ---------------------------------------------------------------- */ #ifndef MPI_INTEGER1 #define MPI_INTEGER1 ((MPI_Datatype)MPI_Type_f2c(MPIF_INTEGER1)) #endif #ifndef MPI_INTEGER2 #define MPI_INTEGER2 ((MPI_Datatype)MPI_Type_f2c(MPIF_INTEGER2)) #endif #ifndef MPI_INTEGER4 #define MPI_INTEGER4 ((MPI_Datatype)MPI_Type_f2c(MPIF_INTEGER4)) #endif #ifndef MPI_REAL4 #define MPI_REAL4 ((MPI_Datatype)MPI_Type_f2c(MPIF_REAL4)) #endif #ifndef MPI_REAL8 #define MPI_REAL8 ((MPI_Datatype)MPI_Type_f2c(MPIF_REAL8)) #endif /* ---------------------------------------------------------------- */ #ifndef HPMPI_DLOPEN_LIBMPI #define HPMPI_DLOPEN_LIBMPI 1 #endif #if HPMPI_DLOPEN_LIBMPI #if HAVE_DLOPEN #include "../dynload.h" static void HPMPI_dlopen_libmpi(void) { void *handle = 0; int mode = RTLD_NOW | RTLD_GLOBAL; #ifdef RTLD_NOLOAD mode |= RTLD_NOLOAD; #endif #if defined(__APPLE__) /* Mac OS X */ if (!handle) handle = dlopen("libhpmpi.3.dylib", mode); if (!handle) handle = dlopen("libhpmpi.2.dylib", mode); if (!handle) handle = dlopen("libhpmpi.1.dylib", mode); if (!handle) handle = dlopen("libhpmpi.0.dylib", mode); if (!handle) handle = dlopen("libhpmpi.dylib", mode); #else /* GNU/Linux and others*/ if (!handle) handle = dlopen("libhpmpi.so.3", mode); if (!handle) handle = dlopen("libhpmpi.so.2", mode); if (!handle) handle = dlopen("libhpmpi.so.1", mode); if (!handle) handle = dlopen("libhpmpi.so.0", mode); if (!handle) handle = dlopen("libhpmpi.so", mode); #endif } static int PyMPI_HPMPI_MPI_Init(int *argc, char ***argv) { HPMPI_dlopen_libmpi(); return MPI_Init(argc, argv); } #undef MPI_Init #define MPI_Init PyMPI_HPMPI_MPI_Init static int PyMPI_HPMPI_MPI_Init_thread(int *argc, char ***argv, int required, int *provided) { HPMPI_dlopen_libmpi(); return MPI_Init_thread(argc, argv, required, provided); } #undef MPI_Init_thread #define MPI_Init_thread PyMPI_HPMPI_MPI_Init_thread #endif /* !HAVE_DLOPEN */ #endif /* !HPMPI_DLOPEN_LIBMPI */ /* ---------------------------------------------------------------- */ #endif /* !PyMPI_COMPAT_HPMPI_H */ mpi4py_1.3.1+hg20131106.orig/src/compat/lammpi.h0000644000000000000000000002673512211706251017031 0ustar 00000000000000#ifndef PyMPI_COMPAT_LAMMPI_H #define PyMPI_COMPAT_LAMMPI_H /* ---------------------------------------------------------------- */ static int PyMPI_LAMMPI_MPI_Info_free(MPI_Info *info) { if (info == 0) return MPI_ERR_ARG; if (*info == MPI_INFO_NULL) return MPI_ERR_ARG; return MPI_Info_free(info); } #undef MPI_Info_free #define MPI_Info_free PyMPI_LAMMPI_MPI_Info_free /* ---------------------------------------------------------------- */ static int PyMPI_LAMMPI_MPI_Cancel(MPI_Request *request) { int ierr = MPI_SUCCESS; ierr = MPI_Cancel(request); if (ierr == MPI_ERR_ARG) { if (request != 0 && *request == MPI_REQUEST_NULL) ierr = MPI_ERR_REQUEST; } return ierr; } #undef MPI_Cancel #define MPI_Cancel PyMPI_LAMMPI_MPI_Cancel static int PyMPI_LAMMPI_MPI_Comm_disconnect(MPI_Comm *comm) { if (comm == 0) return MPI_ERR_ARG; if (*comm == MPI_COMM_NULL) return MPI_ERR_COMM; if (*comm == MPI_COMM_SELF) return MPI_ERR_COMM; if (*comm == MPI_COMM_WORLD) return MPI_ERR_COMM; return MPI_Comm_disconnect(comm); } #undef MPI_Comm_disconnect #define MPI_Comm_disconnect PyMPI_LAMMPI_MPI_Comm_disconnect /* ---------------------------------------------------------------- */ #if defined(__cplusplus) 
extern "C" { #endif struct _errhdl { void (*eh_func)(void); int eh_refcount; int eh_f77handle; int eh_flags; }; #if defined(__cplusplus) } #endif static int PyMPI_LAMMPI_Errhandler_free(MPI_Errhandler *errhandler) { if (errhandler == 0) return MPI_ERR_ARG; if (*errhandler == MPI_ERRORS_RETURN || *errhandler == MPI_ERRORS_ARE_FATAL) { struct _errhdl *eh = (struct _errhdl *) (*errhandler); eh->eh_refcount--; *errhandler = MPI_ERRHANDLER_NULL; return MPI_SUCCESS; } else { return MPI_Errhandler_free(errhandler); } } #undef MPI_Errhandler_free #define MPI_Errhandler_free PyMPI_LAMMPI_Errhandler_free /* -- */ static int PyMPI_LAMMPI_MPI_Comm_get_errhandler(MPI_Comm comm, MPI_Errhandler *errhandler) { int ierr = MPI_SUCCESS; if (comm == MPI_COMM_NULL) return MPI_ERR_COMM; if (errhandler == 0) return MPI_ERR_ARG; /* get error handler */ ierr = MPI_Errhandler_get(comm, errhandler); if (ierr != MPI_SUCCESS) return ierr; return MPI_SUCCESS; } #undef MPI_Errhandler_get #define MPI_Errhandler_get PyMPI_LAMMPI_MPI_Comm_get_errhandler #undef MPI_Comm_get_errhandler #define MPI_Comm_get_errhandler PyMPI_LAMMPI_MPI_Comm_get_errhandler static int PyMPI_LAMMPI_MPI_Comm_set_errhandler(MPI_Comm comm, MPI_Errhandler errhandler) { int ierr = MPI_SUCCESS, ierr2 = MPI_SUCCESS; MPI_Errhandler previous = MPI_ERRHANDLER_NULL; if (comm == MPI_COMM_NULL) return MPI_ERR_COMM; if (errhandler == MPI_ERRHANDLER_NULL) return MPI_ERR_ARG; /* get previous error handler*/ ierr2 = MPI_Errhandler_get(comm, &previous); if (ierr2 != MPI_SUCCESS) return ierr2; /* increment reference counter */ if (errhandler != MPI_ERRHANDLER_NULL) { struct _errhdl *eh = (struct _errhdl *) (errhandler); eh->eh_refcount++; } /* set error handler */ ierr = MPI_Errhandler_set(comm, errhandler); /* decrement reference counter */ if (errhandler != MPI_ERRHANDLER_NULL) { struct _errhdl *eh = (struct _errhdl *) (errhandler); eh->eh_refcount--; } /* free previous error handler*/ if (previous != MPI_ERRHANDLER_NULL) { ierr2 = MPI_Errhandler_free(&previous); } if (ierr != MPI_SUCCESS) return ierr; if (ierr2 != MPI_SUCCESS) return ierr2; return MPI_SUCCESS; } #undef MPI_Errhandler_set #define MPI_Errhandler_set PyMPI_LAMMPI_MPI_Comm_set_errhandler #undef MPI_Comm_set_errhandler #define MPI_Comm_set_errhandler PyMPI_LAMMPI_MPI_Comm_set_errhandler /* -- */ static int PyMPI_LAMMPI_MPI_Win_get_errhandler(MPI_Win win, MPI_Errhandler *errhandler) { int ierr = MPI_SUCCESS; if (win == MPI_WIN_NULL) return MPI_ERR_WIN; if (errhandler == 0) return MPI_ERR_ARG; /* get error handler */ ierr = MPI_Win_get_errhandler(win, errhandler); if (ierr != MPI_SUCCESS) return ierr; /* increment reference counter */ if (*errhandler != MPI_ERRHANDLER_NULL) { struct _errhdl *eh = (struct _errhdl *) (*errhandler); eh->eh_refcount++; } return MPI_SUCCESS; } #undef MPI_Win_get_errhandler #define MPI_Win_get_errhandler PyMPI_LAMMPI_MPI_Win_get_errhandler static int PyMPI_LAMMPI_MPI_Win_set_errhandler(MPI_Win win, MPI_Errhandler errhandler) { int ierr = MPI_SUCCESS, ierr2 = MPI_SUCCESS; MPI_Errhandler previous = MPI_ERRHANDLER_NULL; if (win == MPI_WIN_NULL) return MPI_ERR_WIN; if (errhandler == MPI_ERRHANDLER_NULL) return MPI_ERR_ARG; /* get previous error handler*/ ierr2 = MPI_Win_get_errhandler(win, &previous); if (ierr2 != MPI_SUCCESS) return ierr2; /* increment reference counter */ if (errhandler != MPI_ERRHANDLER_NULL) { struct _errhdl *eh = (struct _errhdl *) (errhandler); eh->eh_refcount++; } /* set error handler */ ierr = MPI_Win_set_errhandler(win, errhandler); /* decrement 
reference counter */ if (errhandler != MPI_ERRHANDLER_NULL) { struct _errhdl *eh = (struct _errhdl *) (errhandler); eh->eh_refcount--; } /* free previous error handler*/ if (previous != MPI_ERRHANDLER_NULL) { ierr2 = MPI_Errhandler_free(&previous); } if (ierr != MPI_SUCCESS) return ierr; if (ierr2 != MPI_SUCCESS) return ierr2; return MPI_SUCCESS; } #undef MPI_Win_set_errhandler #define MPI_Win_set_errhandler PyMPI_LAMMPI_MPI_Win_set_errhandler static int PyMPI_LAMMPI_MPI_Win_create(void *base, MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, MPI_Win *win) { int ierr = MPI_SUCCESS; MPI_Errhandler errhandler = MPI_ERRHANDLER_NULL; ierr = MPI_Win_create(base, size, disp_unit, info, comm, win); if (ierr != MPI_SUCCESS) return ierr; ierr = MPI_Win_get_errhandler(*win, &errhandler); if (ierr != MPI_SUCCESS) return ierr; return MPI_SUCCESS; } #undef MPI_Win_create #define MPI_Win_create PyMPI_LAMMPI_MPI_Win_create static int PyMPI_LAMMPI_MPI_Win_free(MPI_Win *win) { int ierr = MPI_SUCCESS, ierr2 = MPI_SUCCESS; MPI_Errhandler errhandler = MPI_ERRHANDLER_NULL; if (win != 0 && *win != MPI_WIN_NULL ) { MPI_Errhandler previous; ierr2 = MPI_Win_get_errhandler(*win, &previous); if (ierr2 != MPI_SUCCESS) return ierr2; errhandler = previous; if (previous != MPI_ERRHANDLER_NULL) { ierr2 = MPI_Errhandler_free(&previous); if (ierr2 != MPI_SUCCESS) return ierr2; } } ierr = MPI_Win_free(win); if (errhandler != MPI_ERRHANDLER_NULL) { ierr2 = MPI_Errhandler_free(&errhandler); if (ierr2 != MPI_SUCCESS) return ierr2; } if (ierr != MPI_SUCCESS) return ierr; return MPI_SUCCESS; } #undef MPI_Win_free #define MPI_Win_free PyMPI_LAMMPI_MPI_Win_free /* -- */ #if defined(ROMIO_VERSION) #if defined(__cplusplus) extern "C" { #endif #define ADIOI_FILE_COOKIE 2487376 #define FDTYPE int #define ADIO_Offset MPI_Offset #define ADIOI_Fns struct ADIOI_Fns_struct #define ADIOI_Hints struct ADIOI_Hints_struct extern MPI_Errhandler ADIOI_DFLT_ERR_HANDLER; struct ADIOI_FileD { int cookie; /* for error checking */ FDTYPE fd_sys; /* system file descriptor */ #ifdef XFS int fd_direct; /* On XFS, this is used for direct I/O; fd_sys is used for buffered I/O */ int direct_read; /* flag; 1 means use direct read */ int direct_write; /* flag; 1 means use direct write */ /* direct I/O attributes */ unsigned d_mem; /* data buffer memory alignment */ unsigned d_miniosz; /* min xfer size, xfer size multiple, and file seek offset alignment */ unsigned d_maxiosz; /* max xfer size */ #endif ADIO_Offset fp_ind; /* individual file pointer in MPI-IO (in bytes)*/ ADIO_Offset fp_sys_posn; /* current location of the system file-pointer in bytes */ ADIOI_Fns *fns; /* struct of I/O functions to use */ MPI_Comm comm; /* communicator indicating who called open */ char *filename; int file_system; /* type of file system */ int access_mode; ADIO_Offset disp; /* reqd. for MPI-IO */ MPI_Datatype etype; /* reqd. for MPI-IO */ MPI_Datatype filetype; /* reqd. for MPI-IO */ int etype_size; /* in bytes */ ADIOI_Hints *hints; /* structure containing fs-indep. info values */ MPI_Info info; int split_coll_count; /* count of outstanding split coll. ops. */ char *shared_fp_fname; /* name of file containing shared file pointer */ struct ADIOI_FileD *shared_fp_fd; /* file handle of file containing shared fp */ int async_count; /* count of outstanding nonblocking operations */ int perm; int atomicity; /* true=atomic, false=nonatomic */ int iomode; /* reqd. 
to implement Intel PFS modes */ MPI_Errhandler err_handler; }; #if defined(__cplusplus) } #endif static int PyMPI_LAMMPI_MPI_File_get_errhandler(MPI_File file, MPI_Errhandler *errhandler) { /* check arguments */ if (file != MPI_FILE_NULL) { struct ADIOI_FileD * fh = (struct ADIOI_FileD *) file; if (fh->cookie != ADIOI_FILE_COOKIE) return MPI_ERR_ARG; } if (errhandler == 0) return MPI_ERR_ARG; /* get error handler */ if (file == MPI_FILE_NULL) { *errhandler = ADIOI_DFLT_ERR_HANDLER; } else { struct ADIOI_FileD * fh = (struct ADIOI_FileD *) file; *errhandler = fh->err_handler; } /* increment reference counter */ if (*errhandler != MPI_ERRHANDLER_NULL) { struct _errhdl *eh = (struct _errhdl *) (*errhandler); eh->eh_refcount++; } return MPI_SUCCESS; } #undef MPI_File_get_errhandler #define MPI_File_get_errhandler PyMPI_LAMMPI_MPI_File_get_errhandler static int PyMPI_LAMMPI_MPI_File_set_errhandler(MPI_File file, MPI_Errhandler errhandler) { /* check arguments */ if (file != MPI_FILE_NULL) { struct ADIOI_FileD * fh = (struct ADIOI_FileD *) file; if (fh->cookie != ADIOI_FILE_COOKIE) return MPI_ERR_ARG; } if (errhandler == MPI_ERRHANDLER_NULL) return MPI_ERR_ARG; if (errhandler != MPI_ERRORS_RETURN && errhandler != MPI_ERRORS_ARE_FATAL) return MPI_ERR_ARG; /* increment reference counter */ if (errhandler != MPI_ERRHANDLER_NULL ) { struct _errhdl *eh = (struct _errhdl *) errhandler; eh->eh_refcount++; } /* set error handler */ if (file == MPI_FILE_NULL) { MPI_Errhandler tmp = ADIOI_DFLT_ERR_HANDLER; ADIOI_DFLT_ERR_HANDLER = errhandler; errhandler = tmp; } else { struct ADIOI_FileD *fh = (struct ADIOI_FileD *) file; MPI_Errhandler tmp = fh->err_handler; fh->err_handler = errhandler; errhandler = tmp; } /* decrement reference counter */ if (errhandler != MPI_ERRHANDLER_NULL ) { struct _errhdl *eh = (struct _errhdl *) errhandler; eh->eh_refcount--; } return MPI_SUCCESS; } #undef MPI_File_set_errhandler #define MPI_File_set_errhandler PyMPI_LAMMPI_MPI_File_set_errhandler #endif /* ---------------------------------------------------------------- */ #endif /* !PyMPI_COMPAT_LAMMPI_H */ /* Local variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/src/compat/mpich1.h0000644000000000000000000001221112211706251016713 0ustar 00000000000000#ifndef PyMPI_COMPAT_MPICH1_H #define PyMPI_COMPAT_MPICH1_H /* ---------------------------------------------------------------- */ /* this does not actually work in parallel, */ /* but avoids a nasty segfault. 
*/ static int PyMPI_MPICH1_argc = 0; static char **PyMPI_MPICH1_argv = 0; static char *PyMPI_MPICH1_args[2] = {0, 0}; static void PyMPI_MPICH1_FixArgs(int **argc, char ****argv) { if ((argc[0]==(int *)0) || (argv[0]==(char ***)0)) { #ifdef Py_PYTHON_H #if PY_MAJOR_VERSION >= 3 PyMPI_MPICH1_args[0] = (char *) "python"; #else PyMPI_MPICH1_args[0] = Py_GetProgramName(); #endif PyMPI_MPICH1_argc = 1; #endif PyMPI_MPICH1_argv = PyMPI_MPICH1_args; argc[0] = &PyMPI_MPICH1_argc; argv[0] = &PyMPI_MPICH1_argv; } } static int PyMPI_MPICH1_MPI_Init(int *argc, char ***argv) { PyMPI_MPICH1_FixArgs(&argc, &argv); return MPI_Init(argc, argv); } #undef MPI_Init #define MPI_Init PyMPI_MPICH1_MPI_Init static int PyMPI_MPICH1_MPI_Init_thread(int *argc, char ***argv, int required, int *provided) { PyMPI_MPICH1_FixArgs(&argc, &argv); return MPI_Init_thread(argc, argv, required, provided); } #undef MPI_Init_thread #define MPI_Init_thread PyMPI_MPICH1_MPI_Init_thread /* ---------------------------------------------------------------- */ #undef MPI_SIGNED_CHAR #define MPI_SIGNED_CHAR MPI_CHAR /* ---------------------------------------------------------------- */ static int PyMPI_MPICH1_MPI_Status_set_elements(MPI_Status *status, MPI_Datatype datatype, int count) { if (datatype == MPI_DATATYPE_NULL) return MPI_ERR_TYPE; return MPI_Status_set_elements(status, datatype, count); } #undef MPI_Status_set_elements #define MPI_Status_set_elements PyMPI_MPICH1_MPI_Status_set_elements /* ---------------------------------------------------------------- */ static int PyMPI_MPICH1_MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype, int dest, int sendtag, void *recvbuf, int recvcount, MPI_Datatype recvtype, int source, int recvtag, MPI_Comm comm, MPI_Status *status) { MPI_Status dummy; if (status == MPI_STATUS_IGNORE) status = &dummy; return MPI_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status); } #undef MPI_Sendrecv #define MPI_Sendrecv PyMPI_MPICH1_MPI_Sendrecv /* ---------------------------------------------------------------- */ #if defined(ROMIO_VERSION) #if defined(__cplusplus) extern "C" { #endif #define MPIR_COOKIE unsigned long cookie; struct MPIR_Errhandler { MPIR_COOKIE MPI_Handler_function *routine; int ref_count; }; extern void *MPIR_ToPointer(int); #if defined(__cplusplus) } #endif static int PyMPI_MPICH1_MPI_File_get_errhandler(MPI_File file, MPI_Errhandler *errhandler) { int ierr = MPI_SUCCESS; ierr = MPI_File_get_errhandler(file, errhandler); if (ierr != MPI_SUCCESS) return ierr; if (errhandler == 0) return ierr; /* just in case */ /* manage reference counting */ if (*errhandler != MPI_ERRHANDLER_NULL) { struct MPIR_Errhandler *eh = (struct MPIR_Errhandler *) MPIR_ToPointer(*errhandler); if (eh) eh->ref_count++; } return MPI_SUCCESS; } static int PyMPI_MPICH1_MPI_File_set_errhandler(MPI_File file, MPI_Errhandler errhandler) { int ierr = MPI_SUCCESS; MPI_Errhandler previous = MPI_ERRHANDLER_NULL; ierr = MPI_File_get_errhandler(file, &previous); if (ierr != MPI_SUCCESS) return ierr; ierr = MPI_File_set_errhandler(file, errhandler); if (ierr != MPI_SUCCESS) return ierr; /* manage reference counting */ if (previous != MPI_ERRHANDLER_NULL) { struct MPIR_Errhandler *eh = (struct MPIR_Errhandler *) MPIR_ToPointer(previous); if (eh) eh->ref_count--; } if (errhandler != MPI_ERRHANDLER_NULL) { struct MPIR_Errhandler *eh = (struct MPIR_Errhandler *) MPIR_ToPointer(errhandler); if (eh) eh->ref_count++; } return MPI_SUCCESS; } #undef 
MPI_File_get_errhandler #define MPI_File_get_errhandler PyMPI_MPICH1_MPI_File_get_errhandler #undef MPI_File_set_errhandler #define MPI_File_set_errhandler PyMPI_MPICH1_MPI_File_set_errhandler #endif /* !ROMIO_VERSION */ /* ---------------------------------------------------------------- */ #undef MPI_ERR_KEYVAL #define MPI_ERR_KEYVAL MPI_ERR_OTHER #undef MPI_MAX_OBJECT_NAME #define MPI_MAX_OBJECT_NAME MPI_MAX_NAME_STRING /* ---------------------------------------------------------------- */ #endif /* !PyMPI_COMPAT_MPICH1_H */ /* Local variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/src/compat/mpich2.h0000644000000000000000000000047412211706251016724 0ustar 00000000000000#ifndef PyMPI_COMPAT_MPICH2_H #define PyMPI_COMPAT_MPICH2_H #if defined(__SICORTEX__) #include "sicortex.h" #endif #if defined(MS_MPI) && !defined(MSMPI_VER) #undef MPI_File_c2f #define MPI_File_c2f PMPI_File_c2f #undef MPI_File_f2c #define MPI_File_f2c PMPI_File_f2c #endif #endif /* !PyMPI_COMPAT_MPICH2_H */ mpi4py_1.3.1+hg20131106.orig/src/compat/mpich3.h0000644000000000000000000000212312211706251016716 0ustar 00000000000000#ifndef PyMPI_COMPAT_MPICH3_H #define PyMPI_COMPAT_MPICH3_H #if defined(MPICH_NUMVERSION) #if (MPICH_NUMVERSION < 30100000) static int PyMPI_MPICH3_MPI_Type_size_x(MPI_Datatype datatype, MPI_Count *size) { int ierr = MPI_Type_commit(&datatype); if (ierr) return ierr; return MPI_Type_size_x(datatype,size); } #undef MPI_Type_size_x #define MPI_Type_size_x PyMPI_MPICH3_MPI_Type_size_x static int PyMPI_MPICH3_MPI_Type_get_extent_x(MPI_Datatype datatype, MPI_Count *lb, MPI_Count *extent) { int ierr = MPI_Type_commit(&datatype); if (ierr) return ierr; return MPI_Type_get_extent_x(datatype,lb,extent); } #undef MPI_Type_get_extent_x #define MPI_Type_get_extent_x PyMPI_MPICH3_MPI_Type_get_extent_x static int PyMPI_MPICH3_MPI_Type_get_true_extent_x(MPI_Datatype datatype, MPI_Count *lb, MPI_Count *extent) { int ierr = MPI_Type_commit(&datatype); if (ierr) return ierr; return MPI_Type_get_true_extent_x(datatype,lb,extent); } #undef MPI_Type_get_true_extent_x #define MPI_Type_get_true_extent_x PyMPI_MPICH3_MPI_Type_get_true_extent_x #endif #endif #endif /* !PyMPI_COMPAT_MPICH3_H */ mpi4py_1.3.1+hg20131106.orig/src/compat/openmpi.h0000644000000000000000000002022612211706251017206 0ustar 00000000000000#ifndef PyMPI_COMPAT_OPENMPI_H #define PyMPI_COMPAT_OPENMPI_H /* ------------------------------------------------------------------------- */ /* ------------------------------------------------------------------------- */ /* * The hackery below redefines the actuall calls to 'MPI_Init()' and * 'MPI_Init_thread()' in order to preload the main MPI dynamic * library with appropriate flags to 'dlopen()' ensuring global * availability of library symbols. 
*/ #ifndef OPENMPI_DLOPEN_LIBMPI #define OPENMPI_DLOPEN_LIBMPI 1 #endif #if OPENMPI_DLOPEN_LIBMPI #if HAVE_DLOPEN #include "../dynload.h" /* static void * my_dlopen(const char *name, int mode) { void *handle; static int called = 0; if (!called) { called = 1; #if HAVE_DLFCN_H printf("HAVE_DLFCN_H: yes\n"); #else printf("HAVE_DLFCN_H: no\n"); #endif printf("\n"); printf("RTLD_LAZY: 0x%X\n", RTLD_LAZY ); printf("RTLD_NOW: 0x%X\n", RTLD_NOW ); printf("RTLD_LOCAL: 0x%X\n", RTLD_LOCAL ); printf("RTLD_GLOBAL: 0x%X\n", RTLD_GLOBAL ); #ifdef RTLD_NOLOAD printf("RTLD_NOLOAD: 0x%X\n", RTLD_NOLOAD ); #endif printf("\n"); } handle = dlopen(name, mode); printf("dlopen(\"%s\",0x%X) -> %p\n", name, mode, handle); printf("dlerror() -> %s\n\n", dlerror()); return handle; } #define dlopen my_dlopen */ static void PyMPI_OPENMPI_dlopen_libmpi(void) { void *handle = 0; int mode = RTLD_NOW | RTLD_GLOBAL; #ifdef RTLD_NOLOAD mode |= RTLD_NOLOAD; #endif #if defined(__CYGWIN__) if (!handle) handle = dlopen("cygmpi.dll", mode); if (!handle) handle = dlopen("mpi.dll", mode); #elif defined(__APPLE__) /* Mac OS X */ if (!handle) handle = dlopen("libmpi.3.dylib", mode); if (!handle) handle = dlopen("libmpi.2.dylib", mode); if (!handle) handle = dlopen("libmpi.1.dylib", mode); if (!handle) handle = dlopen("libmpi.0.dylib", mode); if (!handle) handle = dlopen("libmpi.dylib", mode); #else /* GNU/Linux and others*/ if (!handle) handle = dlopen("libmpi.so.3", mode); if (!handle) handle = dlopen("libmpi.so.2", mode); if (!handle) handle = dlopen("libmpi.so.1", mode); if (!handle) handle = dlopen("libmpi.so.0", mode); if (!handle) handle = dlopen("libmpi.so", mode); #endif } static int PyMPI_OPENMPI_MPI_Init(int *argc, char ***argv) { PyMPI_OPENMPI_dlopen_libmpi(); return MPI_Init(argc, argv); } #undef MPI_Init #define MPI_Init PyMPI_OPENMPI_MPI_Init static int PyMPI_OPENMPI_MPI_Init_thread(int *argc, char ***argv, int required, int *provided) { PyMPI_OPENMPI_dlopen_libmpi(); return MPI_Init_thread(argc, argv, required, provided); } #undef MPI_Init_thread #define MPI_Init_thread PyMPI_OPENMPI_MPI_Init_thread #endif /* !HAVE_DLOPEN */ #endif /* !OPENMPI_DLOPEN_LIBMPI */ /* ------------------------------------------------------------------------- */ /* ------------------------------------------------------------------------- */ /* ------------------------------------------------------------------------- */ #if (defined(OMPI_MAJOR_VERSION) && \ defined(OMPI_MINOR_VERSION) && \ defined(OMPI_RELEASE_VERSION)) #define PyMPI_OPENMPI_VERSION ((OMPI_MAJOR_VERSION * 10000) + \ (OMPI_MINOR_VERSION * 100) + \ (OMPI_RELEASE_VERSION * 1)) #else #define PyMPI_OPENMPI_VERSION 10000 #endif /* ------------------------------------------------------------------------- */ /* * Open MPI < 1.1.3 generates an error when MPI_File_get_errhandler() * is called with the predefined error handlers MPI_ERRORS_RETURN and * MPI_ERRORS_ARE_FATAL. 
*/ #if PyMPI_OPENMPI_VERSION < 10103 static int PyMPI_OPENMPI_Errhandler_free(MPI_Errhandler *errhandler) { if (errhandler && ((*errhandler == MPI_ERRORS_RETURN) || (*errhandler == MPI_ERRORS_ARE_FATAL))) { *errhandler = MPI_ERRHANDLER_NULL; return MPI_SUCCESS; } return MPI_Errhandler_free(errhandler); } #undef MPI_Errhandler_free #define MPI_Errhandler_free PyMPI_OPENMPI_Errhandler_free #endif /* !(PyMPI_OPENMPI_VERSION < 10103) */ /* ------------------------------------------------------------------------- */ /* * Open MPI 1.1 generates an error when MPI_File_get_errhandler() is * called with the MPI_FILE_NULL handle. The code below try to fix * this bug by intercepting the calls to the functions setting and * getting the error handlers for MPI_File's. */ #if PyMPI_OPENMPI_VERSION < 10200 static MPI_Errhandler PyMPI_OPENMPI_FILE_NULL_ERRHANDLER = (MPI_Errhandler)0; static int PyMPI_OPENMPI_File_get_errhandler(MPI_File file, MPI_Errhandler *errhandler) { if (file == MPI_FILE_NULL) { if (PyMPI_OPENMPI_FILE_NULL_ERRHANDLER == (MPI_Errhandler)0) { PyMPI_OPENMPI_FILE_NULL_ERRHANDLER = MPI_ERRORS_RETURN; } *errhandler = PyMPI_OPENMPI_FILE_NULL_ERRHANDLER; return MPI_SUCCESS; } return MPI_File_get_errhandler(file, errhandler); } #undef MPI_File_get_errhandler #define MPI_File_get_errhandler PyMPI_OPENMPI_File_get_errhandler static int PyMPI_OPENMPI_File_set_errhandler(MPI_File file, MPI_Errhandler errhandler) { int ierr = MPI_File_set_errhandler(file, errhandler); if (ierr != MPI_SUCCESS) return ierr; if (file == MPI_FILE_NULL) { PyMPI_OPENMPI_FILE_NULL_ERRHANDLER = errhandler; } return ierr; } #undef MPI_File_set_errhandler #define MPI_File_set_errhandler PyMPI_OPENMPI_File_set_errhandler #endif /* !(PyMPI_OPENMPI_VERSION < 10200) */ /* ---------------------------------------------------------------- */ #if PyMPI_OPENMPI_VERSION < 10301 static MPI_Fint PyMPI_OPENMPI_File_c2f(MPI_File file) { if (file == MPI_FILE_NULL) return (MPI_Fint)0; return MPI_File_c2f(file); } #define MPI_File_c2f PyMPI_OPENMPI_File_c2f #endif /* !(PyMPI_OPENMPI_VERSION < 10301) */ /* ------------------------------------------------------------------------- */ #if PyMPI_OPENMPI_VERSION < 10402 static int PyMPI_OPENMPI_MPI_Cancel(MPI_Request *request) { if (request && *request == MPI_REQUEST_NULL) { MPI_Comm_call_errhandler(MPI_COMM_WORLD, MPI_ERR_REQUEST); return MPI_ERR_REQUEST; } return MPI_Cancel(request); } #undef MPI_Cancel #define MPI_Cancel PyMPI_OPENMPI_MPI_Cancel static int PyMPI_OPENMPI_MPI_Request_free(MPI_Request *request) { if (request && *request == MPI_REQUEST_NULL) { MPI_Comm_call_errhandler(MPI_COMM_WORLD, MPI_ERR_REQUEST); return MPI_ERR_REQUEST; } return MPI_Request_free(request); } #undef MPI_Request_free #define MPI_Request_free PyMPI_OPENMPI_MPI_Request_free static int PyMPI_OPENMPI_MPI_Win_get_errhandler(MPI_Win win, MPI_Errhandler *errhandler) { if (win == MPI_WIN_NULL) { MPI_Comm_call_errhandler(MPI_COMM_WORLD, MPI_ERR_WIN); return MPI_ERR_WIN; } return MPI_Win_get_errhandler(win, errhandler); } #undef MPI_Win_get_errhandler #define MPI_Win_get_errhandler PyMPI_OPENMPI_MPI_Win_get_errhandler static int PyMPI_OPENMPI_MPI_Win_set_errhandler(MPI_Win win, MPI_Errhandler errhandler) { if (win == MPI_WIN_NULL) { MPI_Comm_call_errhandler(MPI_COMM_WORLD, MPI_ERR_WIN); return MPI_ERR_WIN; } return MPI_Win_set_errhandler(win, errhandler); } #undef MPI_Win_set_errhandler #define MPI_Win_set_errhandler PyMPI_OPENMPI_MPI_Win_set_errhandler #endif /* !(PyMPI_OPENMPI_VERSION < 10402) */ /* 
------------------------------------------------------------------------- */ /* * Open MPI 1.7 tries to set status even in the case of MPI_STATUS_IGNORE. */ #if PyMPI_OPENMPI_VERSION >= 10700 static int PyMPI_OPENMPI_MPI_Mrecv(void *buf, int count, MPI_Datatype type, MPI_Message *message, MPI_Status *status) { MPI_Status sts; if (status == MPI_STATUS_IGNORE) status = &sts; return MPI_Mrecv(buf, count, type, message, status); } #undef MPI_Mrecv #define MPI_Mrecv PyMPI_OPENMPI_MPI_Mrecv #endif /* !(PyMPI_OPENMPI_VERSION > 10700) */ /* ------------------------------------------------------------------------- */ #endif /* !PyMPI_COMPAT_OPENMPI_H */ /* Local Variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/src/compat/sicortex.h0000644000000000000000000000135412211706251017400 0ustar 00000000000000#ifndef PyMPI_COMPAT_SICORTEX_H #define PyMPI_COMPAT_SICORTEX_H #include "../dynload.h" static void PyMPI_SCMPI_dlopen_libslurm(void) { (void)dlopen("libslurm.so", RTLD_NOW|RTLD_GLOBAL|RTLD_NOLOAD); (void)dlerror(); } static int PyMPI_SCMPI_MPI_Init(int *argc, char ***argv) { PyMPI_SCMPI_dlopen_libslurm(); return MPI_Init(argc, argv); } #undef MPI_Init #define MPI_Init PyMPI_SCMPI_MPI_Init static int PyMPI_SCMPI_MPI_Init_thread(int *argc, char ***argv, int required, int *provided) { PyMPI_SCMPI_dlopen_libslurm(); return MPI_Init_thread(argc, argv, required, provided); } #undef MPI_Init_thread #define MPI_Init_thread PyMPI_SCMPI_MPI_Init_thread #endif /* !PyMPI_COMPAT_SICORTEX_H */ mpi4py_1.3.1+hg20131106.orig/src/config.h0000644000000000000000000000211212211706251015513 0ustar 00000000000000#if defined(MS_WINDOWS) #if defined(MSMPI_VER) || (defined(MPICH2) && defined(MPIAPI)) #define MS_MPI 1 #endif #if (defined(MS_MPI) || defined(DEINO_MPI)) && !defined(MPICH2) #define MPICH2 1 #endif #endif #if defined(MPICH_NAME) && (MPICH_NAME==3) #define MPICH3 1 #endif #if defined(MPICH_NAME) && (MPICH_NAME==1) #define MPICH1 1 #endif #if defined(MS_WINDOWS) && !defined(MPIAPI) #if defined(MPI_CALL) /* DeinoMPI */ #define MPIAPI MPI_CALL #endif #endif #if !defined(MPIAPI) #define MPIAPI #endif /* XXX describe */ #if defined(HAVE_CONFIG_H) #include "config/config.h" #elif defined(MPICH3) #include "config/mpich3.h" #elif defined(MPICH2) #include "config/mpich2.h" #elif defined(OPEN_MPI) #include "config/openmpi.h" #else /* Unknown MPI*/ #include "config/unknown.h" #endif #ifdef PyMPI_MISSING_MPI_Type_create_f90_integer #undef PyMPI_HAVE_MPI_Type_create_f90_integer #endif #ifdef PyMPI_MISSING_MPI_Type_create_f90_real #undef PyMPI_HAVE_MPI_Type_create_f90_real #endif #ifdef PyMPI_MISSING_MPI_Type_create_f90_complex #undef PyMPI_HAVE_MPI_Type_create_f90_complex #endif mpi4py_1.3.1+hg20131106.orig/src/config/mpi-11.h0000644000000000000000000002065012211706251016526 0ustar 00000000000000#define PyMPI_HAVE_MPI_UNDEFINED 1 #define PyMPI_HAVE_MPI_ANY_SOURCE 1 #define PyMPI_HAVE_MPI_ANY_TAG 1 #define PyMPI_HAVE_MPI_PROC_NULL 1 #define PyMPI_HAVE_MPI_Aint 1 #define PyMPI_HAVE_MPI_Datatype 1 #define PyMPI_HAVE_MPI_DATATYPE_NULL 1 #define PyMPI_HAVE_MPI_UB 1 #define PyMPI_HAVE_MPI_LB 1 #define PyMPI_HAVE_MPI_PACKED 1 #define PyMPI_HAVE_MPI_BYTE 1 #define PyMPI_HAVE_MPI_CHAR 1 #define PyMPI_HAVE_MPI_SHORT 1 #define PyMPI_HAVE_MPI_INT 1 #define PyMPI_HAVE_MPI_LONG 1 #define PyMPI_HAVE_MPI_LONG_LONG_INT 1 #define PyMPI_HAVE_MPI_UNSIGNED_CHAR 1 #define PyMPI_HAVE_MPI_UNSIGNED_SHORT 1 #define PyMPI_HAVE_MPI_UNSIGNED 1 #define PyMPI_HAVE_MPI_UNSIGNED_LONG 1 #define PyMPI_HAVE_MPI_FLOAT 1 
#define PyMPI_HAVE_MPI_DOUBLE 1 #define PyMPI_HAVE_MPI_LONG_DOUBLE 1 #define PyMPI_HAVE_MPI_SHORT_INT 1 #define PyMPI_HAVE_MPI_2INT 1 #define PyMPI_HAVE_MPI_LONG_INT 1 #define PyMPI_HAVE_MPI_FLOAT_INT 1 #define PyMPI_HAVE_MPI_DOUBLE_INT 1 #define PyMPI_HAVE_MPI_LONG_DOUBLE_INT 1 #define PyMPI_HAVE_MPI_CHARACTER 1 #define PyMPI_HAVE_MPI_LOGICAL 1 #define PyMPI_HAVE_MPI_INTEGER 1 #define PyMPI_HAVE_MPI_REAL 1 #define PyMPI_HAVE_MPI_DOUBLE_PRECISION 1 #define PyMPI_HAVE_MPI_COMPLEX 1 #define PyMPI_HAVE_MPI_DOUBLE_COMPLEX 1 #define PyMPI_HAVE_MPI_INTEGER1 1 #define PyMPI_HAVE_MPI_INTEGER2 1 #define PyMPI_HAVE_MPI_INTEGER4 1 #define PyMPI_HAVE_MPI_REAL2 1 #define PyMPI_HAVE_MPI_REAL4 1 #define PyMPI_HAVE_MPI_REAL8 1 #define PyMPI_HAVE_MPI_BOTTOM 1 #define PyMPI_HAVE_MPI_Address 1 #define PyMPI_HAVE_MPI_Type_size 1 #define PyMPI_HAVE_MPI_Type_extent 1 #define PyMPI_HAVE_MPI_Type_lb 1 #define PyMPI_HAVE_MPI_Type_ub 1 #define PyMPI_HAVE_MPI_Type_dup 1 #define PyMPI_HAVE_MPI_Type_contiguous 1 #define PyMPI_HAVE_MPI_Type_vector 1 #define PyMPI_HAVE_MPI_Type_indexed 1 #define PyMPI_HAVE_MPI_Type_hvector 1 #define PyMPI_HAVE_MPI_Type_hindexed 1 #define PyMPI_HAVE_MPI_Type_struct 1 #define PyMPI_HAVE_MPI_Type_commit 1 #define PyMPI_HAVE_MPI_Type_free 1 #define PyMPI_HAVE_MPI_Pack 1 #define PyMPI_HAVE_MPI_Unpack 1 #define PyMPI_HAVE_MPI_Pack_size 1 #define PyMPI_HAVE_MPI_Status 1 #define PyMPI_HAVE_MPI_Get_count 1 #define PyMPI_HAVE_MPI_Get_elements 1 #define PyMPI_HAVE_MPI_Test_cancelled 1 #define PyMPI_HAVE_MPI_Request 1 #define PyMPI_HAVE_MPI_REQUEST_NULL 1 #define PyMPI_HAVE_MPI_Request_free 1 #define PyMPI_HAVE_MPI_Wait 1 #define PyMPI_HAVE_MPI_Test 1 #define PyMPI_HAVE_MPI_Request_get_status 1 #define PyMPI_HAVE_MPI_Cancel 1 #define PyMPI_HAVE_MPI_Waitany 1 #define PyMPI_HAVE_MPI_Testany 1 #define PyMPI_HAVE_MPI_Waitall 1 #define PyMPI_HAVE_MPI_Testall 1 #define PyMPI_HAVE_MPI_Waitsome 1 #define PyMPI_HAVE_MPI_Testsome 1 #define PyMPI_HAVE_MPI_Start 1 #define PyMPI_HAVE_MPI_Startall 1 #define PyMPI_HAVE_MPI_Op 1 #define PyMPI_HAVE_MPI_OP_NULL 1 #define PyMPI_HAVE_MPI_MAX 1 #define PyMPI_HAVE_MPI_MIN 1 #define PyMPI_HAVE_MPI_SUM 1 #define PyMPI_HAVE_MPI_PROD 1 #define PyMPI_HAVE_MPI_LAND 1 #define PyMPI_HAVE_MPI_BAND 1 #define PyMPI_HAVE_MPI_LOR 1 #define PyMPI_HAVE_MPI_BOR 1 #define PyMPI_HAVE_MPI_LXOR 1 #define PyMPI_HAVE_MPI_BXOR 1 #define PyMPI_HAVE_MPI_MAXLOC 1 #define PyMPI_HAVE_MPI_MINLOC 1 #define PyMPI_HAVE_MPI_REPLACE 1 #define PyMPI_HAVE_MPI_Op_free 1 #define PyMPI_HAVE_MPI_User_function 1 #define PyMPI_HAVE_MPI_Op_create 1 #define PyMPI_HAVE_MPI_Group 1 #define PyMPI_HAVE_MPI_GROUP_NULL 1 #define PyMPI_HAVE_MPI_GROUP_EMPTY 1 #define PyMPI_HAVE_MPI_Group_free 1 #define PyMPI_HAVE_MPI_Group_size 1 #define PyMPI_HAVE_MPI_Group_rank 1 #define PyMPI_HAVE_MPI_Group_translate_ranks 1 #define PyMPI_HAVE_MPI_IDENT 1 #define PyMPI_HAVE_MPI_CONGRUENT 1 #define PyMPI_HAVE_MPI_SIMILAR 1 #define PyMPI_HAVE_MPI_UNEQUAL 1 #define PyMPI_HAVE_MPI_Group_compare 1 #define PyMPI_HAVE_MPI_Group_union 1 #define PyMPI_HAVE_MPI_Group_intersection 1 #define PyMPI_HAVE_MPI_Group_difference 1 #define PyMPI_HAVE_MPI_Group_incl 1 #define PyMPI_HAVE_MPI_Group_excl 1 #define PyMPI_HAVE_MPI_Group_range_incl 1 #define PyMPI_HAVE_MPI_Group_range_excl 1 #define PyMPI_HAVE_MPI_Comm 1 #define PyMPI_HAVE_MPI_COMM_NULL 1 #define PyMPI_HAVE_MPI_COMM_SELF 1 #define PyMPI_HAVE_MPI_COMM_WORLD 1 #define PyMPI_HAVE_MPI_Comm_free 1 #define PyMPI_HAVE_MPI_Comm_group 1 #define PyMPI_HAVE_MPI_Comm_size 1 #define 
PyMPI_HAVE_MPI_Comm_rank 1 #define PyMPI_HAVE_MPI_Comm_compare 1 #define PyMPI_HAVE_MPI_Topo_test 1 #define PyMPI_HAVE_MPI_Comm_test_inter 1 #define PyMPI_HAVE_MPI_Abort 1 #define PyMPI_HAVE_MPI_Send 1 #define PyMPI_HAVE_MPI_Recv 1 #define PyMPI_HAVE_MPI_Sendrecv 1 #define PyMPI_HAVE_MPI_Sendrecv_replace 1 #define PyMPI_HAVE_MPI_BSEND_OVERHEAD 1 #define PyMPI_HAVE_MPI_Buffer_attach 1 #define PyMPI_HAVE_MPI_Buffer_detach 1 #define PyMPI_HAVE_MPI_Bsend 1 #define PyMPI_HAVE_MPI_Ssend 1 #define PyMPI_HAVE_MPI_Rsend 1 #define PyMPI_HAVE_MPI_Isend 1 #define PyMPI_HAVE_MPI_Ibsend 1 #define PyMPI_HAVE_MPI_Issend 1 #define PyMPI_HAVE_MPI_Irsend 1 #define PyMPI_HAVE_MPI_Irecv 1 #define PyMPI_HAVE_MPI_Send_init 1 #define PyMPI_HAVE_MPI_Bsend_init 1 #define PyMPI_HAVE_MPI_Ssend_init 1 #define PyMPI_HAVE_MPI_Rsend_init 1 #define PyMPI_HAVE_MPI_Recv_init 1 #define PyMPI_HAVE_MPI_Probe 1 #define PyMPI_HAVE_MPI_Iprobe 1 #define PyMPI_HAVE_MPI_Barrier 1 #define PyMPI_HAVE_MPI_Bcast 1 #define PyMPI_HAVE_MPI_Gather 1 #define PyMPI_HAVE_MPI_Gatherv 1 #define PyMPI_HAVE_MPI_Scatter 1 #define PyMPI_HAVE_MPI_Scatterv 1 #define PyMPI_HAVE_MPI_Allgather 1 #define PyMPI_HAVE_MPI_Allgatherv 1 #define PyMPI_HAVE_MPI_Alltoall 1 #define PyMPI_HAVE_MPI_Alltoallv 1 #define PyMPI_HAVE_MPI_Reduce 1 #define PyMPI_HAVE_MPI_Allreduce 1 #define PyMPI_HAVE_MPI_Reduce_scatter 1 #define PyMPI_HAVE_MPI_Scan 1 #define PyMPI_HAVE_MPI_Comm_dup 1 #define PyMPI_HAVE_MPI_Comm_create 1 #define PyMPI_HAVE_MPI_Comm_split 1 #define PyMPI_HAVE_MPI_CART 1 #define PyMPI_HAVE_MPI_Cart_create 1 #define PyMPI_HAVE_MPI_Cartdim_get 1 #define PyMPI_HAVE_MPI_Cart_get 1 #define PyMPI_HAVE_MPI_Cart_rank 1 #define PyMPI_HAVE_MPI_Cart_coords 1 #define PyMPI_HAVE_MPI_Cart_shift 1 #define PyMPI_HAVE_MPI_Cart_sub 1 #define PyMPI_HAVE_MPI_Cart_map 1 #define PyMPI_HAVE_MPI_Dims_create 1 #define PyMPI_HAVE_MPI_GRAPH 1 #define PyMPI_HAVE_MPI_Graph_create 1 #define PyMPI_HAVE_MPI_Graphdims_get 1 #define PyMPI_HAVE_MPI_Graph_get 1 #define PyMPI_HAVE_MPI_Graph_map 1 #define PyMPI_HAVE_MPI_Graph_neighbors_count 1 #define PyMPI_HAVE_MPI_Graph_neighbors 1 #define PyMPI_HAVE_MPI_Intercomm_create 1 #define PyMPI_HAVE_MPI_Comm_remote_group 1 #define PyMPI_HAVE_MPI_Comm_remote_size 1 #define PyMPI_HAVE_MPI_Intercomm_merge 1 #define PyMPI_HAVE_MPI_Errhandler_get 1 #define PyMPI_HAVE_MPI_Errhandler_set 1 #define PyMPI_HAVE_MPI_Handler_function 1 #define PyMPI_HAVE_MPI_Errhandler_create 1 #define PyMPI_HAVE_MPI_Init 1 #define PyMPI_HAVE_MPI_Finalize 1 #define PyMPI_HAVE_MPI_Initialized 1 #define PyMPI_HAVE_MPI_Finalized 1 #define PyMPI_HAVE_MPI_MAX_PROCESSOR_NAME 1 #define PyMPI_HAVE_MPI_Get_processor_name 1 #define PyMPI_HAVE_MPI_Wtime 1 #define PyMPI_HAVE_MPI_Wtick 1 #define PyMPI_HAVE_MPI_Pcontrol 1 #define PyMPI_HAVE_MPI_Errhandler 1 #define PyMPI_HAVE_MPI_ERRHANDLER_NULL 1 #define PyMPI_HAVE_MPI_ERRORS_RETURN 1 #define PyMPI_HAVE_MPI_ERRORS_ARE_FATAL 1 #define PyMPI_HAVE_MPI_Errhandler_free 1 #define PyMPI_HAVE_MPI_KEYVAL_INVALID 1 #define PyMPI_HAVE_MPI_TAG_UB 1 #define PyMPI_HAVE_MPI_HOST 1 #define PyMPI_HAVE_MPI_IO 1 #define PyMPI_HAVE_MPI_WTIME_IS_GLOBAL 1 #define PyMPI_HAVE_MPI_Attr_get 1 #define PyMPI_HAVE_MPI_Attr_put 1 #define PyMPI_HAVE_MPI_Attr_delete 1 #define PyMPI_HAVE_MPI_Copy_function 1 #define PyMPI_HAVE_MPI_Delete_function 1 #define PyMPI_HAVE_MPI_DUP_FN 1 #define PyMPI_HAVE_MPI_NULL_COPY_FN 1 #define PyMPI_HAVE_MPI_NULL_DELETE_FN 1 #define PyMPI_HAVE_MPI_Keyval_create 1 #define PyMPI_HAVE_MPI_Keyval_free 1 #define PyMPI_HAVE_MPI_SUCCESS 1 #define 
PyMPI_HAVE_MPI_ERR_LASTCODE 1 #define PyMPI_HAVE_MPI_ERR_COMM 1 #define PyMPI_HAVE_MPI_ERR_GROUP 1 #define PyMPI_HAVE_MPI_ERR_TYPE 1 #define PyMPI_HAVE_MPI_ERR_REQUEST 1 #define PyMPI_HAVE_MPI_ERR_OP 1 #define PyMPI_HAVE_MPI_ERR_BUFFER 1 #define PyMPI_HAVE_MPI_ERR_COUNT 1 #define PyMPI_HAVE_MPI_ERR_TAG 1 #define PyMPI_HAVE_MPI_ERR_RANK 1 #define PyMPI_HAVE_MPI_ERR_ROOT 1 #define PyMPI_HAVE_MPI_ERR_TRUNCATE 1 #define PyMPI_HAVE_MPI_ERR_IN_STATUS 1 #define PyMPI_HAVE_MPI_ERR_PENDING 1 #define PyMPI_HAVE_MPI_ERR_TOPOLOGY 1 #define PyMPI_HAVE_MPI_ERR_DIMS 1 #define PyMPI_HAVE_MPI_ERR_ARG 1 #define PyMPI_HAVE_MPI_ERR_OTHER 1 #define PyMPI_HAVE_MPI_ERR_UNKNOWN 1 #define PyMPI_HAVE_MPI_ERR_INTERN 1 #define PyMPI_HAVE_MPI_MAX_ERROR_STRING 1 #define PyMPI_HAVE_MPI_Error_class 1 #define PyMPI_HAVE_MPI_Error_string 1 mpi4py_1.3.1+hg20131106.orig/src/config/mpi-12.h0000644000000000000000000000021212211706251016517 0ustar 00000000000000#if defined(MPI_VERSION) #define PyMPI_HAVE_MPI_VERSION 1 #define PyMPI_HAVE_MPI_SUBVERSION 1 #define PyMPI_HAVE_MPI_Get_version 1 #endif mpi4py_1.3.1+hg20131106.orig/src/config/mpi-20.h0000644000000000000000000003121312211706251016523 0ustar 00000000000000#if defined(MPI_VERSION) #if (MPI_VERSION >= 2) #define PyMPI_HAVE_MPI_ERR_KEYVAL 1 #define PyMPI_HAVE_MPI_MAX_OBJECT_NAME 1 #define PyMPI_HAVE_MPI_WCHAR 1 #define PyMPI_HAVE_MPI_SIGNED_CHAR 1 #define PyMPI_HAVE_MPI_LONG_LONG 1 #define PyMPI_HAVE_MPI_UNSIGNED_LONG_LONG 1 #define PyMPI_HAVE_MPI_Type_dup 1 #define PyMPI_HAVE_MPI_Type_create_indexed_block 1 #define PyMPI_HAVE_MPI_ORDER_C 1 #define PyMPI_HAVE_MPI_ORDER_FORTRAN 1 #define PyMPI_HAVE_MPI_Type_create_subarray 1 #define PyMPI_HAVE_MPI_DISTRIBUTE_NONE 1 #define PyMPI_HAVE_MPI_DISTRIBUTE_BLOCK 1 #define PyMPI_HAVE_MPI_DISTRIBUTE_CYCLIC 1 #define PyMPI_HAVE_MPI_DISTRIBUTE_DFLT_DARG 1 #define PyMPI_HAVE_MPI_Type_create_darray 1 #define PyMPI_HAVE_MPI_Get_address 1 #define PyMPI_HAVE_MPI_Type_create_hvector 1 #define PyMPI_HAVE_MPI_Type_create_hindexed 1 #define PyMPI_HAVE_MPI_Type_create_struct 1 #define PyMPI_HAVE_MPI_Type_get_extent 1 #define PyMPI_HAVE_MPI_Type_create_resized 1 #define PyMPI_HAVE_MPI_Type_get_true_extent 1 #define PyMPI_HAVE_MPI_Type_create_f90_integer 1 #define PyMPI_HAVE_MPI_Type_create_f90_real 1 #define PyMPI_HAVE_MPI_Type_create_f90_complex 1 #define PyMPI_HAVE_MPI_TYPECLASS_INTEGER 1 #define PyMPI_HAVE_MPI_TYPECLASS_REAL 1 #define PyMPI_HAVE_MPI_TYPECLASS_COMPLEX 1 #define PyMPI_HAVE_MPI_Type_match_size 1 #define PyMPI_HAVE_MPI_Pack_external 1 #define PyMPI_HAVE_MPI_Unpack_external 1 #define PyMPI_HAVE_MPI_Pack_external_size 1 #define PyMPI_HAVE_MPI_COMBINER_NAMED 1 #define PyMPI_HAVE_MPI_COMBINER_DUP 1 #define PyMPI_HAVE_MPI_COMBINER_CONTIGUOUS 1 #define PyMPI_HAVE_MPI_COMBINER_VECTOR 1 #define PyMPI_HAVE_MPI_COMBINER_HVECTOR_INTEGER 1 #define PyMPI_HAVE_MPI_COMBINER_HVECTOR 1 #define PyMPI_HAVE_MPI_COMBINER_INDEXED 1 #define PyMPI_HAVE_MPI_COMBINER_HINDEXED_INTEGER 1 #define PyMPI_HAVE_MPI_COMBINER_HINDEXED 1 #define PyMPI_HAVE_MPI_COMBINER_INDEXED_BLOCK 1 #define PyMPI_HAVE_MPI_COMBINER_STRUCT_INTEGER 1 #define PyMPI_HAVE_MPI_COMBINER_STRUCT 1 #define PyMPI_HAVE_MPI_COMBINER_SUBARRAY 1 #define PyMPI_HAVE_MPI_COMBINER_DARRAY 1 #define PyMPI_HAVE_MPI_COMBINER_F90_REAL 1 #define PyMPI_HAVE_MPI_COMBINER_F90_COMPLEX 1 #define PyMPI_HAVE_MPI_COMBINER_F90_INTEGER 1 #define PyMPI_HAVE_MPI_COMBINER_RESIZED 1 #define PyMPI_HAVE_MPI_Type_get_envelope 1 #define PyMPI_HAVE_MPI_Type_get_contents 1 #define PyMPI_HAVE_MPI_Type_get_name 1 #define 
PyMPI_HAVE_MPI_Type_set_name 1 #define PyMPI_HAVE_MPI_Type_get_attr 1 #define PyMPI_HAVE_MPI_Type_set_attr 1 #define PyMPI_HAVE_MPI_Type_delete_attr 1 #define PyMPI_HAVE_MPI_Type_copy_attr_function 1 #define PyMPI_HAVE_MPI_Type_delete_attr_function 1 #define PyMPI_HAVE_MPI_TYPE_NULL_COPY_FN 1 #define PyMPI_HAVE_MPI_TYPE_DUP_FN 1 #define PyMPI_HAVE_MPI_TYPE_NULL_DELETE_FN 1 #define PyMPI_HAVE_MPI_Type_create_keyval 1 #define PyMPI_HAVE_MPI_Type_free_keyval 1 #define PyMPI_HAVE_MPI_STATUS_IGNORE 1 #define PyMPI_HAVE_MPI_STATUSES_IGNORE 1 #define PyMPI_HAVE_MPI_Status_set_elements 1 #define PyMPI_HAVE_MPI_Status_set_cancelled 1 #define PyMPI_HAVE_MPI_Request_get_status 1 #define PyMPI_HAVE_MPI_Grequest_cancel_function 1 #define PyMPI_HAVE_MPI_Grequest_free_function 1 #define PyMPI_HAVE_MPI_Grequest_query_function 1 #define PyMPI_HAVE_MPI_Grequest_start 1 #define PyMPI_HAVE_MPI_Grequest_complete 1 #define PyMPI_HAVE_MPI_ROOT 1 #define PyMPI_HAVE_MPI_IN_PLACE 1 #define PyMPI_HAVE_MPI_Alltoallw 1 #define PyMPI_HAVE_MPI_Exscan 1 #define PyMPI_HAVE_MPI_Comm_get_errhandler 1 #define PyMPI_HAVE_MPI_Comm_set_errhandler 1 #define PyMPI_HAVE_MPI_Comm_errhandler_fn 1 #define PyMPI_HAVE_MPI_Comm_errhandler_function 1 #define PyMPI_HAVE_MPI_Comm_create_errhandler 1 #define PyMPI_HAVE_MPI_Comm_call_errhandler 1 #define PyMPI_HAVE_MPI_Comm_get_name 1 #define PyMPI_HAVE_MPI_Comm_set_name 1 #define PyMPI_HAVE_MPI_Comm_get_attr 1 #define PyMPI_HAVE_MPI_Comm_set_attr 1 #define PyMPI_HAVE_MPI_Comm_delete_attr 1 #define PyMPI_HAVE_MPI_Comm_copy_attr_function 1 #define PyMPI_HAVE_MPI_Comm_delete_attr_function 1 #define PyMPI_HAVE_MPI_COMM_DUP_FN 1 #define PyMPI_HAVE_MPI_COMM_NULL_COPY_FN 1 #define PyMPI_HAVE_MPI_COMM_NULL_DELETE_FN 1 #define PyMPI_HAVE_MPI_Comm_create_keyval 1 #define PyMPI_HAVE_MPI_Comm_free_keyval 1 #define PyMPI_HAVE_MPI_MAX_PORT_NAME 1 #define PyMPI_HAVE_MPI_Open_port 1 #define PyMPI_HAVE_MPI_Close_port 1 #define PyMPI_HAVE_MPI_Publish_name 1 #define PyMPI_HAVE_MPI_Unpublish_name 1 #define PyMPI_HAVE_MPI_Lookup_name 1 #define PyMPI_HAVE_MPI_Comm_accept 1 #define PyMPI_HAVE_MPI_Comm_connect 1 #define PyMPI_HAVE_MPI_Comm_join 1 #define PyMPI_HAVE_MPI_Comm_disconnect 1 #define PyMPI_HAVE_MPI_ARGV_NULL 1 #define PyMPI_HAVE_MPI_ARGVS_NULL 1 #define PyMPI_HAVE_MPI_ERRCODES_IGNORE 1 #define PyMPI_HAVE_MPI_Comm_spawn 1 #define PyMPI_HAVE_MPI_Comm_spawn_multiple 1 #define PyMPI_HAVE_MPI_Comm_get_parent 1 #define PyMPI_HAVE_MPI_UNIVERSE_SIZE 1 #define PyMPI_HAVE_MPI_APPNUM 1 #define PyMPI_HAVE_MPI_ERR_SPAWN 1 #define PyMPI_HAVE_MPI_ERR_PORT 1 #define PyMPI_HAVE_MPI_ERR_SERVICE 1 #define PyMPI_HAVE_MPI_ERR_NAME 1 #define PyMPI_HAVE_MPI_Alloc_mem 1 #define PyMPI_HAVE_MPI_Free_mem 1 #define PyMPI_HAVE_MPI_ERR_NO_MEM 1 #define PyMPI_HAVE_MPI_Info 1 #define PyMPI_HAVE_MPI_INFO_NULL 1 #define PyMPI_HAVE_MPI_Info_free 1 #define PyMPI_HAVE_MPI_Info_create 1 #define PyMPI_HAVE_MPI_Info_dup 1 #define PyMPI_HAVE_MPI_MAX_INFO_KEY 1 #define PyMPI_HAVE_MPI_MAX_INFO_VAL 1 #define PyMPI_HAVE_MPI_Info_get 1 #define PyMPI_HAVE_MPI_Info_set 1 #define PyMPI_HAVE_MPI_Info_delete 1 #define PyMPI_HAVE_MPI_Info_get_nkeys 1 #define PyMPI_HAVE_MPI_Info_get_nthkey 1 #define PyMPI_HAVE_MPI_Info_get_valuelen 1 #define PyMPI_HAVE_MPI_ERR_INFO 1 #define PyMPI_HAVE_MPI_ERR_INFO_KEY 1 #define PyMPI_HAVE_MPI_ERR_INFO_VALUE 1 #define PyMPI_HAVE_MPI_ERR_INFO_NOKEY 1 #define PyMPI_HAVE_MPI_Win 1 #define PyMPI_HAVE_MPI_WIN_NULL 1 #define PyMPI_HAVE_MPI_Win_free 1 #define PyMPI_HAVE_MPI_Win_create 1 #define PyMPI_HAVE_MPI_Win_get_group 1 
#define PyMPI_HAVE_MPI_Get 1 #define PyMPI_HAVE_MPI_Put 1 #define PyMPI_HAVE_MPI_REPLACE 1 #define PyMPI_HAVE_MPI_Accumulate 1 #define PyMPI_HAVE_MPI_MODE_NOCHECK 1 #define PyMPI_HAVE_MPI_MODE_NOSTORE 1 #define PyMPI_HAVE_MPI_MODE_NOPUT 1 #define PyMPI_HAVE_MPI_MODE_NOPRECEDE 1 #define PyMPI_HAVE_MPI_MODE_NOSUCCEED 1 #define PyMPI_HAVE_MPI_Win_fence 1 #define PyMPI_HAVE_MPI_Win_post 1 #define PyMPI_HAVE_MPI_Win_start 1 #define PyMPI_HAVE_MPI_Win_complete 1 #define PyMPI_HAVE_MPI_Win_wait 1 #define PyMPI_HAVE_MPI_Win_test 1 #define PyMPI_HAVE_MPI_LOCK_EXCLUSIVE 1 #define PyMPI_HAVE_MPI_LOCK_SHARED 1 #define PyMPI_HAVE_MPI_Win_lock 1 #define PyMPI_HAVE_MPI_Win_unlock 1 #define PyMPI_HAVE_MPI_Win_get_errhandler 1 #define PyMPI_HAVE_MPI_Win_set_errhandler 1 #define PyMPI_HAVE_MPI_Win_errhandler_fn 1 #define PyMPI_HAVE_MPI_Win_errhandler_function 1 #define PyMPI_HAVE_MPI_Win_create_errhandler 1 #define PyMPI_HAVE_MPI_Win_call_errhandler 1 #define PyMPI_HAVE_MPI_Win_get_name 1 #define PyMPI_HAVE_MPI_Win_set_name 1 #define PyMPI_HAVE_MPI_WIN_BASE 1 #define PyMPI_HAVE_MPI_WIN_SIZE 1 #define PyMPI_HAVE_MPI_WIN_DISP_UNIT 1 #define PyMPI_HAVE_MPI_Win_get_attr 1 #define PyMPI_HAVE_MPI_Win_set_attr 1 #define PyMPI_HAVE_MPI_Win_delete_attr 1 #define PyMPI_HAVE_MPI_Win_copy_attr_function 1 #define PyMPI_HAVE_MPI_Win_delete_attr_function 1 #define PyMPI_HAVE_MPI_WIN_DUP_FN 1 #define PyMPI_HAVE_MPI_WIN_NULL_COPY_FN 1 #define PyMPI_HAVE_MPI_WIN_NULL_DELETE_FN 1 #define PyMPI_HAVE_MPI_Win_create_keyval 1 #define PyMPI_HAVE_MPI_Win_free_keyval 1 #define PyMPI_HAVE_MPI_ERR_WIN 1 #define PyMPI_HAVE_MPI_ERR_BASE 1 #define PyMPI_HAVE_MPI_ERR_SIZE 1 #define PyMPI_HAVE_MPI_ERR_DISP 1 #define PyMPI_HAVE_MPI_ERR_ASSERT 1 #define PyMPI_HAVE_MPI_ERR_LOCKTYPE 1 #define PyMPI_HAVE_MPI_ERR_RMA_CONFLICT 1 #define PyMPI_HAVE_MPI_ERR_RMA_SYNC 1 #define PyMPI_HAVE_MPI_Offset 1 #define PyMPI_HAVE_MPI_File 1 #define PyMPI_HAVE_MPI_FILE_NULL 1 #define PyMPI_HAVE_MPI_MODE_RDONLY 1 #define PyMPI_HAVE_MPI_MODE_RDWR 1 #define PyMPI_HAVE_MPI_MODE_WRONLY 1 #define PyMPI_HAVE_MPI_MODE_CREATE 1 #define PyMPI_HAVE_MPI_MODE_EXCL 1 #define PyMPI_HAVE_MPI_MODE_DELETE_ON_CLOSE 1 #define PyMPI_HAVE_MPI_MODE_UNIQUE_OPEN 1 #define PyMPI_HAVE_MPI_MODE_APPEND 1 #define PyMPI_HAVE_MPI_MODE_SEQUENTIAL 1 #define PyMPI_HAVE_MPI_File_open 1 #define PyMPI_HAVE_MPI_File_close 1 #define PyMPI_HAVE_MPI_File_delete 1 #define PyMPI_HAVE_MPI_File_set_size 1 #define PyMPI_HAVE_MPI_File_preallocate 1 #define PyMPI_HAVE_MPI_File_get_size 1 #define PyMPI_HAVE_MPI_File_get_group 1 #define PyMPI_HAVE_MPI_File_get_amode 1 #define PyMPI_HAVE_MPI_File_set_info 1 #define PyMPI_HAVE_MPI_File_get_info 1 #define PyMPI_HAVE_MPI_MAX_DATAREP_STRING 1 #define PyMPI_HAVE_MPI_File_get_view 1 #define PyMPI_HAVE_MPI_File_set_view 1 #define PyMPI_HAVE_MPI_File_read_at 1 #define PyMPI_HAVE_MPI_File_read_at_all 1 #define PyMPI_HAVE_MPI_File_write_at 1 #define PyMPI_HAVE_MPI_File_write_at_all 1 #define PyMPI_HAVE_MPI_File_iread_at 1 #define PyMPI_HAVE_MPI_File_iwrite_at 1 #define PyMPI_HAVE_MPI_SEEK_SET 1 #define PyMPI_HAVE_MPI_SEEK_CUR 1 #define PyMPI_HAVE_MPI_SEEK_END 1 #define PyMPI_HAVE_MPI_DISPLACEMENT_CURRENT 1 #define PyMPI_HAVE_MPI_File_seek 1 #define PyMPI_HAVE_MPI_File_get_position 1 #define PyMPI_HAVE_MPI_File_get_byte_offset 1 #define PyMPI_HAVE_MPI_File_read 1 #define PyMPI_HAVE_MPI_File_read_all 1 #define PyMPI_HAVE_MPI_File_write 1 #define PyMPI_HAVE_MPI_File_write_all 1 #define PyMPI_HAVE_MPI_File_iread 1 #define PyMPI_HAVE_MPI_File_iwrite 1 #define 
PyMPI_HAVE_MPI_File_read_shared 1 #define PyMPI_HAVE_MPI_File_write_shared 1 #define PyMPI_HAVE_MPI_File_iread_shared 1 #define PyMPI_HAVE_MPI_File_iwrite_shared 1 #define PyMPI_HAVE_MPI_File_read_ordered 1 #define PyMPI_HAVE_MPI_File_write_ordered 1 #define PyMPI_HAVE_MPI_File_seek_shared 1 #define PyMPI_HAVE_MPI_File_get_position_shared 1 #define PyMPI_HAVE_MPI_File_read_at_all_begin 1 #define PyMPI_HAVE_MPI_File_read_at_all_end 1 #define PyMPI_HAVE_MPI_File_write_at_all_begin 1 #define PyMPI_HAVE_MPI_File_write_at_all_end 1 #define PyMPI_HAVE_MPI_File_read_all_begin 1 #define PyMPI_HAVE_MPI_File_read_all_end 1 #define PyMPI_HAVE_MPI_File_write_all_begin 1 #define PyMPI_HAVE_MPI_File_write_all_end 1 #define PyMPI_HAVE_MPI_File_read_ordered_begin 1 #define PyMPI_HAVE_MPI_File_read_ordered_end 1 #define PyMPI_HAVE_MPI_File_write_ordered_begin 1 #define PyMPI_HAVE_MPI_File_write_ordered_end 1 #define PyMPI_HAVE_MPI_File_get_type_extent 1 #define PyMPI_HAVE_MPI_File_set_atomicity 1 #define PyMPI_HAVE_MPI_File_get_atomicity 1 #define PyMPI_HAVE_MPI_File_sync 1 #define PyMPI_HAVE_MPI_File_get_errhandler 1 #define PyMPI_HAVE_MPI_File_set_errhandler 1 #define PyMPI_HAVE_MPI_File_errhandler_fn 1 #define PyMPI_HAVE_MPI_File_errhandler_function 1 #define PyMPI_HAVE_MPI_File_create_errhandler 1 #define PyMPI_HAVE_MPI_File_call_errhandler 1 #define PyMPI_HAVE_MPI_Datarep_conversion_function 1 #define PyMPI_HAVE_MPI_Datarep_extent_function 1 #define PyMPI_HAVE_MPI_CONVERSION_FN_NULL 1 #define PyMPI_HAVE_MPI_MAX_DATAREP_STRING 1 #define PyMPI_HAVE_MPI_Register_datarep 1 #define PyMPI_HAVE_MPI_ERR_FILE 1 #define PyMPI_HAVE_MPI_ERR_NOT_SAME 1 #define PyMPI_HAVE_MPI_ERR_BAD_FILE 1 #define PyMPI_HAVE_MPI_ERR_NO_SUCH_FILE 1 #define PyMPI_HAVE_MPI_ERR_FILE_EXISTS 1 #define PyMPI_HAVE_MPI_ERR_FILE_IN_USE 1 #define PyMPI_HAVE_MPI_ERR_AMODE 1 #define PyMPI_HAVE_MPI_ERR_ACCESS 1 #define PyMPI_HAVE_MPI_ERR_READ_ONLY 1 #define PyMPI_HAVE_MPI_ERR_NO_SPACE 1 #define PyMPI_HAVE_MPI_ERR_QUOTA 1 #define PyMPI_HAVE_MPI_ERR_UNSUPPORTED_DATAREP 1 #define PyMPI_HAVE_MPI_ERR_UNSUPPORTED_OPERATION 1 #define PyMPI_HAVE_MPI_ERR_CONVERSION 1 #define PyMPI_HAVE_MPI_ERR_DUP_DATAREP 1 #define PyMPI_HAVE_MPI_ERR_IO 1 #define PyMPI_HAVE_MPI_LASTUSEDCODE 1 #define PyMPI_HAVE_MPI_Add_error_class 1 #define PyMPI_HAVE_MPI_Add_error_code 1 #define PyMPI_HAVE_MPI_Add_error_string 1 #define PyMPI_HAVE_MPI_THREAD_SINGLE 1 #define PyMPI_HAVE_MPI_THREAD_FUNNELED 1 #define PyMPI_HAVE_MPI_THREAD_SERIALIZED 1 #define PyMPI_HAVE_MPI_THREAD_MULTIPLE 1 #define PyMPI_HAVE_MPI_Init_thread 1 #define PyMPI_HAVE_MPI_Query_thread 1 #define PyMPI_HAVE_MPI_Is_thread_main 1 #define PyMPI_HAVE_MPI_Fint 1 #define PyMPI_HAVE_MPI_F_STATUS_IGNORE 1 #define PyMPI_HAVE_MPI_F_STATUSES_IGNORE 1 #define PyMPI_HAVE_MPI_Status_c2f 1 #define PyMPI_HAVE_MPI_Status_f2c 1 #define PyMPI_HAVE_MPI_Type_c2f 1 #define PyMPI_HAVE_MPI_Request_c2f 1 #define PyMPI_HAVE_MPI_Op_c2f 1 #define PyMPI_HAVE_MPI_Info_c2f 1 #define PyMPI_HAVE_MPI_Group_c2f 1 #define PyMPI_HAVE_MPI_Comm_c2f 1 #define PyMPI_HAVE_MPI_Win_c2f 1 #define PyMPI_HAVE_MPI_File_c2f 1 #define PyMPI_HAVE_MPI_Errhandler_c2f 1 #define PyMPI_HAVE_MPI_Type_f2c 1 #define PyMPI_HAVE_MPI_Request_f2c 1 #define PyMPI_HAVE_MPI_Op_f2c 1 #define PyMPI_HAVE_MPI_Info_f2c 1 #define PyMPI_HAVE_MPI_Group_f2c 1 #define PyMPI_HAVE_MPI_Comm_f2c 1 #define PyMPI_HAVE_MPI_Win_f2c 1 #define PyMPI_HAVE_MPI_File_f2c 1 #define PyMPI_HAVE_MPI_Errhandler_f2c 1 #endif #endif 
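The PyMPI_HAVE_* macros in the config header above (and in the mpi-22.h and mpi-30.h headers that follow) only record which symbols the target MPI implementation is expected to provide; other parts of the package, notably src/fallback.h further down in this archive, test these guards with the preprocessor and substitute a replacement when a symbol is missing. Below is a minimal sketch of that consumption pattern, assuming <mpi.h> has already been included; the stub name and its error-returning body are hypothetical and only illustrate the shape of the mechanism, not mpi4py's actual fallback:

#ifndef PyMPI_HAVE_MPI_Exscan               /* guard set (or not) by the headers above */
static int PyMPI_Exscan_stub(void *sendbuf, void *recvbuf, int count,
                             MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
{
  /* hypothetical stub: silence unused-parameter warnings and
     report the call as unavailable */
  (void)sendbuf; (void)recvbuf; (void)count;
  (void)datatype; (void)op; (void)comm;
  return MPI_ERR_INTERN;
}
#undef  MPI_Exscan
#define MPI_Exscan PyMPI_Exscan_stub
#endif

src/fallback.h applies this same #ifndef / #undef / #define pattern for symbols such as MPI_Get_version, MPI_Type_get_extent and MPI_Reduce_scatter_block.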
mpi4py_1.3.1+hg20131106.orig/src/config/mpi-22.h0000644000000000000000000000260712211706251016532 0ustar 00000000000000#if defined(MPI_VERSION) #if (MPI_VERSION > 2) || (MPI_VERSION == 2 && MPI_SUBVERSION >= 2) #define PyMPI_HAVE_MPI_AINT 1 #define PyMPI_HAVE_MPI_OFFSET 1 #define PyMPI_HAVE_MPI_C_BOOL 1 #define PyMPI_HAVE_MPI_INT8_T 1 #define PyMPI_HAVE_MPI_INT16_T 1 #define PyMPI_HAVE_MPI_INT32_T 1 #define PyMPI_HAVE_MPI_INT64_T 1 #define PyMPI_HAVE_MPI_UINT8_T 1 #define PyMPI_HAVE_MPI_UINT16_T 1 #define PyMPI_HAVE_MPI_UINT32_T 1 #define PyMPI_HAVE_MPI_UINT64_T 1 #define PyMPI_HAVE_MPI_C_COMPLEX 1 #define PyMPI_HAVE_MPI_C_FLOAT_COMPLEX 1 #define PyMPI_HAVE_MPI_C_DOUBLE_COMPLEX 1 #define PyMPI_HAVE_MPI_C_LONG_DOUBLE_COMPLEX 1 #define PyMPI_HAVE_MPI_INTEGER8 1 #define PyMPI_HAVE_MPI_INTEGER16 1 #define PyMPI_HAVE_MPI_REAL16 1 #define PyMPI_HAVE_MPI_COMPLEX4 1 #define PyMPI_HAVE_MPI_COMPLEX8 1 #define PyMPI_HAVE_MPI_COMPLEX16 1 #define PyMPI_HAVE_MPI_COMPLEX32 1 #define PyMPI_HAVE_MPI_Op_commutative 1 #define PyMPI_HAVE_MPI_Reduce_local 1 #define PyMPI_HAVE_MPI_Reduce_scatter_block 1 #define PyMPI_HAVE_MPI_DIST_GRAPH 1 #define PyMPI_HAVE_MPI_UNWEIGHTED 1 #define PyMPI_HAVE_MPI_Dist_graph_create_adjacent 1 #define PyMPI_HAVE_MPI_Dist_graph_create 1 #define PyMPI_HAVE_MPI_Dist_graph_neighbors_count 1 #define PyMPI_HAVE_MPI_Dist_graph_neighbors 1 #define PyMPI_HAVE_MPI_Comm_errhandler_function 1 #define PyMPI_HAVE_MPI_Win_errhandler_function 1 #define PyMPI_HAVE_MPI_File_errhandler_function 1 #endif #endif mpi4py_1.3.1+hg20131106.orig/src/config/mpi-30.h0000644000000000000000000000706712211706251016536 0ustar 00000000000000#if defined(MPI_VERSION) #if (MPI_VERSION >= 3) #define PyMPI_HAVE_MPI_Count 1 #define PyMPI_HAVE_MPI_COUNT 1 #define PyMPI_HAVE_MPI_Type_size_x 1 #define PyMPI_HAVE_MPI_Type_get_extent_x 1 #define PyMPI_HAVE_MPI_Type_get_true_extent_x 1 #define PyMPI_HAVE_MPI_Get_elements_x 1 #define PyMPI_HAVE_MPI_Status_set_elements_x 1 #define PyMPI_HAVE_MPI_COMBINER_HINDEXED_BLOCK #define PyMPI_HAVE_MPI_Type_create_hindexed_block 1 #define PyMPI_HAVE_MPI_NO_OP 1 #define PyMPI_HAVE_MPI_Message 1 #define PyMPI_HAVE_MPI_MESSAGE_NULL 1 #define PyMPI_HAVE_MPI_MESSAGE_NO_PROC 1 #define PyMPI_HAVE_MPI_Message_c2f 1 #define PyMPI_HAVE_MPI_Message_f2c 1 #define PyMPI_HAVE_MPI_Mprobe 1 #define PyMPI_HAVE_MPI_Improbe 1 #define PyMPI_HAVE_MPI_Mrecv 1 #define PyMPI_HAVE_MPI_Imrecv 1 #define PyMPI_HAVE_MPI_Neighbor_allgather 1 #define PyMPI_HAVE_MPI_Neighbor_allgatherv 1 #define PyMPI_HAVE_MPI_Neighbor_alltoall 1 #define PyMPI_HAVE_MPI_Neighbor_alltoallv 1 #define PyMPI_HAVE_MPI_Neighbor_alltoallw 1 #define PyMPI_HAVE_MPI_Ibarrier 1 #define PyMPI_HAVE_MPI_Ibcast 1 #define PyMPI_HAVE_MPI_Igather 1 #define PyMPI_HAVE_MPI_Igatherv 1 #define PyMPI_HAVE_MPI_Iscatter 1 #define PyMPI_HAVE_MPI_Iscatterv 1 #define PyMPI_HAVE_MPI_Iallgather 1 #define PyMPI_HAVE_MPI_Iallgatherv 1 #define PyMPI_HAVE_MPI_Ialltoall 1 #define PyMPI_HAVE_MPI_Ialltoallv 1 #define PyMPI_HAVE_MPI_Ialltoallw 1 #define PyMPI_HAVE_MPI_Ireduce 1 #define PyMPI_HAVE_MPI_Iallreduce 1 #define PyMPI_HAVE_MPI_Ireduce_scatter_block 1 #define PyMPI_HAVE_MPI_Ireduce_scatter 1 #define PyMPI_HAVE_MPI_Iscan 1 #define PyMPI_HAVE_MPI_Iexscan 1 #define PyMPI_HAVE_MPI_Ineighbor_allgather 1 #define PyMPI_HAVE_MPI_Ineighbor_allgatherv 1 #define PyMPI_HAVE_MPI_Ineighbor_alltoall 1 #define PyMPI_HAVE_MPI_Ineighbor_alltoallv 1 #define PyMPI_HAVE_MPI_Ineighbor_alltoallw 1 #define PyMPI_HAVE_MPI_WEIGHTS_EMPTY 1 #define PyMPI_HAVE_MPI_Comm_dup_with_info 1 #define 
PyMPI_HAVE_MPI_Comm_idup 1 #define PyMPI_HAVE_MPI_Comm_create_group 1 #define PyMPI_HAVE_MPI_COMM_TYPE_SHARED 1 #define PyMPI_HAVE_MPI_Comm_split_type 1 #define PyMPI_HAVE_MPI_Comm_set_info 1 #define PyMPI_HAVE_MPI_Comm_get_info 1 #define PyMPI_HAVE_MPI_WIN_CREATE_FLAVOR 1 #define PyMPI_HAVE_MPI_WIN_FLAVOR_CREATE 1 #define PyMPI_HAVE_MPI_WIN_FLAVOR_ALLOCATE 1 #define PyMPI_HAVE_MPI_WIN_FLAVOR_DYNAMIC 1 #define PyMPI_HAVE_MPI_WIN_FLAVOR_SHARED 1 #define PyMPI_HAVE_MPI_WIN_MODEL 1 #define PyMPI_HAVE_MPI_WIN_SEPARATE 1 #define PyMPI_HAVE_MPI_WIN_UNIFIED 1 #define PyMPI_HAVE_MPI_Win_allocate 1 #define PyMPI_HAVE_MPI_Win_allocate_shared 1 #define PyMPI_HAVE_MPI_Win_shared_query 1 #define PyMPI_HAVE_MPI_Win_create_dynamic 1 #define PyMPI_HAVE_MPI_Win_attach 1 #define PyMPI_HAVE_MPI_Win_detach 1 #define PyMPI_HAVE_MPI_Win_set_info 1 #define PyMPI_HAVE_MPI_Win_get_info 1 #define PyMPI_HAVE_MPI_Get_accumulate 1 #define PyMPI_HAVE_MPI_Fetch_and_op 1 #define PyMPI_HAVE_MPI_Compare_and_swap 1 #define PyMPI_HAVE_MPI_Rget 1 #define PyMPI_HAVE_MPI_Rput 1 #define PyMPI_HAVE_MPI_Raccumulate 1 #define PyMPI_HAVE_MPI_Rget_accumulate 1 #define PyMPI_HAVE_MPI_Win_lock_all 1 #define PyMPI_HAVE_MPI_Win_unlock_all 1 #define PyMPI_HAVE_MPI_Win_flush 1 #define PyMPI_HAVE_MPI_Win_flush_all 1 #define PyMPI_HAVE_MPI_Win_flush_local 1 #define PyMPI_HAVE_MPI_Win_flush_local_all 1 #define PyMPI_HAVE_MPI_Win_sync 1 #define PyMPI_HAVE_MPI_ERR_RMA_RANGE 1 #define PyMPI_HAVE_MPI_ERR_RMA_ATTACH 1 #define PyMPI_HAVE_MPI_ERR_RMA_SHARED 1 #define PyMPI_HAVE_MPI_ERR_RMA_FLAVOR 1 #define PyMPI_HAVE_MPI_MAX_LIBRARY_VERSION_STRING 1 #define PyMPI_HAVE_MPI_Get_library_version 1 #define PyMPI_HAVE_MPI_INFO_ENV 1 #endif #endif mpi4py_1.3.1+hg20131106.orig/src/config/mpich2-io.h0000644000000000000000000000726112211706251017314 0ustar 00000000000000#ifndef PyMPI_CONFIG_MPICH2_IO_H #define PyMPI_CONFIG_MPICH2_IO_H #undef PyMPI_HAVE_MPI_File #undef PyMPI_HAVE_MPI_FILE_NULL #undef PyMPI_HAVE_MPI_MODE_RDONLY #undef PyMPI_HAVE_MPI_MODE_RDWR #undef PyMPI_HAVE_MPI_MODE_WRONLY #undef PyMPI_HAVE_MPI_MODE_CREATE #undef PyMPI_HAVE_MPI_MODE_EXCL #undef PyMPI_HAVE_MPI_MODE_DELETE_ON_CLOSE #undef PyMPI_HAVE_MPI_MODE_UNIQUE_OPEN #undef PyMPI_HAVE_MPI_MODE_APPEND #undef PyMPI_HAVE_MPI_MODE_SEQUENTIAL #undef PyMPI_HAVE_MPI_File_open #undef PyMPI_HAVE_MPI_File_close #undef PyMPI_HAVE_MPI_File_delete #undef PyMPI_HAVE_MPI_File_set_size #undef PyMPI_HAVE_MPI_File_preallocate #undef PyMPI_HAVE_MPI_File_get_size #undef PyMPI_HAVE_MPI_File_get_group #undef PyMPI_HAVE_MPI_File_get_amode #undef PyMPI_HAVE_MPI_File_set_info #undef PyMPI_HAVE_MPI_File_get_info #undef PyMPI_HAVE_MPI_File_get_view #undef PyMPI_HAVE_MPI_File_set_view #undef PyMPI_HAVE_MPI_File_read_at #undef PyMPI_HAVE_MPI_File_read_at_all #undef PyMPI_HAVE_MPI_File_write_at #undef PyMPI_HAVE_MPI_File_write_at_all #undef PyMPI_HAVE_MPI_File_iread_at #undef PyMPI_HAVE_MPI_File_iwrite_at #undef PyMPI_HAVE_MPI_SEEK_SET #undef PyMPI_HAVE_MPI_SEEK_CUR #undef PyMPI_HAVE_MPI_SEEK_END #undef PyMPI_HAVE_MPI_DISPLACEMENT_CURRENT #undef PyMPI_HAVE_MPI_File_seek #undef PyMPI_HAVE_MPI_File_get_position #undef PyMPI_HAVE_MPI_File_get_byte_offset #undef PyMPI_HAVE_MPI_File_read #undef PyMPI_HAVE_MPI_File_read_all #undef PyMPI_HAVE_MPI_File_write #undef PyMPI_HAVE_MPI_File_write_all #undef PyMPI_HAVE_MPI_File_iread #undef PyMPI_HAVE_MPI_File_iwrite #undef PyMPI_HAVE_MPI_File_read_shared #undef PyMPI_HAVE_MPI_File_write_shared #undef PyMPI_HAVE_MPI_File_iread_shared #undef PyMPI_HAVE_MPI_File_iwrite_shared #undef 
PyMPI_HAVE_MPI_File_read_ordered #undef PyMPI_HAVE_MPI_File_write_ordered #undef PyMPI_HAVE_MPI_File_seek_shared #undef PyMPI_HAVE_MPI_File_get_position_shared #undef PyMPI_HAVE_MPI_File_read_at_all_begin #undef PyMPI_HAVE_MPI_File_read_at_all_end #undef PyMPI_HAVE_MPI_File_write_at_all_begin #undef PyMPI_HAVE_MPI_File_write_at_all_end #undef PyMPI_HAVE_MPI_File_read_all_begin #undef PyMPI_HAVE_MPI_File_read_all_end #undef PyMPI_HAVE_MPI_File_write_all_begin #undef PyMPI_HAVE_MPI_File_write_all_end #undef PyMPI_HAVE_MPI_File_read_ordered_begin #undef PyMPI_HAVE_MPI_File_read_ordered_end #undef PyMPI_HAVE_MPI_File_write_ordered_begin #undef PyMPI_HAVE_MPI_File_write_ordered_end #undef PyMPI_HAVE_MPI_File_get_type_extent #undef PyMPI_HAVE_MPI_File_set_atomicity #undef PyMPI_HAVE_MPI_File_get_atomicity #undef PyMPI_HAVE_MPI_File_sync #undef PyMPI_HAVE_MPI_File_get_errhandler #undef PyMPI_HAVE_MPI_File_set_errhandler #undef PyMPI_HAVE_MPI_File_errhandler_fn #undef PyMPI_HAVE_MPI_File_errhandler_function #undef PyMPI_HAVE_MPI_File_create_errhandler #undef PyMPI_HAVE_MPI_File_call_errhandler #undef PyMPI_HAVE_MPI_Datarep_conversion_function #undef PyMPI_HAVE_MPI_Datarep_extent_function #undef PyMPI_HAVE_MPI_CONVERSION_FN_NULL #undef PyMPI_HAVE_MPI_MAX_DATAREP_STRING #undef PyMPI_HAVE_MPI_Register_datarep #undef PyMPI_HAVE_MPI_File_c2f #undef PyMPI_HAVE_MPI_File_f2c #if !defined(MPI_ERR_FILE) #undef PyMPI_HAVE_MPI_ERR_FILE #undef PyMPI_HAVE_MPI_ERR_NOT_SAME #undef PyMPI_HAVE_MPI_ERR_BAD_FILE #undef PyMPI_HAVE_MPI_ERR_NO_SUCH_FILE #undef PyMPI_HAVE_MPI_ERR_FILE_EXISTS #undef PyMPI_HAVE_MPI_ERR_FILE_IN_USE #undef PyMPI_HAVE_MPI_ERR_AMODE #undef PyMPI_HAVE_MPI_ERR_ACCESS #undef PyMPI_HAVE_MPI_ERR_READ_ONLY #undef PyMPI_HAVE_MPI_ERR_NO_SPACE #undef PyMPI_HAVE_MPI_ERR_QUOTA #undef PyMPI_HAVE_MPI_ERR_UNSUPPORTED_DATAREP #undef PyMPI_HAVE_MPI_ERR_UNSUPPORTED_OPERATION #undef PyMPI_HAVE_MPI_ERR_CONVERSION #undef PyMPI_HAVE_MPI_ERR_DUP_DATAREP #undef PyMPI_HAVE_MPI_ERR_IO #endif #endif /* !PyMPI_CONFIG_MPICH2_IO_H */ mpi4py_1.3.1+hg20131106.orig/src/config/mpich2.h0000644000000000000000000002165112211706251016706 0ustar 00000000000000#ifndef PyMPI_CONFIG_MPICH2_H #define PyMPI_CONFIG_MPICH2_H #include "mpi-11.h" #include "mpi-12.h" #include "mpi-20.h" #include "mpi-22.h" #include "mpi-30.h" /* These types are difficult to implement portably */ #undef PyMPI_HAVE_MPI_REAL2 #undef PyMPI_HAVE_MPI_COMPLEX4 #if defined(MPI_UNWEIGHTED) && (MPICH2_NUMVERSION < 10300000) #undef MPI_UNWEIGHTED #define MPI_UNWEIGHTED ((int *)0) #endif /* MPICH2 < 1.3.0 */ #if !defined(MPICH2_NUMVERSION) || (MPICH2_NUMVERSION < 10100000) #undef PyMPI_HAVE_MPI_Type_create_f90_integer #undef PyMPI_HAVE_MPI_Type_create_f90_real #undef PyMPI_HAVE_MPI_Type_create_f90_complex #endif /* MPICH2 < 1.1.0 */ #ifndef ROMIO_VERSION #include "mpich2-io.h" #endif #if MPI_VERSION < 3 && defined(MPICH2_NUMVERSION) #if MPICH2_NUMVERSION >= 10500000 && \ MPICH2_NUMVERSION < 20000000 #if 0 /*XXX*/ #define PyMPI_HAVE_MPI_Count 1 #define PyMPI_HAVE_MPI_COUNT 1 #define PyMPI_HAVE_MPI_Type_size_x 1 #define PyMPI_HAVE_MPI_Type_get_extent_x 1 #define PyMPI_HAVE_MPI_Type_get_true_extent_x 1 #define PyMPI_HAVE_MPI_Get_elements_x 1 #define PyMPI_HAVE_MPI_Status_set_elements_x 1 #define MPI_Count MPIX_Count #define MPI_COUNT MPIX_COUNT #define MPI_Type_size_x MPIX_Type_size_x #define MPI_Type_get_extent_x MPIX_Type_get_extent_x #define MPI_Type_get_true_extent_x MPIX_Type_get_true_extent_x #define MPI_Get_elements_x MPIX_Get_elements_x #define 
MPI_Status_set_elements_x MPIX_Status_set_elements_x #endif #define PyMPI_HAVE_MPI_COMBINER_HINDEXED_BLOCK 1 #define PyMPI_HAVE_MPI_Type_create_hindexed_block 1 #define MPI_COMBINER_HINDEXED_BLOCK MPIX_COMBINER_HINDEXED_BLOCK #define MPI_Type_create_hindexed_block MPIX_Type_create_hindexed_block #define PyMPI_HAVE_MPI_NO_OP 1 #define MPI_NO_OP MPIX_NO_OP #define PyMPI_HAVE_MPI_Message 1 #define PyMPI_HAVE_MPI_MESSAGE_NULL 1 #define PyMPI_HAVE_MPI_MESSAGE_NO_PROC 1 #define PyMPI_HAVE_MPI_Message_c2f 1 #define PyMPI_HAVE_MPI_Message_f2c 1 #define PyMPI_HAVE_MPI_Mprobe 1 #define PyMPI_HAVE_MPI_Improbe 1 #define PyMPI_HAVE_MPI_Mrecv 1 #define PyMPI_HAVE_MPI_Imrecv 1 #define MPI_Message MPIX_Message #define MPI_MESSAGE_NULL MPIX_MESSAGE_NULL #define MPI_MESSAGE_NO_PROC MPIX_MESSAGE_NO_PROC #define MPI_Message_c2f MPIX_Message_c2f #define MPI_Message_f2c MPIX_Message_f2c #define MPI_Mprobe MPIX_Mprobe #define MPI_Improbe MPIX_Improbe #define MPI_Mrecv MPIX_Mrecv #define MPI_Imrecv MPIX_Imrecv #define PyMPI_HAVE_MPI_Ibarrier 1 #define PyMPI_HAVE_MPI_Ibcast 1 #define PyMPI_HAVE_MPI_Igather 1 #define PyMPI_HAVE_MPI_Igatherv 1 #define PyMPI_HAVE_MPI_Iscatter 1 #define PyMPI_HAVE_MPI_Iscatterv 1 #define PyMPI_HAVE_MPI_Iallgather 1 #define PyMPI_HAVE_MPI_Iallgatherv 1 #define PyMPI_HAVE_MPI_Ialltoall 1 #define PyMPI_HAVE_MPI_Ialltoallv 1 #define PyMPI_HAVE_MPI_Ialltoallw 1 #define PyMPI_HAVE_MPI_Ireduce 1 #define PyMPI_HAVE_MPI_Iallreduce 1 #define PyMPI_HAVE_MPI_Ireduce_scatter_block 1 #define PyMPI_HAVE_MPI_Ireduce_scatter 1 #define PyMPI_HAVE_MPI_Iscan 1 #define PyMPI_HAVE_MPI_Iexscan 1 #define MPI_Ibarrier MPIX_Ibarrier #define MPI_Ibcast MPIX_Ibcast #define MPI_Igather MPIX_Igather #define MPI_Igatherv MPIX_Igatherv #define MPI_Iscatter MPIX_Iscatter #define MPI_Iscatterv MPIX_Iscatterv #define MPI_Iallgather MPIX_Iallgather #define MPI_Iallgatherv MPIX_Iallgatherv #define MPI_Ialltoall MPIX_Ialltoall #define MPI_Ialltoallv MPIX_Ialltoallv #define MPI_Ialltoallw MPIX_Ialltoallw #define MPI_Ireduce MPIX_Ireduce #define MPI_Iallreduce MPIX_Iallreduce #define MPI_Ireduce_scatter_block MPIX_Ireduce_scatter_block #define MPI_Ireduce_scatter MPIX_Ireduce_scatter #define MPI_Iscan MPIX_Iscan #define MPI_Iexscan MPIX_Iexscan #define PyMPI_HAVE_MPI_Neighbor_allgather 1 #define PyMPI_HAVE_MPI_Neighbor_allgatherv 1 #define PyMPI_HAVE_MPI_Neighbor_alltoall 1 #define PyMPI_HAVE_MPI_Neighbor_alltoallv 1 #define PyMPI_HAVE_MPI_Neighbor_alltoallw 1 #define MPI_Neighbor_allgather MPIX_Neighbor_allgather #define MPI_Neighbor_allgatherv MPIX_Neighbor_allgatherv #define MPI_Neighbor_alltoall MPIX_Neighbor_alltoall #define MPI_Neighbor_alltoallv MPIX_Neighbor_alltoallv #define MPI_Neighbor_alltoallw MPIX_Neighbor_alltoallw #define PyMPI_HAVE_MPI_Ineighbor_allgather 1 #define PyMPI_HAVE_MPI_Ineighbor_allgatherv 1 #define PyMPI_HAVE_MPI_Ineighbor_alltoall 1 #define PyMPI_HAVE_MPI_Ineighbor_alltoallv 1 #define PyMPI_HAVE_MPI_Ineighbor_alltoallw 1 #define MPI_Ineighbor_allgather MPIX_Ineighbor_allgather #define MPI_Ineighbor_allgatherv MPIX_Ineighbor_allgatherv #define MPI_Ineighbor_alltoall MPIX_Ineighbor_alltoall #define MPI_Ineighbor_alltoallv MPIX_Ineighbor_alltoallv #define MPI_Ineighbor_alltoallw MPIX_Ineighbor_alltoallw #define PyMPI_HAVE_MPI_Comm_idup 1 #define PyMPI_HAVE_MPI_Comm_create_group 1 #define PyMPI_HAVE_MPI_COMM_TYPE_SHARED 1 #define PyMPI_HAVE_MPI_Comm_split_type 1 #define MPI_Comm_idup MPIX_Comm_idup #define MPI_Comm_create_group MPIX_Comm_create_group #define MPI_COMM_TYPE_SHARED 
MPIX_COMM_TYPE_SHARED #define MPI_Comm_split_type MPIX_Comm_split_type #define PyMPI_HAVE_MPI_WIN_CREATE_FLAVOR 1 #define PyMPI_HAVE_MPI_WIN_FLAVOR_CREATE 1 #define PyMPI_HAVE_MPI_WIN_FLAVOR_ALLOCATE 1 #define PyMPI_HAVE_MPI_WIN_FLAVOR_DYNAMIC 1 #define PyMPI_HAVE_MPI_WIN_FLAVOR_SHARED 1 #define MPI_WIN_CREATE_FLAVOR MPIX_WIN_CREATE_FLAVOR #define MPI_WIN_FLAVOR_CREATE MPIX_WIN_FLAVOR_CREATE #define MPI_WIN_FLAVOR_ALLOCATE MPIX_WIN_FLAVOR_ALLOCATE #define MPI_WIN_FLAVOR_DYNAMIC MPIX_WIN_FLAVOR_DYNAMIC #define MPI_WIN_FLAVOR_SHARED MPIX_WIN_FLAVOR_SHARED #define PyMPI_HAVE_MPI_WIN_MODEL 1 #define PyMPI_HAVE_MPI_WIN_SEPARATE 1 #define PyMPI_HAVE_MPI_WIN_UNIFIED 1 #define MPI_WIN_MODEL MPIX_WIN_MODEL #define MPI_WIN_SEPARATE MPIX_WIN_SEPARATE #define MPI_WIN_UNIFIED MPIX_WIN_UNIFIED #define PyMPI_HAVE_MPI_Win_allocate 1 #define MPI_Win_allocate MPIX_Win_allocate #define PyMPI_HAVE_MPI_Win_allocate_shared 1 #define PyMPI_HAVE_MPI_Win_shared_query 1 #define MPI_Win_allocate_shared MPIX_Win_allocate_shared #define MPI_Win_shared_query MPIX_Win_shared_query #define PyMPI_HAVE_MPI_Win_create_dynamic 1 #define PyMPI_HAVE_MPI_Win_attach 1 #define PyMPI_HAVE_MPI_Win_detach 1 #define MPI_Win_create_dynamic MPIX_Win_create_dynamic #define MPI_Win_attach MPIX_Win_attach #define MPI_Win_detach MPIX_Win_detach #if 0 /*XXX*/ #define PyMPI_HAVE_MPI_Win_set_info 1 #define PyMPI_HAVE_MPI_Win_get_info 1 #define MPI_Win_set_info MPIX_Win_set_info #define MPI_Win_get_info MPIX_Win_get_info #endif/*XXX*/ #define PyMPI_HAVE_MPI_Get_accumulate 1 #define PyMPI_HAVE_MPI_Fetch_and_op 1 #define PyMPI_HAVE_MPI_Compare_and_swap 1 #define MPI_Get_accumulate MPIX_Get_accumulate #define MPI_Fetch_and_op MPIX_Fetch_and_op #define MPI_Compare_and_swap MPIX_Compare_and_swap #define PyMPI_HAVE_MPI_Rget 1 #define PyMPI_HAVE_MPI_Rput 1 #define PyMPI_HAVE_MPI_Raccumulate 1 #define PyMPI_HAVE_MPI_Rget_accumulate 1 #define MPI_Rget MPIX_Rget #define MPI_Rput MPIX_Rput #define MPI_Raccumulate MPIX_Raccumulate #define MPI_Rget_accumulate MPIX_Rget_accumulate #define PyMPI_HAVE_MPI_Win_lock_all 1 #define PyMPI_HAVE_MPI_Win_unlock_all 1 #define PyMPI_HAVE_MPI_Win_flush 1 #define PyMPI_HAVE_MPI_Win_flush_all 1 #define PyMPI_HAVE_MPI_Win_flush_local 1 #define PyMPI_HAVE_MPI_Win_flush_local_all 1 #define PyMPI_HAVE_MPI_Win_sync #define MPI_Win_lock_all MPIX_Win_lock_all #define MPI_Win_unlock_all MPIX_Win_unlock_all #define MPI_Win_flush MPIX_Win_flush #define MPI_Win_flush_all MPIX_Win_flush_all #define MPI_Win_flush_local MPIX_Win_flush_local #define MPI_Win_flush_local_all MPIX_Win_flush_local_all #define MPI_Win_sync MPIX_Win_sync #define PyMPI_HAVE_MPI_ERR_RMA_RANGE 1 #define PyMPI_HAVE_MPI_ERR_RMA_ATTACH 1 #define PyMPI_HAVE_MPI_ERR_RMA_SHARED 1 #define PyMPI_HAVE_MPI_ERR_RMA_FLAVOR 1 #define MPI_ERR_RMA_RANGE MPIX_ERR_RMA_RANGE #define MPI_ERR_RMA_ATTACH MPIX_ERR_RMA_ATTACH #define MPI_ERR_RMA_SHARED MPIX_ERR_RMA_SHARED #define MPI_ERR_RMA_FLAVOR MPIX_ERR_RMA_WRONG_FLAVOR #if 0 /*XXX*/ #define PyMPI_HAVE_MPI_Comm_dup_with_info 1 #define PyMPI_HAVE_MPI_Comm_set_info 1 #define PyMPI_HAVE_MPI_Comm_get_info 1 #define MPI_Comm_dup_with_info MPIX_Comm_dup_with_info #define MPI_Comm_set_info MPIX_Comm_set_info #define MPI_Comm_get_info MPIX_Comm_get_info #endif/*XXX*/ #if 0 /*XXX*/ #define PyMPI_HAVE_MPI_MAX_LIBRARY_VERSION_STRING 1 #define PyMPI_HAVE_MPI_Get_library_version 1 #define PyMPI_HAVE_MPI_INFO_ENV 1 #define MPI_MAX_LIBRARY_VERSION_STRING MPIX_MAX_LIBRARY_VERSION_STRING #define MPI_Get_library_version MPIX_Get_library_version 
#define MPI_INFO_ENV MPIX_INFO_ENV #endif/*XXX*/ #endif /* MPICH2 < 1.5*/ #endif /* MPI < 3.0*/ #endif /* !PyMPI_CONFIG_MPICH2_H */ mpi4py_1.3.1+hg20131106.orig/src/config/mpich3-io.h0000644000000000000000000000610712211706251017313 0ustar 00000000000000#ifndef PyMPI_CONFIG_MPICH3_IO_H #undef PyMPI_CONFIG_MPICH3_IO_H #undef PyMPI_HAVE_MPI_File #undef PyMPI_HAVE_MPI_FILE_NULL #undef PyMPI_HAVE_MPI_MODE_RDONLY #undef PyMPI_HAVE_MPI_MODE_RDWR #undef PyMPI_HAVE_MPI_MODE_WRONLY #undef PyMPI_HAVE_MPI_MODE_CREATE #undef PyMPI_HAVE_MPI_MODE_EXCL #undef PyMPI_HAVE_MPI_MODE_DELETE_ON_CLOSE #undef PyMPI_HAVE_MPI_MODE_UNIQUE_OPEN #undef PyMPI_HAVE_MPI_MODE_APPEND #undef PyMPI_HAVE_MPI_MODE_SEQUENTIAL #undef PyMPI_HAVE_MPI_File_open #undef PyMPI_HAVE_MPI_File_close #undef PyMPI_HAVE_MPI_File_delete #undef PyMPI_HAVE_MPI_File_set_size #undef PyMPI_HAVE_MPI_File_preallocate #undef PyMPI_HAVE_MPI_File_get_size #undef PyMPI_HAVE_MPI_File_get_group #undef PyMPI_HAVE_MPI_File_get_amode #undef PyMPI_HAVE_MPI_File_set_info #undef PyMPI_HAVE_MPI_File_get_info #undef PyMPI_HAVE_MPI_File_get_view #undef PyMPI_HAVE_MPI_File_set_view #undef PyMPI_HAVE_MPI_File_read_at #undef PyMPI_HAVE_MPI_File_read_at_all #undef PyMPI_HAVE_MPI_File_write_at #undef PyMPI_HAVE_MPI_File_write_at_all #undef PyMPI_HAVE_MPI_File_iread_at #undef PyMPI_HAVE_MPI_File_iwrite_at #undef PyMPI_HAVE_MPI_SEEK_SET #undef PyMPI_HAVE_MPI_SEEK_CUR #undef PyMPI_HAVE_MPI_SEEK_END #undef PyMPI_HAVE_MPI_DISPLACEMENT_CURRENT #undef PyMPI_HAVE_MPI_File_seek #undef PyMPI_HAVE_MPI_File_get_position #undef PyMPI_HAVE_MPI_File_get_byte_offset #undef PyMPI_HAVE_MPI_File_read #undef PyMPI_HAVE_MPI_File_read_all #undef PyMPI_HAVE_MPI_File_write #undef PyMPI_HAVE_MPI_File_write_all #undef PyMPI_HAVE_MPI_File_iread #undef PyMPI_HAVE_MPI_File_iwrite #undef PyMPI_HAVE_MPI_File_read_shared #undef PyMPI_HAVE_MPI_File_write_shared #undef PyMPI_HAVE_MPI_File_iread_shared #undef PyMPI_HAVE_MPI_File_iwrite_shared #undef PyMPI_HAVE_MPI_File_read_ordered #undef PyMPI_HAVE_MPI_File_write_ordered #undef PyMPI_HAVE_MPI_File_seek_shared #undef PyMPI_HAVE_MPI_File_get_position_shared #undef PyMPI_HAVE_MPI_File_read_at_all_begin #undef PyMPI_HAVE_MPI_File_read_at_all_end #undef PyMPI_HAVE_MPI_File_write_at_all_begin #undef PyMPI_HAVE_MPI_File_write_at_all_end #undef PyMPI_HAVE_MPI_File_read_all_begin #undef PyMPI_HAVE_MPI_File_read_all_end #undef PyMPI_HAVE_MPI_File_write_all_begin #undef PyMPI_HAVE_MPI_File_write_all_end #undef PyMPI_HAVE_MPI_File_read_ordered_begin #undef PyMPI_HAVE_MPI_File_read_ordered_end #undef PyMPI_HAVE_MPI_File_write_ordered_begin #undef PyMPI_HAVE_MPI_File_write_ordered_end #undef PyMPI_HAVE_MPI_File_get_type_extent #undef PyMPI_HAVE_MPI_File_set_atomicity #undef PyMPI_HAVE_MPI_File_get_atomicity #undef PyMPI_HAVE_MPI_File_sync #undef PyMPI_HAVE_MPI_File_get_errhandler #undef PyMPI_HAVE_MPI_File_set_errhandler #undef PyMPI_HAVE_MPI_File_errhandler_fn #undef PyMPI_HAVE_MPI_File_errhandler_function #undef PyMPI_HAVE_MPI_File_create_errhandler #undef PyMPI_HAVE_MPI_File_call_errhandler #undef PyMPI_HAVE_MPI_Datarep_conversion_function #undef PyMPI_HAVE_MPI_Datarep_extent_function #undef PyMPI_HAVE_MPI_CONVERSION_FN_NULL #undef PyMPI_HAVE_MPI_MAX_DATAREP_STRING #undef PyMPI_HAVE_MPI_Register_datarep #undef PyMPI_HAVE_MPI_File_c2f #undef PyMPI_HAVE_MPI_File_f2c #endif /* !PyMPI_CONFIG_MPICH3_IO_H */ mpi4py_1.3.1+hg20131106.orig/src/config/mpich3.h0000644000000000000000000000055512211706251016707 0ustar 00000000000000#ifndef PyMPI_CONFIG_MPICH2_H #define 
PyMPI_CONFIG_MPICH2_H #include "mpi-11.h" #include "mpi-12.h" #include "mpi-20.h" #include "mpi-22.h" #include "mpi-30.h" /* These types are difficult to implement portably */ #undef PyMPI_HAVE_MPI_REAL2 #undef PyMPI_HAVE_MPI_COMPLEX4 #ifndef ROMIO_VERSION #include "mpich3-io.h" #endif #endif /* !PyMPI_CONFIG_MPICH2_H */ mpi4py_1.3.1+hg20131106.orig/src/config/openmpi-io.h0000644000000000000000000000726412211706251017604 0ustar 00000000000000#ifndef PyMPI_CONFIG_OPENMPI_IO_H #define PyMPI_CONFIG_OPENMPI_IO_H #undef PyMPI_HAVE_MPI_File #undef PyMPI_HAVE_MPI_FILE_NULL #undef PyMPI_HAVE_MPI_MODE_RDONLY #undef PyMPI_HAVE_MPI_MODE_RDWR #undef PyMPI_HAVE_MPI_MODE_WRONLY #undef PyMPI_HAVE_MPI_MODE_CREATE #undef PyMPI_HAVE_MPI_MODE_EXCL #undef PyMPI_HAVE_MPI_MODE_DELETE_ON_CLOSE #undef PyMPI_HAVE_MPI_MODE_UNIQUE_OPEN #undef PyMPI_HAVE_MPI_MODE_APPEND #undef PyMPI_HAVE_MPI_MODE_SEQUENTIAL #undef PyMPI_HAVE_MPI_File_open #undef PyMPI_HAVE_MPI_File_close #undef PyMPI_HAVE_MPI_File_delete #undef PyMPI_HAVE_MPI_File_set_size #undef PyMPI_HAVE_MPI_File_preallocate #undef PyMPI_HAVE_MPI_File_get_size #undef PyMPI_HAVE_MPI_File_get_group #undef PyMPI_HAVE_MPI_File_get_amode #undef PyMPI_HAVE_MPI_File_set_info #undef PyMPI_HAVE_MPI_File_get_info #undef PyMPI_HAVE_MPI_File_get_view #undef PyMPI_HAVE_MPI_File_set_view #undef PyMPI_HAVE_MPI_File_read_at #undef PyMPI_HAVE_MPI_File_read_at_all #undef PyMPI_HAVE_MPI_File_write_at #undef PyMPI_HAVE_MPI_File_write_at_all #undef PyMPI_HAVE_MPI_File_iread_at #undef PyMPI_HAVE_MPI_File_iwrite_at #undef PyMPI_HAVE_MPI_SEEK_SET #undef PyMPI_HAVE_MPI_SEEK_CUR #undef PyMPI_HAVE_MPI_SEEK_END #undef PyMPI_HAVE_MPI_DISPLACEMENT_CURRENT #undef PyMPI_HAVE_MPI_File_seek #undef PyMPI_HAVE_MPI_File_get_position #undef PyMPI_HAVE_MPI_File_get_byte_offset #undef PyMPI_HAVE_MPI_File_read #undef PyMPI_HAVE_MPI_File_read_all #undef PyMPI_HAVE_MPI_File_write #undef PyMPI_HAVE_MPI_File_write_all #undef PyMPI_HAVE_MPI_File_iread #undef PyMPI_HAVE_MPI_File_iwrite #undef PyMPI_HAVE_MPI_File_read_shared #undef PyMPI_HAVE_MPI_File_write_shared #undef PyMPI_HAVE_MPI_File_iread_shared #undef PyMPI_HAVE_MPI_File_iwrite_shared #undef PyMPI_HAVE_MPI_File_read_ordered #undef PyMPI_HAVE_MPI_File_write_ordered #undef PyMPI_HAVE_MPI_File_seek_shared #undef PyMPI_HAVE_MPI_File_get_position_shared #undef PyMPI_HAVE_MPI_File_read_at_all_begin #undef PyMPI_HAVE_MPI_File_read_at_all_end #undef PyMPI_HAVE_MPI_File_write_at_all_begin #undef PyMPI_HAVE_MPI_File_write_at_all_end #undef PyMPI_HAVE_MPI_File_read_all_begin #undef PyMPI_HAVE_MPI_File_read_all_end #undef PyMPI_HAVE_MPI_File_write_all_begin #undef PyMPI_HAVE_MPI_File_write_all_end #undef PyMPI_HAVE_MPI_File_read_ordered_begin #undef PyMPI_HAVE_MPI_File_read_ordered_end #undef PyMPI_HAVE_MPI_File_write_ordered_begin #undef PyMPI_HAVE_MPI_File_write_ordered_end #undef PyMPI_HAVE_MPI_File_get_type_extent #undef PyMPI_HAVE_MPI_File_set_atomicity #undef PyMPI_HAVE_MPI_File_get_atomicity #undef PyMPI_HAVE_MPI_File_sync #undef PyMPI_HAVE_MPI_File_get_errhandler #undef PyMPI_HAVE_MPI_File_set_errhandler #undef PyMPI_HAVE_MPI_File_errhandler_fn #undef PyMPI_HAVE_MPI_File_errhandler_function #undef PyMPI_HAVE_MPI_File_create_errhandler #undef PyMPI_HAVE_MPI_File_call_errhandler #undef PyMPI_HAVE_MPI_Datarep_conversion_function #undef PyMPI_HAVE_MPI_Datarep_extent_function #undef PyMPI_HAVE_MPI_CONVERSION_FN_NULL #undef PyMPI_HAVE_MPI_MAX_DATAREP_STRING #undef PyMPI_HAVE_MPI_Register_datarep #undef PyMPI_HAVE_MPI_File_c2f #undef PyMPI_HAVE_MPI_File_f2c #if 
!defined(MPI_ERR_FILE) #undef PyMPI_HAVE_MPI_ERR_FILE #undef PyMPI_HAVE_MPI_ERR_NOT_SAME #undef PyMPI_HAVE_MPI_ERR_BAD_FILE #undef PyMPI_HAVE_MPI_ERR_NO_SUCH_FILE #undef PyMPI_HAVE_MPI_ERR_FILE_EXISTS #undef PyMPI_HAVE_MPI_ERR_FILE_IN_USE #undef PyMPI_HAVE_MPI_ERR_AMODE #undef PyMPI_HAVE_MPI_ERR_ACCESS #undef PyMPI_HAVE_MPI_ERR_READ_ONLY #undef PyMPI_HAVE_MPI_ERR_NO_SPACE #undef PyMPI_HAVE_MPI_ERR_QUOTA #undef PyMPI_HAVE_MPI_ERR_UNSUPPORTED_DATAREP #undef PyMPI_HAVE_MPI_ERR_UNSUPPORTED_OPERATION #undef PyMPI_HAVE_MPI_ERR_CONVERSION #undef PyMPI_HAVE_MPI_ERR_DUP_DATAREP #undef PyMPI_HAVE_MPI_ERR_IO #endif #endif /* !PyMPI_CONFIG_OPENMPI_IO_H */ mpi4py_1.3.1+hg20131106.orig/src/config/openmpi.h0000644000000000000000000000714512211706251017175 0ustar 00000000000000#ifndef PyMPI_CONFIG_OPENMPI_H #define PyMPI_CONFIG_OPENMPI_H #include "mpi-11.h" #include "mpi-12.h" #include "mpi-20.h" #include "mpi-22.h" #include "mpi-30.h" #ifndef OMPI_HAVE_FORTRAN_LOGICAL1 #define OMPI_HAVE_FORTRAN_LOGICAL1 0 #endif #ifndef OMPI_HAVE_FORTRAN_LOGICAL2 #define OMPI_HAVE_FORTRAN_LOGICAL2 0 #endif #ifndef OMPI_HAVE_FORTRAN_LOGICAL4 #define OMPI_HAVE_FORTRAN_LOGICAL4 0 #endif #ifndef OMPI_HAVE_FORTRAN_LOGICAL8 #define OMPI_HAVE_FORTRAN_LOGICAL8 0 #endif #if OMPI_HAVE_FORTRAN_LOGICAL1 #define PyMPI_HAVE_MPI_LOGICAL1 1 #endif #if OMPI_HAVE_FORTRAN_LOGICAL2 #define PyMPI_HAVE_MPI_LOGICAL2 1 #endif #if OMPI_HAVE_FORTRAN_LOGICAL4 #define PyMPI_HAVE_MPI_LOGICAL4 1 #endif #if OMPI_HAVE_FORTRAN_LOGICAL8 #define PyMPI_HAVE_MPI_LOGICAL8 1 #endif #if !OMPI_HAVE_FORTRAN_INTEGER1 #undef PyMPI_HAVE_MPI_INTEGER1 #endif #if !OMPI_HAVE_FORTRAN_INTEGER2 #undef PyMPI_HAVE_MPI_INTEGER2 #endif #if !OMPI_HAVE_FORTRAN_INTEGER4 #undef PyMPI_HAVE_MPI_INTEGER4 #endif #if !OMPI_HAVE_FORTRAN_INTEGER8 #undef PyMPI_HAVE_MPI_INTEGER8 #endif #if !OMPI_HAVE_FORTRAN_INTEGER16 #undef PyMPI_HAVE_MPI_INTEGER16 #endif #if !OMPI_HAVE_FORTRAN_REAL2 #undef PyMPI_HAVE_MPI_REAL2 #undef PyMPI_HAVE_MPI_COMPLEX4 #endif #if !OMPI_HAVE_FORTRAN_REAL4 #undef PyMPI_HAVE_MPI_REAL4 #undef PyMPI_HAVE_MPI_COMPLEX8 #endif #if !OMPI_HAVE_FORTRAN_REAL8 #undef PyMPI_HAVE_MPI_REAL8 #undef PyMPI_HAVE_MPI_COMPLEX16 #endif #if !OMPI_HAVE_FORTRAN_REAL16 #undef PyMPI_HAVE_MPI_REAL16 #undef PyMPI_HAVE_MPI_COMPLEX32 #endif #if MPI_VERSION==2 && MPI_SUBVERSION<2 #undef PyMPI_HAVE_MPI_Comm_errhandler_function #undef PyMPI_HAVE_MPI_Win_errhandler_function #undef PyMPI_HAVE_MPI_File_errhandler_function #endif #ifdef OMPI_PROVIDE_MPI_FILE_INTERFACE #if OMPI_PROVIDE_MPI_FILE_INTERFACE == 0 #include "openmpi-io.h" #endif #endif #if MPI_VERSION < 3 && (defined(OMPI_MAJOR_VERSION) && \ defined(OMPI_MINOR_VERSION) && \ defined(OMPI_RELEASE_VERSION)) #if ((OMPI_MAJOR_VERSION * 10000) + \ (OMPI_MINOR_VERSION * 100) + \ (OMPI_RELEASE_VERSION * 1)) >= 10700 #define PyMPI_HAVE_MPI_Message 1 #define PyMPI_HAVE_MPI_MESSAGE_NULL 1 #define PyMPI_HAVE_MPI_MESSAGE_NO_PROC 1 #define PyMPI_HAVE_MPI_Message_c2f 1 #define PyMPI_HAVE_MPI_Message_f2c 1 #define PyMPI_HAVE_MPI_Mprobe 1 #define PyMPI_HAVE_MPI_Improbe 1 #define PyMPI_HAVE_MPI_Mrecv 1 #define PyMPI_HAVE_MPI_Imrecv 1 #define PyMPI_HAVE_MPI_Ibarrier 1 #define PyMPI_HAVE_MPI_Ibcast 1 #define PyMPI_HAVE_MPI_Igather 1 #define PyMPI_HAVE_MPI_Igatherv 1 #define PyMPI_HAVE_MPI_Iscatter 1 #define PyMPI_HAVE_MPI_Iscatterv 1 #define PyMPI_HAVE_MPI_Iallgather 1 #define PyMPI_HAVE_MPI_Iallgatherv 1 #define PyMPI_HAVE_MPI_Ialltoall 1 #define PyMPI_HAVE_MPI_Ialltoallv 1 #define PyMPI_HAVE_MPI_Ialltoallw 1 #define PyMPI_HAVE_MPI_Ireduce 1 #define 
PyMPI_HAVE_MPI_Iallreduce 1 #define PyMPI_HAVE_MPI_Ireduce_scatter_block 1 #define PyMPI_HAVE_MPI_Ireduce_scatter 1 #define PyMPI_HAVE_MPI_Iscan 1 #define PyMPI_HAVE_MPI_Iexscan 1 #define PyMPI_HAVE_MPI_MAX_LIBRARY_VERSION_STRING 1 #define PyMPI_HAVE_MPI_Get_library_version 1 #endif /* OMPI < 1.7*/ #if 0 /*XXX*/ #define PyMPI_HAVE_MPI_Neighbor_allgather 1 #define PyMPI_HAVE_MPI_Neighbor_allgatherv 1 #define PyMPI_HAVE_MPI_Neighbor_alltoall 1 #define PyMPI_HAVE_MPI_Neighbor_alltoallv 1 #define PyMPI_HAVE_MPI_Neighbor_alltoallw 1 #define PyMPI_HAVE_MPI_Ineighbor_allgather 1 #define PyMPI_HAVE_MPI_Ineighbor_allgatherv 1 #define PyMPI_HAVE_MPI_Ineighbor_alltoall 1 #define PyMPI_HAVE_MPI_Ineighbor_alltoallv 1 #define PyMPI_HAVE_MPI_Ineighbor_alltoallw 1 #endif/*XXX*/ #endif /* MPI < 3.0*/ #endif /* !PyMPI_CONFIG_OPENMPI_H */ mpi4py_1.3.1+hg20131106.orig/src/config/unknown.h0000644000000000000000000000150612211706251017220 0ustar 00000000000000#ifndef PyMPI_CONFIG_UNKNOWN_H #define PyMPI_CONFIG_UNKNOWN_H /* ------------------------------------------------------------------------- */ #include "mpi-11.h" #include "mpi-12.h" #include "mpi-20.h" #include "mpi-22.h" #include "mpi-30.h" /* ------------------------------------------------------------------------- */ /* These types are difficult to implement portably */ #undef PyMPI_HAVE_MPI_INTEGER16 #undef PyMPI_HAVE_MPI_REAL2 #undef PyMPI_HAVE_MPI_COMPLEX4 /* These types are not available in MPICH(1) */ #if defined(MPICH_NAME) && (MPICH_NAME==1) #undef PyMPI_HAVE_MPI_INTEGER1 #undef PyMPI_HAVE_MPI_INTEGER2 #undef PyMPI_HAVE_MPI_INTEGER4 #undef PyMPI_HAVE_MPI_REAL4 #undef PyMPI_HAVE_MPI_REAL8 #endif /* ------------------------------------------------------------------------- */ #endif /* !PyMPI_CONFIG_UNKNOWN_H */ mpi4py_1.3.1+hg20131106.orig/src/dynload.c0000644000000000000000000000753112211706251015705 0ustar 00000000000000/* Author: Lisandro Dalcin * Contact: dalcinl@gmail.com */ #include "Python.h" #include "dynload.h" static PyObject * dl_dlopen(PyObject *self, PyObject *args) { void *handle = NULL; char *filename = NULL; int mode = 0; if (!PyArg_ParseTuple(args, (char *)"zi:dlopen", &filename, &mode)) return NULL; handle = dlopen(filename, mode); return PyLong_FromVoidPtr(handle); } static PyObject * dl_dlsym(PyObject *self, PyObject *args) { PyObject *arg0 = NULL; void *handle = NULL; char *symbol = NULL; void *symval = NULL; if (!PyArg_ParseTuple(args, (char *)"Os:dlsym", &arg0, &symbol)) return NULL; #ifdef RTLD_DEFAULT handle = (void *)RTLD_DEFAULT; #endif if (arg0 != Py_None) { handle = PyLong_AsVoidPtr(arg0); if (!handle && PyErr_Occurred()) return NULL; } symval = dlsym(handle, symbol); return PyLong_FromVoidPtr(symval); } static PyObject * dl_dlclose(PyObject *self, PyObject *arg0) { void *handle = NULL; if (arg0 != Py_None) { handle = PyLong_AsVoidPtr(arg0); if (!handle && PyErr_Occurred()) return NULL; } if (handle) dlclose(handle); return Py_BuildValue((char *)""); } static PyObject * dl_dlerror(PyObject *self, PyObject *args) { char *errmsg = NULL; errmsg = dlerror(); return Py_BuildValue((char *)"z", errmsg); } static PyMethodDef dl_methods[] = { { (char *)"dlopen", dl_dlopen, METH_VARARGS, NULL }, { (char *)"dlsym", dl_dlsym, METH_VARARGS, NULL }, { (char *)"dlclose", dl_dlclose, METH_O, NULL }, { (char *)"dlerror", dl_dlerror, METH_NOARGS, NULL }, { (char *)NULL, NULL, 0, NULL } /* sentinel */ }; PyDoc_STRVAR(dl_doc, "POSIX dynamic linking loader"); #if PY_MAJOR_VERSION >= 3 static struct PyModuleDef dl_module = { 
PyModuleDef_HEAD_INIT, /* m_base */ (char *)"dl", /* m_name */ dl_doc, /* m_doc */ -1, /* m_size */ dl_methods, /* m_methods */ NULL, /* m_reload */ NULL, /* m_traverse */ NULL, /* m_clear */ NULL /* m_free */ }; #endif #if !defined(PyModule_AddIntMacro) #define PyModule_AddIntMacro(m, c) \ PyModule_AddIntConstant(m, (char *)#c, c) #endif #define PyModule_AddPtrMacro(m, c) \ PyModule_AddObject(m, (char *)#c, PyLong_FromVoidPtr((void *)c)) #if PY_MAJOR_VERSION >= 3 PyMODINIT_FUNC PyInit_dl(void); PyMODINIT_FUNC PyInit_dl(void) #else PyMODINIT_FUNC initdl(void); PyMODINIT_FUNC initdl(void) #endif { PyObject *m = NULL; #if PY_MAJOR_VERSION >= 3 m = PyModule_Create(&dl_module); #else m = Py_InitModule3((char *)"dl", dl_methods, (char *)dl_doc); #endif if (!m) goto bad; if (PyModule_AddIntMacro(m, RTLD_LAZY ) < 0) goto bad; if (PyModule_AddIntMacro(m, RTLD_NOW ) < 0) goto bad; if (PyModule_AddIntMacro(m, RTLD_LOCAL ) < 0) goto bad; if (PyModule_AddIntMacro(m, RTLD_GLOBAL ) < 0) goto bad; #ifdef RTLD_NOLOAD if (PyModule_AddIntMacro(m, RTLD_NOLOAD ) < 0) goto bad; #endif #ifdef RTLD_NODELETE if (PyModule_AddIntMacro(m, RTLD_NODELETE ) < 0) goto bad; #endif #ifdef RTLD_DEEPBIND if (PyModule_AddIntMacro(m, RTLD_DEEPBIND ) < 0) goto bad; #endif #ifdef RTLD_FIRST if (PyModule_AddIntMacro(m, RTLD_FIRST ) < 0) goto bad; #endif #ifdef RTLD_DEFAULT if (PyModule_AddPtrMacro(m, RTLD_DEFAULT) < 0) goto bad; #endif #ifdef RTLD_NEXT if (PyModule_AddPtrMacro(m, RTLD_NEXT) < 0) goto bad; #endif #ifdef RTLD_SELF if (PyModule_AddPtrMacro(m, RTLD_SELF) < 0) goto bad; #endif #ifdef RTLD_MAIN_ONLY if (PyModule_AddPtrMacro(m, RTLD_MAIN_ONLY) < 0) goto bad; #endif finally: #if PY_MAJOR_VERSION >= 3 return m; #else return; #endif bad: Py_XDECREF(m); m = NULL; goto finally; } /* Local variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/src/dynload.h0000644000000000000000000000320212211706251015701 0ustar 00000000000000/* Author: Lisandro Dalcin * Contact: dalcinl@gmail.com */ #ifndef PyMPI_DYNLOAD_H #define PyMPI_DYNLOAD_H #if HAVE_DLFCN_H #include #else #if defined(__linux) || defined(__linux__) #define RTLD_LAZY 0x00001 #define RTLD_NOW 0x00002 #define RTLD_LOCAL 0x00000 #define RTLD_GLOBAL 0x00100 #define RTLD_NOLOAD 0x00004 #define RTLD_NODELETE 0x01000 #define RTLD_DEEPBIND 0x00008 #elif defined(__sun) || defined(__sun__) #define RTLD_LAZY 0x00001 #define RTLD_NOW 0x00002 #define RTLD_LOCAL 0x00000 #define RTLD_GLOBAL 0x00100 #define RTLD_NOLOAD 0x00004 #define RTLD_NODELETE 0x01000 #define RTLD_FIRST 0x02000 #elif defined(__APPLE__) #define RTLD_LAZY 0x1 #define RTLD_NOW 0x2 #define RTLD_LOCAL 0x4 #define RTLD_GLOBAL 0x8 #define RTLD_NOLOAD 0x10 #define RTLD_NODELETE 0x80 #define RTLD_FIRST 0x100 #elif defined(__CYGWIN__) #define RTLD_LAZY 1 #define RTLD_NOW 2 #define RTLD_LOCAL 0 #define RTLD_GLOBAL 4 #endif #if defined(__cplusplus) extern "C" { #endif extern void *dlopen(const char *, int); extern void *dlsym(void *, const char *); extern int dlclose(void *); extern char *dlerror(void); #if defined(__cplusplus) } #endif #endif #ifndef RTLD_LAZY #define RTLD_LAZY 1 #endif #ifndef RTLD_NOW #define RTLD_NOW RTLD_LAZY #endif #ifndef RTLD_LOCAL #define RTLD_LOCAL 0 #endif #ifndef RTLD_GLOBAL #define RTLD_GLOBAL RTLD_LOCAL #endif #endif /* !PyMPI_DYNLOAD_H */ /* Local variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/src/fallback.h0000644000000000000000000007403612211706251016023 0ustar 00000000000000#ifndef PyMPI_FALLBACK_H #define 
PyMPI_FALLBACK_H /* ---------------------------------------------------------------- */ #ifdef Py_PYTHON_H #ifndef PyMPI_snprintf #define PyMPI_snprintf PyOS_snprintf #endif #ifndef PyMPI_MALLOC #define PyMPI_MALLOC PyMem_Malloc #endif #ifndef PyMPI_FREE #define PyMPI_FREE PyMem_Free #endif #else #include #include #ifndef PyMPI_snprintf #define PyMPI_snprintf snprintf #endif #ifndef PyMPI_MALLOC #define PyMPI_MALLOC malloc #endif #ifndef PyMPI_FREE #define PyMPI_FREE free #endif #endif /* ---------------------------------------------------------------- */ /* Version Number */ #ifndef PyMPI_HAVE_MPI_VERSION #if !defined(MPI_VERSION) #define MPI_VERSION 1 #endif #endif #ifndef PyMPI_HAVE_MPI_SUBVERSION #if !defined(MPI_SUBVERSION) #define MPI_SUBVERSION 0 #endif #endif #ifndef PyMPI_HAVE_MPI_Get_version static int PyMPI_Get_version(int *version, int* subversion) { if (!version) return MPI_ERR_ARG; if (!subversion) return MPI_ERR_ARG; *version = MPI_VERSION; *subversion = MPI_SUBVERSION; return MPI_SUCCESS; } #undef MPI_Get_version #define MPI_Get_version PyMPI_Get_version #endif #ifndef PyMPI_HAVE_MPI_Get_library_version #define PyMPI_MAX_LIBRARY_VERSION_STRING 8 static int PyMPI_Get_library_version(char version[], int *rlen) { size_t l, n = PyMPI_MAX_LIBRARY_VERSION_STRING; if (!version) return MPI_ERR_ARG; /* XXX */ if (!rlen) return MPI_ERR_ARG; /* XXX */ l = PyMPI_snprintf(version, n, "MPI %d.%d", MPI_VERSION, MPI_SUBVERSION); if (l >= n) return MPI_ERR_INTERN; /* XXX */ *rlen = (int) l; return MPI_SUCCESS; } #undef MPI_MAX_LIBRARY_VERSION_STRING #define MPI_MAX_LIBRARY_VERSION_STRING \ PyMPI_MAX_LIBRARY_VERSION_STRING #undef MPI_Get_library_version #define MPI_Get_library_version \ PyMPI_Get_library_version #endif /* ---------------------------------------------------------------- */ /* Threading Support */ #ifndef PyMPI_HAVE_MPI_Init_thread static int PyMPI_Init_thread(int *argc, char ***argv, int required, int *provided) { int ierr = MPI_SUCCESS; if (!provided) return MPI_ERR_ARG; ierr = MPI_Init(argc, argv); if (ierr != MPI_SUCCESS) return ierr; *provided = MPI_THREAD_SINGLE; return MPI_SUCCESS; } #undef MPI_Init_thread #define MPI_Init_thread PyMPI_Init_thread #endif #ifndef PyMPI_HAVE_MPI_Query_thread static int PyMPI_Query_thread(int *provided) { if (!provided) return MPI_ERR_ARG; provided = MPI_THREAD_SINGLE; return MPI_SUCCESS; } #undef MPI_Query_thread #define MPI_Query_thread PyMPI_Query_thread #endif #ifndef PyMPI_HAVE_MPI_Is_thread_main static int PyMPI_Is_thread_main(int *flag) { if (!flag) return MPI_ERR_ARG; *flag = 1; /* XXX this is completely broken !! */ return MPI_SUCCESS; } #undef MPI_Is_thread_main #define MPI_Is_thread_main PyMPI_Is_thread_main #endif /* ---------------------------------------------------------------- */ /* Status */ #ifndef PyMPI_HAVE_MPI_STATUS_IGNORE static MPI_Status PyMPI_STATUS_IGNORE; #undef MPI_STATUS_IGNORE #define MPI_STATUS_IGNORE ((MPI_Status*)(&PyMPI_STATUS_IGNORE)) #endif #ifndef PyMPI_HAVE_MPI_STATUSES_IGNORE #ifndef PyMPI_MPI_STATUSES_IGNORE_SIZE #if defined(__GNUC__) || defined(__ICC) || defined(__INTEL_COMPILER) #warning MPI_STATUSES_IGNORE will use static storage of size 4096 #warning Buffer overruns may occur. You were warned !!! 
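/* Note: when the MPI implementation does not provide MPI_STATUSES_IGNORE,
   the emulation below uses a single static array of
   PyMPI_MPI_STATUSES_IGNORE_SIZE (4096) MPI_Status slots; completing more
   than that many requests in one call while ignoring statuses would run
   past the end of that array, which is what the two warnings above refer
   to. */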
#endif #define PyMPI_MPI_STATUSES_IGNORE_SIZE 4096 #endif static MPI_Status PyMPI_STATUSES_IGNORE[PyMPI_MPI_STATUSES_IGNORE_SIZE]; #undef MPI_STATUSES_IGNORE #define MPI_STATUSES_IGNORE ((MPI_Status*)(PyMPI_STATUSES_IGNORE)) #endif /* ---------------------------------------------------------------- */ /* Datatypes */ #ifndef PyMPI_HAVE_MPI_LONG_LONG #undef MPI_LONG_LONG #define MPI_LONG_LONG MPI_LONG_LONG_INT #endif #ifndef PyMPI_HAVE_MPI_Type_get_extent static int PyMPI_Type_get_extent(MPI_Datatype datatype, MPI_Aint *lb, MPI_Aint *extent) { int ierr = MPI_SUCCESS; ierr = MPI_Type_lb(datatype, lb); if (ierr != MPI_SUCCESS) return ierr; ierr = MPI_Type_extent(datatype, extent); if (ierr != MPI_SUCCESS) return ierr; return MPI_SUCCESS; } #undef MPI_Type_get_extent #define MPI_Type_get_extent PyMPI_Type_get_extent #endif #ifndef PyMPI_HAVE_MPI_Type_dup static int PyMPI_Type_dup(MPI_Datatype datatype, MPI_Datatype *newtype) { int ierr = MPI_SUCCESS; ierr = MPI_Type_contiguous(1, datatype, newtype); if (ierr != MPI_SUCCESS) return ierr; ierr = MPI_Type_commit(newtype); /* the safe way ... */ if (ierr != MPI_SUCCESS) return ierr; return MPI_SUCCESS; } #undef MPI_Type_dup #define MPI_Type_dup PyMPI_Type_dup #endif #ifndef PyMPI_HAVE_MPI_Type_create_indexed_block static int PyMPI_Type_create_indexed_block(int count, int blocklength, int displacements[], MPI_Datatype oldtype, MPI_Datatype *newtype) { int i, *blocklengths = 0, ierr = MPI_SUCCESS; if (count > 0) { blocklengths = (int *) PyMPI_MALLOC(count*sizeof(int)); if (!blocklengths) return MPI_ERR_INTERN; } for (i=0; i 0) { blocklengths = (int *) PyMPI_MALLOC(count*sizeof(int)); if (!blocklengths) return MPI_ERR_INTERN; } for (i=0; i 0 ); PyMPI_CHKARG( sizes ); PyMPI_CHKARG( subsizes ); PyMPI_CHKARG( starts ); PyMPI_CHKARG( newtype ); for (i=0; i 0 ); PyMPI_CHKARG( subsizes[i] > 0 ); PyMPI_CHKARG( starts[i] >= 0 ); PyMPI_CHKARG( sizes[i] >= subsizes[i] ); PyMPI_CHKARG( starts[i] <= (sizes[i] - subsizes[i]) ); } PyMPI_CHKARG( (order==MPI_ORDER_C) || (order==MPI_ORDER_FORTRAN) ); ierr = MPI_Type_extent(oldtype, &extent); PyMPI_CHKERR(ierr); if (order == MPI_ORDER_FORTRAN) { if (ndims == 1) ierr = MPI_Type_contiguous(subsizes[0], oldtype, &tmp1); PyMPI_CHKERR(ierr); else { ierr = MPI_Type_vector(subsizes[1], subsizes[0], sizes[0], oldtype, &tmp1); PyMPI_CHKERR(ierr); size = sizes[0]*extent; for (i=2; i=0; i--) { size *= sizes[i+1]; ierr = MPI_Type_hvector(subsizes[i], 1, size, tmp1, &tmp2); PyMPI_CHKERR(ierr); ierr = MPI_Type_free(&tmp1); PyMPI_CHKERR(ierr); tmp1 = tmp2; } } /* add displacement and UB */ disps[1] = starts[ndims-1]; size = 1; for (i=ndims-2; i>=0; i--) { size *= sizes[i+1]; disps[1] += size*starts[i]; } } disps[1] *= extent; disps[2] = extent; for (i=0; i 0); PyMPI_CHKARG(blksize * nprocs >= global_size); } j = global_size - blksize*rank; mysize = PyMPI_MIN(blksize, j); if (mysize < 0) mysize = 0; stride = orig_extent; if (order == MPI_ORDER_FORTRAN) { if (dim == 0) { ierr = MPI_Type_contiguous(mysize, type_old, type_new); PyMPI_CHKERR(ierr); } else { for (i=0; idim; i--) stride *= gsizes[i]; ierr = MPI_Type_hvector(mysize, 1, stride, type_old, type_new); PyMPI_CHKERR(ierr); } } *offset = blksize * rank; if (mysize == 0) *offset = 0; ierr = MPI_SUCCESS; fn_exit: return ierr; } static int PyMPI_Type_cyclic(int *gsizes, int dim, int ndims, int nprocs, int rank, int darg, int order, MPI_Aint orig_extent, MPI_Datatype type_old, MPI_Datatype *type_new, MPI_Aint *offset) { int ierr, blksize, i, blklens[3], st_index, end_index, 
local_size, rem, count; MPI_Aint stride, disps[3]; MPI_Datatype type_tmp, types[3]; type_tmp = MPI_DATATYPE_NULL; types[0] = types[1] = types[2] = MPI_DATATYPE_NULL; if (darg == MPI_DISTRIBUTE_DFLT_DARG) blksize = 1; else blksize = darg; PyMPI_CHKARG(blksize > 0); st_index = rank*blksize; end_index = gsizes[dim] - 1; if (end_index < st_index) local_size = 0; else { local_size = ((end_index - st_index + 1)/(nprocs*blksize))*blksize; rem = (end_index - st_index + 1) % (nprocs*blksize); local_size += PyMPI_MIN(rem, blksize); } count = local_size/blksize; rem = local_size % blksize; stride = nprocs*blksize*orig_extent; if (order == MPI_ORDER_FORTRAN) for (i=0; idim; i--) stride *= gsizes[i]; ierr = MPI_Type_hvector(count, blksize, stride, type_old, type_new);PyMPI_CHKERR(ierr); /* if the last block is of size less than blksize, include it separately using MPI_Type_struct */ if (rem) { types[0] = *type_new; types[1] = type_old; disps[0] = 0; disps[1] = count*stride; blklens[0] = 1; blklens[1] = rem; ierr = MPI_Type_struct(2, blklens, disps, types, &type_tmp); PyMPI_CHKERR(ierr); ierr = MPI_Type_free(type_new); PyMPI_CHKERR(ierr); *type_new = type_tmp; } /* In the first iteration, we need to set the displacement in that dimension correctly. */ if ( ((order == MPI_ORDER_FORTRAN) && (dim == 0)) || ((order == MPI_ORDER_C) && (dim == ndims-1)) ) { types[0] = MPI_LB; disps[0] = 0; types[1] = *type_new; disps[1] = rank * blksize * orig_extent; types[2] = MPI_UB; disps[2] = orig_extent * gsizes[dim]; blklens[0] = blklens[1] = blklens[2] = 1; ierr = MPI_Type_struct(3, blklens, disps, types, &type_tmp); PyMPI_CHKERR(ierr); ierr = MPI_Type_free(type_new); PyMPI_CHKERR(ierr); *type_new = type_tmp; *offset = 0; } else { *offset = rank * blksize; } if (local_size == 0) *offset = 0; ierr = MPI_SUCCESS; fn_exit: return ierr; } static int PyMPI_Type_create_darray(int size, int rank, int ndims, int gsizes[], int distribs[], int dargs[], int psizes[], int order, MPI_Datatype oldtype, MPI_Datatype *newtype) { int ierr = MPI_SUCCESS, i; int procs, tmp_rank, tmp_size, blklens[3]; MPI_Aint orig_extent, disps[3]; MPI_Datatype type_old, type_new, types[3]; int *coords = 0; MPI_Aint *offsets = 0; orig_extent=0; type_old = type_new = MPI_DATATYPE_NULL;; types[0] = types[1] = types[2] = MPI_DATATYPE_NULL; ierr = MPI_Type_extent(oldtype, &orig_extent); PyMPI_CHKERR(ierr); PyMPI_CHKARG(rank >= 0); PyMPI_CHKARG(size > 0); PyMPI_CHKARG(ndims > 0); PyMPI_CHKARG(gsizes != 0); PyMPI_CHKARG(distribs != 0); PyMPI_CHKARG(dargs != 0); PyMPI_CHKARG(psizes != 0); PyMPI_CHKARG((order == MPI_ORDER_C) || (order == MPI_ORDER_FORTRAN)); for (i=0; ierr == MPI_SUCCESS && i < ndims; i++) { PyMPI_CHKARG(gsizes[1] > 0); PyMPI_CHKARG(psizes[1] > 0); PyMPI_CHKARG((distribs[i] == MPI_DISTRIBUTE_NONE) || (distribs[i] == MPI_DISTRIBUTE_BLOCK) || (distribs[i] == MPI_DISTRIBUTE_CYCLIC)); PyMPI_CHKARG((dargs[i] == MPI_DISTRIBUTE_DFLT_DARG) || (dargs[i] > 0)); PyMPI_CHKARG (!((distribs[i] == MPI_DISTRIBUTE_NONE) && (psizes[i] != 1))); } /* calculate position in Cartesian grid as MPI would (row-major ordering) */ coords = (int *) PyMPI_MALLOC(ndims*sizeof(int)); if (coords == 0) { ierr = MPI_ERR_INTERN; PyMPI_CHKERR(ierr); } offsets = (MPI_Aint *) PyMPI_MALLOC(ndims*sizeof(MPI_Aint)); if (offsets == 0) { ierr = MPI_ERR_INTERN; PyMPI_CHKERR(ierr); } procs = size; tmp_rank = rank; for (i=0; i=0; i--) { if (distribs[i] == MPI_DISTRIBUTE_BLOCK) { ierr = PyMPI_Type_block(gsizes, i, ndims, psizes[i], coords[i], dargs[i], order, orig_extent, type_old, 
&type_new, offsets+i); PyMPI_CHKERR(ierr); } else if (distribs[i] == MPI_DISTRIBUTE_CYCLIC) { ierr = PyMPI_Type_cyclic(gsizes, i, ndims, psizes[i], coords[i], dargs[i], order, orig_extent, type_old, &type_new, offsets+i); PyMPI_CHKERR(ierr); } else if (distribs[i] == MPI_DISTRIBUTE_NONE) { /* treat it as a block distribution on 1 process */ ierr = PyMPI_Type_block(gsizes, i, ndims, psizes[i], coords[i], MPI_DISTRIBUTE_DFLT_DARG, order, orig_extent, type_old, &type_new, offsets+i); PyMPI_CHKERR(ierr); } if (i != ndims-1) { ierr = MPI_Type_free(&type_old); PyMPI_CHKERR(ierr); } type_old = type_new; } /* add displacement and UB */ disps[1] = offsets[ndims-1]; tmp_size = 1; for (i=ndims-2; i>=0; i--) { tmp_size *= gsizes[i+1]; disps[1] += tmp_size*offsets[i]; } /* rest done below for both Fortran and C order */ } disps[1] *= orig_extent; disps[2] = orig_extent; for (i=0; i 0) p[n] = 0; #endif status->MPI_SOURCE = MPI_ANY_SOURCE; status->MPI_TAG = MPI_ANY_TAG; status->MPI_ERROR = MPI_SUCCESS; #ifdef PyMPI_HAVE_MPI_Status_set_elements (void)MPI_Status_set_elements(status, MPI_BYTE, 0); #endif #ifdef PyMPI_HAVE_MPI_Status_set_cancelled (void)MPI_Status_set_cancelled(status, 0); #endif } return MPI_SUCCESS; } #undef MPI_Request_get_status #define MPI_Request_get_status PyMPI_Request_get_status #endif #endif /* ---------------------------------------------------------------- */ #ifndef PyMPI_HAVE_MPI_Reduce_scatter_block static int PyMPI_Reduce_scatter_block(void *sendbuf, void *recvbuf, int recvcount, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) { int ierr = MPI_SUCCESS; int n = 1, *recvcounts = 0; ierr = MPI_Comm_size(comm, &n); if (ierr != MPI_SUCCESS) return ierr; recvcounts = (int *) PyMPI_MALLOC(n*sizeof(int)); if (!recvcounts) return MPI_ERR_INTERN; while (n-- > 0) recvcounts[n] = recvcount; ierr = MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm); PyMPI_FREE(recvcounts); return ierr; } #undef MPI_Reduce_scatter_block #define MPI_Reduce_scatter_block PyMPI_Reduce_scatter_block #endif /* ---------------------------------------------------------------- */ /* Communicator Info */ #ifndef PyMPI_HAVE_MPI_Comm_dup_with_info static int PyMPI_Comm_dup_with_info(MPI_Comm comm, MPI_Info info, MPI_Comm *newcomm) { int dummy, ierr; if (info != MPI_INFO_NULL) { ierr = MPI_Info_get_nkeys(info, &dummy); if (ierr != MPI_SUCCESS) return ierr; } return MPI_Comm_dup(comm, newcomm); } #undef MPI_Comm_dup_with_info #define MPI_Comm_dup_with_info PyMPI_Comm_dup_with_info #endif #ifndef PyMPI_HAVE_MPI_Comm_set_info static int PyMPI_Comm_set_info(MPI_Comm comm, MPI_Info info) { int dummy, ierr; ierr = MPI_Comm_size(comm, &dummy); if (ierr != MPI_SUCCESS) return ierr; if (info != MPI_INFO_NULL) { ierr = MPI_Info_get_nkeys(info, &dummy); if (ierr != MPI_SUCCESS) return ierr; } return MPI_SUCCESS; } #undef MPI_Comm_set_info #define MPI_Comm_set_info PyMPI_Comm_set_info #endif #ifndef PyMPI_HAVE_MPI_Comm_get_info static int PyMPI_Comm_get_info(MPI_Comm comm, MPI_Info *info) { int dummy, ierr; ierr = MPI_Comm_size(comm, &dummy); if (ierr != MPI_SUCCESS) return ierr; return MPI_Info_create(info); } #undef MPI_Comm_get_info #define MPI_Comm_get_info PyMPI_Comm_get_info #endif /* ---------------------------------------------------------------- */ /* Memory Allocation */ #if !defined(PyMPI_HAVE_MPI_Alloc_mem) || \ !defined(PyMPI_HAVE_MPI_Free_mem) static int PyMPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr) { char *buf = 0, **basebuf = 0; if (size < 0) return MPI_ERR_ARG; if (!baseptr) 
return MPI_ERR_ARG; if (size == 0) size = 1; buf = (char *) PyMPI_MALLOC(size); if (!buf) return MPI_ERR_NO_MEM; basebuf = (char **) baseptr; *basebuf = buf; return MPI_SUCCESS; } #undef MPI_Alloc_mem #define MPI_Alloc_mem PyMPI_Alloc_mem static int PyMPI_Free_mem(void *baseptr) { if (!baseptr) return MPI_ERR_ARG; PyMPI_FREE(baseptr); return MPI_SUCCESS; } #undef MPI_Free_mem #define MPI_Free_mem PyMPI_Free_mem #endif /* ---------------------------------------------------------------- */ #ifndef PyMPI_HAVE_MPI_Win_allocate #ifdef PyMPI_HAVE_MPI_Win_create static int PyMPI_KEYVAL_WIN_MPIMEM = MPI_KEYVAL_INVALID; static int MPIAPI PyMPI_win_free_mpimem(PyMPI_UNUSED MPI_Win win, PyMPI_UNUSED int k, void *v, PyMPI_UNUSED void *xs) { return MPI_Free_mem(v); } static int MPIAPI PyMPI_free_keyval_win(PyMPI_UNUSED MPI_Comm comm, PyMPI_UNUSED int k, void *v, PyMPI_UNUSED void *xs) { int ierr = MPI_SUCCESS; ierr = MPI_Win_free_keyval((int *)v); if (ierr != MPI_SUCCESS) return ierr; ierr = MPI_Comm_free_keyval(&k); if (ierr != MPI_SUCCESS) return ierr; return MPI_SUCCESS; } static int PyMPI_Win_allocate(MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, void *baseptr_, MPI_Win *win_) { int ierr = MPI_SUCCESS; void *baseptr = MPI_BOTTOM; MPI_Win win = MPI_WIN_NULL; ierr = MPI_Alloc_mem(size?size:1, info, &baseptr); if (ierr != MPI_SUCCESS) goto error; ierr = MPI_Win_create(baseptr, size, disp_unit, info, comm, &win); if (ierr != MPI_SUCCESS) goto error; #if defined(PyMPI_HAVE_MPI_Win_create_keyval) && \ defined(PyMPI_HAVE_MPI_Win_set_attr) if (PyMPI_KEYVAL_WIN_MPIMEM == MPI_KEYVAL_INVALID) { int comm_keyval = MPI_KEYVAL_INVALID; ierr = MPI_Win_create_keyval(MPI_WIN_NULL_COPY_FN, PyMPI_win_free_mpimem, &PyMPI_KEYVAL_WIN_MPIMEM, NULL); if (ierr != MPI_SUCCESS) goto error; ierr = MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, PyMPI_free_keyval_win, &comm_keyval, NULL); if (ierr == MPI_SUCCESS) (void)MPI_Comm_set_attr(MPI_COMM_SELF, comm_keyval, &PyMPI_KEYVAL_WIN_MPIMEM); } ierr = MPI_Win_set_attr(win, PyMPI_KEYVAL_WIN_MPIMEM, baseptr); if (ierr != MPI_SUCCESS) goto error; #endif *((void**)baseptr_) = baseptr; *win_ = win; return MPI_SUCCESS; error: if (baseptr != MPI_BOTTOM) (void)MPI_Free_mem(baseptr); if (win != MPI_WIN_NULL) (void)MPI_Win_free(&win); return ierr; } #undef MPI_Win_allocate #define MPI_Win_allocate PyMPI_Win_allocate #endif #endif /* ---------------------------------------------------------------- */ #endif /* !PyMPI_FALLBACK_H */ /* Local variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/src/include/mpi4py/MPI.pxd0000644000000000000000000000663412211706251020121 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com # -- from mpi4py.libmpi cimport MPI_Aint from mpi4py.libmpi cimport MPI_Offset from mpi4py.libmpi cimport MPI_Count from mpi4py.libmpi cimport MPI_Status from mpi4py.libmpi cimport MPI_Datatype from mpi4py.libmpi cimport MPI_Request from mpi4py.libmpi cimport MPI_Message from mpi4py.libmpi cimport MPI_Op from mpi4py.libmpi cimport MPI_Group from mpi4py.libmpi cimport MPI_Info from mpi4py.libmpi cimport MPI_Errhandler from mpi4py.libmpi cimport MPI_Comm from mpi4py.libmpi cimport MPI_Win from mpi4py.libmpi cimport MPI_File # -- cdef import from *: ctypedef MPI_Aint Aint "MPI_Aint" ctypedef MPI_Offset Offset "MPI_Offset" ctypedef MPI_Count Count "MPI_Count" ctypedef public api class Status [ type PyMPIStatus_Type, object PyMPIStatusObject, ]: cdef MPI_Status ob_mpi cdef unsigned flags ctypedef public api class 
Datatype [ type PyMPIDatatype_Type, object PyMPIDatatypeObject, ]: cdef MPI_Datatype ob_mpi cdef unsigned flags ctypedef public api class Request [ type PyMPIRequest_Type, object PyMPIRequestObject, ]: cdef MPI_Request ob_mpi cdef unsigned flags cdef object ob_buf ctypedef public api class Prequest(Request) [ type PyMPIPrequest_Type, object PyMPIPrequestObject, ]: pass ctypedef public api class Grequest(Request) [ type PyMPIGrequest_Type, object PyMPIGrequestObject, ]: cdef MPI_Request ob_grequest ctypedef public api class Message [ type PyMPIMessage_Type, object PyMPIMessageObject, ]: cdef MPI_Message ob_mpi cdef unsigned flags cdef object ob_buf ctypedef public api class Op [ type PyMPIOp_Type, object PyMPIOpObject, ]: cdef MPI_Op ob_mpi cdef int flags cdef object (*ob_func)(object, object) cdef int ob_usrid ctypedef public api class Group [ type PyMPIGroup_Type, object PyMPIGroupObject, ]: cdef MPI_Group ob_mpi cdef unsigned flags ctypedef public api class Info [ type PyMPIInfo_Type, object PyMPIInfoObject, ]: cdef MPI_Info ob_mpi cdef unsigned flags ctypedef public api class Errhandler [ type PyMPIErrhandler_Type, object PyMPIErrhandlerObject, ]: cdef MPI_Errhandler ob_mpi cdef unsigned flags ctypedef public api class Comm [ type PyMPIComm_Type, object PyMPICommObject, ]: cdef MPI_Comm ob_mpi cdef unsigned flags ctypedef public api class Intracomm(Comm) [ type PyMPIIntracomm_Type, object PyMPIIntracommObject, ]: pass ctypedef public api class Cartcomm(Intracomm) [ type PyMPICartcomm_Type, object PyMPICartcommObject, ]: pass ctypedef public api class Graphcomm(Intracomm) [ type PyMPIGraphcomm_Type, object PyMPIGraphcommObject, ]: pass ctypedef public api class Distgraphcomm(Intracomm) [ type PyMPIDistgraphcomm_Type, object PyMPIDistgraphcommObject, ]: pass ctypedef public api class Intercomm(Comm) [ type PyMPIIntercomm_Type, object PyMPIIntercommObject, ]: pass ctypedef public api class Win [ type PyMPIWin_Type, object PyMPIWinObject, ]: cdef MPI_Win ob_mpi cdef unsigned flags ctypedef public api class File [ type PyMPIFile_Type, object PyMPIFileObject, ]: cdef MPI_File ob_mpi cdef unsigned flags # -- mpi4py_1.3.1+hg20131106.orig/src/include/mpi4py/__init__.pxd0000644000000000000000000000007012211706251021217 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com mpi4py_1.3.1+hg20131106.orig/src/include/mpi4py/__init__.pyx0000644000000000000000000000007012211706251021244 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com mpi4py_1.3.1+hg20131106.orig/src/include/mpi4py/libmpi.pxd0000644000000000000000000012550212211706251020744 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com cdef import from "mpi.h" nogil: #----------------------------------------------------------------- ctypedef long MPI_Aint ctypedef long long MPI_Offset #:= long ctypedef long long MPI_Count #:= MPI_Offset ctypedef struct MPI_Status: int MPI_SOURCE int MPI_TAG int MPI_ERROR ctypedef struct _mpi_datatype_t ctypedef _mpi_datatype_t* MPI_Datatype ctypedef struct _mpi_request_t ctypedef _mpi_request_t* MPI_Request ctypedef struct _mpi_message_t ctypedef _mpi_message_t* MPI_Message ctypedef struct _mpi_op_t ctypedef _mpi_op_t* MPI_Op ctypedef struct _mpi_group_t ctypedef _mpi_group_t* MPI_Group ctypedef struct _mpi_info_t ctypedef _mpi_info_t* MPI_Info ctypedef struct _mpi_comm_t ctypedef _mpi_comm_t* MPI_Comm ctypedef struct _mpi_win_t ctypedef _mpi_win_t* MPI_Win ctypedef struct _mpi_file_t ctypedef _mpi_file_t* MPI_File ctypedef struct 
_mpi_errhandler_t ctypedef _mpi_errhandler_t* MPI_Errhandler #----------------------------------------------------------------- enum: MPI_UNDEFINED #:= -32766 enum: MPI_ANY_SOURCE #:= MPI_UNDEFINED enum: MPI_ANY_TAG #:= MPI_UNDEFINED enum: MPI_PROC_NULL #:= MPI_UNDEFINED enum: MPI_ROOT #:= MPI_PROC_NULL enum: MPI_IDENT #:= 1 enum: MPI_CONGRUENT #:= 2 enum: MPI_SIMILAR #:= 3 enum: MPI_UNEQUAL #:= 4 void* MPI_BOTTOM #:= 0 void* MPI_IN_PLACE #:= 0 enum: MPI_KEYVAL_INVALID #:= 0 enum: MPI_MAX_OBJECT_NAME #:= 1 #----------------------------------------------------------------- # Null datatype MPI_Datatype MPI_DATATYPE_NULL #:= 0 # MPI datatypes MPI_Datatype MPI_PACKED #:= MPI_DATATYPE_NULL MPI_Datatype MPI_BYTE #:= MPI_DATATYPE_NULL MPI_Datatype MPI_AINT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_OFFSET #:= MPI_DATATYPE_NULL MPI_Datatype MPI_COUNT #:= MPI_DATATYPE_NULL # Elementary C datatypes MPI_Datatype MPI_CHAR #:= MPI_DATATYPE_NULL MPI_Datatype MPI_WCHAR #:= MPI_DATATYPE_NULL MPI_Datatype MPI_SIGNED_CHAR #:= MPI_DATATYPE_NULL MPI_Datatype MPI_SHORT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LONG #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LONG_LONG #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LONG_LONG_INT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_UNSIGNED_CHAR #:= MPI_DATATYPE_NULL MPI_Datatype MPI_UNSIGNED_SHORT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_UNSIGNED #:= MPI_DATATYPE_NULL MPI_Datatype MPI_UNSIGNED_LONG #:= MPI_DATATYPE_NULL MPI_Datatype MPI_UNSIGNED_LONG_LONG #:= MPI_DATATYPE_NULL MPI_Datatype MPI_FLOAT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_DOUBLE #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LONG_DOUBLE #:= MPI_DATATYPE_NULL # C99 datatypes MPI_Datatype MPI_C_BOOL #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INT8_T #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INT16_T #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INT32_T #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INT64_T #:= MPI_DATATYPE_NULL MPI_Datatype MPI_UINT8_T #:= MPI_DATATYPE_NULL MPI_Datatype MPI_UINT16_T #:= MPI_DATATYPE_NULL MPI_Datatype MPI_UINT32_T #:= MPI_DATATYPE_NULL MPI_Datatype MPI_UINT64_T #:= MPI_DATATYPE_NULL MPI_Datatype MPI_C_COMPLEX #:= MPI_DATATYPE_NULL MPI_Datatype MPI_C_FLOAT_COMPLEX #:= MPI_DATATYPE_NULL MPI_Datatype MPI_C_DOUBLE_COMPLEX #:= MPI_DATATYPE_NULL MPI_Datatype MPI_C_LONG_DOUBLE_COMPLEX #:= MPI_DATATYPE_NULL # C datatypes for reduction operations MPI_Datatype MPI_SHORT_INT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_2INT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LONG_INT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_FLOAT_INT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_DOUBLE_INT #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LONG_DOUBLE_INT #:= MPI_DATATYPE_NULL # Elementary Fortran datatypes MPI_Datatype MPI_CHARACTER #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LOGICAL #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INTEGER #:= MPI_DATATYPE_NULL MPI_Datatype MPI_REAL #:= MPI_DATATYPE_NULL MPI_Datatype MPI_DOUBLE_PRECISION #:= MPI_DATATYPE_NULL MPI_Datatype MPI_COMPLEX #:= MPI_DATATYPE_NULL MPI_Datatype MPI_DOUBLE_COMPLEX #:= MPI_DATATYPE_NULL # Size-specific Fortran datatypes MPI_Datatype MPI_LOGICAL1 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LOGICAL2 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LOGICAL4 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LOGICAL8 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INTEGER1 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INTEGER2 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INTEGER4 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INTEGER8 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_INTEGER16 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_REAL2 #:= 
MPI_DATATYPE_NULL MPI_Datatype MPI_REAL4 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_REAL8 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_REAL16 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_COMPLEX4 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_COMPLEX8 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_COMPLEX16 #:= MPI_DATATYPE_NULL MPI_Datatype MPI_COMPLEX32 #:= MPI_DATATYPE_NULL # Deprecated since MPI-2, removed in MPI-3 MPI_Datatype MPI_UB #:= MPI_DATATYPE_NULL MPI_Datatype MPI_LB #:= MPI_DATATYPE_NULL int MPI_Type_lb(MPI_Datatype, MPI_Aint*) int MPI_Type_ub(MPI_Datatype, MPI_Aint*) int MPI_Type_extent(MPI_Datatype, MPI_Aint*) int MPI_Address(void*, MPI_Aint*) int MPI_Type_hvector(int, int, MPI_Aint, MPI_Datatype, MPI_Datatype*) int MPI_Type_hindexed(int, int[], MPI_Aint[], MPI_Datatype, MPI_Datatype*) int MPI_Type_struct(int, int[], MPI_Aint[], MPI_Datatype[], MPI_Datatype*) int MPI_Type_dup(MPI_Datatype, MPI_Datatype*) int MPI_Type_contiguous(int, MPI_Datatype, MPI_Datatype*) int MPI_Type_vector(int, int, int, MPI_Datatype, MPI_Datatype*) int MPI_Type_indexed(int, int[], int[], MPI_Datatype, MPI_Datatype*) int MPI_Type_create_indexed_block(int, int, int[], MPI_Datatype, MPI_Datatype*) enum: MPI_ORDER_C #:= 0 enum: MPI_ORDER_FORTRAN #:= 1 int MPI_Type_create_subarray(int, int[], int[], int[], int, MPI_Datatype, MPI_Datatype*) enum: MPI_DISTRIBUTE_NONE #:= 0 enum: MPI_DISTRIBUTE_BLOCK #:= 1 enum: MPI_DISTRIBUTE_CYCLIC #:= 2 enum: MPI_DISTRIBUTE_DFLT_DARG #:= 4 int MPI_Type_create_darray(int, int, int, int[], int[], int[], int[], int, MPI_Datatype, MPI_Datatype*) int MPI_Get_address(void*, MPI_Aint*) #:= MPI_Address int MPI_Type_create_hvector(int, int, MPI_Aint, MPI_Datatype, MPI_Datatype*) #:= MPI_Type_hvector int MPI_Type_create_hindexed(int, int[], MPI_Aint[], MPI_Datatype, MPI_Datatype*) #:= MPI_Type_hindexed int MPI_Type_create_hindexed_block(int, int, MPI_Aint[], MPI_Datatype, MPI_Datatype*) int MPI_Type_create_struct(int, int[], MPI_Aint[], MPI_Datatype[], MPI_Datatype*) #:= MPI_Type_struct int MPI_Type_create_resized(MPI_Datatype, MPI_Aint, MPI_Aint, MPI_Datatype*) int MPI_Type_size(MPI_Datatype, int*) int MPI_Type_size_x(MPI_Datatype, MPI_Count*) int MPI_Type_get_extent(MPI_Datatype, MPI_Aint*, MPI_Aint*) int MPI_Type_get_extent_x(MPI_Datatype, MPI_Count*, MPI_Count*) int MPI_Type_get_true_extent(MPI_Datatype, MPI_Aint*, MPI_Aint*) int MPI_Type_get_true_extent_x(MPI_Datatype, MPI_Count*, MPI_Count*) int MPI_Type_create_f90_integer(int, MPI_Datatype*) int MPI_Type_create_f90_real(int, int, MPI_Datatype*) int MPI_Type_create_f90_complex(int, int, MPI_Datatype*) enum: MPI_TYPECLASS_INTEGER #:= MPI_UNDEFINED enum: MPI_TYPECLASS_REAL #:= MPI_UNDEFINED enum: MPI_TYPECLASS_COMPLEX #:= MPI_UNDEFINED int MPI_Type_match_size(int, int, MPI_Datatype*) int MPI_Type_commit(MPI_Datatype*) int MPI_Type_free(MPI_Datatype*) int MPI_Pack(void*, int, MPI_Datatype, void*, int, int*, MPI_Comm) int MPI_Unpack(void*, int, int*, void*, int, MPI_Datatype, MPI_Comm) int MPI_Pack_size(int, MPI_Datatype, MPI_Comm, int*) int MPI_Pack_external(char[], void*, int, MPI_Datatype, void*, MPI_Aint, MPI_Aint*) int MPI_Unpack_external(char[], void*, MPI_Aint, MPI_Aint*, void*, int, MPI_Datatype) int MPI_Pack_external_size(char[], int, MPI_Datatype, MPI_Aint*) enum: MPI_COMBINER_NAMED #:= MPI_UNDEFINED enum: MPI_COMBINER_DUP #:= MPI_UNDEFINED enum: MPI_COMBINER_CONTIGUOUS #:= MPI_UNDEFINED enum: MPI_COMBINER_VECTOR #:= MPI_UNDEFINED enum: MPI_COMBINER_HVECTOR #:= MPI_UNDEFINED enum: MPI_COMBINER_HVECTOR_INTEGER #:= MPI_UNDEFINED enum: 
MPI_COMBINER_INDEXED #:= MPI_UNDEFINED enum: MPI_COMBINER_HINDEXED #:= MPI_UNDEFINED enum: MPI_COMBINER_HINDEXED_INTEGER #:= MPI_UNDEFINED enum: MPI_COMBINER_INDEXED_BLOCK #:= MPI_UNDEFINED enum: MPI_COMBINER_HINDEXED_BLOCK #:= MPI_UNDEFINED enum: MPI_COMBINER_STRUCT #:= MPI_UNDEFINED enum: MPI_COMBINER_STRUCT_INTEGER #:= MPI_UNDEFINED enum: MPI_COMBINER_SUBARRAY #:= MPI_UNDEFINED enum: MPI_COMBINER_DARRAY #:= MPI_UNDEFINED enum: MPI_COMBINER_F90_REAL #:= MPI_UNDEFINED enum: MPI_COMBINER_F90_COMPLEX #:= MPI_UNDEFINED enum: MPI_COMBINER_F90_INTEGER #:= MPI_UNDEFINED enum: MPI_COMBINER_RESIZED #:= MPI_UNDEFINED int MPI_Type_get_envelope(MPI_Datatype, int*, int*, int*, int*) int MPI_Type_get_contents(MPI_Datatype, int, int, int, int[], MPI_Aint[], MPI_Datatype[]) int MPI_Type_get_name(MPI_Datatype, char[], int*) int MPI_Type_set_name(MPI_Datatype, char[]) int MPI_Type_get_attr(MPI_Datatype, int, void*, int*) int MPI_Type_set_attr(MPI_Datatype, int, void*) int MPI_Type_delete_attr(MPI_Datatype, int) ctypedef int MPI_Type_copy_attr_function(MPI_Datatype,int,void*,void*,void*,int*) ctypedef int MPI_Type_delete_attr_function(MPI_Datatype,int,void*,void*) MPI_Type_copy_attr_function* MPI_TYPE_NULL_COPY_FN #:= 0 MPI_Type_copy_attr_function* MPI_TYPE_DUP_FN #:= 0 MPI_Type_delete_attr_function* MPI_TYPE_NULL_DELETE_FN #:= 0 int MPI_Type_create_keyval(MPI_Type_copy_attr_function*, MPI_Type_delete_attr_function*, int*, void*) int MPI_Type_free_keyval(int*) #----------------------------------------------------------------- MPI_Status* MPI_STATUS_IGNORE #:= 0 MPI_Status* MPI_STATUSES_IGNORE #:= 0 int MPI_Get_count(MPI_Status*, MPI_Datatype, int*) int MPI_Get_elements(MPI_Status*, MPI_Datatype, int*) int MPI_Get_elements_x(MPI_Status*, MPI_Datatype, MPI_Count*) int MPI_Status_set_elements(MPI_Status*, MPI_Datatype, int) int MPI_Status_set_elements_x(MPI_Status*, MPI_Datatype, MPI_Count) int MPI_Test_cancelled(MPI_Status*, int*) int MPI_Status_set_cancelled(MPI_Status*, int) #----------------------------------------------------------------- MPI_Request MPI_REQUEST_NULL #:= 0 int MPI_Request_free(MPI_Request*) int MPI_Wait(MPI_Request*, MPI_Status*) int MPI_Test(MPI_Request*, int*, MPI_Status*) int MPI_Request_get_status(MPI_Request, int*, MPI_Status*) int MPI_Cancel(MPI_Request*) int MPI_Waitany(int, MPI_Request[], int*, MPI_Status*) int MPI_Testany(int, MPI_Request[], int*, int*, MPI_Status*) int MPI_Waitall(int, MPI_Request[], MPI_Status[]) int MPI_Testall(int, MPI_Request[], int*, MPI_Status[]) int MPI_Waitsome(int, MPI_Request[], int*, int[], MPI_Status[]) int MPI_Testsome(int, MPI_Request[], int*, int[], MPI_Status[]) int MPI_Start(MPI_Request*) int MPI_Startall(int, MPI_Request*) ctypedef int MPI_Grequest_cancel_function(void*,int) ctypedef int MPI_Grequest_free_function(void*) ctypedef int MPI_Grequest_query_function(void*,MPI_Status*) int MPI_Grequest_start(MPI_Grequest_query_function*, MPI_Grequest_free_function*, MPI_Grequest_cancel_function*, void*, MPI_Request*) int MPI_Grequest_complete(MPI_Request) #----------------------------------------------------------------- MPI_Op MPI_OP_NULL #:= 0 MPI_Op MPI_MAX #:= MPI_OP_NULL MPI_Op MPI_MIN #:= MPI_OP_NULL MPI_Op MPI_SUM #:= MPI_OP_NULL MPI_Op MPI_PROD #:= MPI_OP_NULL MPI_Op MPI_LAND #:= MPI_OP_NULL MPI_Op MPI_BAND #:= MPI_OP_NULL MPI_Op MPI_LOR #:= MPI_OP_NULL MPI_Op MPI_BOR #:= MPI_OP_NULL MPI_Op MPI_LXOR #:= MPI_OP_NULL MPI_Op MPI_BXOR #:= MPI_OP_NULL MPI_Op MPI_MAXLOC #:= MPI_OP_NULL MPI_Op MPI_MINLOC #:= MPI_OP_NULL MPI_Op MPI_REPLACE #:= 
MPI_OP_NULL MPI_Op MPI_NO_OP #:= MPI_OP_NULL int MPI_Op_free(MPI_Op*) ctypedef void MPI_User_function(void*, void*, int*, MPI_Datatype*) int MPI_Op_create(MPI_User_function*, int, MPI_Op*) int MPI_Op_commutative(MPI_Op, int*) #----------------------------------------------------------------- MPI_Info MPI_INFO_NULL #:= 0 MPI_Info MPI_INFO_ENV #:= MPI_INFO_NULL int MPI_Info_free(MPI_Info*) int MPI_Info_create(MPI_Info*) int MPI_Info_dup(MPI_Info, MPI_Info*) enum: MPI_MAX_INFO_KEY #:= 1 enum: MPI_MAX_INFO_VAL #:= 1 int MPI_Info_get(MPI_Info, char[], int, char[],int*) int MPI_Info_set(MPI_Info, char[], char[]) int MPI_Info_delete(MPI_Info, char[]) int MPI_Info_get_nkeys(MPI_Info, int*) int MPI_Info_get_nthkey(MPI_Info, int, char[]) int MPI_Info_get_valuelen(MPI_Info, char[], int*, int*) #----------------------------------------------------------------- MPI_Group MPI_GROUP_NULL #:= 0 MPI_Group MPI_GROUP_EMPTY #:= 1 int MPI_Group_free(MPI_Group*) int MPI_Group_size(MPI_Group, int*) int MPI_Group_rank(MPI_Group, int*) int MPI_Group_translate_ranks(MPI_Group, int, int[], MPI_Group, int[]) int MPI_Group_compare(MPI_Group, MPI_Group, int*) int MPI_Group_union(MPI_Group, MPI_Group, MPI_Group*) int MPI_Group_intersection(MPI_Group, MPI_Group, MPI_Group*) int MPI_Group_difference(MPI_Group, MPI_Group, MPI_Group*) int MPI_Group_incl(MPI_Group, int, int[], MPI_Group*) int MPI_Group_excl(MPI_Group, int, int[], MPI_Group*) int MPI_Group_range_incl(MPI_Group, int, int[][3], MPI_Group*) int MPI_Group_range_excl(MPI_Group, int, int[][3], MPI_Group*) #----------------------------------------------------------------- MPI_Comm MPI_COMM_NULL #:= 0 MPI_Comm MPI_COMM_SELF #:= MPI_COMM_NULL MPI_Comm MPI_COMM_WORLD #:= MPI_COMM_NULL int MPI_Comm_free(MPI_Comm*) int MPI_Comm_group(MPI_Comm, MPI_Group*) int MPI_Comm_size(MPI_Comm, int*) int MPI_Comm_rank(MPI_Comm, int*) int MPI_Comm_compare(MPI_Comm, MPI_Comm, int*) int MPI_Topo_test(MPI_Comm, int*) int MPI_Comm_test_inter(MPI_Comm, int*) int MPI_Abort(MPI_Comm, int) int MPI_Send(void*, int, MPI_Datatype, int, int, MPI_Comm) int MPI_Recv(void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Status*) int MPI_Sendrecv(void*, int, MPI_Datatype,int, int, void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Status*) int MPI_Sendrecv_replace(void*, int, MPI_Datatype, int, int, int, int, MPI_Comm, MPI_Status*) enum: MPI_BSEND_OVERHEAD #:= 0 int MPI_Buffer_attach(void*, int) int MPI_Buffer_detach(void*, int*) int MPI_Bsend(void*, int, MPI_Datatype, int, int, MPI_Comm) int MPI_Ssend(void*, int, MPI_Datatype, int, int, MPI_Comm) int MPI_Rsend(void*, int, MPI_Datatype, int, int, MPI_Comm) int MPI_Isend(void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Request*) int MPI_Ibsend(void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Request*) int MPI_Issend(void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Request*) int MPI_Irsend(void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Request*) int MPI_Irecv(void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Request*) int MPI_Send_init(void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Request*) int MPI_Bsend_init(void*, int, MPI_Datatype, int,int, MPI_Comm, MPI_Request*) int MPI_Ssend_init(void*, int, MPI_Datatype, int,int, MPI_Comm, MPI_Request*) int MPI_Rsend_init(void*, int, MPI_Datatype, int,int, MPI_Comm, MPI_Request*) int MPI_Recv_init(void*, int, MPI_Datatype, int,int, MPI_Comm, MPI_Request*) int MPI_Probe(int, int, MPI_Comm, MPI_Status*) int MPI_Iprobe(int, int, MPI_Comm, int*, MPI_Status*) MPI_Message MPI_MESSAGE_NULL #:= 0 MPI_Message 
MPI_MESSAGE_NO_PROC #:= MPI_MESSAGE_NULL int MPI_Mprobe(int, int, MPI_Comm, MPI_Message*, MPI_Status*) int MPI_Improbe(int, int, MPI_Comm, int*, MPI_Message*, MPI_Status*) int MPI_Mrecv(void*, int, MPI_Datatype, MPI_Message*, MPI_Status*) int MPI_Imrecv(void*, int, MPI_Datatype, MPI_Message*, MPI_Request*) int MPI_Barrier(MPI_Comm) int MPI_Bcast(void*, int, MPI_Datatype, int, MPI_Comm) int MPI_Gather(void*, int, MPI_Datatype, void*, int, MPI_Datatype, int, MPI_Comm) int MPI_Gatherv(void*, int, MPI_Datatype, void*, int*, int*, MPI_Datatype, int, MPI_Comm) int MPI_Scatter(void*, int, MPI_Datatype, void*, int, MPI_Datatype, int, MPI_Comm) int MPI_Scatterv(void*, int*, int*, MPI_Datatype, void*, int, MPI_Datatype, int, MPI_Comm) int MPI_Allgather(void*, int, MPI_Datatype, void*, int, MPI_Datatype, MPI_Comm) int MPI_Allgatherv(void*, int, MPI_Datatype, void*, int*, int*, MPI_Datatype, MPI_Comm) int MPI_Alltoall(void*, int, MPI_Datatype, void*, int, MPI_Datatype, MPI_Comm) int MPI_Alltoallv(void*, int*, int*, MPI_Datatype, void*, int*, int*, MPI_Datatype, MPI_Comm) int MPI_Alltoallw(void*, int*, int*, MPI_Datatype*, void*, int*, int*, MPI_Datatype*, MPI_Comm) int MPI_Reduce(void*, void*, int, MPI_Datatype, MPI_Op, int, MPI_Comm) int MPI_Allreduce(void*, void*, int, MPI_Datatype, MPI_Op, MPI_Comm) int MPI_Reduce_local(void*, void*, int, MPI_Datatype, MPI_Op) int MPI_Reduce_scatter_block(void*, void*, int, MPI_Datatype, MPI_Op, MPI_Comm) int MPI_Reduce_scatter(void*, void*, int*, MPI_Datatype, MPI_Op, MPI_Comm) int MPI_Scan(void*, void*, int, MPI_Datatype, MPI_Op, MPI_Comm) int MPI_Exscan(void*, void*, int, MPI_Datatype, MPI_Op, MPI_Comm) int MPI_Neighbor_allgather(void*, int, MPI_Datatype, void*, int, MPI_Datatype, MPI_Comm) int MPI_Neighbor_allgatherv(void*, int, MPI_Datatype, void*, int[], int[], MPI_Datatype, MPI_Comm) int MPI_Neighbor_alltoall(void*, int, MPI_Datatype, void*, int, MPI_Datatype, MPI_Comm) int MPI_Neighbor_alltoallv(void*, int[], int[],MPI_Datatype, void*, int[],int[], MPI_Datatype, MPI_Comm) int MPI_Neighbor_alltoallw(void *, int[], MPI_Aint[],MPI_Datatype[], void*, int[],MPI_Aint[], MPI_Datatype[], MPI_Comm) int MPI_Ibarrier(MPI_Comm, MPI_Request*) int MPI_Ibcast(void*, int, MPI_Datatype, int, MPI_Comm, MPI_Request*) int MPI_Igather(void*, int, MPI_Datatype, void*, int, MPI_Datatype, int, MPI_Comm, MPI_Request*) int MPI_Igatherv(void*, int, MPI_Datatype, void*, int*, int*, MPI_Datatype, int, MPI_Comm, MPI_Request*) int MPI_Iscatter(void*, int, MPI_Datatype, void*, int, MPI_Datatype, int, MPI_Comm, MPI_Request*) int MPI_Iscatterv(void*, int*, int*, MPI_Datatype, void*, int, MPI_Datatype, int, MPI_Comm, MPI_Request*) int MPI_Iallgather(void*, int, MPI_Datatype, void*, int, MPI_Datatype, MPI_Comm, MPI_Request*) int MPI_Iallgatherv(void*, int, MPI_Datatype, void*, int*, int*, MPI_Datatype, MPI_Comm, MPI_Request*) int MPI_Ialltoall(void*, int, MPI_Datatype, void*, int, MPI_Datatype, MPI_Comm, MPI_Request*) int MPI_Ialltoallv(void*, int*, int*, MPI_Datatype, void*, int*, int*, MPI_Datatype, MPI_Comm, MPI_Request*) int MPI_Ialltoallw(void*, int*, int*, MPI_Datatype*, void*, int*, int*, MPI_Datatype*, MPI_Comm, MPI_Request*) int MPI_Ireduce(void*, void*, int, MPI_Datatype, MPI_Op, int, MPI_Comm, MPI_Request*) int MPI_Iallreduce(void*, void*, int, MPI_Datatype, MPI_Op, MPI_Comm, MPI_Request*) int MPI_Ireduce_scatter_block(void*, void*, int, MPI_Datatype, MPI_Op, MPI_Comm, MPI_Request*) int MPI_Ireduce_scatter(void*, void*, int*, MPI_Datatype, MPI_Op, MPI_Comm, MPI_Request*) int 
MPI_Iscan(void*, void*, int, MPI_Datatype, MPI_Op, MPI_Comm, MPI_Request*) int MPI_Iexscan(void*, void*, int, MPI_Datatype, MPI_Op, MPI_Comm, MPI_Request*) int MPI_Ineighbor_allgather(void*, int, MPI_Datatype, void*, int, MPI_Datatype, MPI_Comm, MPI_Request*) int MPI_Ineighbor_allgatherv(void*, int, MPI_Datatype, void*, int[], int[], MPI_Datatype, MPI_Comm, MPI_Request*) int MPI_Ineighbor_alltoall(void*, int, MPI_Datatype, void*, int, MPI_Datatype, MPI_Comm, MPI_Request*) int MPI_Ineighbor_alltoallv(void*, int[], int[],MPI_Datatype, void*, int[],int[], MPI_Datatype, MPI_Comm, MPI_Request*) int MPI_Ineighbor_alltoallw(void *, int[], MPI_Aint[],MPI_Datatype[], void*, int[],MPI_Aint[], MPI_Datatype[], MPI_Comm, MPI_Request*) int MPI_Comm_dup(MPI_Comm, MPI_Comm*) int MPI_Comm_dup_with_info(MPI_Comm, MPI_Info, MPI_Comm*) int MPI_Comm_idup(MPI_Comm, MPI_Comm*, MPI_Request*) int MPI_Comm_create(MPI_Comm, MPI_Group, MPI_Comm*) int MPI_Comm_create_group(MPI_Comm, MPI_Group, int, MPI_Comm*) int MPI_Comm_split(MPI_Comm, int, int, MPI_Comm*) enum: MPI_COMM_TYPE_SHARED #:= MPI_UNDEFINED int MPI_Comm_split_type(MPI_Comm, int, int, MPI_Info, MPI_Comm*) int MPI_Comm_set_info(MPI_Comm, MPI_Info) int MPI_Comm_get_info(MPI_Comm, MPI_Info*) enum: MPI_CART #:= MPI_UNDEFINED int MPI_Cart_create(MPI_Comm, int, int[], int[], int, MPI_Comm*) int MPI_Cartdim_get(MPI_Comm, int*) int MPI_Cart_get(MPI_Comm, int, int[], int[], int[]) int MPI_Cart_rank(MPI_Comm, int[], int*) int MPI_Cart_coords(MPI_Comm, int, int, int[]) int MPI_Cart_shift(MPI_Comm, int, int, int[], int[]) int MPI_Cart_sub(MPI_Comm, int[], MPI_Comm*) int MPI_Cart_map(MPI_Comm, int, int[], int[], int*) int MPI_Dims_create(int, int, int[]) enum: MPI_GRAPH #:= MPI_UNDEFINED int MPI_Graph_create(MPI_Comm, int, int[], int[], int, MPI_Comm*) int MPI_Graphdims_get(MPI_Comm, int*, int*) int MPI_Graph_get(MPI_Comm, int, int, int[], int[]) int MPI_Graph_map(MPI_Comm, int, int[], int[], int*) int MPI_Graph_neighbors_count(MPI_Comm, int, int*) int MPI_Graph_neighbors(MPI_Comm, int, int, int[]) enum: MPI_DIST_GRAPH #:= MPI_UNDEFINED int* MPI_UNWEIGHTED #:= 0 int* MPI_WEIGHTS_EMPTY #:= MPI_UNWEIGHTED int MPI_Dist_graph_create_adjacent(MPI_Comm, int, int[], int[], int, int[], int[], MPI_Info, int, MPI_Comm*) int MPI_Dist_graph_create(MPI_Comm, int, int[], int[], int[], int[], MPI_Info, int, MPI_Comm*) int MPI_Dist_graph_neighbors_count(MPI_Comm, int*, int*, int*) int MPI_Dist_graph_neighbors(MPI_Comm, int, int[], int[], int, int[], int[]) int MPI_Intercomm_create(MPI_Comm, int, MPI_Comm, int, int, MPI_Comm*) int MPI_Comm_remote_group(MPI_Comm, MPI_Group*) int MPI_Comm_remote_size(MPI_Comm, int*) int MPI_Intercomm_merge(MPI_Comm, int, MPI_Comm*) enum: MPI_MAX_PORT_NAME #:= 1 int MPI_Open_port(MPI_Info, char[]) int MPI_Close_port(char[]) int MPI_Publish_name(char[], MPI_Info, char[]) int MPI_Unpublish_name(char[], MPI_Info, char[]) int MPI_Lookup_name(char[], MPI_Info, char[]) int MPI_Comm_accept(char[], MPI_Info, int, MPI_Comm, MPI_Comm*) int MPI_Comm_connect(char[], MPI_Info, int, MPI_Comm, MPI_Comm*) int MPI_Comm_join(int, MPI_Comm*) int MPI_Comm_disconnect(MPI_Comm*) char** MPI_ARGV_NULL #:= 0 char*** MPI_ARGVS_NULL #:= 0 int* MPI_ERRCODES_IGNORE #:= 0 int MPI_Comm_spawn(char[], char*[], int, MPI_Info, int, MPI_Comm, MPI_Comm*, int[]) int MPI_Comm_spawn_multiple(int, char*[], char**[], int[], MPI_Info[], int, MPI_Comm, MPI_Comm*, int[]) int MPI_Comm_get_parent(MPI_Comm*) int MPI_Errhandler_get(MPI_Comm, MPI_Errhandler*) int MPI_Errhandler_set(MPI_Comm, 
MPI_Errhandler) ctypedef void MPI_Handler_function(MPI_Comm*,int*,...) int MPI_Errhandler_create(MPI_Handler_function*, MPI_Errhandler*) enum: MPI_TAG_UB #:= MPI_KEYVAL_INVALID enum: MPI_HOST #:= MPI_KEYVAL_INVALID enum: MPI_IO #:= MPI_KEYVAL_INVALID enum: MPI_WTIME_IS_GLOBAL #:= MPI_KEYVAL_INVALID int MPI_Attr_get(MPI_Comm, int, void*, int*) int MPI_Attr_put(MPI_Comm, int, void*) int MPI_Attr_delete(MPI_Comm, int) ctypedef int MPI_Copy_function(MPI_Comm,int,void*,void*,void*,int*) ctypedef int MPI_Delete_function(MPI_Comm,int,void*,void*) MPI_Copy_function* MPI_DUP_FN #:= 0 MPI_Copy_function* MPI_NULL_COPY_FN #:= 0 MPI_Delete_function* MPI_NULL_DELETE_FN #:= 0 int MPI_Keyval_create(MPI_Copy_function*, MPI_Delete_function*, int*, void*) int MPI_Keyval_free(int*) int MPI_Comm_get_errhandler(MPI_Comm, MPI_Errhandler*) #:= MPI_Errhandler_get int MPI_Comm_set_errhandler(MPI_Comm, MPI_Errhandler) #:= MPI_Errhandler_set ctypedef void MPI_Comm_errhandler_fn(MPI_Comm*,int*,...) #:= MPI_Handler_function ctypedef void MPI_Comm_errhandler_function(MPI_Comm*,int*,...) #:= MPI_Comm_errhandler_fn int MPI_Comm_create_errhandler(MPI_Comm_errhandler_function*, MPI_Errhandler*) #:= MPI_Errhandler_create int MPI_Comm_call_errhandler(MPI_Comm, int) int MPI_Comm_get_name(MPI_Comm, char[], int*) int MPI_Comm_set_name(MPI_Comm, char[]) enum: MPI_UNIVERSE_SIZE #:= MPI_KEYVAL_INVALID enum: MPI_APPNUM #:= MPI_KEYVAL_INVALID enum: MPI_LASTUSEDCODE #:= MPI_KEYVAL_INVALID int MPI_Comm_get_attr(MPI_Comm, int, void*, int*) #:= MPI_Attr_get int MPI_Comm_set_attr(MPI_Comm, int, void*) #:= MPI_Attr_put int MPI_Comm_delete_attr(MPI_Comm, int) #:= MPI_Attr_delete ctypedef int MPI_Comm_copy_attr_function(MPI_Comm,int,void*,void*,void*,int*) #:= MPI_Copy_function ctypedef int MPI_Comm_delete_attr_function(MPI_Comm,int,void*,void*) #:= MPI_Delete_function MPI_Comm_copy_attr_function* MPI_COMM_DUP_FN #:= MPI_DUP_FN MPI_Comm_copy_attr_function* MPI_COMM_NULL_COPY_FN #:= MPI_NULL_COPY_FN MPI_Comm_delete_attr_function* MPI_COMM_NULL_DELETE_FN #:= MPI_NULL_DELETE_FN int MPI_Comm_create_keyval(MPI_Comm_copy_attr_function*, MPI_Comm_delete_attr_function*, int*, void*) #:= MPI_Keyval_create int MPI_Comm_free_keyval(int*) #:= MPI_Keyval_free #----------------------------------------------------------------- MPI_Win MPI_WIN_NULL #:= 0 int MPI_Win_free(MPI_Win*) int MPI_Win_create(void*, MPI_Aint, int, MPI_Info, MPI_Comm, MPI_Win*) int MPI_Win_allocate(MPI_Aint, int, MPI_Info, MPI_Comm, void*, MPI_Win*) int MPI_Win_allocate_shared(MPI_Aint, int, MPI_Info, MPI_Comm, void*, MPI_Win*) int MPI_Win_shared_query(MPI_Win, int, MPI_Aint*,int*, void*) int MPI_Win_create_dynamic(MPI_Info, MPI_Comm, MPI_Win*) int MPI_Win_attach(MPI_Win, void*, MPI_Aint) int MPI_Win_detach(MPI_Win, void*) int MPI_Win_set_info(MPI_Win,MPI_Info) int MPI_Win_get_info(MPI_Win,MPI_Info*) int MPI_Win_get_group(MPI_Win, MPI_Group*) int MPI_Get(void*, int, MPI_Datatype, int, MPI_Aint, int, MPI_Datatype, MPI_Win) int MPI_Put(void*, int, MPI_Datatype, int, MPI_Aint, int, MPI_Datatype, MPI_Win) int MPI_Accumulate(void*, int, MPI_Datatype, int, MPI_Aint, int, MPI_Datatype, MPI_Op, MPI_Win) int MPI_Get_accumulate(void*, int, MPI_Datatype, void*, int,MPI_Datatype, int, MPI_Aint, int, MPI_Datatype, MPI_Op, MPI_Win) int MPI_Fetch_and_op(void*, void*, MPI_Datatype, int, MPI_Aint, MPI_Op, MPI_Win) int MPI_Compare_and_swap(void*, void*, void*, MPI_Datatype, int, MPI_Aint, MPI_Win) int MPI_Rget(void*, int, MPI_Datatype, int, MPI_Aint, int, MPI_Datatype, MPI_Win, MPI_Request*) int 
MPI_Rput(void*, int, MPI_Datatype, int, MPI_Aint, int, MPI_Datatype, MPI_Win, MPI_Request*) int MPI_Raccumulate(void*, int, MPI_Datatype, int, MPI_Aint, int, MPI_Datatype, MPI_Op, MPI_Win, MPI_Request*) int MPI_Rget_accumulate(void*, int, MPI_Datatype, void*, int,MPI_Datatype, int, MPI_Aint, int, MPI_Datatype, MPI_Op, MPI_Win, MPI_Request*) enum: MPI_MODE_NOCHECK #:= MPI_UNDEFINED enum: MPI_MODE_NOSTORE #:= MPI_UNDEFINED enum: MPI_MODE_NOPUT #:= MPI_UNDEFINED enum: MPI_MODE_NOPRECEDE #:= MPI_UNDEFINED enum: MPI_MODE_NOSUCCEED #:= MPI_UNDEFINED int MPI_Win_fence(int, MPI_Win) int MPI_Win_post(MPI_Group, int, MPI_Win) int MPI_Win_start(MPI_Group, int, MPI_Win) int MPI_Win_complete(MPI_Win) int MPI_Win_wait(MPI_Win) int MPI_Win_test(MPI_Win, int*) enum: MPI_LOCK_EXCLUSIVE #:= MPI_UNDEFINED enum: MPI_LOCK_SHARED #:= MPI_UNDEFINED int MPI_Win_lock(int, int, int, MPI_Win) int MPI_Win_unlock(int, MPI_Win) int MPI_Win_lock_all(int, MPI_Win) int MPI_Win_unlock_all(MPI_Win) int MPI_Win_flush(int, MPI_Win) int MPI_Win_flush_all(MPI_Win) int MPI_Win_flush_local(int, MPI_Win) int MPI_Win_flush_local_all(MPI_Win) int MPI_Win_sync(MPI_Win) int MPI_Win_get_errhandler(MPI_Win, MPI_Errhandler*) int MPI_Win_set_errhandler(MPI_Win, MPI_Errhandler) ctypedef void MPI_Win_errhandler_fn(MPI_Win*,int*,...) ctypedef void MPI_Win_errhandler_function(MPI_Win*,int*,...) #:= MPI_Win_errhandler_fn int MPI_Win_create_errhandler(MPI_Win_errhandler_function*, MPI_Errhandler*) int MPI_Win_call_errhandler(MPI_Win, int) int MPI_Win_get_name(MPI_Win, char[], int*) int MPI_Win_set_name(MPI_Win, char[]) enum: MPI_WIN_BASE #:= MPI_KEYVAL_INVALID enum: MPI_WIN_SIZE #:= MPI_KEYVAL_INVALID enum: MPI_WIN_DISP_UNIT #:= MPI_KEYVAL_INVALID enum: MPI_WIN_CREATE_FLAVOR #:= MPI_KEYVAL_INVALID enum: MPI_WIN_MODEL #:= MPI_KEYVAL_INVALID enum: MPI_WIN_FLAVOR_CREATE #:= MPI_UNDEFINED enum: MPI_WIN_FLAVOR_ALLOCATE #:= MPI_UNDEFINED enum: MPI_WIN_FLAVOR_DYNAMIC #:= MPI_UNDEFINED enum: MPI_WIN_FLAVOR_SHARED #:= MPI_UNDEFINED enum: MPI_WIN_SEPARATE #:= MPI_UNDEFINED enum: MPI_WIN_UNIFIED #:= MPI_UNDEFINED int MPI_Win_get_attr(MPI_Win, int, void*, int*) int MPI_Win_set_attr(MPI_Win, int, void*) int MPI_Win_delete_attr(MPI_Win, int) ctypedef int MPI_Win_copy_attr_function(MPI_Win,int,void*,void*,void*,int*) ctypedef int MPI_Win_delete_attr_function(MPI_Win,int,void*,void*) MPI_Win_copy_attr_function* MPI_WIN_DUP_FN #:= 0 MPI_Win_copy_attr_function* MPI_WIN_NULL_COPY_FN #:= 0 MPI_Win_delete_attr_function* MPI_WIN_NULL_DELETE_FN #:= 0 int MPI_Win_create_keyval(MPI_Win_copy_attr_function*, MPI_Win_delete_attr_function*, int*, void*) int MPI_Win_free_keyval(int*) #----------------------------------------------------------------- MPI_File MPI_FILE_NULL #:= 0 enum: MPI_MODE_RDONLY #:= 1 enum: MPI_MODE_RDWR #:= 2 enum: MPI_MODE_WRONLY #:= 4 enum: MPI_MODE_CREATE #:= 8 enum: MPI_MODE_EXCL #:= 16 enum: MPI_MODE_DELETE_ON_CLOSE #:= 32 enum: MPI_MODE_UNIQUE_OPEN #:= 64 enum: MPI_MODE_APPEND #:= 128 enum: MPI_MODE_SEQUENTIAL #:= 256 int MPI_File_open(MPI_Comm, char[], int, MPI_Info, MPI_File*) int MPI_File_close(MPI_File*) int MPI_File_delete(char[], MPI_Info) int MPI_File_set_size(MPI_File, MPI_Offset) int MPI_File_preallocate(MPI_File, MPI_Offset) int MPI_File_get_size(MPI_File, MPI_Offset*) int MPI_File_get_group(MPI_File, MPI_Group*) int MPI_File_get_amode(MPI_File, int*) int MPI_File_set_info(MPI_File, MPI_Info) int MPI_File_get_info(MPI_File, MPI_Info*) int MPI_File_get_view(MPI_File, MPI_Offset*, MPI_Datatype*, MPI_Datatype*, char[]) int 
MPI_File_set_view(MPI_File, MPI_Offset, MPI_Datatype, MPI_Datatype, char[], MPI_Info) int MPI_File_read_at (MPI_File, MPI_Offset, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_read_at_all (MPI_File, MPI_Offset, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_write_at (MPI_File, MPI_Offset, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_write_at_all (MPI_File, MPI_Offset, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_iread_at (MPI_File, MPI_Offset, void*, int, MPI_Datatype, MPI_Request*) int MPI_File_iwrite_at (MPI_File, MPI_Offset, void*, int, MPI_Datatype, MPI_Request*) enum: MPI_SEEK_SET #:= 0 enum: MPI_SEEK_CUR #:= 1 enum: MPI_SEEK_END #:= 2 enum: MPI_DISPLACEMENT_CURRENT #:= 3 int MPI_File_seek(MPI_File, MPI_Offset, int) int MPI_File_get_position(MPI_File, MPI_Offset*) int MPI_File_get_byte_offset(MPI_File, MPI_Offset, MPI_Offset*) int MPI_File_read (MPI_File, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_read_all (MPI_File, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_write (MPI_File, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_write_all (MPI_File, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_iread (MPI_File, void*, int, MPI_Datatype, MPI_Request*) int MPI_File_iwrite (MPI_File, void*, int, MPI_Datatype, MPI_Request*) int MPI_File_read_shared (MPI_File, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_write_shared (MPI_File, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_iread_shared (MPI_File, void*, int, MPI_Datatype, MPI_Request*) int MPI_File_iwrite_shared (MPI_File, void*, int, MPI_Datatype, MPI_Request*) int MPI_File_read_ordered (MPI_File, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_write_ordered (MPI_File, void*, int, MPI_Datatype, MPI_Status*) int MPI_File_seek_shared(MPI_File, MPI_Offset, int) int MPI_File_get_position_shared(MPI_File, MPI_Offset*) int MPI_File_read_at_all_begin (MPI_File, MPI_Offset, void*, int, MPI_Datatype) int MPI_File_read_at_all_end (MPI_File, void*, MPI_Status*) int MPI_File_write_at_all_begin (MPI_File, MPI_Offset, void*, int, MPI_Datatype) int MPI_File_write_at_all_end (MPI_File, void*, MPI_Status*) int MPI_File_read_all_begin (MPI_File, void*, int, MPI_Datatype) int MPI_File_read_all_end (MPI_File, void*, MPI_Status*) int MPI_File_write_all_begin (MPI_File, void*, int, MPI_Datatype) int MPI_File_write_all_end (MPI_File, void*, MPI_Status*) int MPI_File_read_ordered_begin (MPI_File, void*, int, MPI_Datatype) int MPI_File_read_ordered_end (MPI_File, void*, MPI_Status*) int MPI_File_write_ordered_begin (MPI_File, void*, int, MPI_Datatype) int MPI_File_write_ordered_end (MPI_File, void*, MPI_Status*) int MPI_File_get_type_extent(MPI_File, MPI_Datatype, MPI_Aint*) int MPI_File_set_atomicity(MPI_File, int) int MPI_File_get_atomicity(MPI_File, int*) int MPI_File_sync(MPI_File) int MPI_File_get_errhandler(MPI_File, MPI_Errhandler*) int MPI_File_set_errhandler(MPI_File, MPI_Errhandler) ctypedef void MPI_File_errhandler_fn(MPI_File*,int*,...) ctypedef void MPI_File_errhandler_function(MPI_File*,int*,...) 
#:= MPI_File_errhandler_fn int MPI_File_create_errhandler(MPI_File_errhandler_function*, MPI_Errhandler*) int MPI_File_call_errhandler(MPI_File, int) ctypedef int MPI_Datarep_conversion_function(void*,MPI_Datatype,int,void*,MPI_Offset,void*) ctypedef int MPI_Datarep_extent_function(MPI_Datatype,MPI_Aint*,void*) MPI_Datarep_conversion_function* MPI_CONVERSION_FN_NULL #:= 0 enum: MPI_MAX_DATAREP_STRING #:= 1 int MPI_Register_datarep(char[], MPI_Datarep_conversion_function*, MPI_Datarep_conversion_function*, MPI_Datarep_extent_function*, void*) #----------------------------------------------------------------- MPI_Errhandler MPI_ERRHANDLER_NULL #:= 0 MPI_Errhandler MPI_ERRORS_RETURN #:= MPI_ERRHANDLER_NULL MPI_Errhandler MPI_ERRORS_ARE_FATAL #:= MPI_ERRHANDLER_NULL int MPI_Errhandler_free(MPI_Errhandler*) #----------------------------------------------------------------- enum: MPI_MAX_ERROR_STRING #:= 1 int MPI_Error_class(int, int*) int MPI_Error_string(int, char[], int*) int MPI_Add_error_class(int*) int MPI_Add_error_code(int,int*) int MPI_Add_error_string(int,char[]) # MPI-1 Error classes # ------------------- # Actually no errors enum: MPI_SUCCESS #:= 0 enum: MPI_ERR_LASTCODE #:= 1 # MPI-1 Objects enum: MPI_ERR_COMM #:= MPI_ERR_LASTCODE enum: MPI_ERR_GROUP #:= MPI_ERR_LASTCODE enum: MPI_ERR_TYPE #:= MPI_ERR_LASTCODE enum: MPI_ERR_REQUEST #:= MPI_ERR_LASTCODE enum: MPI_ERR_OP #:= MPI_ERR_LASTCODE # Communication argument parameters enum: MPI_ERR_BUFFER #:= MPI_ERR_LASTCODE enum: MPI_ERR_COUNT #:= MPI_ERR_LASTCODE enum: MPI_ERR_TAG #:= MPI_ERR_LASTCODE enum: MPI_ERR_RANK #:= MPI_ERR_LASTCODE enum: MPI_ERR_ROOT #:= MPI_ERR_LASTCODE enum: MPI_ERR_TRUNCATE #:= MPI_ERR_LASTCODE # Multiple completion enum: MPI_ERR_IN_STATUS #:= MPI_ERR_LASTCODE enum: MPI_ERR_PENDING #:= MPI_ERR_LASTCODE # Topology argument parameters enum: MPI_ERR_TOPOLOGY #:= MPI_ERR_LASTCODE enum: MPI_ERR_DIMS #:= MPI_ERR_LASTCODE # All other arguments, this is a class with many kinds enum: MPI_ERR_ARG #:= MPI_ERR_LASTCODE # Other errors that are not simply an invalid argument enum: MPI_ERR_OTHER #:= MPI_ERR_LASTCODE enum: MPI_ERR_UNKNOWN #:= MPI_ERR_LASTCODE enum: MPI_ERR_INTERN #:= MPI_ERR_LASTCODE # MPI-2 Error classes # ------------------- # Attributes enum: MPI_ERR_KEYVAL #:= MPI_ERR_ARG # Memory Allocation enum: MPI_ERR_NO_MEM #:= MPI_ERR_UNKNOWN # Info Object enum: MPI_ERR_INFO #:= MPI_ERR_ARG enum: MPI_ERR_INFO_KEY #:= MPI_ERR_UNKNOWN enum: MPI_ERR_INFO_VALUE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_INFO_NOKEY #:= MPI_ERR_UNKNOWN # Dynamic Process Management enum: MPI_ERR_SPAWN #:= MPI_ERR_UNKNOWN enum: MPI_ERR_PORT #:= MPI_ERR_UNKNOWN enum: MPI_ERR_SERVICE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_NAME #:= MPI_ERR_UNKNOWN # Input/Ouput enum: MPI_ERR_FILE #:= MPI_ERR_ARG enum: MPI_ERR_NOT_SAME #:= MPI_ERR_UNKNOWN enum: MPI_ERR_BAD_FILE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_NO_SUCH_FILE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_FILE_EXISTS #:= MPI_ERR_UNKNOWN enum: MPI_ERR_FILE_IN_USE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_AMODE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_ACCESS #:= MPI_ERR_UNKNOWN enum: MPI_ERR_READ_ONLY #:= MPI_ERR_UNKNOWN enum: MPI_ERR_NO_SPACE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_QUOTA #:= MPI_ERR_UNKNOWN enum: MPI_ERR_UNSUPPORTED_DATAREP #:= MPI_ERR_UNKNOWN enum: MPI_ERR_UNSUPPORTED_OPERATION #:= MPI_ERR_UNKNOWN enum: MPI_ERR_CONVERSION #:= MPI_ERR_UNKNOWN enum: MPI_ERR_DUP_DATAREP #:= MPI_ERR_UNKNOWN enum: MPI_ERR_IO #:= MPI_ERR_UNKNOWN # One-Sided Communications enum: MPI_ERR_WIN #:= MPI_ERR_ARG enum: MPI_ERR_BASE #:= MPI_ERR_UNKNOWN enum: 
MPI_ERR_SIZE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_DISP #:= MPI_ERR_UNKNOWN enum: MPI_ERR_ASSERT #:= MPI_ERR_UNKNOWN enum: MPI_ERR_LOCKTYPE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_RMA_CONFLICT #:= MPI_ERR_UNKNOWN enum: MPI_ERR_RMA_SYNC #:= MPI_ERR_UNKNOWN enum: MPI_ERR_RMA_RANGE #:= MPI_ERR_UNKNOWN enum: MPI_ERR_RMA_ATTACH #:= MPI_ERR_UNKNOWN enum: MPI_ERR_RMA_SHARED #:= MPI_ERR_UNKNOWN enum: MPI_ERR_RMA_FLAVOR #:= MPI_ERR_UNKNOWN #----------------------------------------------------------------- int MPI_Alloc_mem(MPI_Aint, MPI_Info, void*) int MPI_Free_mem(void*) #----------------------------------------------------------------- int MPI_Init(int*, char**[]) int MPI_Finalize() int MPI_Initialized(int*) int MPI_Finalized(int*) enum: MPI_THREAD_SINGLE #:= 0 enum: MPI_THREAD_FUNNELED #:= 1 enum: MPI_THREAD_SERIALIZED #:= 2 enum: MPI_THREAD_MULTIPLE #:= 3 int MPI_Init_thread(int*, char**[], int, int*) int MPI_Query_thread(int*) int MPI_Is_thread_main(int*) #----------------------------------------------------------------- enum: MPI_VERSION #:= 1 enum: MPI_SUBVERSION #:= 0 int MPI_Get_version(int*, int*) enum: MPI_MAX_LIBRARY_VERSION_STRING #:= 1 int MPI_Get_library_version(char[], int*) enum: MPI_MAX_PROCESSOR_NAME #:= 1 int MPI_Get_processor_name(char[], int*) #----------------------------------------------------------------- double MPI_Wtime() double MPI_Wtick() int MPI_Pcontrol(int, ...) #----------------------------------------------------------------- # Fortran INTEGER ctypedef int MPI_Fint MPI_Fint* MPI_F_STATUS_IGNORE #:= 0 MPI_Fint* MPI_F_STATUSES_IGNORE #:= 0 int MPI_Status_c2f (MPI_Status*, MPI_Fint*) int MPI_Status_f2c (MPI_Fint*, MPI_Status*) # C -> Fortran MPI_Fint MPI_Type_c2f (MPI_Datatype) MPI_Fint MPI_Request_c2f (MPI_Request) MPI_Fint MPI_Message_c2f (MPI_Message) MPI_Fint MPI_Op_c2f (MPI_Op) MPI_Fint MPI_Info_c2f (MPI_Info) MPI_Fint MPI_Group_c2f (MPI_Group) MPI_Fint MPI_Comm_c2f (MPI_Comm) MPI_Fint MPI_Win_c2f (MPI_Win) MPI_Fint MPI_File_c2f (MPI_File) MPI_Fint MPI_Errhandler_c2f (MPI_Errhandler) # Fortran -> C MPI_Datatype MPI_Type_f2c (MPI_Fint) MPI_Request MPI_Request_f2c (MPI_Fint) MPI_Message MPI_Message_f2c (MPI_Fint) MPI_Op MPI_Op_f2c (MPI_Fint) MPI_Info MPI_Info_f2c (MPI_Fint) MPI_Group MPI_Group_f2c (MPI_Fint) MPI_Comm MPI_Comm_f2c (MPI_Fint) MPI_Win MPI_Win_f2c (MPI_Fint) MPI_File MPI_File_f2c (MPI_Fint) MPI_Errhandler MPI_Errhandler_f2c (MPI_Fint) #----------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/src/include/mpi4py/mpi.pxi0000644000000000000000000000012512211706251020253 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com include "mpi4py/libmpi.pxd" mpi4py_1.3.1+hg20131106.orig/src/include/mpi4py/mpi4py.h0000644000000000000000000000064112211706251020342 0ustar 00000000000000/* Author: Lisandro Dalcin */ /* Contact: dalcinl@gmail.com */ #ifndef MPI4PY_H #define MPI4PY_H #include "mpi.h" #if (MPI_VERSION < 3) && !defined(PyMPI_HAVE_MPI_Message) typedef void *PyMPI_MPI_Message; #define MPI_Message PyMPI_MPI_Message #endif #include "mpi4py.MPI_api.h" static int import_mpi4py(void) { if (import_mpi4py__MPI() < 0) goto bad; return 0; bad: return -1; } #endif /* MPI4PY_H */ mpi4py_1.3.1+hg20131106.orig/src/include/mpi4py/mpi4py.i0000644000000000000000000000450112211706251020342 0ustar 00000000000000/* Author: Lisandro Dalcin */ /* Contact: dalcinl@gmail.com */ /* ---------------------------------------------------------------- */ #if SWIG_VERSION < 0x010328 %warn "SWIG version < 1.3.28 is not supported" #endif 
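/*
 * A minimal usage sketch (illustrative only, not part of this interface
 * file): a SWIG wrapper for a C routine taking an MPI handle would include
 * this file and instantiate the %mpi4py_typemap macro defined below. The
 * module name `helloworld` and the function `sayhello` are hypothetical.
 *
 *   %module helloworld
 *   %include mpi4py/mpi4py.i
 *   %mpi4py_typemap(Comm, MPI_Comm);
 *
 *   void sayhello(MPI_Comm comm);
 *
 * The wrapped routine then accepts mpi4py communicator objects from Python,
 * e.g. helloworld.sayhello(MPI.COMM_WORLD).
 */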
/* ---------------------------------------------------------------- */ %header %{ #include "mpi4py/mpi4py.h" %} %init %{ if (import_mpi4py() < 0) #if PY_VERSION_HEX >= 0x03000000 return NULL; #else return; #endif %} /* ---------------------------------------------------------------- */ %define %mpi4py_fragments(PyType, Type) /* --- AsPtr --- */ %fragment(SWIG_AsPtr_frag(Type),"header") { SWIGINTERN int SWIG_AsPtr_dec(Type)(SWIG_Object input, Type **p) { if (input == Py_None) { if (p) *p = 0; return SWIG_OK; } else if (PyObject_TypeCheck(input,&PyMPI##PyType##_Type)) { if (p) *p = PyMPI##PyType##_Get(input); return SWIG_OK; } else { void *argp = 0; int res = SWIG_ConvertPtr(input,&argp,%descriptor(p_##Type), 0); if (!SWIG_IsOK(res)) return res; if (!argp) return SWIG_ValueError; if (p) *p = %static_cast(argp,Type*); return SWIG_OK; } } } /* --- From --- */ %fragment(SWIG_From_frag(Type),"header") { SWIGINTERN SWIG_Object SWIG_From_dec(Type)(Type v) { return PyMPI##PyType##_New(v); } } %enddef /*mpi4py_fragments*/ /* ---------------------------------------------------------------- */ %define SWIG_TYPECHECK_MPI_Comm 400 %enddef %define SWIG_TYPECHECK_MPI_Datatype 401 %enddef %define SWIG_TYPECHECK_MPI_Request 402 %enddef %define SWIG_TYPECHECK_MPI_Message 403 %enddef %define SWIG_TYPECHECK_MPI_Status 404 %enddef %define SWIG_TYPECHECK_MPI_Op 405 %enddef %define SWIG_TYPECHECK_MPI_Group 406 %enddef %define SWIG_TYPECHECK_MPI_Info 407 %enddef %define SWIG_TYPECHECK_MPI_File 408 %enddef %define SWIG_TYPECHECK_MPI_Win 409 %enddef %define SWIG_TYPECHECK_MPI_Errhandler 410 %enddef %define %mpi4py_checkcode(Type) %checkcode(Type) %enddef /*mpi4py_checkcode*/ /* ---------------------------------------------------------------- */ %define %mpi4py_typemap(PyType, Type) %types(Type*); %mpi4py_fragments(PyType, Type); %typemaps_asptrfromn(%mpi4py_checkcode(Type), Type); %enddef /*mpi4py_typemap*/ /* ---------------------------------------------------------------- */ /* * Local Variables: * mode: C * End: */ mpi4py_1.3.1+hg20131106.orig/src/include/mpi4py/mpi_c.pxd0000644000000000000000000000012612211706251020551 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com from mpi4py.libmpi cimport * mpi4py_1.3.1+hg20131106.orig/src/libmpi.pxd0000644000000000000000000000013512211706251016071 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com include "include/mpi4py/libmpi.pxd" mpi4py_1.3.1+hg20131106.orig/src/missing.h0000644000000000000000000027402412211706251015734 0ustar 00000000000000#ifndef PyMPI_MISSING_H #define PyMPI_MISSING_H #ifndef PyMPI_UNUSED # if defined(__GNUC__) # if !defined(__cplusplus) || (__GNUC__>3||(__GNUC__==3&&__GNUC_MINOR__>=4)) # define PyMPI_UNUSED __attribute__ ((__unused__)) # else # define PyMPI_UNUSED # endif # elif defined(__INTEL_COMPILER) || defined(__ICC) # define PyMPI_UNUSED __attribute__ ((__unused__)) # else # define PyMPI_UNUSED # endif #endif static PyMPI_UNUSED int PyMPI_UNAVAILABLE(PyMPI_UNUSED const char *name,...) 
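/* Fallback stub: the macros that follow substitute this variadic function
   for any MPI symbol the build configuration did not detect, so calls to
   unavailable features still compile and link but fail cleanly by
   returning -1 at run time instead of breaking the build. */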
{ return -1; } #ifndef PyMPI_HAVE_MPI_Aint #undef MPI_Aint typedef long PyMPI_MPI_Aint; #define MPI_Aint PyMPI_MPI_Aint #endif #ifndef PyMPI_HAVE_MPI_Offset #undef MPI_Offset typedef long PyMPI_MPI_Offset; #define MPI_Offset PyMPI_MPI_Offset #endif #ifndef PyMPI_HAVE_MPI_Count #undef MPI_Count typedef MPI_Offset PyMPI_MPI_Count; #define MPI_Count PyMPI_MPI_Count #endif #ifndef PyMPI_HAVE_MPI_Status #undef MPI_Status typedef struct PyMPI_MPI_Status { int MPI_SOURCE; int MPI_TAG; int MPI_ERROR; } PyMPI_MPI_Status; #define MPI_Status PyMPI_MPI_Status #endif #ifndef PyMPI_HAVE_MPI_Datatype #undef MPI_Datatype typedef void *PyMPI_MPI_Datatype; #define MPI_Datatype PyMPI_MPI_Datatype #endif #ifndef PyMPI_HAVE_MPI_Request #undef MPI_Request typedef void *PyMPI_MPI_Request; #define MPI_Request PyMPI_MPI_Request #endif #ifndef PyMPI_HAVE_MPI_Message #undef MPI_Message typedef void *PyMPI_MPI_Message; #define MPI_Message PyMPI_MPI_Message #endif #ifndef PyMPI_HAVE_MPI_Op #undef MPI_Op typedef void *PyMPI_MPI_Op; #define MPI_Op PyMPI_MPI_Op #endif #ifndef PyMPI_HAVE_MPI_Group #undef MPI_Group typedef void *PyMPI_MPI_Group; #define MPI_Group PyMPI_MPI_Group #endif #ifndef PyMPI_HAVE_MPI_Info #undef MPI_Info typedef void *PyMPI_MPI_Info; #define MPI_Info PyMPI_MPI_Info #endif #ifndef PyMPI_HAVE_MPI_Comm #undef MPI_Comm typedef void *PyMPI_MPI_Comm; #define MPI_Comm PyMPI_MPI_Comm #endif #ifndef PyMPI_HAVE_MPI_Win #undef MPI_Win typedef void *PyMPI_MPI_Win; #define MPI_Win PyMPI_MPI_Win #endif #ifndef PyMPI_HAVE_MPI_File #undef MPI_File typedef void *PyMPI_MPI_File; #define MPI_File PyMPI_MPI_File #endif #ifndef PyMPI_HAVE_MPI_Errhandler #undef MPI_Errhandler typedef void *PyMPI_MPI_Errhandler; #define MPI_Errhandler PyMPI_MPI_Errhandler #endif #ifndef PyMPI_HAVE_MPI_UNDEFINED #undef MPI_UNDEFINED #define MPI_UNDEFINED (-32766) #endif #ifndef PyMPI_HAVE_MPI_ANY_SOURCE #undef MPI_ANY_SOURCE #define MPI_ANY_SOURCE (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_ANY_TAG #undef MPI_ANY_TAG #define MPI_ANY_TAG (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_PROC_NULL #undef MPI_PROC_NULL #define MPI_PROC_NULL (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_ROOT #undef MPI_ROOT #define MPI_ROOT (MPI_PROC_NULL) #endif #ifndef PyMPI_HAVE_MPI_IDENT #undef MPI_IDENT #define MPI_IDENT (1) #endif #ifndef PyMPI_HAVE_MPI_CONGRUENT #undef MPI_CONGRUENT #define MPI_CONGRUENT (2) #endif #ifndef PyMPI_HAVE_MPI_SIMILAR #undef MPI_SIMILAR #define MPI_SIMILAR (3) #endif #ifndef PyMPI_HAVE_MPI_UNEQUAL #undef MPI_UNEQUAL #define MPI_UNEQUAL (4) #endif #ifndef PyMPI_HAVE_MPI_BOTTOM #undef MPI_BOTTOM #define MPI_BOTTOM ((void*)0) #endif #ifndef PyMPI_HAVE_MPI_IN_PLACE #undef MPI_IN_PLACE #define MPI_IN_PLACE ((void*)0) #endif #ifndef PyMPI_HAVE_MPI_KEYVAL_INVALID #undef MPI_KEYVAL_INVALID #define MPI_KEYVAL_INVALID (0) #endif #ifndef PyMPI_HAVE_MPI_MAX_OBJECT_NAME #undef MPI_MAX_OBJECT_NAME #define MPI_MAX_OBJECT_NAME (1) #endif #ifndef PyMPI_HAVE_MPI_DATATYPE_NULL #undef MPI_DATATYPE_NULL #define MPI_DATATYPE_NULL ((MPI_Datatype)0) #endif #ifndef PyMPI_HAVE_MPI_PACKED #undef MPI_PACKED #define MPI_PACKED ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_BYTE #undef MPI_BYTE #define MPI_BYTE ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_AINT #undef MPI_AINT #define MPI_AINT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_OFFSET #undef MPI_OFFSET #define MPI_OFFSET ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_COUNT #undef MPI_COUNT #define MPI_COUNT 
((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_CHAR #undef MPI_CHAR #define MPI_CHAR ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_WCHAR #undef MPI_WCHAR #define MPI_WCHAR ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_SIGNED_CHAR #undef MPI_SIGNED_CHAR #define MPI_SIGNED_CHAR ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_SHORT #undef MPI_SHORT #define MPI_SHORT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INT #undef MPI_INT #define MPI_INT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LONG #undef MPI_LONG #define MPI_LONG ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LONG_LONG #undef MPI_LONG_LONG #define MPI_LONG_LONG ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LONG_LONG_INT #undef MPI_LONG_LONG_INT #define MPI_LONG_LONG_INT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UNSIGNED_CHAR #undef MPI_UNSIGNED_CHAR #define MPI_UNSIGNED_CHAR ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UNSIGNED_SHORT #undef MPI_UNSIGNED_SHORT #define MPI_UNSIGNED_SHORT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UNSIGNED #undef MPI_UNSIGNED #define MPI_UNSIGNED ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UNSIGNED_LONG #undef MPI_UNSIGNED_LONG #define MPI_UNSIGNED_LONG ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UNSIGNED_LONG_LONG #undef MPI_UNSIGNED_LONG_LONG #define MPI_UNSIGNED_LONG_LONG ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_FLOAT #undef MPI_FLOAT #define MPI_FLOAT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_DOUBLE #undef MPI_DOUBLE #define MPI_DOUBLE ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LONG_DOUBLE #undef MPI_LONG_DOUBLE #define MPI_LONG_DOUBLE ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_C_BOOL #undef MPI_C_BOOL #define MPI_C_BOOL ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INT8_T #undef MPI_INT8_T #define MPI_INT8_T ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INT16_T #undef MPI_INT16_T #define MPI_INT16_T ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INT32_T #undef MPI_INT32_T #define MPI_INT32_T ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INT64_T #undef MPI_INT64_T #define MPI_INT64_T ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UINT8_T #undef MPI_UINT8_T #define MPI_UINT8_T ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UINT16_T #undef MPI_UINT16_T #define MPI_UINT16_T ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UINT32_T #undef MPI_UINT32_T #define MPI_UINT32_T ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UINT64_T #undef MPI_UINT64_T #define MPI_UINT64_T ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_C_COMPLEX #undef MPI_C_COMPLEX #define MPI_C_COMPLEX ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_C_FLOAT_COMPLEX #undef MPI_C_FLOAT_COMPLEX #define MPI_C_FLOAT_COMPLEX ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_C_DOUBLE_COMPLEX #undef MPI_C_DOUBLE_COMPLEX #define MPI_C_DOUBLE_COMPLEX ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_C_LONG_DOUBLE_COMPLEX #undef MPI_C_LONG_DOUBLE_COMPLEX #define MPI_C_LONG_DOUBLE_COMPLEX ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_SHORT_INT #undef MPI_SHORT_INT #define MPI_SHORT_INT 
((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_2INT #undef MPI_2INT #define MPI_2INT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LONG_INT #undef MPI_LONG_INT #define MPI_LONG_INT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_FLOAT_INT #undef MPI_FLOAT_INT #define MPI_FLOAT_INT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_DOUBLE_INT #undef MPI_DOUBLE_INT #define MPI_DOUBLE_INT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LONG_DOUBLE_INT #undef MPI_LONG_DOUBLE_INT #define MPI_LONG_DOUBLE_INT ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_CHARACTER #undef MPI_CHARACTER #define MPI_CHARACTER ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LOGICAL #undef MPI_LOGICAL #define MPI_LOGICAL ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INTEGER #undef MPI_INTEGER #define MPI_INTEGER ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_REAL #undef MPI_REAL #define MPI_REAL ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_DOUBLE_PRECISION #undef MPI_DOUBLE_PRECISION #define MPI_DOUBLE_PRECISION ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_COMPLEX #undef MPI_COMPLEX #define MPI_COMPLEX ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_DOUBLE_COMPLEX #undef MPI_DOUBLE_COMPLEX #define MPI_DOUBLE_COMPLEX ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LOGICAL1 #undef MPI_LOGICAL1 #define MPI_LOGICAL1 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LOGICAL2 #undef MPI_LOGICAL2 #define MPI_LOGICAL2 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LOGICAL4 #undef MPI_LOGICAL4 #define MPI_LOGICAL4 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LOGICAL8 #undef MPI_LOGICAL8 #define MPI_LOGICAL8 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INTEGER1 #undef MPI_INTEGER1 #define MPI_INTEGER1 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INTEGER2 #undef MPI_INTEGER2 #define MPI_INTEGER2 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INTEGER4 #undef MPI_INTEGER4 #define MPI_INTEGER4 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INTEGER8 #undef MPI_INTEGER8 #define MPI_INTEGER8 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_INTEGER16 #undef MPI_INTEGER16 #define MPI_INTEGER16 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_REAL2 #undef MPI_REAL2 #define MPI_REAL2 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_REAL4 #undef MPI_REAL4 #define MPI_REAL4 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_REAL8 #undef MPI_REAL8 #define MPI_REAL8 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_REAL16 #undef MPI_REAL16 #define MPI_REAL16 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_COMPLEX4 #undef MPI_COMPLEX4 #define MPI_COMPLEX4 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_COMPLEX8 #undef MPI_COMPLEX8 #define MPI_COMPLEX8 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_COMPLEX16 #undef MPI_COMPLEX16 #define MPI_COMPLEX16 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_COMPLEX32 #undef MPI_COMPLEX32 #define MPI_COMPLEX32 ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_UB #undef MPI_UB #define MPI_UB ((MPI_Datatype)MPI_DATATYPE_NULL) #endif #ifndef PyMPI_HAVE_MPI_LB #undef MPI_LB #define MPI_LB ((MPI_Datatype)MPI_DATATYPE_NULL) 
#endif #ifndef PyMPI_HAVE_MPI_Type_lb #undef MPI_Type_lb #define MPI_Type_lb(a1,a2) PyMPI_UNAVAILABLE("MPI_Type_lb",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_ub #undef MPI_Type_ub #define MPI_Type_ub(a1,a2) PyMPI_UNAVAILABLE("MPI_Type_ub",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_extent #undef MPI_Type_extent #define MPI_Type_extent(a1,a2) PyMPI_UNAVAILABLE("MPI_Type_extent",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Address #undef MPI_Address #define MPI_Address(a1,a2) PyMPI_UNAVAILABLE("MPI_Address",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_hvector #undef MPI_Type_hvector #define MPI_Type_hvector(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Type_hvector",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Type_hindexed #undef MPI_Type_hindexed #define MPI_Type_hindexed(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Type_hindexed",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Type_struct #undef MPI_Type_struct #define MPI_Type_struct(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Type_struct",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Type_dup #undef MPI_Type_dup #define MPI_Type_dup(a1,a2) PyMPI_UNAVAILABLE("MPI_Type_dup",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_contiguous #undef MPI_Type_contiguous #define MPI_Type_contiguous(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_contiguous",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Type_vector #undef MPI_Type_vector #define MPI_Type_vector(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Type_vector",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Type_indexed #undef MPI_Type_indexed #define MPI_Type_indexed(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Type_indexed",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Type_create_indexed_block #undef MPI_Type_create_indexed_block #define MPI_Type_create_indexed_block(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Type_create_indexed_block",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_ORDER_C #undef MPI_ORDER_C #define MPI_ORDER_C (0) #endif #ifndef PyMPI_HAVE_MPI_ORDER_FORTRAN #undef MPI_ORDER_FORTRAN #define MPI_ORDER_FORTRAN (1) #endif #ifndef PyMPI_HAVE_MPI_Type_create_subarray #undef MPI_Type_create_subarray #define MPI_Type_create_subarray(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Type_create_subarray",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_DISTRIBUTE_NONE #undef MPI_DISTRIBUTE_NONE #define MPI_DISTRIBUTE_NONE (0) #endif #ifndef PyMPI_HAVE_MPI_DISTRIBUTE_BLOCK #undef MPI_DISTRIBUTE_BLOCK #define MPI_DISTRIBUTE_BLOCK (1) #endif #ifndef PyMPI_HAVE_MPI_DISTRIBUTE_CYCLIC #undef MPI_DISTRIBUTE_CYCLIC #define MPI_DISTRIBUTE_CYCLIC (2) #endif #ifndef PyMPI_HAVE_MPI_DISTRIBUTE_DFLT_DARG #undef MPI_DISTRIBUTE_DFLT_DARG #define MPI_DISTRIBUTE_DFLT_DARG (4) #endif #ifndef PyMPI_HAVE_MPI_Type_create_darray #undef MPI_Type_create_darray #define MPI_Type_create_darray(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) PyMPI_UNAVAILABLE("MPI_Type_create_darray",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) #endif #ifndef PyMPI_HAVE_MPI_Get_address #undef MPI_Get_address #define MPI_Get_address MPI_Address #endif #ifndef PyMPI_HAVE_MPI_Type_create_hvector #undef MPI_Type_create_hvector #define MPI_Type_create_hvector MPI_Type_hvector #endif #ifndef PyMPI_HAVE_MPI_Type_create_hindexed #undef MPI_Type_create_hindexed #define MPI_Type_create_hindexed MPI_Type_hindexed #endif #ifndef PyMPI_HAVE_MPI_Type_create_hindexed_block #undef MPI_Type_create_hindexed_block #define MPI_Type_create_hindexed_block(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Type_create_hindexed_block",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Type_create_struct #undef MPI_Type_create_struct #define MPI_Type_create_struct 
MPI_Type_struct #endif #ifndef PyMPI_HAVE_MPI_Type_create_resized #undef MPI_Type_create_resized #define MPI_Type_create_resized(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Type_create_resized",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Type_size #undef MPI_Type_size #define MPI_Type_size(a1,a2) PyMPI_UNAVAILABLE("MPI_Type_size",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_size_x #undef MPI_Type_size_x #define MPI_Type_size_x(a1,a2) PyMPI_UNAVAILABLE("MPI_Type_size_x",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_get_extent #undef MPI_Type_get_extent #define MPI_Type_get_extent(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_get_extent",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Type_get_extent_x #undef MPI_Type_get_extent_x #define MPI_Type_get_extent_x(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_get_extent_x",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Type_get_true_extent #undef MPI_Type_get_true_extent #define MPI_Type_get_true_extent(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_get_true_extent",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Type_get_true_extent_x #undef MPI_Type_get_true_extent_x #define MPI_Type_get_true_extent_x(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_get_true_extent_x",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Type_create_f90_integer #undef MPI_Type_create_f90_integer #define MPI_Type_create_f90_integer(a1,a2) PyMPI_UNAVAILABLE("MPI_Type_create_f90_integer",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_create_f90_real #undef MPI_Type_create_f90_real #define MPI_Type_create_f90_real(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_create_f90_real",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Type_create_f90_complex #undef MPI_Type_create_f90_complex #define MPI_Type_create_f90_complex(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_create_f90_complex",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_TYPECLASS_INTEGER #undef MPI_TYPECLASS_INTEGER #define MPI_TYPECLASS_INTEGER (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_TYPECLASS_REAL #undef MPI_TYPECLASS_REAL #define MPI_TYPECLASS_REAL (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_TYPECLASS_COMPLEX #undef MPI_TYPECLASS_COMPLEX #define MPI_TYPECLASS_COMPLEX (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_Type_match_size #undef MPI_Type_match_size #define MPI_Type_match_size(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_match_size",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Type_commit #undef MPI_Type_commit #define MPI_Type_commit(a1) PyMPI_UNAVAILABLE("MPI_Type_commit",a1) #endif #ifndef PyMPI_HAVE_MPI_Type_free #undef MPI_Type_free #define MPI_Type_free(a1) PyMPI_UNAVAILABLE("MPI_Type_free",a1) #endif #ifndef PyMPI_HAVE_MPI_Pack #undef MPI_Pack #define MPI_Pack(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Pack",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Unpack #undef MPI_Unpack #define MPI_Unpack(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Unpack",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Pack_size #undef MPI_Pack_size #define MPI_Pack_size(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Pack_size",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Pack_external #undef MPI_Pack_external #define MPI_Pack_external(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Pack_external",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Unpack_external #undef MPI_Unpack_external #define MPI_Unpack_external(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Unpack_external",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Pack_external_size #undef MPI_Pack_external_size #define MPI_Pack_external_size(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Pack_external_size",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_NAMED #undef MPI_COMBINER_NAMED 
#define MPI_COMBINER_NAMED (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_DUP #undef MPI_COMBINER_DUP #define MPI_COMBINER_DUP (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_CONTIGUOUS #undef MPI_COMBINER_CONTIGUOUS #define MPI_COMBINER_CONTIGUOUS (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_VECTOR #undef MPI_COMBINER_VECTOR #define MPI_COMBINER_VECTOR (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_HVECTOR #undef MPI_COMBINER_HVECTOR #define MPI_COMBINER_HVECTOR (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_HVECTOR_INTEGER #undef MPI_COMBINER_HVECTOR_INTEGER #define MPI_COMBINER_HVECTOR_INTEGER (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_INDEXED #undef MPI_COMBINER_INDEXED #define MPI_COMBINER_INDEXED (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_HINDEXED #undef MPI_COMBINER_HINDEXED #define MPI_COMBINER_HINDEXED (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_HINDEXED_INTEGER #undef MPI_COMBINER_HINDEXED_INTEGER #define MPI_COMBINER_HINDEXED_INTEGER (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_INDEXED_BLOCK #undef MPI_COMBINER_INDEXED_BLOCK #define MPI_COMBINER_INDEXED_BLOCK (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_HINDEXED_BLOCK #undef MPI_COMBINER_HINDEXED_BLOCK #define MPI_COMBINER_HINDEXED_BLOCK (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_STRUCT #undef MPI_COMBINER_STRUCT #define MPI_COMBINER_STRUCT (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_STRUCT_INTEGER #undef MPI_COMBINER_STRUCT_INTEGER #define MPI_COMBINER_STRUCT_INTEGER (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_SUBARRAY #undef MPI_COMBINER_SUBARRAY #define MPI_COMBINER_SUBARRAY (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_DARRAY #undef MPI_COMBINER_DARRAY #define MPI_COMBINER_DARRAY (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_F90_REAL #undef MPI_COMBINER_F90_REAL #define MPI_COMBINER_F90_REAL (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_F90_COMPLEX #undef MPI_COMBINER_F90_COMPLEX #define MPI_COMBINER_F90_COMPLEX (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_F90_INTEGER #undef MPI_COMBINER_F90_INTEGER #define MPI_COMBINER_F90_INTEGER (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_COMBINER_RESIZED #undef MPI_COMBINER_RESIZED #define MPI_COMBINER_RESIZED (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_Type_get_envelope #undef MPI_Type_get_envelope #define MPI_Type_get_envelope(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Type_get_envelope",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Type_get_contents #undef MPI_Type_get_contents #define MPI_Type_get_contents(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Type_get_contents",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Type_get_name #undef MPI_Type_get_name #define MPI_Type_get_name(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_get_name",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Type_set_name #undef MPI_Type_set_name #define MPI_Type_set_name(a1,a2) PyMPI_UNAVAILABLE("MPI_Type_set_name",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_get_attr #undef MPI_Type_get_attr #define MPI_Type_get_attr(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Type_get_attr",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Type_set_attr #undef MPI_Type_set_attr #define MPI_Type_set_attr(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Type_set_attr",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Type_delete_attr #undef MPI_Type_delete_attr #define MPI_Type_delete_attr(a1,a2) PyMPI_UNAVAILABLE("MPI_Type_delete_attr",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_copy_attr_function #undef 
MPI_Type_copy_attr_function typedef int (PyMPI_MPI_Type_copy_attr_function)(MPI_Datatype,int,void*,void*,void*,int*); #define MPI_Type_copy_attr_function PyMPI_MPI_Type_copy_attr_function #endif #ifndef PyMPI_HAVE_MPI_Type_delete_attr_function #undef MPI_Type_delete_attr_function typedef int (PyMPI_MPI_Type_delete_attr_function)(MPI_Datatype,int,void*,void*); #define MPI_Type_delete_attr_function PyMPI_MPI_Type_delete_attr_function #endif #ifndef PyMPI_HAVE_MPI_TYPE_NULL_COPY_FN #undef MPI_TYPE_NULL_COPY_FN #define MPI_TYPE_NULL_COPY_FN (0) #endif #ifndef PyMPI_HAVE_MPI_TYPE_DUP_FN #undef MPI_TYPE_DUP_FN #define MPI_TYPE_DUP_FN (0) #endif #ifndef PyMPI_HAVE_MPI_TYPE_NULL_DELETE_FN #undef MPI_TYPE_NULL_DELETE_FN #define MPI_TYPE_NULL_DELETE_FN (0) #endif #ifndef PyMPI_HAVE_MPI_Type_create_keyval #undef MPI_Type_create_keyval #define MPI_Type_create_keyval(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Type_create_keyval",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Type_free_keyval #undef MPI_Type_free_keyval #define MPI_Type_free_keyval(a1) PyMPI_UNAVAILABLE("MPI_Type_free_keyval",a1) #endif #ifndef PyMPI_HAVE_MPI_STATUS_IGNORE #undef MPI_STATUS_IGNORE #define MPI_STATUS_IGNORE (0) #endif #ifndef PyMPI_HAVE_MPI_STATUSES_IGNORE #undef MPI_STATUSES_IGNORE #define MPI_STATUSES_IGNORE (0) #endif #ifndef PyMPI_HAVE_MPI_Get_count #undef MPI_Get_count #define MPI_Get_count(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Get_count",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Get_elements #undef MPI_Get_elements #define MPI_Get_elements(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Get_elements",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Get_elements_x #undef MPI_Get_elements_x #define MPI_Get_elements_x(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Get_elements_x",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Status_set_elements #undef MPI_Status_set_elements #define MPI_Status_set_elements(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Status_set_elements",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Status_set_elements_x #undef MPI_Status_set_elements_x #define MPI_Status_set_elements_x(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Status_set_elements_x",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Test_cancelled #undef MPI_Test_cancelled #define MPI_Test_cancelled(a1,a2) PyMPI_UNAVAILABLE("MPI_Test_cancelled",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Status_set_cancelled #undef MPI_Status_set_cancelled #define MPI_Status_set_cancelled(a1,a2) PyMPI_UNAVAILABLE("MPI_Status_set_cancelled",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_REQUEST_NULL #undef MPI_REQUEST_NULL #define MPI_REQUEST_NULL ((MPI_Request)0) #endif #ifndef PyMPI_HAVE_MPI_Request_free #undef MPI_Request_free #define MPI_Request_free(a1) PyMPI_UNAVAILABLE("MPI_Request_free",a1) #endif #ifndef PyMPI_HAVE_MPI_Wait #undef MPI_Wait #define MPI_Wait(a1,a2) PyMPI_UNAVAILABLE("MPI_Wait",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Test #undef MPI_Test #define MPI_Test(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Test",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Request_get_status #undef MPI_Request_get_status #define MPI_Request_get_status(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Request_get_status",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Cancel #undef MPI_Cancel #define MPI_Cancel(a1) PyMPI_UNAVAILABLE("MPI_Cancel",a1) #endif #ifndef PyMPI_HAVE_MPI_Waitany #undef MPI_Waitany #define MPI_Waitany(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Waitany",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Testany #undef MPI_Testany #define MPI_Testany(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Testany",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Waitall #undef MPI_Waitall #define MPI_Waitall(a1,a2,a3) 
PyMPI_UNAVAILABLE("MPI_Waitall",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Testall #undef MPI_Testall #define MPI_Testall(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Testall",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Waitsome #undef MPI_Waitsome #define MPI_Waitsome(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Waitsome",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Testsome #undef MPI_Testsome #define MPI_Testsome(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Testsome",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Start #undef MPI_Start #define MPI_Start(a1) PyMPI_UNAVAILABLE("MPI_Start",a1) #endif #ifndef PyMPI_HAVE_MPI_Startall #undef MPI_Startall #define MPI_Startall(a1,a2) PyMPI_UNAVAILABLE("MPI_Startall",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Grequest_cancel_function #undef MPI_Grequest_cancel_function typedef int (PyMPI_MPI_Grequest_cancel_function)(void*,int); #define MPI_Grequest_cancel_function PyMPI_MPI_Grequest_cancel_function #endif #ifndef PyMPI_HAVE_MPI_Grequest_free_function #undef MPI_Grequest_free_function typedef int (PyMPI_MPI_Grequest_free_function)(void*); #define MPI_Grequest_free_function PyMPI_MPI_Grequest_free_function #endif #ifndef PyMPI_HAVE_MPI_Grequest_query_function #undef MPI_Grequest_query_function typedef int (PyMPI_MPI_Grequest_query_function)(void*,MPI_Status*); #define MPI_Grequest_query_function PyMPI_MPI_Grequest_query_function #endif #ifndef PyMPI_HAVE_MPI_Grequest_start #undef MPI_Grequest_start #define MPI_Grequest_start(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Grequest_start",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Grequest_complete #undef MPI_Grequest_complete #define MPI_Grequest_complete(a1) PyMPI_UNAVAILABLE("MPI_Grequest_complete",a1) #endif #ifndef PyMPI_HAVE_MPI_OP_NULL #undef MPI_OP_NULL #define MPI_OP_NULL ((MPI_Op)0) #endif #ifndef PyMPI_HAVE_MPI_MAX #undef MPI_MAX #define MPI_MAX ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_MIN #undef MPI_MIN #define MPI_MIN ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_SUM #undef MPI_SUM #define MPI_SUM ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_PROD #undef MPI_PROD #define MPI_PROD ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_LAND #undef MPI_LAND #define MPI_LAND ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_BAND #undef MPI_BAND #define MPI_BAND ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_LOR #undef MPI_LOR #define MPI_LOR ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_BOR #undef MPI_BOR #define MPI_BOR ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_LXOR #undef MPI_LXOR #define MPI_LXOR ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_BXOR #undef MPI_BXOR #define MPI_BXOR ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_MAXLOC #undef MPI_MAXLOC #define MPI_MAXLOC ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_MINLOC #undef MPI_MINLOC #define MPI_MINLOC ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_REPLACE #undef MPI_REPLACE #define MPI_REPLACE ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_NO_OP #undef MPI_NO_OP #define MPI_NO_OP ((MPI_Op)MPI_OP_NULL) #endif #ifndef PyMPI_HAVE_MPI_Op_free #undef MPI_Op_free #define MPI_Op_free(a1) PyMPI_UNAVAILABLE("MPI_Op_free",a1) #endif #ifndef PyMPI_HAVE_MPI_User_function #undef MPI_User_function typedef void (PyMPI_MPI_User_function)(void*, void*, int*, MPI_Datatype*); #define MPI_User_function PyMPI_MPI_User_function #endif #ifndef PyMPI_HAVE_MPI_Op_create #undef MPI_Op_create #define MPI_Op_create(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Op_create",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Op_commutative #undef 
MPI_Op_commutative #define MPI_Op_commutative(a1,a2) PyMPI_UNAVAILABLE("MPI_Op_commutative",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_INFO_NULL #undef MPI_INFO_NULL #define MPI_INFO_NULL ((MPI_Info)0) #endif #ifndef PyMPI_HAVE_MPI_INFO_ENV #undef MPI_INFO_ENV #define MPI_INFO_ENV ((MPI_Info)MPI_INFO_NULL) #endif #ifndef PyMPI_HAVE_MPI_Info_free #undef MPI_Info_free #define MPI_Info_free(a1) PyMPI_UNAVAILABLE("MPI_Info_free",a1) #endif #ifndef PyMPI_HAVE_MPI_Info_create #undef MPI_Info_create #define MPI_Info_create(a1) PyMPI_UNAVAILABLE("MPI_Info_create",a1) #endif #ifndef PyMPI_HAVE_MPI_Info_dup #undef MPI_Info_dup #define MPI_Info_dup(a1,a2) PyMPI_UNAVAILABLE("MPI_Info_dup",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_MAX_INFO_KEY #undef MPI_MAX_INFO_KEY #define MPI_MAX_INFO_KEY (1) #endif #ifndef PyMPI_HAVE_MPI_MAX_INFO_VAL #undef MPI_MAX_INFO_VAL #define MPI_MAX_INFO_VAL (1) #endif #ifndef PyMPI_HAVE_MPI_Info_get #undef MPI_Info_get #define MPI_Info_get(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Info_get",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Info_set #undef MPI_Info_set #define MPI_Info_set(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Info_set",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Info_delete #undef MPI_Info_delete #define MPI_Info_delete(a1,a2) PyMPI_UNAVAILABLE("MPI_Info_delete",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Info_get_nkeys #undef MPI_Info_get_nkeys #define MPI_Info_get_nkeys(a1,a2) PyMPI_UNAVAILABLE("MPI_Info_get_nkeys",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Info_get_nthkey #undef MPI_Info_get_nthkey #define MPI_Info_get_nthkey(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Info_get_nthkey",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Info_get_valuelen #undef MPI_Info_get_valuelen #define MPI_Info_get_valuelen(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Info_get_valuelen",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_GROUP_NULL #undef MPI_GROUP_NULL #define MPI_GROUP_NULL ((MPI_Group)0) #endif #ifndef PyMPI_HAVE_MPI_GROUP_EMPTY #undef MPI_GROUP_EMPTY #define MPI_GROUP_EMPTY ((MPI_Group)1) #endif #ifndef PyMPI_HAVE_MPI_Group_free #undef MPI_Group_free #define MPI_Group_free(a1) PyMPI_UNAVAILABLE("MPI_Group_free",a1) #endif #ifndef PyMPI_HAVE_MPI_Group_size #undef MPI_Group_size #define MPI_Group_size(a1,a2) PyMPI_UNAVAILABLE("MPI_Group_size",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Group_rank #undef MPI_Group_rank #define MPI_Group_rank(a1,a2) PyMPI_UNAVAILABLE("MPI_Group_rank",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Group_translate_ranks #undef MPI_Group_translate_ranks #define MPI_Group_translate_ranks(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Group_translate_ranks",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Group_compare #undef MPI_Group_compare #define MPI_Group_compare(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Group_compare",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Group_union #undef MPI_Group_union #define MPI_Group_union(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Group_union",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Group_intersection #undef MPI_Group_intersection #define MPI_Group_intersection(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Group_intersection",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Group_difference #undef MPI_Group_difference #define MPI_Group_difference(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Group_difference",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Group_incl #undef MPI_Group_incl #define MPI_Group_incl(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Group_incl",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Group_excl #undef MPI_Group_excl #define MPI_Group_excl(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Group_excl",a1,a2,a3,a4) #endif #ifndef 
PyMPI_HAVE_MPI_Group_range_incl #undef MPI_Group_range_incl #define MPI_Group_range_incl(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Group_range_incl",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Group_range_excl #undef MPI_Group_range_excl #define MPI_Group_range_excl(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Group_range_excl",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_COMM_NULL #undef MPI_COMM_NULL #define MPI_COMM_NULL ((MPI_Comm)0) #endif #ifndef PyMPI_HAVE_MPI_COMM_SELF #undef MPI_COMM_SELF #define MPI_COMM_SELF ((MPI_Comm)MPI_COMM_NULL) #endif #ifndef PyMPI_HAVE_MPI_COMM_WORLD #undef MPI_COMM_WORLD #define MPI_COMM_WORLD ((MPI_Comm)MPI_COMM_NULL) #endif #ifndef PyMPI_HAVE_MPI_Comm_free #undef MPI_Comm_free #define MPI_Comm_free(a1) PyMPI_UNAVAILABLE("MPI_Comm_free",a1) #endif #ifndef PyMPI_HAVE_MPI_Comm_group #undef MPI_Comm_group #define MPI_Comm_group(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_group",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Comm_size #undef MPI_Comm_size #define MPI_Comm_size(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_size",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Comm_rank #undef MPI_Comm_rank #define MPI_Comm_rank(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_rank",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Comm_compare #undef MPI_Comm_compare #define MPI_Comm_compare(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Comm_compare",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Topo_test #undef MPI_Topo_test #define MPI_Topo_test(a1,a2) PyMPI_UNAVAILABLE("MPI_Topo_test",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Comm_test_inter #undef MPI_Comm_test_inter #define MPI_Comm_test_inter(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_test_inter",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Abort #undef MPI_Abort #define MPI_Abort(a1,a2) PyMPI_UNAVAILABLE("MPI_Abort",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Send #undef MPI_Send #define MPI_Send(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Send",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Recv #undef MPI_Recv #define MPI_Recv(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Recv",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Sendrecv #undef MPI_Sendrecv #define MPI_Sendrecv(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12) PyMPI_UNAVAILABLE("MPI_Sendrecv",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12) #endif #ifndef PyMPI_HAVE_MPI_Sendrecv_replace #undef MPI_Sendrecv_replace #define MPI_Sendrecv_replace(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Sendrecv_replace",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_BSEND_OVERHEAD #undef MPI_BSEND_OVERHEAD #define MPI_BSEND_OVERHEAD (0) #endif #ifndef PyMPI_HAVE_MPI_Buffer_attach #undef MPI_Buffer_attach #define MPI_Buffer_attach(a1,a2) PyMPI_UNAVAILABLE("MPI_Buffer_attach",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Buffer_detach #undef MPI_Buffer_detach #define MPI_Buffer_detach(a1,a2) PyMPI_UNAVAILABLE("MPI_Buffer_detach",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Bsend #undef MPI_Bsend #define MPI_Bsend(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Bsend",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Ssend #undef MPI_Ssend #define MPI_Ssend(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Ssend",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Rsend #undef MPI_Rsend #define MPI_Rsend(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Rsend",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Isend #undef MPI_Isend #define MPI_Isend(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Isend",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Ibsend #undef MPI_Ibsend #define MPI_Ibsend(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Ibsend",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Issend #undef MPI_Issend 
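/*
 * The nonblocking send variants below use the same stub technique as the
 * blocking calls above: a missing function is replaced by a macro with the
 * matching argument count that expands to PyMPI_UNAVAILABLE("name", ...),
 * which presumably reports the missing call and yields a non-success error
 * code at run time instead of leaving an unresolved symbol at link time.
 * A hedged sketch of a call site that tolerates the stub, assuming exactly
 * that run-time behaviour:
 *
 *   static int try_issend(void *buf, int dest, int tag, MPI_Comm comm)
 *   {
 *     MPI_Request request = MPI_REQUEST_NULL;
 *     int ierr = MPI_Issend(buf, 1, MPI_INT, dest, tag, comm, &request);
 *     if (ierr != MPI_SUCCESS)
 *       return ierr;   // MPI_Issend missing (or failed): degrade gracefully
 *     return MPI_Wait(&request, MPI_STATUS_IGNORE);
 *   }
 */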
#define MPI_Issend(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Issend",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Irsend #undef MPI_Irsend #define MPI_Irsend(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Irsend",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Irecv #undef MPI_Irecv #define MPI_Irecv(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Irecv",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Send_init #undef MPI_Send_init #define MPI_Send_init(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Send_init",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Bsend_init #undef MPI_Bsend_init #define MPI_Bsend_init(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Bsend_init",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Ssend_init #undef MPI_Ssend_init #define MPI_Ssend_init(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Ssend_init",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Rsend_init #undef MPI_Rsend_init #define MPI_Rsend_init(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Rsend_init",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Recv_init #undef MPI_Recv_init #define MPI_Recv_init(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Recv_init",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Probe #undef MPI_Probe #define MPI_Probe(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Probe",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Iprobe #undef MPI_Iprobe #define MPI_Iprobe(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Iprobe",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_MESSAGE_NULL #undef MPI_MESSAGE_NULL #define MPI_MESSAGE_NULL ((MPI_Message)0) #endif #ifndef PyMPI_HAVE_MPI_MESSAGE_NO_PROC #undef MPI_MESSAGE_NO_PROC #define MPI_MESSAGE_NO_PROC ((MPI_Message)MPI_MESSAGE_NULL) #endif #ifndef PyMPI_HAVE_MPI_Mprobe #undef MPI_Mprobe #define MPI_Mprobe(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Mprobe",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Improbe #undef MPI_Improbe #define MPI_Improbe(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Improbe",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Mrecv #undef MPI_Mrecv #define MPI_Mrecv(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Mrecv",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Imrecv #undef MPI_Imrecv #define MPI_Imrecv(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Imrecv",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Barrier #undef MPI_Barrier #define MPI_Barrier(a1) PyMPI_UNAVAILABLE("MPI_Barrier",a1) #endif #ifndef PyMPI_HAVE_MPI_Bcast #undef MPI_Bcast #define MPI_Bcast(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Bcast",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Gather #undef MPI_Gather #define MPI_Gather(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Gather",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Gatherv #undef MPI_Gatherv #define MPI_Gatherv(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Gatherv",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Scatter #undef MPI_Scatter #define MPI_Scatter(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Scatter",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Scatterv #undef MPI_Scatterv #define MPI_Scatterv(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Scatterv",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Allgather #undef MPI_Allgather #define MPI_Allgather(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Allgather",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Allgatherv #undef MPI_Allgatherv #define MPI_Allgatherv(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Allgatherv",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Alltoall #undef MPI_Alltoall 
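/*
 * Every block in this header is keyed on a PyMPI_HAVE_<symbol> macro,
 * presumably defined elsewhere by mpi4py's build-time configuration when
 * the probed MPI implementation provides that symbol.  Code that only wants
 * a fast path when the real collective exists can test the same guard; a
 * minimal sketch, assuming the guard macros are visible to the including
 * translation unit:
 *
 *   #ifdef PyMPI_HAVE_MPI_Alltoall
 *     // real MPI_Alltoall available: call it directly
 *   #else
 *     // only the PyMPI_UNAVAILABLE stub exists: fall back to an
 *     // exchange hand-rolled from point-to-point calls
 *   #endif
 */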
#define MPI_Alltoall(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Alltoall",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Alltoallv #undef MPI_Alltoallv #define MPI_Alltoallv(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Alltoallv",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Alltoallw #undef MPI_Alltoallw #define MPI_Alltoallw(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Alltoallw",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Reduce #undef MPI_Reduce #define MPI_Reduce(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Reduce",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Allreduce #undef MPI_Allreduce #define MPI_Allreduce(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Allreduce",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Reduce_local #undef MPI_Reduce_local #define MPI_Reduce_local(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Reduce_local",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Reduce_scatter_block #undef MPI_Reduce_scatter_block #define MPI_Reduce_scatter_block(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Reduce_scatter_block",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Reduce_scatter #undef MPI_Reduce_scatter #define MPI_Reduce_scatter(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Reduce_scatter",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Scan #undef MPI_Scan #define MPI_Scan(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Scan",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Exscan #undef MPI_Exscan #define MPI_Exscan(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Exscan",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Neighbor_allgather #undef MPI_Neighbor_allgather #define MPI_Neighbor_allgather(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Neighbor_allgather",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Neighbor_allgatherv #undef MPI_Neighbor_allgatherv #define MPI_Neighbor_allgatherv(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Neighbor_allgatherv",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Neighbor_alltoall #undef MPI_Neighbor_alltoall #define MPI_Neighbor_alltoall(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Neighbor_alltoall",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Neighbor_alltoallv #undef MPI_Neighbor_alltoallv #define MPI_Neighbor_alltoallv(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Neighbor_alltoallv",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Neighbor_alltoallw #undef MPI_Neighbor_alltoallw #define MPI_Neighbor_alltoallw(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Neighbor_alltoallw",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Ibarrier #undef MPI_Ibarrier #define MPI_Ibarrier(a1,a2) PyMPI_UNAVAILABLE("MPI_Ibarrier",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Ibcast #undef MPI_Ibcast #define MPI_Ibcast(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Ibcast",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Igather #undef MPI_Igather #define MPI_Igather(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Igather",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Igatherv #undef MPI_Igatherv #define MPI_Igatherv(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) PyMPI_UNAVAILABLE("MPI_Igatherv",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) #endif #ifndef PyMPI_HAVE_MPI_Iscatter #undef MPI_Iscatter #define MPI_Iscatter(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Iscatter",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Iscatterv #undef MPI_Iscatterv #define MPI_Iscatterv(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) PyMPI_UNAVAILABLE("MPI_Iscatterv",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) #endif #ifndef 
PyMPI_HAVE_MPI_Iallgather #undef MPI_Iallgather #define MPI_Iallgather(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Iallgather",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Iallgatherv #undef MPI_Iallgatherv #define MPI_Iallgatherv(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Iallgatherv",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Ialltoall #undef MPI_Ialltoall #define MPI_Ialltoall(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Ialltoall",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Ialltoallv #undef MPI_Ialltoallv #define MPI_Ialltoallv(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) PyMPI_UNAVAILABLE("MPI_Ialltoallv",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) #endif #ifndef PyMPI_HAVE_MPI_Ialltoallw #undef MPI_Ialltoallw #define MPI_Ialltoallw(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) PyMPI_UNAVAILABLE("MPI_Ialltoallw",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) #endif #ifndef PyMPI_HAVE_MPI_Ireduce #undef MPI_Ireduce #define MPI_Ireduce(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Ireduce",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Iallreduce #undef MPI_Iallreduce #define MPI_Iallreduce(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Iallreduce",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Ireduce_scatter_block #undef MPI_Ireduce_scatter_block #define MPI_Ireduce_scatter_block(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Ireduce_scatter_block",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Ireduce_scatter #undef MPI_Ireduce_scatter #define MPI_Ireduce_scatter(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Ireduce_scatter",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Iscan #undef MPI_Iscan #define MPI_Iscan(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Iscan",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Iexscan #undef MPI_Iexscan #define MPI_Iexscan(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Iexscan",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Ineighbor_allgather #undef MPI_Ineighbor_allgather #define MPI_Ineighbor_allgather(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Ineighbor_allgather",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Ineighbor_allgatherv #undef MPI_Ineighbor_allgatherv #define MPI_Ineighbor_allgatherv(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Ineighbor_allgatherv",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Ineighbor_alltoall #undef MPI_Ineighbor_alltoall #define MPI_Ineighbor_alltoall(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Ineighbor_alltoall",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Ineighbor_alltoallv #undef MPI_Ineighbor_alltoallv #define MPI_Ineighbor_alltoallv(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) PyMPI_UNAVAILABLE("MPI_Ineighbor_alltoallv",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) #endif #ifndef PyMPI_HAVE_MPI_Ineighbor_alltoallw #undef MPI_Ineighbor_alltoallw #define MPI_Ineighbor_alltoallw(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) PyMPI_UNAVAILABLE("MPI_Ineighbor_alltoallw",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) #endif #ifndef PyMPI_HAVE_MPI_Comm_dup #undef MPI_Comm_dup #define MPI_Comm_dup(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_dup",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Comm_dup_with_info #undef MPI_Comm_dup_with_info #define MPI_Comm_dup_with_info(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Comm_dup_with_info",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Comm_idup #undef MPI_Comm_idup #define MPI_Comm_idup(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Comm_idup",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Comm_create #undef MPI_Comm_create #define MPI_Comm_create(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Comm_create",a1,a2,a3) #endif 
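/*
 * Predefined handles in this header fall back to a cast of zero (for
 * example MPI_COMM_NULL above as ((MPI_Comm)0)), and the non-null
 * predefined handles such as MPI_COMM_SELF and MPI_COMM_WORLD then alias
 * that null handle.  These definitions exist so the wrapper still compiles
 * against implementations lacking a symbol; at run time an aliased handle
 * compares equal to the corresponding null handle, so the usual null-handle
 * check flags it as unusable.  A minimal sketch (is_usable_comm is a
 * hypothetical helper, for illustration only):
 *
 *   static int is_usable_comm(MPI_Comm comm)
 *   {
 *     return comm != MPI_COMM_NULL;  // fallback handles compare equal to null
 *   }
 */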
#ifndef PyMPI_HAVE_MPI_Comm_create_group #undef MPI_Comm_create_group #define MPI_Comm_create_group(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Comm_create_group",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Comm_split #undef MPI_Comm_split #define MPI_Comm_split(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Comm_split",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_COMM_TYPE_SHARED #undef MPI_COMM_TYPE_SHARED #define MPI_COMM_TYPE_SHARED (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_Comm_split_type #undef MPI_Comm_split_type #define MPI_Comm_split_type(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Comm_split_type",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Comm_set_info #undef MPI_Comm_set_info #define MPI_Comm_set_info(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_set_info",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Comm_get_info #undef MPI_Comm_get_info #define MPI_Comm_get_info(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_get_info",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_CART #undef MPI_CART #define MPI_CART (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_Cart_create #undef MPI_Cart_create #define MPI_Cart_create(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Cart_create",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Cartdim_get #undef MPI_Cartdim_get #define MPI_Cartdim_get(a1,a2) PyMPI_UNAVAILABLE("MPI_Cartdim_get",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Cart_get #undef MPI_Cart_get #define MPI_Cart_get(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Cart_get",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Cart_rank #undef MPI_Cart_rank #define MPI_Cart_rank(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Cart_rank",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Cart_coords #undef MPI_Cart_coords #define MPI_Cart_coords(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Cart_coords",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Cart_shift #undef MPI_Cart_shift #define MPI_Cart_shift(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Cart_shift",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Cart_sub #undef MPI_Cart_sub #define MPI_Cart_sub(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Cart_sub",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Cart_map #undef MPI_Cart_map #define MPI_Cart_map(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Cart_map",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Dims_create #undef MPI_Dims_create #define MPI_Dims_create(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Dims_create",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_GRAPH #undef MPI_GRAPH #define MPI_GRAPH (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_Graph_create #undef MPI_Graph_create #define MPI_Graph_create(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Graph_create",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Graphdims_get #undef MPI_Graphdims_get #define MPI_Graphdims_get(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Graphdims_get",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Graph_get #undef MPI_Graph_get #define MPI_Graph_get(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Graph_get",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Graph_map #undef MPI_Graph_map #define MPI_Graph_map(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Graph_map",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Graph_neighbors_count #undef MPI_Graph_neighbors_count #define MPI_Graph_neighbors_count(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Graph_neighbors_count",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Graph_neighbors #undef MPI_Graph_neighbors #define MPI_Graph_neighbors(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Graph_neighbors",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_DIST_GRAPH #undef MPI_DIST_GRAPH #define MPI_DIST_GRAPH (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_UNWEIGHTED #undef MPI_UNWEIGHTED #define MPI_UNWEIGHTED ((int*)0) 
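/*
 * MPI_UNWEIGHTED is one of several pointer-valued sentinels in this header
 * (MPI_WEIGHTS_EMPTY, MPI_ARGV_NULL and MPI_ERRCODES_IGNORE below follow
 * the same scheme): when missing, it degrades to a typed null pointer so
 * that passing it around and comparing against it stays well defined.  A
 * minimal sketch of the comparison idiom for a weights array in
 * distributed-graph code (first_weight is a hypothetical helper):
 *
 *   // weights may legitimately be MPI_UNWEIGHTED; only dereference it
 *   // after ruling that case out.
 *   static int first_weight(const int *weights, int n)
 *   {
 *     if (weights == MPI_UNWEIGHTED || n <= 0) return MPI_UNDEFINED;
 *     return weights[0];
 *   }
 */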
#endif #ifndef PyMPI_HAVE_MPI_WEIGHTS_EMPTY #undef MPI_WEIGHTS_EMPTY #define MPI_WEIGHTS_EMPTY ((int*)MPI_UNWEIGHTED) #endif #ifndef PyMPI_HAVE_MPI_Dist_graph_create_adjacent #undef MPI_Dist_graph_create_adjacent #define MPI_Dist_graph_create_adjacent(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) PyMPI_UNAVAILABLE("MPI_Dist_graph_create_adjacent",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) #endif #ifndef PyMPI_HAVE_MPI_Dist_graph_create #undef MPI_Dist_graph_create #define MPI_Dist_graph_create(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Dist_graph_create",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Dist_graph_neighbors_count #undef MPI_Dist_graph_neighbors_count #define MPI_Dist_graph_neighbors_count(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Dist_graph_neighbors_count",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Dist_graph_neighbors #undef MPI_Dist_graph_neighbors #define MPI_Dist_graph_neighbors(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Dist_graph_neighbors",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Intercomm_create #undef MPI_Intercomm_create #define MPI_Intercomm_create(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Intercomm_create",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Comm_remote_group #undef MPI_Comm_remote_group #define MPI_Comm_remote_group(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_remote_group",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Comm_remote_size #undef MPI_Comm_remote_size #define MPI_Comm_remote_size(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_remote_size",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Intercomm_merge #undef MPI_Intercomm_merge #define MPI_Intercomm_merge(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Intercomm_merge",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_MAX_PORT_NAME #undef MPI_MAX_PORT_NAME #define MPI_MAX_PORT_NAME (1) #endif #ifndef PyMPI_HAVE_MPI_Open_port #undef MPI_Open_port #define MPI_Open_port(a1,a2) PyMPI_UNAVAILABLE("MPI_Open_port",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Close_port #undef MPI_Close_port #define MPI_Close_port(a1) PyMPI_UNAVAILABLE("MPI_Close_port",a1) #endif #ifndef PyMPI_HAVE_MPI_Publish_name #undef MPI_Publish_name #define MPI_Publish_name(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Publish_name",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Unpublish_name #undef MPI_Unpublish_name #define MPI_Unpublish_name(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Unpublish_name",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Lookup_name #undef MPI_Lookup_name #define MPI_Lookup_name(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Lookup_name",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Comm_accept #undef MPI_Comm_accept #define MPI_Comm_accept(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Comm_accept",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Comm_connect #undef MPI_Comm_connect #define MPI_Comm_connect(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Comm_connect",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Comm_join #undef MPI_Comm_join #define MPI_Comm_join(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_join",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Comm_disconnect #undef MPI_Comm_disconnect #define MPI_Comm_disconnect(a1) PyMPI_UNAVAILABLE("MPI_Comm_disconnect",a1) #endif #ifndef PyMPI_HAVE_MPI_ARGV_NULL #undef MPI_ARGV_NULL #define MPI_ARGV_NULL ((char**)0) #endif #ifndef PyMPI_HAVE_MPI_ARGVS_NULL #undef MPI_ARGVS_NULL #define MPI_ARGVS_NULL ((char***)0) #endif #ifndef PyMPI_HAVE_MPI_ERRCODES_IGNORE #undef MPI_ERRCODES_IGNORE #define MPI_ERRCODES_IGNORE ((int*)0) #endif #ifndef PyMPI_HAVE_MPI_Comm_spawn #undef MPI_Comm_spawn #define MPI_Comm_spawn(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Comm_spawn",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef 
PyMPI_HAVE_MPI_Comm_spawn_multiple #undef MPI_Comm_spawn_multiple #define MPI_Comm_spawn_multiple(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Comm_spawn_multiple",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Comm_get_parent #undef MPI_Comm_get_parent #define MPI_Comm_get_parent(a1) PyMPI_UNAVAILABLE("MPI_Comm_get_parent",a1) #endif #ifndef PyMPI_HAVE_MPI_Errhandler_get #undef MPI_Errhandler_get #define MPI_Errhandler_get(a1,a2) PyMPI_UNAVAILABLE("MPI_Errhandler_get",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Errhandler_set #undef MPI_Errhandler_set #define MPI_Errhandler_set(a1,a2) PyMPI_UNAVAILABLE("MPI_Errhandler_set",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Handler_function #undef MPI_Handler_function typedef void (PyMPI_MPI_Handler_function)(MPI_Comm*,int*,...); #define MPI_Handler_function PyMPI_MPI_Handler_function #endif #ifndef PyMPI_HAVE_MPI_Errhandler_create #undef MPI_Errhandler_create #define MPI_Errhandler_create(a1,a2) PyMPI_UNAVAILABLE("MPI_Errhandler_create",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_TAG_UB #undef MPI_TAG_UB #define MPI_TAG_UB (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_HOST #undef MPI_HOST #define MPI_HOST (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_IO #undef MPI_IO #define MPI_IO (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_WTIME_IS_GLOBAL #undef MPI_WTIME_IS_GLOBAL #define MPI_WTIME_IS_GLOBAL (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_Attr_get #undef MPI_Attr_get #define MPI_Attr_get(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Attr_get",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Attr_put #undef MPI_Attr_put #define MPI_Attr_put(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Attr_put",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Attr_delete #undef MPI_Attr_delete #define MPI_Attr_delete(a1,a2) PyMPI_UNAVAILABLE("MPI_Attr_delete",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Copy_function #undef MPI_Copy_function typedef int (PyMPI_MPI_Copy_function)(MPI_Comm,int,void*,void*,void*,int*); #define MPI_Copy_function PyMPI_MPI_Copy_function #endif #ifndef PyMPI_HAVE_MPI_Delete_function #undef MPI_Delete_function typedef int (PyMPI_MPI_Delete_function)(MPI_Comm,int,void*,void*); #define MPI_Delete_function PyMPI_MPI_Delete_function #endif #ifndef PyMPI_HAVE_MPI_DUP_FN #undef MPI_DUP_FN #define MPI_DUP_FN (0) #endif #ifndef PyMPI_HAVE_MPI_NULL_COPY_FN #undef MPI_NULL_COPY_FN #define MPI_NULL_COPY_FN (0) #endif #ifndef PyMPI_HAVE_MPI_NULL_DELETE_FN #undef MPI_NULL_DELETE_FN #define MPI_NULL_DELETE_FN (0) #endif #ifndef PyMPI_HAVE_MPI_Keyval_create #undef MPI_Keyval_create #define MPI_Keyval_create(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Keyval_create",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Keyval_free #undef MPI_Keyval_free #define MPI_Keyval_free(a1) PyMPI_UNAVAILABLE("MPI_Keyval_free",a1) #endif #ifndef PyMPI_HAVE_MPI_Comm_get_errhandler #undef MPI_Comm_get_errhandler #define MPI_Comm_get_errhandler MPI_Errhandler_get #endif #ifndef PyMPI_HAVE_MPI_Comm_set_errhandler #undef MPI_Comm_set_errhandler #define MPI_Comm_set_errhandler MPI_Errhandler_set #endif #ifndef PyMPI_HAVE_MPI_Comm_errhandler_fn #undef MPI_Comm_errhandler_fn #define MPI_Comm_errhandler_fn MPI_Handler_function #endif #ifndef PyMPI_HAVE_MPI_Comm_errhandler_function #undef MPI_Comm_errhandler_function #define MPI_Comm_errhandler_function MPI_Comm_errhandler_fn #endif #ifndef PyMPI_HAVE_MPI_Comm_create_errhandler #undef MPI_Comm_create_errhandler #define MPI_Comm_create_errhandler MPI_Errhandler_create #endif #ifndef PyMPI_HAVE_MPI_Comm_call_errhandler #undef MPI_Comm_call_errhandler #define 
MPI_Comm_call_errhandler(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_call_errhandler",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Comm_get_name #undef MPI_Comm_get_name #define MPI_Comm_get_name(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Comm_get_name",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Comm_set_name #undef MPI_Comm_set_name #define MPI_Comm_set_name(a1,a2) PyMPI_UNAVAILABLE("MPI_Comm_set_name",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_UNIVERSE_SIZE #undef MPI_UNIVERSE_SIZE #define MPI_UNIVERSE_SIZE (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_APPNUM #undef MPI_APPNUM #define MPI_APPNUM (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_LASTUSEDCODE #undef MPI_LASTUSEDCODE #define MPI_LASTUSEDCODE (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_Comm_get_attr #undef MPI_Comm_get_attr #define MPI_Comm_get_attr MPI_Attr_get #endif #ifndef PyMPI_HAVE_MPI_Comm_set_attr #undef MPI_Comm_set_attr #define MPI_Comm_set_attr MPI_Attr_put #endif #ifndef PyMPI_HAVE_MPI_Comm_delete_attr #undef MPI_Comm_delete_attr #define MPI_Comm_delete_attr MPI_Attr_delete #endif #ifndef PyMPI_HAVE_MPI_Comm_copy_attr_function #undef MPI_Comm_copy_attr_function #define MPI_Comm_copy_attr_function MPI_Copy_function #endif #ifndef PyMPI_HAVE_MPI_Comm_delete_attr_function #undef MPI_Comm_delete_attr_function #define MPI_Comm_delete_attr_function MPI_Delete_function #endif #ifndef PyMPI_HAVE_MPI_COMM_DUP_FN #undef MPI_COMM_DUP_FN #define MPI_COMM_DUP_FN (MPI_DUP_FN) #endif #ifndef PyMPI_HAVE_MPI_COMM_NULL_COPY_FN #undef MPI_COMM_NULL_COPY_FN #define MPI_COMM_NULL_COPY_FN (MPI_NULL_COPY_FN) #endif #ifndef PyMPI_HAVE_MPI_COMM_NULL_DELETE_FN #undef MPI_COMM_NULL_DELETE_FN #define MPI_COMM_NULL_DELETE_FN (MPI_NULL_DELETE_FN) #endif #ifndef PyMPI_HAVE_MPI_Comm_create_keyval #undef MPI_Comm_create_keyval #define MPI_Comm_create_keyval MPI_Keyval_create #endif #ifndef PyMPI_HAVE_MPI_Comm_free_keyval #undef MPI_Comm_free_keyval #define MPI_Comm_free_keyval MPI_Keyval_free #endif #ifndef PyMPI_HAVE_MPI_WIN_NULL #undef MPI_WIN_NULL #define MPI_WIN_NULL ((MPI_Win)0) #endif #ifndef PyMPI_HAVE_MPI_Win_free #undef MPI_Win_free #define MPI_Win_free(a1) PyMPI_UNAVAILABLE("MPI_Win_free",a1) #endif #ifndef PyMPI_HAVE_MPI_Win_create #undef MPI_Win_create #define MPI_Win_create(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Win_create",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Win_allocate #undef MPI_Win_allocate #define MPI_Win_allocate(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Win_allocate",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Win_allocate_shared #undef MPI_Win_allocate_shared #define MPI_Win_allocate_shared(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_Win_allocate_shared",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_Win_shared_query #undef MPI_Win_shared_query #define MPI_Win_shared_query(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Win_shared_query",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_Win_create_dynamic #undef MPI_Win_create_dynamic #define MPI_Win_create_dynamic(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Win_create_dynamic",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Win_attach #undef MPI_Win_attach #define MPI_Win_attach(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Win_attach",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Win_detach #undef MPI_Win_detach #define MPI_Win_detach(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_detach",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_set_info #undef MPI_Win_set_info #define MPI_Win_set_info(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_set_info",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_get_info #undef MPI_Win_get_info #define MPI_Win_get_info(a1,a2) 
PyMPI_UNAVAILABLE("MPI_Win_get_info",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_get_group #undef MPI_Win_get_group #define MPI_Win_get_group(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_get_group",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Get #undef MPI_Get #define MPI_Get(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Get",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Put #undef MPI_Put #define MPI_Put(a1,a2,a3,a4,a5,a6,a7,a8) PyMPI_UNAVAILABLE("MPI_Put",a1,a2,a3,a4,a5,a6,a7,a8) #endif #ifndef PyMPI_HAVE_MPI_Accumulate #undef MPI_Accumulate #define MPI_Accumulate(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Accumulate",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Get_accumulate #undef MPI_Get_accumulate #define MPI_Get_accumulate(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12) PyMPI_UNAVAILABLE("MPI_Get_accumulate",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12) #endif #ifndef PyMPI_HAVE_MPI_Fetch_and_op #undef MPI_Fetch_and_op #define MPI_Fetch_and_op(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Fetch_and_op",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Compare_and_swap #undef MPI_Compare_and_swap #define MPI_Compare_and_swap(a1,a2,a3,a4,a5,a6,a7) PyMPI_UNAVAILABLE("MPI_Compare_and_swap",a1,a2,a3,a4,a5,a6,a7) #endif #ifndef PyMPI_HAVE_MPI_Rget #undef MPI_Rget #define MPI_Rget(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Rget",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Rput #undef MPI_Rput #define MPI_Rput(a1,a2,a3,a4,a5,a6,a7,a8,a9) PyMPI_UNAVAILABLE("MPI_Rput",a1,a2,a3,a4,a5,a6,a7,a8,a9) #endif #ifndef PyMPI_HAVE_MPI_Raccumulate #undef MPI_Raccumulate #define MPI_Raccumulate(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) PyMPI_UNAVAILABLE("MPI_Raccumulate",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10) #endif #ifndef PyMPI_HAVE_MPI_Rget_accumulate #undef MPI_Rget_accumulate #define MPI_Rget_accumulate(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13) PyMPI_UNAVAILABLE("MPI_Rget_accumulate",a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13) #endif #ifndef PyMPI_HAVE_MPI_MODE_NOCHECK #undef MPI_MODE_NOCHECK #define MPI_MODE_NOCHECK (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_MODE_NOSTORE #undef MPI_MODE_NOSTORE #define MPI_MODE_NOSTORE (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_MODE_NOPUT #undef MPI_MODE_NOPUT #define MPI_MODE_NOPUT (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_MODE_NOPRECEDE #undef MPI_MODE_NOPRECEDE #define MPI_MODE_NOPRECEDE (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_MODE_NOSUCCEED #undef MPI_MODE_NOSUCCEED #define MPI_MODE_NOSUCCEED (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_Win_fence #undef MPI_Win_fence #define MPI_Win_fence(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_fence",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_post #undef MPI_Win_post #define MPI_Win_post(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Win_post",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Win_start #undef MPI_Win_start #define MPI_Win_start(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Win_start",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Win_complete #undef MPI_Win_complete #define MPI_Win_complete(a1) PyMPI_UNAVAILABLE("MPI_Win_complete",a1) #endif #ifndef PyMPI_HAVE_MPI_Win_wait #undef MPI_Win_wait #define MPI_Win_wait(a1) PyMPI_UNAVAILABLE("MPI_Win_wait",a1) #endif #ifndef PyMPI_HAVE_MPI_Win_test #undef MPI_Win_test #define MPI_Win_test(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_test",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_LOCK_EXCLUSIVE #undef MPI_LOCK_EXCLUSIVE #define MPI_LOCK_EXCLUSIVE (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_LOCK_SHARED #undef MPI_LOCK_SHARED #define MPI_LOCK_SHARED (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_Win_lock #undef 
MPI_Win_lock #define MPI_Win_lock(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Win_lock",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Win_unlock #undef MPI_Win_unlock #define MPI_Win_unlock(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_unlock",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_lock_all #undef MPI_Win_lock_all #define MPI_Win_lock_all(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_lock_all",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_unlock_all #undef MPI_Win_unlock_all #define MPI_Win_unlock_all(a1) PyMPI_UNAVAILABLE("MPI_Win_unlock_all",a1) #endif #ifndef PyMPI_HAVE_MPI_Win_flush #undef MPI_Win_flush #define MPI_Win_flush(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_flush",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_flush_all #undef MPI_Win_flush_all #define MPI_Win_flush_all(a1) PyMPI_UNAVAILABLE("MPI_Win_flush_all",a1) #endif #ifndef PyMPI_HAVE_MPI_Win_flush_local #undef MPI_Win_flush_local #define MPI_Win_flush_local(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_flush_local",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_flush_local_all #undef MPI_Win_flush_local_all #define MPI_Win_flush_local_all(a1) PyMPI_UNAVAILABLE("MPI_Win_flush_local_all",a1) #endif #ifndef PyMPI_HAVE_MPI_Win_sync #undef MPI_Win_sync #define MPI_Win_sync(a1) PyMPI_UNAVAILABLE("MPI_Win_sync",a1) #endif #ifndef PyMPI_HAVE_MPI_Win_get_errhandler #undef MPI_Win_get_errhandler #define MPI_Win_get_errhandler(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_get_errhandler",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_set_errhandler #undef MPI_Win_set_errhandler #define MPI_Win_set_errhandler(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_set_errhandler",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_errhandler_fn #undef MPI_Win_errhandler_fn typedef void (PyMPI_MPI_Win_errhandler_fn)(MPI_Win*,int*,...); #define MPI_Win_errhandler_fn PyMPI_MPI_Win_errhandler_fn #endif #ifndef PyMPI_HAVE_MPI_Win_errhandler_function #undef MPI_Win_errhandler_function #define MPI_Win_errhandler_function MPI_Win_errhandler_fn #endif #ifndef PyMPI_HAVE_MPI_Win_create_errhandler #undef MPI_Win_create_errhandler #define MPI_Win_create_errhandler(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_create_errhandler",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_call_errhandler #undef MPI_Win_call_errhandler #define MPI_Win_call_errhandler(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_call_errhandler",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_get_name #undef MPI_Win_get_name #define MPI_Win_get_name(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Win_get_name",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Win_set_name #undef MPI_Win_set_name #define MPI_Win_set_name(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_set_name",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_WIN_BASE #undef MPI_WIN_BASE #define MPI_WIN_BASE (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_WIN_SIZE #undef MPI_WIN_SIZE #define MPI_WIN_SIZE (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_WIN_DISP_UNIT #undef MPI_WIN_DISP_UNIT #define MPI_WIN_DISP_UNIT (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_WIN_CREATE_FLAVOR #undef MPI_WIN_CREATE_FLAVOR #define MPI_WIN_CREATE_FLAVOR (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_WIN_MODEL #undef MPI_WIN_MODEL #define MPI_WIN_MODEL (MPI_KEYVAL_INVALID) #endif #ifndef PyMPI_HAVE_MPI_WIN_FLAVOR_CREATE #undef MPI_WIN_FLAVOR_CREATE #define MPI_WIN_FLAVOR_CREATE (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_WIN_FLAVOR_ALLOCATE #undef MPI_WIN_FLAVOR_ALLOCATE #define MPI_WIN_FLAVOR_ALLOCATE (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_WIN_FLAVOR_DYNAMIC #undef MPI_WIN_FLAVOR_DYNAMIC #define MPI_WIN_FLAVOR_DYNAMIC (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_WIN_FLAVOR_SHARED #undef MPI_WIN_FLAVOR_SHARED 
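/*
 * The window attribute keys above (MPI_WIN_BASE, MPI_WIN_SIZE,
 * MPI_WIN_CREATE_FLAVOR, MPI_WIN_MODEL, ...) fall back to
 * MPI_KEYVAL_INVALID, and the flavor/model constants fall back to
 * MPI_UNDEFINED, so an attribute query either fails outright or yields a
 * value matching no known flavor.  A minimal sketch of the defensive query
 * idiom, using MPI_Win_get_attr (declared just below); with the stubs in
 * effect it simply reports "not shared":
 *
 *   static int win_is_shared(MPI_Win win)
 *   {
 *     int *flavor = NULL, flag = 0;
 *     if (MPI_Win_get_attr(win, MPI_WIN_CREATE_FLAVOR,
 *                          &flavor, &flag) != MPI_SUCCESS || !flag)
 *       return 0;                      // attribute unsupported or unset
 *     return *flavor == MPI_WIN_FLAVOR_SHARED;
 *   }
 */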
#define MPI_WIN_FLAVOR_SHARED (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_WIN_SEPARATE #undef MPI_WIN_SEPARATE #define MPI_WIN_SEPARATE (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_WIN_UNIFIED #undef MPI_WIN_UNIFIED #define MPI_WIN_UNIFIED (MPI_UNDEFINED) #endif #ifndef PyMPI_HAVE_MPI_Win_get_attr #undef MPI_Win_get_attr #define MPI_Win_get_attr(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Win_get_attr",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Win_set_attr #undef MPI_Win_set_attr #define MPI_Win_set_attr(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Win_set_attr",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Win_delete_attr #undef MPI_Win_delete_attr #define MPI_Win_delete_attr(a1,a2) PyMPI_UNAVAILABLE("MPI_Win_delete_attr",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Win_copy_attr_function #undef MPI_Win_copy_attr_function typedef int (PyMPI_MPI_Win_copy_attr_function)(MPI_Win,int,void*,void*,void*,int*); #define MPI_Win_copy_attr_function PyMPI_MPI_Win_copy_attr_function #endif #ifndef PyMPI_HAVE_MPI_Win_delete_attr_function #undef MPI_Win_delete_attr_function typedef int (PyMPI_MPI_Win_delete_attr_function)(MPI_Win,int,void*,void*); #define MPI_Win_delete_attr_function PyMPI_MPI_Win_delete_attr_function #endif #ifndef PyMPI_HAVE_MPI_WIN_DUP_FN #undef MPI_WIN_DUP_FN #define MPI_WIN_DUP_FN (0) #endif #ifndef PyMPI_HAVE_MPI_WIN_NULL_COPY_FN #undef MPI_WIN_NULL_COPY_FN #define MPI_WIN_NULL_COPY_FN (0) #endif #ifndef PyMPI_HAVE_MPI_WIN_NULL_DELETE_FN #undef MPI_WIN_NULL_DELETE_FN #define MPI_WIN_NULL_DELETE_FN (0) #endif #ifndef PyMPI_HAVE_MPI_Win_create_keyval #undef MPI_Win_create_keyval #define MPI_Win_create_keyval(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Win_create_keyval",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Win_free_keyval #undef MPI_Win_free_keyval #define MPI_Win_free_keyval(a1) PyMPI_UNAVAILABLE("MPI_Win_free_keyval",a1) #endif #ifndef PyMPI_HAVE_MPI_FILE_NULL #undef MPI_FILE_NULL #define MPI_FILE_NULL ((MPI_File)0) #endif #ifndef PyMPI_HAVE_MPI_MODE_RDONLY #undef MPI_MODE_RDONLY #define MPI_MODE_RDONLY (1) #endif #ifndef PyMPI_HAVE_MPI_MODE_RDWR #undef MPI_MODE_RDWR #define MPI_MODE_RDWR (2) #endif #ifndef PyMPI_HAVE_MPI_MODE_WRONLY #undef MPI_MODE_WRONLY #define MPI_MODE_WRONLY (4) #endif #ifndef PyMPI_HAVE_MPI_MODE_CREATE #undef MPI_MODE_CREATE #define MPI_MODE_CREATE (8) #endif #ifndef PyMPI_HAVE_MPI_MODE_EXCL #undef MPI_MODE_EXCL #define MPI_MODE_EXCL (16) #endif #ifndef PyMPI_HAVE_MPI_MODE_DELETE_ON_CLOSE #undef MPI_MODE_DELETE_ON_CLOSE #define MPI_MODE_DELETE_ON_CLOSE (32) #endif #ifndef PyMPI_HAVE_MPI_MODE_UNIQUE_OPEN #undef MPI_MODE_UNIQUE_OPEN #define MPI_MODE_UNIQUE_OPEN (64) #endif #ifndef PyMPI_HAVE_MPI_MODE_APPEND #undef MPI_MODE_APPEND #define MPI_MODE_APPEND (128) #endif #ifndef PyMPI_HAVE_MPI_MODE_SEQUENTIAL #undef MPI_MODE_SEQUENTIAL #define MPI_MODE_SEQUENTIAL (256) #endif #ifndef PyMPI_HAVE_MPI_File_open #undef MPI_File_open #define MPI_File_open(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_open",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_close #undef MPI_File_close #define MPI_File_close(a1) PyMPI_UNAVAILABLE("MPI_File_close",a1) #endif #ifndef PyMPI_HAVE_MPI_File_delete #undef MPI_File_delete #define MPI_File_delete(a1,a2) PyMPI_UNAVAILABLE("MPI_File_delete",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_set_size #undef MPI_File_set_size #define MPI_File_set_size(a1,a2) PyMPI_UNAVAILABLE("MPI_File_set_size",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_preallocate #undef MPI_File_preallocate #define MPI_File_preallocate(a1,a2) PyMPI_UNAVAILABLE("MPI_File_preallocate",a1,a2) #endif #ifndef 
PyMPI_HAVE_MPI_File_get_size #undef MPI_File_get_size #define MPI_File_get_size(a1,a2) PyMPI_UNAVAILABLE("MPI_File_get_size",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_get_group #undef MPI_File_get_group #define MPI_File_get_group(a1,a2) PyMPI_UNAVAILABLE("MPI_File_get_group",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_get_amode #undef MPI_File_get_amode #define MPI_File_get_amode(a1,a2) PyMPI_UNAVAILABLE("MPI_File_get_amode",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_set_info #undef MPI_File_set_info #define MPI_File_set_info(a1,a2) PyMPI_UNAVAILABLE("MPI_File_set_info",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_get_info #undef MPI_File_get_info #define MPI_File_get_info(a1,a2) PyMPI_UNAVAILABLE("MPI_File_get_info",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_get_view #undef MPI_File_get_view #define MPI_File_get_view(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_get_view",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_set_view #undef MPI_File_set_view #define MPI_File_set_view(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_File_set_view",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_File_read_at #undef MPI_File_read_at #define MPI_File_read_at(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_File_read_at",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_File_read_at_all #undef MPI_File_read_at_all #define MPI_File_read_at_all(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_File_read_at_all",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_File_write_at #undef MPI_File_write_at #define MPI_File_write_at(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_File_write_at",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_File_write_at_all #undef MPI_File_write_at_all #define MPI_File_write_at_all(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_File_write_at_all",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_File_iread_at #undef MPI_File_iread_at #define MPI_File_iread_at(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_File_iread_at",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_File_iwrite_at #undef MPI_File_iwrite_at #define MPI_File_iwrite_at(a1,a2,a3,a4,a5,a6) PyMPI_UNAVAILABLE("MPI_File_iwrite_at",a1,a2,a3,a4,a5,a6) #endif #ifndef PyMPI_HAVE_MPI_SEEK_SET #undef MPI_SEEK_SET #define MPI_SEEK_SET (0) #endif #ifndef PyMPI_HAVE_MPI_SEEK_CUR #undef MPI_SEEK_CUR #define MPI_SEEK_CUR (1) #endif #ifndef PyMPI_HAVE_MPI_SEEK_END #undef MPI_SEEK_END #define MPI_SEEK_END (2) #endif #ifndef PyMPI_HAVE_MPI_DISPLACEMENT_CURRENT #undef MPI_DISPLACEMENT_CURRENT #define MPI_DISPLACEMENT_CURRENT (3) #endif #ifndef PyMPI_HAVE_MPI_File_seek #undef MPI_File_seek #define MPI_File_seek(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_seek",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_get_position #undef MPI_File_get_position #define MPI_File_get_position(a1,a2) PyMPI_UNAVAILABLE("MPI_File_get_position",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_get_byte_offset #undef MPI_File_get_byte_offset #define MPI_File_get_byte_offset(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_get_byte_offset",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_read #undef MPI_File_read #define MPI_File_read(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_read",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_read_all #undef MPI_File_read_all #define MPI_File_read_all(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_read_all",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_write #undef MPI_File_write #define MPI_File_write(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_write",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_write_all #undef MPI_File_write_all #define MPI_File_write_all(a1,a2,a3,a4,a5) 
PyMPI_UNAVAILABLE("MPI_File_write_all",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_iread #undef MPI_File_iread #define MPI_File_iread(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_iread",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_iwrite #undef MPI_File_iwrite #define MPI_File_iwrite(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_iwrite",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_read_shared #undef MPI_File_read_shared #define MPI_File_read_shared(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_read_shared",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_write_shared #undef MPI_File_write_shared #define MPI_File_write_shared(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_write_shared",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_iread_shared #undef MPI_File_iread_shared #define MPI_File_iread_shared(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_iread_shared",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_iwrite_shared #undef MPI_File_iwrite_shared #define MPI_File_iwrite_shared(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_iwrite_shared",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_read_ordered #undef MPI_File_read_ordered #define MPI_File_read_ordered(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_read_ordered",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_write_ordered #undef MPI_File_write_ordered #define MPI_File_write_ordered(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_write_ordered",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_seek_shared #undef MPI_File_seek_shared #define MPI_File_seek_shared(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_seek_shared",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_get_position_shared #undef MPI_File_get_position_shared #define MPI_File_get_position_shared(a1,a2) PyMPI_UNAVAILABLE("MPI_File_get_position_shared",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_read_at_all_begin #undef MPI_File_read_at_all_begin #define MPI_File_read_at_all_begin(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_read_at_all_begin",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_read_at_all_end #undef MPI_File_read_at_all_end #define MPI_File_read_at_all_end(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_read_at_all_end",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_write_at_all_begin #undef MPI_File_write_at_all_begin #define MPI_File_write_at_all_begin(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_File_write_at_all_begin",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_File_write_at_all_end #undef MPI_File_write_at_all_end #define MPI_File_write_at_all_end(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_write_at_all_end",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_read_all_begin #undef MPI_File_read_all_begin #define MPI_File_read_all_begin(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_File_read_all_begin",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_File_read_all_end #undef MPI_File_read_all_end #define MPI_File_read_all_end(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_read_all_end",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_write_all_begin #undef MPI_File_write_all_begin #define MPI_File_write_all_begin(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_File_write_all_begin",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_File_write_all_end #undef MPI_File_write_all_end #define MPI_File_write_all_end(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_write_all_end",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_read_ordered_begin #undef MPI_File_read_ordered_begin #define MPI_File_read_ordered_begin(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_File_read_ordered_begin",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_File_read_ordered_end #undef 
MPI_File_read_ordered_end #define MPI_File_read_ordered_end(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_read_ordered_end",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_write_ordered_begin #undef MPI_File_write_ordered_begin #define MPI_File_write_ordered_begin(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_File_write_ordered_begin",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_File_write_ordered_end #undef MPI_File_write_ordered_end #define MPI_File_write_ordered_end(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_write_ordered_end",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_get_type_extent #undef MPI_File_get_type_extent #define MPI_File_get_type_extent(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_File_get_type_extent",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_File_set_atomicity #undef MPI_File_set_atomicity #define MPI_File_set_atomicity(a1,a2) PyMPI_UNAVAILABLE("MPI_File_set_atomicity",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_get_atomicity #undef MPI_File_get_atomicity #define MPI_File_get_atomicity(a1,a2) PyMPI_UNAVAILABLE("MPI_File_get_atomicity",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_sync #undef MPI_File_sync #define MPI_File_sync(a1) PyMPI_UNAVAILABLE("MPI_File_sync",a1) #endif #ifndef PyMPI_HAVE_MPI_File_get_errhandler #undef MPI_File_get_errhandler #define MPI_File_get_errhandler(a1,a2) PyMPI_UNAVAILABLE("MPI_File_get_errhandler",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_set_errhandler #undef MPI_File_set_errhandler #define MPI_File_set_errhandler(a1,a2) PyMPI_UNAVAILABLE("MPI_File_set_errhandler",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_errhandler_fn #undef MPI_File_errhandler_fn typedef void (PyMPI_MPI_File_errhandler_fn)(MPI_File*,int*,...); #define MPI_File_errhandler_fn PyMPI_MPI_File_errhandler_fn #endif #ifndef PyMPI_HAVE_MPI_File_errhandler_function #undef MPI_File_errhandler_function #define MPI_File_errhandler_function MPI_File_errhandler_fn #endif #ifndef PyMPI_HAVE_MPI_File_create_errhandler #undef MPI_File_create_errhandler #define MPI_File_create_errhandler(a1,a2) PyMPI_UNAVAILABLE("MPI_File_create_errhandler",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_File_call_errhandler #undef MPI_File_call_errhandler #define MPI_File_call_errhandler(a1,a2) PyMPI_UNAVAILABLE("MPI_File_call_errhandler",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Datarep_conversion_function #undef MPI_Datarep_conversion_function typedef int (PyMPI_MPI_Datarep_conversion_function)(void*,MPI_Datatype,int,void*,MPI_Offset,void*); #define MPI_Datarep_conversion_function PyMPI_MPI_Datarep_conversion_function #endif #ifndef PyMPI_HAVE_MPI_Datarep_extent_function #undef MPI_Datarep_extent_function typedef int (PyMPI_MPI_Datarep_extent_function)(MPI_Datatype,MPI_Aint*,void*); #define MPI_Datarep_extent_function PyMPI_MPI_Datarep_extent_function #endif #ifndef PyMPI_HAVE_MPI_CONVERSION_FN_NULL #undef MPI_CONVERSION_FN_NULL #define MPI_CONVERSION_FN_NULL (0) #endif #ifndef PyMPI_HAVE_MPI_MAX_DATAREP_STRING #undef MPI_MAX_DATAREP_STRING #define MPI_MAX_DATAREP_STRING (1) #endif #ifndef PyMPI_HAVE_MPI_Register_datarep #undef MPI_Register_datarep #define MPI_Register_datarep(a1,a2,a3,a4,a5) PyMPI_UNAVAILABLE("MPI_Register_datarep",a1,a2,a3,a4,a5) #endif #ifndef PyMPI_HAVE_MPI_ERRHANDLER_NULL #undef MPI_ERRHANDLER_NULL #define MPI_ERRHANDLER_NULL ((MPI_Errhandler)0) #endif #ifndef PyMPI_HAVE_MPI_ERRORS_RETURN #undef MPI_ERRORS_RETURN #define MPI_ERRORS_RETURN ((MPI_Errhandler)MPI_ERRHANDLER_NULL) #endif #ifndef PyMPI_HAVE_MPI_ERRORS_ARE_FATAL #undef MPI_ERRORS_ARE_FATAL #define MPI_ERRORS_ARE_FATAL ((MPI_Errhandler)MPI_ERRHANDLER_NULL) #endif #ifndef 
PyMPI_HAVE_MPI_Errhandler_free #undef MPI_Errhandler_free #define MPI_Errhandler_free(a1) PyMPI_UNAVAILABLE("MPI_Errhandler_free",a1) #endif #ifndef PyMPI_HAVE_MPI_MAX_ERROR_STRING #undef MPI_MAX_ERROR_STRING #define MPI_MAX_ERROR_STRING (1) #endif #ifndef PyMPI_HAVE_MPI_Error_class #undef MPI_Error_class #define MPI_Error_class(a1,a2) PyMPI_UNAVAILABLE("MPI_Error_class",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Error_string #undef MPI_Error_string #define MPI_Error_string(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Error_string",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Add_error_class #undef MPI_Add_error_class #define MPI_Add_error_class(a1) PyMPI_UNAVAILABLE("MPI_Add_error_class",a1) #endif #ifndef PyMPI_HAVE_MPI_Add_error_code #undef MPI_Add_error_code #define MPI_Add_error_code(a1,a2) PyMPI_UNAVAILABLE("MPI_Add_error_code",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Add_error_string #undef MPI_Add_error_string #define MPI_Add_error_string(a1,a2) PyMPI_UNAVAILABLE("MPI_Add_error_string",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_SUCCESS #undef MPI_SUCCESS #define MPI_SUCCESS (0) #endif #ifndef PyMPI_HAVE_MPI_ERR_LASTCODE #undef MPI_ERR_LASTCODE #define MPI_ERR_LASTCODE (1) #endif #ifndef PyMPI_HAVE_MPI_ERR_COMM #undef MPI_ERR_COMM #define MPI_ERR_COMM (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_GROUP #undef MPI_ERR_GROUP #define MPI_ERR_GROUP (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_TYPE #undef MPI_ERR_TYPE #define MPI_ERR_TYPE (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_REQUEST #undef MPI_ERR_REQUEST #define MPI_ERR_REQUEST (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_OP #undef MPI_ERR_OP #define MPI_ERR_OP (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_BUFFER #undef MPI_ERR_BUFFER #define MPI_ERR_BUFFER (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_COUNT #undef MPI_ERR_COUNT #define MPI_ERR_COUNT (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_TAG #undef MPI_ERR_TAG #define MPI_ERR_TAG (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_RANK #undef MPI_ERR_RANK #define MPI_ERR_RANK (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_ROOT #undef MPI_ERR_ROOT #define MPI_ERR_ROOT (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_TRUNCATE #undef MPI_ERR_TRUNCATE #define MPI_ERR_TRUNCATE (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_IN_STATUS #undef MPI_ERR_IN_STATUS #define MPI_ERR_IN_STATUS (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_PENDING #undef MPI_ERR_PENDING #define MPI_ERR_PENDING (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_TOPOLOGY #undef MPI_ERR_TOPOLOGY #define MPI_ERR_TOPOLOGY (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_DIMS #undef MPI_ERR_DIMS #define MPI_ERR_DIMS (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_ARG #undef MPI_ERR_ARG #define MPI_ERR_ARG (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_OTHER #undef MPI_ERR_OTHER #define MPI_ERR_OTHER (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_UNKNOWN #undef MPI_ERR_UNKNOWN #define MPI_ERR_UNKNOWN (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_INTERN #undef MPI_ERR_INTERN #define MPI_ERR_INTERN (MPI_ERR_LASTCODE) #endif #ifndef PyMPI_HAVE_MPI_ERR_KEYVAL #undef MPI_ERR_KEYVAL #define MPI_ERR_KEYVAL (MPI_ERR_ARG) #endif #ifndef PyMPI_HAVE_MPI_ERR_NO_MEM #undef MPI_ERR_NO_MEM #define MPI_ERR_NO_MEM (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_INFO #undef MPI_ERR_INFO #define MPI_ERR_INFO (MPI_ERR_ARG) #endif #ifndef PyMPI_HAVE_MPI_ERR_INFO_KEY #undef MPI_ERR_INFO_KEY #define MPI_ERR_INFO_KEY (MPI_ERR_UNKNOWN) #endif #ifndef 
PyMPI_HAVE_MPI_ERR_INFO_VALUE #undef MPI_ERR_INFO_VALUE #define MPI_ERR_INFO_VALUE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_INFO_NOKEY #undef MPI_ERR_INFO_NOKEY #define MPI_ERR_INFO_NOKEY (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_SPAWN #undef MPI_ERR_SPAWN #define MPI_ERR_SPAWN (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_PORT #undef MPI_ERR_PORT #define MPI_ERR_PORT (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_SERVICE #undef MPI_ERR_SERVICE #define MPI_ERR_SERVICE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_NAME #undef MPI_ERR_NAME #define MPI_ERR_NAME (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_FILE #undef MPI_ERR_FILE #define MPI_ERR_FILE (MPI_ERR_ARG) #endif #ifndef PyMPI_HAVE_MPI_ERR_NOT_SAME #undef MPI_ERR_NOT_SAME #define MPI_ERR_NOT_SAME (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_BAD_FILE #undef MPI_ERR_BAD_FILE #define MPI_ERR_BAD_FILE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_NO_SUCH_FILE #undef MPI_ERR_NO_SUCH_FILE #define MPI_ERR_NO_SUCH_FILE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_FILE_EXISTS #undef MPI_ERR_FILE_EXISTS #define MPI_ERR_FILE_EXISTS (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_FILE_IN_USE #undef MPI_ERR_FILE_IN_USE #define MPI_ERR_FILE_IN_USE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_AMODE #undef MPI_ERR_AMODE #define MPI_ERR_AMODE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_ACCESS #undef MPI_ERR_ACCESS #define MPI_ERR_ACCESS (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_READ_ONLY #undef MPI_ERR_READ_ONLY #define MPI_ERR_READ_ONLY (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_NO_SPACE #undef MPI_ERR_NO_SPACE #define MPI_ERR_NO_SPACE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_QUOTA #undef MPI_ERR_QUOTA #define MPI_ERR_QUOTA (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_UNSUPPORTED_DATAREP #undef MPI_ERR_UNSUPPORTED_DATAREP #define MPI_ERR_UNSUPPORTED_DATAREP (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_UNSUPPORTED_OPERATION #undef MPI_ERR_UNSUPPORTED_OPERATION #define MPI_ERR_UNSUPPORTED_OPERATION (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_CONVERSION #undef MPI_ERR_CONVERSION #define MPI_ERR_CONVERSION (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_DUP_DATAREP #undef MPI_ERR_DUP_DATAREP #define MPI_ERR_DUP_DATAREP (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_IO #undef MPI_ERR_IO #define MPI_ERR_IO (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_WIN #undef MPI_ERR_WIN #define MPI_ERR_WIN (MPI_ERR_ARG) #endif #ifndef PyMPI_HAVE_MPI_ERR_BASE #undef MPI_ERR_BASE #define MPI_ERR_BASE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_SIZE #undef MPI_ERR_SIZE #define MPI_ERR_SIZE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_DISP #undef MPI_ERR_DISP #define MPI_ERR_DISP (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_ASSERT #undef MPI_ERR_ASSERT #define MPI_ERR_ASSERT (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_LOCKTYPE #undef MPI_ERR_LOCKTYPE #define MPI_ERR_LOCKTYPE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_RMA_CONFLICT #undef MPI_ERR_RMA_CONFLICT #define MPI_ERR_RMA_CONFLICT (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_RMA_SYNC #undef MPI_ERR_RMA_SYNC #define MPI_ERR_RMA_SYNC (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_RMA_RANGE #undef MPI_ERR_RMA_RANGE #define MPI_ERR_RMA_RANGE (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_RMA_ATTACH #undef MPI_ERR_RMA_ATTACH #define MPI_ERR_RMA_ATTACH (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_RMA_SHARED #undef 
MPI_ERR_RMA_SHARED #define MPI_ERR_RMA_SHARED (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_ERR_RMA_FLAVOR #undef MPI_ERR_RMA_FLAVOR #define MPI_ERR_RMA_FLAVOR (MPI_ERR_UNKNOWN) #endif #ifndef PyMPI_HAVE_MPI_Alloc_mem #undef MPI_Alloc_mem #define MPI_Alloc_mem(a1,a2,a3) PyMPI_UNAVAILABLE("MPI_Alloc_mem",a1,a2,a3) #endif #ifndef PyMPI_HAVE_MPI_Free_mem #undef MPI_Free_mem #define MPI_Free_mem(a1) PyMPI_UNAVAILABLE("MPI_Free_mem",a1) #endif #ifndef PyMPI_HAVE_MPI_Init #undef MPI_Init #define MPI_Init(a1,a2) PyMPI_UNAVAILABLE("MPI_Init",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Finalize #undef MPI_Finalize #define MPI_Finalize() PyMPI_UNAVAILABLE("MPI_Finalize") #endif #ifndef PyMPI_HAVE_MPI_Initialized #undef MPI_Initialized #define MPI_Initialized(a1) PyMPI_UNAVAILABLE("MPI_Initialized",a1) #endif #ifndef PyMPI_HAVE_MPI_Finalized #undef MPI_Finalized #define MPI_Finalized(a1) PyMPI_UNAVAILABLE("MPI_Finalized",a1) #endif #ifndef PyMPI_HAVE_MPI_THREAD_SINGLE #undef MPI_THREAD_SINGLE #define MPI_THREAD_SINGLE (0) #endif #ifndef PyMPI_HAVE_MPI_THREAD_FUNNELED #undef MPI_THREAD_FUNNELED #define MPI_THREAD_FUNNELED (1) #endif #ifndef PyMPI_HAVE_MPI_THREAD_SERIALIZED #undef MPI_THREAD_SERIALIZED #define MPI_THREAD_SERIALIZED (2) #endif #ifndef PyMPI_HAVE_MPI_THREAD_MULTIPLE #undef MPI_THREAD_MULTIPLE #define MPI_THREAD_MULTIPLE (3) #endif #ifndef PyMPI_HAVE_MPI_Init_thread #undef MPI_Init_thread #define MPI_Init_thread(a1,a2,a3,a4) PyMPI_UNAVAILABLE("MPI_Init_thread",a1,a2,a3,a4) #endif #ifndef PyMPI_HAVE_MPI_Query_thread #undef MPI_Query_thread #define MPI_Query_thread(a1) PyMPI_UNAVAILABLE("MPI_Query_thread",a1) #endif #ifndef PyMPI_HAVE_MPI_Is_thread_main #undef MPI_Is_thread_main #define MPI_Is_thread_main(a1) PyMPI_UNAVAILABLE("MPI_Is_thread_main",a1) #endif #ifndef PyMPI_HAVE_MPI_VERSION #undef MPI_VERSION #define MPI_VERSION (1) #endif #ifndef PyMPI_HAVE_MPI_SUBVERSION #undef MPI_SUBVERSION #define MPI_SUBVERSION (0) #endif #ifndef PyMPI_HAVE_MPI_Get_version #undef MPI_Get_version #define MPI_Get_version(a1,a2) PyMPI_UNAVAILABLE("MPI_Get_version",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_MAX_LIBRARY_VERSION_STRING #undef MPI_MAX_LIBRARY_VERSION_STRING #define MPI_MAX_LIBRARY_VERSION_STRING (1) #endif #ifndef PyMPI_HAVE_MPI_Get_library_version #undef MPI_Get_library_version #define MPI_Get_library_version(a1,a2) PyMPI_UNAVAILABLE("MPI_Get_library_version",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_MAX_PROCESSOR_NAME #undef MPI_MAX_PROCESSOR_NAME #define MPI_MAX_PROCESSOR_NAME (1) #endif #ifndef PyMPI_HAVE_MPI_Get_processor_name #undef MPI_Get_processor_name #define MPI_Get_processor_name(a1,a2) PyMPI_UNAVAILABLE("MPI_Get_processor_name",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Wtime #undef MPI_Wtime #define MPI_Wtime() PyMPI_UNAVAILABLE("MPI_Wtime") #endif #ifndef PyMPI_HAVE_MPI_Wtick #undef MPI_Wtick #define MPI_Wtick() PyMPI_UNAVAILABLE("MPI_Wtick") #endif #ifndef PyMPI_HAVE_MPI_Pcontrol #undef MPI_Pcontrol #define MPI_Pcontrol(a1) PyMPI_UNAVAILABLE("MPI_Pcontrol",a1) #endif #ifndef PyMPI_HAVE_MPI_Fint #undef MPI_Fint typedef int PyMPI_MPI_Fint; #define MPI_Fint PyMPI_MPI_Fint #endif #ifndef PyMPI_HAVE_MPI_F_STATUS_IGNORE #undef MPI_F_STATUS_IGNORE #define MPI_F_STATUS_IGNORE ((MPI_Fint*)0) #endif #ifndef PyMPI_HAVE_MPI_F_STATUSES_IGNORE #undef MPI_F_STATUSES_IGNORE #define MPI_F_STATUSES_IGNORE ((MPI_Fint*)0) #endif #ifndef PyMPI_HAVE_MPI_Status_c2f #undef MPI_Status_c2f #define MPI_Status_c2f(a1,a2) PyMPI_UNAVAILABLE("MPI_Status_c2f",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Status_f2c #undef MPI_Status_f2c 
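/*
 * Note on the pattern repeated throughout this header: each MPI symbol is
 * wrapped in a PyMPI_HAVE_<symbol> guard (apparently defined at build time
 * when the symbol is detected), and when the guard is absent the symbol is
 * redefined so the generated wrappers still compile; the call then fails
 * only at run time via PyMPI_UNAVAILABLE().  A minimal sketch of the same
 * idea, using a hypothetical reporting helper (illustration only; not
 * mpi4py's actual PyMPI_UNAVAILABLE implementation):
 *
 *   #include <stdio.h>
 *   #include "mpi.h"
 *
 *   // report the missing call at run time instead of failing to link
 *   static int DemoUnavailable(const char *name)
 *   {
 *     (void)fprintf(stderr, "%s() is unavailable in this MPI build\n", name);
 *     return MPI_ERR_OTHER;
 *   }
 *
 *   #ifndef PyMPI_HAVE_MPI_File_sync
 *   #undef  MPI_File_sync
 *   #define MPI_File_sync(fh) DemoUnavailable("MPI_File_sync")
 *   #endif
 */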
#define MPI_Status_f2c(a1,a2) PyMPI_UNAVAILABLE("MPI_Status_f2c",a1,a2) #endif #ifndef PyMPI_HAVE_MPI_Type_c2f #undef MPI_Type_c2f #define MPI_Type_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_Request_c2f #undef MPI_Request_c2f #define MPI_Request_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_Message_c2f #undef MPI_Message_c2f #define MPI_Message_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_Op_c2f #undef MPI_Op_c2f #define MPI_Op_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_Info_c2f #undef MPI_Info_c2f #define MPI_Info_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_Group_c2f #undef MPI_Group_c2f #define MPI_Group_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_Comm_c2f #undef MPI_Comm_c2f #define MPI_Comm_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_Win_c2f #undef MPI_Win_c2f #define MPI_Win_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_File_c2f #undef MPI_File_c2f #define MPI_File_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_Errhandler_c2f #undef MPI_Errhandler_c2f #define MPI_Errhandler_c2f(a1) ((MPI_Fint)0) #endif #ifndef PyMPI_HAVE_MPI_Type_f2c #undef MPI_Type_f2c #define MPI_Type_f2c(a1) MPI_DATATYPE_NULL #endif #ifndef PyMPI_HAVE_MPI_Request_f2c #undef MPI_Request_f2c #define MPI_Request_f2c(a1) MPI_REQUEST_NULL #endif #ifndef PyMPI_HAVE_MPI_Message_f2c #undef MPI_Message_f2c #define MPI_Message_f2c(a1) MPI_MESSAGE_NULL #endif #ifndef PyMPI_HAVE_MPI_Op_f2c #undef MPI_Op_f2c #define MPI_Op_f2c(a1) MPI_OP_NULL #endif #ifndef PyMPI_HAVE_MPI_Info_f2c #undef MPI_Info_f2c #define MPI_Info_f2c(a1) MPI_INFO_NULL #endif #ifndef PyMPI_HAVE_MPI_Group_f2c #undef MPI_Group_f2c #define MPI_Group_f2c(a1) MPI_GROUP_NULL #endif #ifndef PyMPI_HAVE_MPI_Comm_f2c #undef MPI_Comm_f2c #define MPI_Comm_f2c(a1) MPI_COMM_NULL #endif #ifndef PyMPI_HAVE_MPI_Win_f2c #undef MPI_Win_f2c #define MPI_Win_f2c(a1) MPI_WIN_NULL #endif #ifndef PyMPI_HAVE_MPI_File_f2c #undef MPI_File_f2c #define MPI_File_f2c(a1) MPI_FILE_NULL #endif #ifndef PyMPI_HAVE_MPI_Errhandler_f2c #undef MPI_Errhandler_f2c #define MPI_Errhandler_f2c(a1) MPI_ERRHANDLER_NULL #endif #endif /* !PyMPI_MISSING_H */ mpi4py_1.3.1+hg20131106.orig/src/mpi4py.MPE.pyx0000644000000000000000000000023212211706251016502 0ustar 00000000000000#cython: embedsignature=True #cython: cdivision=True #cython: always_allow_keywords=True #cython: autotestdict=False cimport cython include "MPE/MPE.pyx" mpi4py_1.3.1+hg20131106.orig/src/mpi4py.MPI.pyx0000644000000000000000000000023212211706251016506 0ustar 00000000000000#cython: embedsignature=True #cython: cdivision=True #cython: always_allow_keywords=True #cython: autotestdict=False cimport cython include "MPI/MPI.pyx" mpi4py_1.3.1+hg20131106.orig/src/mpi_c.pxd0000644000000000000000000000013412211706251015703 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com include "include/mpi4py/mpi_c.pxd" mpi4py_1.3.1+hg20131106.orig/src/pmpi-mpe.c0000644000000000000000000000010712211706251015767 0ustar 00000000000000#include "stdlib.h" #include "mpi.h" static const char name[] = "mpe"; mpi4py_1.3.1+hg20131106.orig/src/pmpi-vt-hyb.c0000644000000000000000000000011212211706251016413 0ustar 00000000000000#include "stdlib.h" #include "mpi.h" static const char name[] = "vt-hyb"; mpi4py_1.3.1+hg20131106.orig/src/pmpi-vt-mpi.c0000644000000000000000000000011212211706251016416 0ustar 00000000000000#include "stdlib.h" #include "mpi.h" static const char name[] = "vt-mpi"; mpi4py_1.3.1+hg20131106.orig/src/pmpi-vt.c0000644000000000000000000000156412211706251015647 
0ustar 00000000000000#include "stdlib.h" #include "mpi.h" static const char name[] = "vt"; #if (defined(OMPI_MAJOR_VERSION) && \ defined(OMPI_MINOR_VERSION) && \ defined(OMPI_RELEASE_VERSION)) #define OPENMPI_VERSION_NUMBER \ ((OMPI_MAJOR_VERSION * 10000) + \ (OMPI_MINOR_VERSION * 100) + \ (OMPI_RELEASE_VERSION * 1)) #endif #ifdef __cplusplus extern "C" { #endif struct ompregdescr; extern int POMP_MAX_ID; extern struct ompregdescr* pomp_rd_table[]; int POMP_MAX_ID = 0; struct ompregdescr* pomp_rd_table[] = { 0 }; #if defined(OPENMPI_VERSION_NUMBER) #if ((OPENMPI_VERSION_NUMBER >= 10300) && \ (OPENMPI_VERSION_NUMBER < 10403)) int MPI_Init_thread(int *argc, char ***argv, int required, int *provided) { if (provided) *provided = MPI_THREAD_SINGLE; return MPI_Init(argc, argv); } #endif #endif #ifdef __cplusplus } #endif mpi4py_1.3.1+hg20131106.orig/src/python.c0000644000000000000000000001564512211706251015601 0ustar 00000000000000/* Author: Lisandro Dalcin */ /* Contact: dalcinl@gmail.com */ /* -------------------------------------------------------------------------- */ #include #define MPICH_IGNORE_CXX_SEEK 1 #define OMPI_IGNORE_CXX_SEEK 1 #include #ifdef __FreeBSD__ #include #endif static int PyMPI_Main(int, char **); #if PY_MAJOR_VERSION >= 3 static int Py3_Main(int, char **); #else static int Py2_Main(int, char **); #endif /* -------------------------------------------------------------------------- */ int main(int argc, char **argv) { #ifdef __FreeBSD__ fp_except_t m; m = fpgetmask(); fpsetmask(m & ~FP_X_OFL); #endif return PyMPI_Main(argc, argv); } static int PyMPI_Main(int argc, char **argv) { int sts=0, flag=1, finalize=0; /* MPI initalization */ (void)MPI_Initialized(&flag); if (!flag) { #if (defined(MPI_VERSION) && (MPI_VERSION > 1)) int required = MPI_THREAD_MULTIPLE; int provided = MPI_THREAD_SINGLE; (void)MPI_Init_thread(&argc, &argv, required, &provided); #else (void)MPI_Init(&argc, &argv); #endif finalize = 1; } /* Python main */ #if PY_MAJOR_VERSION >= 3 sts = Py3_Main(argc, argv); #else sts = Py2_Main(argc, argv); #endif /* MPI finalization */ (void)MPI_Finalized(&flag); if (!flag) { if (sts != 0) (void)MPI_Abort(MPI_COMM_WORLD, sts); if (finalize) (void)MPI_Finalize(); } return sts; } /* -------------------------------------------------------------------------- */ #if PY_MAJOR_VERSION <= 2 static int Py2_Main(int argc, char **argv) { return Py_Main(argc,argv); } #endif #if PY_MAJOR_VERSION >= 3 #include static wchar_t **mk_wargs(int, char **); static wchar_t **cp_wargs(int, wchar_t **); static void rm_wargs(wchar_t **, int); static int Py3_Main(int argc, char **argv) { int sts = 0; wchar_t **wargv = mk_wargs(argc, argv); wchar_t **wargv2 = cp_wargs(argc, wargv); if (wargv && wargv2) sts = Py_Main(argc, wargv); else sts = 1; rm_wargs(wargv2, 1); rm_wargs(wargv, 0); return sts; } #if PY_VERSION_HEX < 0x03020000 static wchar_t *_Py_char2wchar(const char *, size_t *); #elif defined(__APPLE__) #ifdef __cplusplus extern "C" { #endif extern wchar_t* _Py_DecodeUTF8_surrogateescape(const char *, Py_ssize_t); #ifdef __cplusplus } #endif #endif static wchar_t ** mk_wargs(int argc, char **argv) { int i; char *saved_locale = NULL; wchar_t **args = NULL; args = (wchar_t **)PyMem_Malloc((size_t)(argc+1)*sizeof(wchar_t *)); if (!args) goto oom; saved_locale = strdup(setlocale(LC_ALL, NULL)); if (!saved_locale) goto oom; setlocale(LC_ALL, ""); for (i=0; i= 0x03020000 args[i] = _Py_DecodeUTF8_surrogateescape(argv[i], strlen(argv[i])); #else args[i] = _Py_char2wchar(argv[i], NULL); #endif if 
(!args[i]) goto oom; } args[argc] = NULL; setlocale(LC_ALL, saved_locale); free(saved_locale); return args; oom: fprintf(stderr, "out of memory\n"); if (saved_locale) { setlocale(LC_ALL, saved_locale); free(saved_locale); } if (args) rm_wargs(args, 1); return NULL; } static wchar_t ** cp_wargs(int argc, wchar_t **args) { int i; wchar_t **args_copy = NULL; if (!args) return NULL; args_copy = (wchar_t **)PyMem_Malloc((size_t)(argc+1)*sizeof(wchar_t *)); if (!args_copy) goto oom; for (i=0; i<(argc+1); i++) { args_copy[i] = args[i]; } return args_copy; oom: fprintf(stderr, "out of memory\n"); return NULL; } static void rm_wargs(wchar_t **args, int deep) { int i = 0; if (args && deep) while (args[i]) PyMem_Free(args[i++]); if (args) PyMem_Free(args); } #if PY_VERSION_HEX < 0x03020000 static wchar_t * _Py_char2wchar(const char* arg, size_t *size) { wchar_t *res; #ifdef HAVE_BROKEN_MBSTOWCS /* Some platforms have a broken implementation of * mbstowcs which does not count the characters that * would result from conversion. Use an upper bound. */ size_t argsize = strlen(arg); #else size_t argsize = mbstowcs(NULL, arg, 0); #endif size_t count; unsigned char *in; wchar_t *out; #ifdef HAVE_MBRTOWC mbstate_t mbs; #endif if (argsize != (size_t)-1) { res = (wchar_t *)PyMem_Malloc((argsize+1)*sizeof(wchar_t)); if (!res) goto oom; count = mbstowcs(res, arg, argsize+1); if (count != (size_t)-1) { wchar_t *tmp; /* Only use the result if it contains no surrogate characters. */ for (tmp = res; *tmp != 0 && (*tmp < 0xd800 || *tmp > 0xdfff); tmp++) ; if (*tmp == 0) { if (size != NULL) *size = count; return res; } } PyMem_Free(res); } /* Conversion failed. Fall back to escaping with surrogateescape. */ #ifdef HAVE_MBRTOWC /* Try conversion with mbrtwoc (C99), and escape non-decodable bytes. */ /* Overallocate; as multi-byte characters are in the argument, the actual output could use less memory. */ argsize = strlen(arg) + 1; res = (wchar_t *)PyMem_Malloc(argsize*sizeof(wchar_t)); if (!res) goto oom; in = (unsigned char*)arg; out = res; memset(&mbs, 0, sizeof mbs); while (argsize) { size_t converted = mbrtowc(out, (char*)in, argsize, &mbs); if (converted == 0) /* Reached end of string; null char stored. */ break; if (converted == (size_t)-2) { /* Incomplete character. This should never happen, since we provide everything that we have - unless there is a bug in the C library, or I misunderstood how mbrtowc works. */ fprintf(stderr, "unexpected mbrtowc result -2\n"); return NULL; } if (converted == (size_t)-1) { /* Conversion error. Escape as UTF-8b, and start over in the initial shift state. */ *out++ = 0xdc00 + *in++; argsize--; memset(&mbs, 0, sizeof mbs); continue; } if (*out >= 0xd800 && *out <= 0xdfff) { /* Surrogate character. Escape the original byte sequence with surrogateescape. */ argsize -= converted; while (converted--) *out++ = 0xdc00 + *in++; continue; } /* successfully converted some bytes */ in += converted; argsize -= converted; out++; } #else /* Cannot use C locale for escaping; manually escape as if charset is ASCII (i.e. escape all bytes > 128. This will still roundtrip correctly in the locale's charset, which must be an ASCII superset. 
*/ res = (wchar_t *)PyMem_Malloc((strlen(arg)+1)*sizeof(wchar_t)); if (!res) goto oom; in = (unsigned char*)arg; out = res; while(*in) if(*in < 128) *out++ = *in++; else *out++ = 0xdc00 + *in++; *out = 0; #endif if (size != NULL) *size = (size_t)(out - res); return res; oom: fprintf(stderr, "out of memory\n"); return NULL; } #endif #endif /* !(PY_MAJOR_VERSION >= 3) */ /* -------------------------------------------------------------------------- */ /* Local variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/src/rc.py0000644000000000000000000000205112211706251015055 0ustar 00000000000000# Author: Lisandro Dalcin # Contact: dalcinl@gmail.com """ Runtime configuration parameters """ initialize = True """ Automatic MPI initialization at import. * Any of ``{True, "yes", 1}``: initialize MPI at import. * Any of ``{False, "no", 0}``: do not initialize MPI at import. """ threaded = True """ Request for thread support at MPI initialization. * Any of ``{True, "yes", 1}``: initialize MPI with ``MPI_Init_thread()``. * Any of ``{False, "no", 0}``: initialize MPI with ``MPI_Init()``. """ thread_level = "multiple" """ Level of thread support to request at MPI initialization. * ``"single"`` : use ``MPI_THREAD_SINGLE``. * ``"funneled"`` : use ``MPI_THREAD_FUNNELED``. * ``"serialized"`` : use ``MPI_THREAD_SERIALIZED``. * ``"multiple"`` : use ``MPI_THREAD_MULTIPLE``. """ finalize = None """ Automatic MPI finalization at exit. * ``None``: Finalize MPI at exit iff it was initialized at import. * Any of ``{True, "yes", 1}``: call ``MPI_Finalize()`` at exit. * Any of ``{False, "no", 0}``: do not call ``MPI_Finalize()`` at exit. """ mpi4py_1.3.1+hg20131106.orig/src/vendor.h0000644000000000000000000000604012211706251015547 0ustar 00000000000000#include "mpi.h" #define myAtoI(p, i) \ do { \ i = 0; \ while (*p >= '0' && *p <= '9') \ { i *= 10; i += *p++ - '0'; } \ } while(0) #define myVersionParser(S, a, b, c) \ do { \ const char *s = S; \ a = b = c = 0; \ myAtoI(s, a); if(*s++ != '.') break; \ myAtoI(s, b); if(*s++ != '.') break; \ myAtoI(s, c); if(*s++ != '.') break; \ } while(0) static int MPI_Get_vendor(const char **vendor_name, int *version_major, int *version_minor, int *version_micro) { const char* name="unknown"; int major=0, minor=0, micro=0; /* MPICH3 */ #if defined(MPICH3) #if defined(MPICH_NUMVERSION) {int version = MPICH_NUMVERSION/1000; major = version/10000; version -= major*10000; minor = version/100; version -= minor*100; micro = version/1; version -= micro*1;} #elif defined(MPICH_VERSION) myVersionParser(MPICH_VERSION,major,minor,micro); #endif name = "MPICH"; #if defined(DEINO_MPI) name = "DeinoMPI"; #elif defined(MS_MPI) name = "Microsoft MPI"; #endif #endif /* MPICH2 */ #if defined(MPICH2) #if defined(MPICH2_NUMVERSION) {int version = MPICH2_NUMVERSION/1000; major = version/10000; version -= major*10000; minor = version/100; version -= minor*100; micro = version/1; version -= micro*1;} #elif defined(MPICH2_VERSION) myVersionParser(MPICH2_VERSION,major,minor,micro); #endif name = "MPICH2"; #if defined(DEINO_MPI) name = "DeinoMPI"; #elif defined(MS_MPI) name = "Microsoft MPI"; #elif defined(__SICORTEX__) name = "SiCortex MPI"; #endif #endif /* Open MPI */ #if defined(OPEN_MPI) name = "Open MPI"; #if defined(OMPI_MAJOR_VERSION) major = OMPI_MAJOR_VERSION; #endif #if defined(OMPI_MINOR_VERSION) minor = OMPI_MINOR_VERSION; #endif #if defined(OMPI_RELEASE_VERSION) micro = OMPI_RELEASE_VERSION; #endif #endif /* HP MPI */ #if defined(HP_MPI) name = "HP MPI"; major = 
HP_MPI/100; minor = HP_MPI%100; #if defined(HP_MPI_MINOR) micro = HP_MPI_MINOR; #endif #endif /* MPICH1 */ #if defined(MPICH_NAME) && MPICH_NAME==1 name = "MPICH1"; #if defined(MPICH_VERSION) myVersionParser(MPICH_VERSION,major,minor,micro); #endif #endif /* LAM/MPI */ #if defined(LAM_MPI) name = "LAM/MPI"; #if defined(LAM_MAJOR_VERSION) major = LAM_MAJOR_VERSION; #endif #if defined(LAM_MINOR_VERSION) minor = LAM_MINOR_VERSION; #endif #if defined(LAM_RELEASE_VERSION) micro = LAM_RELEASE_VERSION; #endif #endif /* SGI */ #if defined(SGI_MPI) name = "SGI"; #endif if (vendor_name) *vendor_name = name; if (version_major) *version_major = major; if (version_minor) *version_minor = minor; if (version_micro) *version_micro = micro; return 0; } #undef myAtoI #undef myVersionParser /* Local variables: c-basic-offset: 2 indent-tabs-mode: nil End: */ mpi4py_1.3.1+hg20131106.orig/test/arrayimpl.py0000644000000000000000000000735012211706251016650 0ustar 00000000000000from mpi4py import MPI try: from collections import OrderedDict except ImportError: OrderedDict = dict __all__ = ['TypeMap', 'ArrayTypes', 'allclose'] TypeMap = OrderedDict([ ('b', MPI.SIGNED_CHAR), ('h', MPI.SHORT), ('i', MPI.INT), ('l', MPI.LONG), ('q', MPI.LONG_LONG), ('f', MPI.FLOAT), ('d', MPI.DOUBLE), ]) import sys if sys.version_info[:2] < (3,3): del TypeMap['q'] if MPI.SIGNED_CHAR == MPI.DATATYPE_NULL: del TypeMap['b'] ArrayTypes = [] try: import array except ImportError: pass else: def product(seq): res = 1 for s in seq: res = res * s return res def mkshape(seq): return tuple([int(s) for s in seq]) class Array(array.array): TypeMap = TypeMap.copy() def __new__(cls, arg, typecode, shape=None): if isinstance(arg, (int, float)): if shape is None: shape = () elif isinstance(shape, int): shape = (shape,) else: shape = mkshape(shape) size = product(shape) arg = [arg] * size else: size = len(arg) if shape is None: shape = (size,) else: shape = mkshape(shape) assert size == product(shape) ary = array.array.__new__(cls, typecode, arg) ary.shape = shape ary.size = size try: ary.mpidtype = Array.TypeMap[typecode] except KeyError: ary.mpidtype = MPI.DATATYPE_NULL return ary def flat(self): return self flat = property(flat) def as_raw(self): return self def as_mpi(self): return (self, self.mpidtype) def as_mpi_c(self, count): return (self, count, self.mpidtype) def as_mpi_v(self, cnt, dsp): return (self, (cnt, dsp), self.mpidtype) ArrayTypes.append(Array) __all__.append('Array') try: import numpy except ImportError: pass else: class NumPy(object): TypeMap = TypeMap.copy() def __init__(self, arg, typecode, shape=None): if isinstance(arg, (int, float, complex)): if shape is None: shape = () else: if shape is None: shape = len(arg) self.array = ary = numpy.zeros(shape, typecode) if isinstance(arg, (int, float, complex)): ary.fill(arg) else: ary[:] = arg try: self.mpidtype = NumPy.TypeMap[typecode] except KeyError: self.mpidtype = MPI.DATATYPE_NULL def __len__(self): return len(self.array) def __getitem__(self, i): return self.array[i] def __setitem__(self, i, v): self.array[i] = v def typecode(self): return self.array.dtype.char typecode = property(typecode) def itemsize(self): return self.array.itemsize itemsize = property(itemsize) def flat(self): return self.array.flat flat = property(flat) def as_raw(self): return self.array def as_mpi(self): return (self.array, self.mpidtype) def as_mpi_c(self, count): return (self.array, count, self.mpidtype) def as_mpi_v(self, cnt, dsp): return (self.array, (cnt, dsp), self.mpidtype) 
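# A note on the wrappers above (both the array.array-based Array class and
# this NumPy class): as_mpi(), as_mpi_c() and as_mpi_v() build the explicit
# buffer-message specifications consumed by the uppercase communication
# methods, i.e. (buffer, datatype), (buffer, count, datatype) and
# (buffer, (counts, displacements), datatype).  A minimal usage sketch,
# assuming NumPy is available and MPI has been initialized (illustration
# only, not used by the test suite):
def _demo_buffer_message(comm=None):
    """Broadcast a small buffer passed as an explicit [buffer, datatype] message."""
    from mpi4py import MPI
    import numpy
    if comm is None:
        comm = MPI.COMM_WORLD
    buf = numpy.zeros(4, dtype='i')
    if comm.Get_rank() == 0:
        buf[:] = 42                      # root fills the data to broadcast
    comm.Bcast([buf, MPI.INT], root=0)   # same message shape as as_mpi()
    return buf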
ArrayTypes.append(NumPy) __all__.append('NumPy') try: from numpy import allclose except ImportError: def allclose(a, b, rtol=1.e-5, atol=1.e-8): for x, y in zip(a, b): if abs(x-y) > (atol + rtol * abs(y)): return False return True mpi4py_1.3.1+hg20131106.orig/test/mpiunittest.py0000644000000000000000000000515512211706251017236 0ustar 00000000000000import sys, os, glob import unittest class TestCase(unittest.TestCase): def assertRaisesMPI(self, IErrClass, callableObj, *args, **kwargs): from mpi4py.MPI import Exception as excClass, Get_version try: callableObj(*args, **kwargs) except NotImplementedError: if Get_version() >= (2, 0): raise self.failureException("raised NotImplementedError") except excClass: excValue = sys.exc_info()[1] error_class = excValue.Get_error_class() if isinstance(IErrClass, (list, tuple)): match = (error_class in IErrClass) else: match = (error_class == IErrClass) if not match: if isinstance(IErrClass, (list, tuple)): IErrClassName = [ErrClsName(e) for e in IErrClass] IErrClassName = type(IErrClass)(IErrClassName) else: IErrClassName = ErrClsName(IErrClass) raise self.failureException( "generated error class is '%s' (%d), " "but expected '%s' (%s)" % \ (ErrClsName(error_class), error_class, IErrClassName, IErrClass,) ) else: if hasattr(excClass,'__name__'): excName = excClass.__name__ else: excName = str(excClass) raise self.failureException("%s not raised" % excName) failUnlessRaisesMPI = assertRaisesMPI if sys.version_info < (2,4): assertTrue = unittest.TestCase.failUnless assertFalse = unittest.TestCase.failIf ErrClsMap = None def ErrClsName(ierr): global ErrClsMap if ErrClsMap is None: from mpi4py import MPI ErrClsMap = {} ErrClsMap[MPI.SUCCESS] = 'SUCCESS' for entry in dir(MPI): if 'ERR_' in entry: errcls = getattr(MPI, entry) ErrClsMap[errcls] = entry try: return ErrClsMap[ierr] except KeyError: return '' def find_tests(pattern='test_*.py', directory=None, exclude=()): if directory is None: directory = os.path.split(__file__)[0] pattern = os.path.join(directory, pattern) test_list = [] for test_file in glob.glob(pattern): filename = os.path.basename(test_file) modulename = os.path.splitext(filename)[0] if modulename not in exclude: test = __import__(modulename) test_list.append(test) return test_list def main(*args, **kargs): try: unittest.main(*args, **kargs) except SystemExit: pass mpi4py_1.3.1+hg20131106.orig/test/runtests.py0000644000000000000000000001742312211706251016541 0ustar 00000000000000import sys, os import optparse import unittest def getoptionparser(): parser = optparse.OptionParser() parser.add_option("-q", "--quiet", action="store_const", const=0, dest="verbose", default=1, help="do not print status messages to stdout") parser.add_option("-v", "--verbose", action="store_const", const=2, dest="verbose", default=1, help="print status messages to stdout") parser.add_option("-i", "--include", type="string", action="append", dest="include", default=[], help="include tests matching PATTERN", metavar="PATTERN") parser.add_option("-e", "--exclude", type="string", action="append", dest="exclude", default=[], help="exclude tests matching PATTERN", metavar="PATTERN") parser.add_option("--no-builddir", action="store_false", dest="builddir", default=True, help="disable testing from build directory") parser.add_option("--path", type="string", action="append", dest="path", default=[], help="prepend PATH to sys.path", metavar="PATH") parser.add_option("--refleaks", type="int", action="store", dest="repeats", default=3, help="run tests REPEAT times in a loop to 
catch leaks", metavar="REPEAT") parser.add_option("--no-threads", action="store_false", dest="threaded", default=True, help="initialize MPI without thread support") parser.add_option("--thread-level", type="choice", choices=["single", "funneled", "serialized", "multiple"], action="store", dest="thread_level", default="multiple", help="initialize MPI with required thread support") parser.add_option("--mpe", action="store_true", dest="mpe", default=False, help="use MPE for MPI profiling") parser.add_option("--vt", action="store_true", dest="vt", default=False, help="use VampirTrace for MPI profiling") parser.add_option("--no-numpy", action="store_false", dest="numpy", default=True, help="disable testing with NumPy arrays") parser.add_option("--no-array", action="store_false", dest="array", default=True, help="disable testing with builtin array.array") return parser def getbuilddir(): from distutils.util import get_platform s = os.path.join("build", "lib.%s-%.3s" % (get_platform(), sys.version)) if (sys.version[:3] >= '2.6' and hasattr(sys, 'gettotalrefcount')): s += '-pydebug' return s def setup_python(options): rootdir = os.path.dirname(os.path.dirname(__file__)) builddir = os.path.join(rootdir, getbuilddir()) if options.builddir and os.path.exists(builddir): sys.path.insert(0, builddir) if options.path: path = options.path[:] path.reverse() for p in path: sys.path.insert(0, p) def setup_unittest(options): from unittest import TestSuite try: from unittest.runner import _WritelnDecorator except ImportError: from unittest import _WritelnDecorator # if sys.version[:3] < '2.4': TestSuite.__iter__ = lambda self: iter(self._tests) # writeln_orig = _WritelnDecorator.writeln def writeln(self, message=''): try: self.stream.flush() except: pass writeln_orig(self, message) try: self.stream.flush() except: pass _WritelnDecorator.writeln = writeln def import_package(options, pkgname): # if not options.numpy: sys.modules['numpy'] = None if not options.array: sys.modules['array'] = None # package = __import__(pkgname) # import mpi4py.rc mpi4py.rc.threaded = options.threaded mpi4py.rc.thread_level = options.thread_level if options.mpe: mpi4py.rc.profile('mpe', logfile='runtests-mpi4py') if options.vt: mpi4py.rc.profile('vt', logfile='runtests-mpi4py') import mpi4py.MPI # return package def getprocessorinfo(): from mpi4py import MPI rank = MPI.COMM_WORLD.Get_rank() name = MPI.Get_processor_name() return (rank, name) def getlibraryinfo(): from mpi4py import MPI info = "MPI %d.%d" % MPI.Get_version() name, version = MPI.get_vendor() if name != "unknown": info += (" (%s %s)" % (name, '%d.%d.%d' % version)) return info def getpythoninfo(): x, y = sys.version_info[:2] return ("Python %d.%d (%s)" % (x, y, sys.executable)) def getpackageinfo(pkg): return ("%s %s (%s)" % (pkg.__name__, pkg.__version__, pkg.__path__[0])) def writeln(message='', endl='\n'): sys.stderr.flush() sys.stderr.write(message+endl) sys.stderr.flush() def print_banner(options, package): r, n = getprocessorinfo() fmt = "[%d@%s] %s" if options.verbose: writeln(fmt % (r, n, getpythoninfo())) writeln(fmt % (r, n, getlibraryinfo())) writeln(fmt % (r, n, getpackageinfo(package))) def load_tests(options, args): from glob import glob import re testsuitedir = os.path.dirname(__file__) sys.path.insert(0, testsuitedir) pattern = 'test_*.py' wildcard = os.path.join(testsuitedir, pattern) testfiles = glob(wildcard) testfiles.sort() testsuite = unittest.TestSuite() testloader = unittest.TestLoader() include = exclude = None if options.include: include = 
re.compile('|'.join(options.include)).search if options.exclude: exclude = re.compile('|'.join(options.exclude)).search if not options.numpy: sys.modules['numpy'] = None if not options.array: sys.modules['array'] = None for testfile in testfiles: filename = os.path.basename(testfile) testname = os.path.splitext(filename)[0] if ((exclude and exclude(testname)) or (include and not include(testname))): continue module = __import__(testname) for arg in args: try: cases = testloader.loadTestsFromNames((arg,), module) testsuite.addTests(cases) except AttributeError: pass if not args: cases = testloader.loadTestsFromModule(module) testsuite.addTests(cases) return testsuite def run_tests(options, testsuite): runner = unittest.TextTestRunner(verbosity=options.verbose) result = runner.run(testsuite) return result.wasSuccessful() def run_tests_leaks(options, testsuite): from sys import gettotalrefcount from gc import collect rank, name = getprocessorinfo() r1 = r2 = 0 repeats = options.repeats while repeats: repeats -= 1 collect() r1 = gettotalrefcount() run_tests(options, testsuite) collect() r2 = gettotalrefcount() leaks = r2-r1 if leaks: writeln('[%d@%s] refleaks: (%d - %d) --> %d' % (rank, name, r2, r1, leaks)) def main(args=None): pkgname = 'mpi4py' parser = getoptionparser() (options, args) = parser.parse_args(args) setup_python(options) setup_unittest(options) package = import_package(options, pkgname) print_banner(options, package) testsuite = load_tests(options, args) success = run_tests(options, testsuite) if success and hasattr(sys, 'gettotalrefcount'): run_tests_leaks(options, testsuite) return not success if __name__ == '__main__': import sys sys.dont_write_bytecode = True sys.exit(main()) mpi4py_1.3.1+hg20131106.orig/test/spawn_child.py0000644000000000000000000000034412211706251017137 0ustar 00000000000000import sys sys.path.insert(0, sys.argv[1]) from mpi4py import MPI parent = MPI.Comm.Get_parent() parent.Barrier() parent.Disconnect() assert parent == MPI.COMM_NULL parent = MPI.Comm.Get_parent() assert parent == MPI.COMM_NULL mpi4py_1.3.1+hg20131106.orig/test/test_attributes.py0000644000000000000000000002070012211706251020067 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest try: from sys import getrefcount except ImportError: class getrefcount(object): def __init__(self, arg): pass def __eq__(self, other): return True def __add__(self, other): return self class BaseTestCommAttr(object): keyval = MPI.KEYVAL_INVALID def tearDown(self): self.comm.Free() if self.keyval != MPI.KEYVAL_INVALID: self.keyval = MPI.Comm.Free_keyval(self.keyval) self.assertEqual(self.keyval, MPI.KEYVAL_INVALID) def testAttr(self, copy_fn=None, delete_fn=None): self.keyval = MPI.Comm.Create_keyval(copy_fn, delete_fn) self.assertNotEqual(self.keyval, MPI.KEYVAL_INVALID) attrval = [1,2,3] rc = getrefcount(attrval) self.comm.Set_attr(self.keyval, attrval) self.assertEqual(getrefcount(attrval), rc+1) o = self.comm.Get_attr(self.keyval) self.assertTrue(o is attrval) self.assertEqual(getrefcount(attrval), rc+2) o = None dupcomm = self.comm.Clone() if copy_fn is True: self.assertEqual(getrefcount(attrval), rc+2) o = dupcomm.Get_attr(self.keyval) if copy_fn is True: self.assertTrue(o is attrval) self.assertEqual(getrefcount(attrval), rc+3) elif not copy_fn: self.assertTrue(o is None) self.assertEqual(getrefcount(attrval), rc+1) dupcomm.Free() o = None self.assertEqual(getrefcount(attrval), rc+1) self.comm.Delete_attr(self.keyval) self.assertEqual(getrefcount(attrval), rc) o = 
self.comm.Get_attr(self.keyval) self.assertTrue(o is None) def testAttrCopyFalse(self): self.testAttr(False) def testAttrCopyTrue(self): self.testAttr(True) def testAttrCopyDelete(self): self.keyval = MPI.Comm.Create_keyval( copy_fn=MPI.Comm.Clone, delete_fn=MPI.Comm.Free) self.assertNotEqual(self.keyval, MPI.KEYVAL_INVALID) comm1 = self.comm dupcomm1 = comm1.Clone() rc = getrefcount(dupcomm1) comm1.Set_attr(self.keyval, dupcomm1) self.assertTrue(dupcomm1 != MPI.COMM_NULL) self.assertTrue(getrefcount(dupcomm1), rc+1) comm2 = comm1.Clone() dupcomm2 = comm2.Get_attr(self.keyval) self.assertTrue(dupcomm1 != dupcomm2) self.assertTrue(getrefcount(dupcomm1), rc+1) self.assertTrue(getrefcount(dupcomm2), 3) comm2.Free() self.assertTrue(dupcomm2 == MPI.COMM_NULL) self.assertTrue(getrefcount(dupcomm1), rc+1) self.assertTrue(getrefcount(dupcomm2), 2) self.comm.Delete_attr(self.keyval) self.assertTrue(dupcomm1 == MPI.COMM_NULL) self.assertTrue(getrefcount(dupcomm1), rc) class TestCommAttrWorld(BaseTestCommAttr, unittest.TestCase): def setUp(self): self.comm = MPI.COMM_WORLD.Dup() class TestCommAttrSelf(BaseTestCommAttr, unittest.TestCase): def setUp(self): self.comm = MPI.COMM_SELF.Dup() class BaseTestDatatypeAttr(object): keyval = MPI.KEYVAL_INVALID def tearDown(self): self.datatype.Free() if self.keyval != MPI.KEYVAL_INVALID: self.keyval = MPI.Datatype.Free_keyval(self.keyval) self.assertEqual(self.keyval, MPI.KEYVAL_INVALID) def testAttr(self, copy_fn=None, delete_fn=None): self.keyval = MPI.Datatype.Create_keyval(copy_fn, delete_fn) self.assertNotEqual(self.keyval, MPI.KEYVAL_INVALID) attrval = [1,2,3] rc = getrefcount(attrval) self.datatype.Set_attr(self.keyval, attrval) self.assertEqual(getrefcount(attrval), rc+1) o = self.datatype.Get_attr(self.keyval) self.assertTrue(o is attrval) self.assertEqual(getrefcount(attrval), rc+2) o = None dupdatatype = self.datatype.Dup() if copy_fn is True: self.assertEqual(getrefcount(attrval), rc+2) o = dupdatatype.Get_attr(self.keyval) if copy_fn is True: self.assertTrue(o is attrval) self.assertEqual(getrefcount(attrval), rc+3) elif not copy_fn: self.assertTrue(o is None) self.assertEqual(getrefcount(attrval), rc+1) dupdatatype.Free() o = None self.assertEqual(getrefcount(attrval), rc+1) self.datatype.Delete_attr(self.keyval) self.assertEqual(getrefcount(attrval), rc) o = self.datatype.Get_attr(self.keyval) self.assertTrue(o is None) def testAttrCopyFalse(self): self.testAttr(False) def testAttrCopyTrue(self): self.testAttr(True) def testAttrCopyDelete(self): self.keyval = MPI.Datatype.Create_keyval( copy_fn=MPI.Datatype.Dup, delete_fn=MPI.Datatype.Free) self.assertNotEqual(self.keyval, MPI.KEYVAL_INVALID) datatype1 = self.datatype dupdatatype1 = datatype1.Dup() rc = getrefcount(dupdatatype1) datatype1.Set_attr(self.keyval, dupdatatype1) self.assertTrue(dupdatatype1 != MPI.DATATYPE_NULL) self.assertTrue(getrefcount(dupdatatype1), rc+1) datatype2 = datatype1.Dup() dupdatatype2 = datatype2.Get_attr(self.keyval) self.assertTrue(dupdatatype1 != dupdatatype2) self.assertTrue(getrefcount(dupdatatype1), rc+1) self.assertTrue(getrefcount(dupdatatype2), 3) datatype2.Free() self.assertTrue(dupdatatype2 == MPI.DATATYPE_NULL) self.assertTrue(getrefcount(dupdatatype1), rc+1) self.assertTrue(getrefcount(dupdatatype2), 2) self.datatype.Delete_attr(self.keyval) self.assertTrue(dupdatatype1 == MPI.DATATYPE_NULL) self.assertTrue(getrefcount(dupdatatype1), rc) class TestDatatypeAttrBYTE(BaseTestDatatypeAttr, unittest.TestCase): def setUp(self): self.datatype = MPI.BYTE.Dup() 
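# The datatype attribute tests above mirror the communicator attribute tests:
# create a keyval, cache a Python object under it, read it back, delete it,
# and free the keyval.  A minimal self-contained sketch of that life cycle on
# a communicator (illustration only, not part of the test classes):
def _demo_comm_attribute():
    from mpi4py import MPI
    comm = MPI.COMM_SELF.Dup()
    keyval = MPI.Comm.Create_keyval()           # default copy/delete behavior
    try:
        comm.Set_attr(keyval, {'answer': 42})   # cache an arbitrary object
        assert comm.Get_attr(keyval) == {'answer': 42}
        comm.Delete_attr(keyval)
        assert comm.Get_attr(keyval) is None    # attribute removed
    finally:
        MPI.Comm.Free_keyval(keyval)
        comm.Free()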
class TestDatatypeAttrINT(BaseTestDatatypeAttr, unittest.TestCase): def setUp(self): self.datatype = MPI.INT.Dup() class TestDatatypeAttrFLOAT(BaseTestDatatypeAttr, unittest.TestCase): def setUp(self): self.datatype = MPI.FLOAT.Dup() class TestWinAttr(unittest.TestCase): keyval = MPI.KEYVAL_INVALID def setUp(self): self.win = MPI.Win.Create(MPI.BOTTOM, 1, MPI.INFO_NULL, MPI.COMM_SELF) def tearDown(self): self.win.Free() if self.keyval != MPI.KEYVAL_INVALID: self.keyval = MPI.Win.Free_keyval(self.keyval) self.assertEqual(self.keyval, MPI.KEYVAL_INVALID) def testAttr(self, copy_fn=None, delete_fn=None): self.keyval = MPI.Win.Create_keyval(copy_fn, delete_fn) self.assertNotEqual(self.keyval, MPI.KEYVAL_INVALID) attrval = [1,2,3] rc = getrefcount(attrval) self.win.Set_attr(self.keyval, attrval) self.assertEqual(getrefcount(attrval), rc+1) o = self.win.Get_attr(self.keyval) self.assertTrue(o is attrval) self.assertEqual(getrefcount(attrval), rc+2) o = None self.assertEqual(getrefcount(attrval), rc+1) self.win.Delete_attr(self.keyval) self.assertEqual(getrefcount(attrval), rc) o = self.win.Get_attr(self.keyval) self.assertTrue(o is None) def testAttrCopyDelete(self): self.keyval = MPI.Win.Create_keyval(delete_fn=MPI.Win.Free) self.assertNotEqual(self.keyval, MPI.KEYVAL_INVALID) newwin = MPI.Win.Create(MPI.BOTTOM, 1, MPI.INFO_NULL, MPI.COMM_SELF) rc = getrefcount(newwin) # self.win.Set_attr(self.keyval, newwin) self.assertTrue(newwin != MPI.WIN_NULL) self.assertTrue(getrefcount(newwin), rc+1) # self.win.Delete_attr(self.keyval) self.assertTrue(newwin == MPI.WIN_NULL) self.assertTrue(getrefcount(newwin), rc) try: k = MPI.Datatype.Create_keyval() k = MPI.Datatype.Free_keyval(k) except NotImplementedError: del TestDatatypeAttrBYTE del TestDatatypeAttrINT del TestDatatypeAttrFLOAT try: k = MPI.Win.Create_keyval() k = MPI.Win.Free_keyval(k) except NotImplementedError: del TestWinAttr _name, _version = MPI.get_vendor() if (_name == 'Open MPI' and _version <= (1, 5, 1)): if MPI.Query_thread() > MPI.THREAD_SINGLE: del BaseTestCommAttr.testAttrCopyDelete del TestWinAttr.testAttrCopyDelete if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_cco_buf.py0000644000000000000000000006371512211706251017316 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import arrayimpl try: _reduce = reduce except NameError: from functools import reduce as _reduce prod = lambda sequence,start=1: _reduce(lambda x, y: x*y, sequence, start) def maxvalue(a): try: typecode = a.typecode except AttributeError: typecode = a.dtype.char if typecode == ('f'): return 1e30 elif typecode == ('d'): return 1e300 else: return 2 ** (a.itemsize * 7) - 1 class BaseTestCCOBuf(object): COMM = MPI.COMM_NULL def testBarrier(self): self.COMM.Barrier() def testBcast(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): if rank == root: buf = array(root, typecode, root) else: buf = array( -1, typecode, root) self.COMM.Bcast(buf.as_mpi(), root=root) for value in buf: self.assertEqual(value, root) def testGather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): sbuf = array(root, typecode, root+1) if rank == root: rbuf = array(-1, typecode, (size,root+1)) else: rbuf = array([], typecode) self.COMM.Gather(sbuf.as_mpi(), rbuf.as_mpi(), root=root) if rank == root: for value in rbuf.flat: 
self.assertEqual(value, root) def testScatter(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): rbuf = array(-1, typecode, size) if rank == root: sbuf = array(root, typecode, (size, size)) else: sbuf = array([], typecode) self.COMM.Scatter(sbuf.as_mpi(), rbuf.as_mpi(), root=root) for value in rbuf: self.assertEqual(value, root) def testAllgather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): sbuf = array(root, typecode, root+1) rbuf = array( -1, typecode, (size, root+1)) self.COMM.Allgather(sbuf.as_mpi(), rbuf.as_mpi()) for value in rbuf.flat: self.assertEqual(value, root) def testAlltoall(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): sbuf = array(root, typecode, (size, root+1)) rbuf = array( -1, typecode, (size, root+1)) self.COMM.Alltoall(sbuf.as_mpi(), rbuf.as_mpi_c(root+1)) for value in rbuf.flat: self.assertEqual(value, root) def testAlltoallw(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for n in range(1,size+1): sbuf = array( n, typecode, (size, n)) rbuf = array(-1, typecode, (size, n)) sdt, rdt = sbuf.mpidtype, rbuf.mpidtype sdsp = list(range(0, size*n*sdt.extent, n*sdt.extent)) rdsp = list(range(0, size*n*rdt.extent, n*rdt.extent)) smsg = (sbuf.as_raw(), ([n]*size, sdsp), [sdt]*size) rmsg = (rbuf.as_raw(), ([n]*size, rdsp), [rdt]*size) try: self.COMM.Alltoallw(smsg, rmsg) except NotImplementedError: return for value in rbuf.flat: self.assertEqual(value, n) def assertAlmostEqual(self, first, second): num = float(float(second-first)) den = float(second+first)/2 or 1.0 if (abs(num/den) > 1e-2): raise self.failureException('%r != %r' % (first, second)) def testReduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): sbuf = array(range(size), typecode) rbuf = array(-1, typecode, size) self.COMM.Reduce(sbuf.as_mpi(), rbuf.as_mpi(), op, root) max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if rank != root: self.assertEqual(value, -1) continue if op == MPI.SUM: if (i * size) < max_val: self.assertAlmostEqual(value, i*size) elif op == MPI.PROD: if (i ** size) < max_val: self.assertAlmostEqual(value, i**size) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testAllreduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): sbuf = array(range(size), typecode) rbuf = array(0, typecode, size) self.COMM.Allreduce(sbuf.as_mpi(), rbuf.as_mpi(), op) max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: if (i * size) < max_val: self.assertAlmostEqual(value, i*size) elif op == MPI.PROD: if (i ** size) < max_val: self.assertAlmostEqual(value, i**size) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testReduceScatter(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, 
MPI.MIN, MPI.PROD): rcnt = list(range(1,size+1)) sbuf = array([rank+1]*sum(rcnt), typecode) rbuf = array(-1, typecode, rank+1) self.COMM.Reduce_scatter(sbuf.as_mpi(), rbuf.as_mpi(), None, op) max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: redval = sum(range(size))+size if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.PROD: redval = prod(range(1,size+1)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.MAX: self.assertEqual(value, size) elif op == MPI.MIN: self.assertEqual(value, 1) rbuf = array(-1, typecode, rank+1) self.COMM.Reduce_scatter(sbuf.as_mpi(), rbuf.as_mpi(), rcnt, op) max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: redval = sum(range(size))+size if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.PROD: redval = prod(range(1,size+1)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.MAX: self.assertEqual(value, size) elif op == MPI.MIN: self.assertEqual(value, 1) def testReduceScatterBlock(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): for rcnt in range(1,size): sbuf = array([rank]*rcnt*size, typecode) rbuf = array(-1, typecode, rcnt) if op == MPI.PROD: sbuf = array([rank+1]*rcnt*size, typecode) self.COMM.Reduce_scatter_block(sbuf.as_mpi(), rbuf.as_mpi(), op) max_val = maxvalue(rbuf) v_sum = (size*(size-1))/2 v_prod = 1 for i in range(1,size+1): v_prod *= i v_max = size-1 v_min = 0 for i, value in enumerate(rbuf): if op == MPI.SUM: if v_sum <= max_val: self.assertAlmostEqual(value, v_sum) elif op == MPI.PROD: if v_prod <= max_val: self.assertAlmostEqual(value, v_prod) elif op == MPI.MAX: self.assertEqual(value, v_max) elif op == MPI.MIN: self.assertEqual(value, v_min) def testScan(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() # -- for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): sbuf = array(range(size), typecode) rbuf = array(0, typecode, size) self.COMM.Scan(sbuf.as_mpi(), rbuf.as_mpi(), op) max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: if (i * (rank + 1)) < max_val: self.assertAlmostEqual(value, i * (rank + 1)) elif op == MPI.PROD: if (i ** (rank + 1)) < max_val: self.assertAlmostEqual(value, i ** (rank + 1)) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testExscan(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): sbuf = array(range(size), typecode) rbuf = array(0, typecode, size) try: self.COMM.Exscan(sbuf.as_mpi(), rbuf.as_mpi(), op) except NotImplementedError: return if rank == 1: for i, value in enumerate(rbuf): self.assertEqual(value, i) elif rank > 1: max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: if (i * rank) < max_val: self.assertAlmostEqual(value, i * rank) elif op == MPI.PROD: if (i ** rank) < max_val: self.assertAlmostEqual(value, i ** rank) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testBcastTypeIndexed(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode, datatype in arrayimpl.TypeMap.items(): for root in range(size): # if rank == root: buf = 
array(range(10), typecode).as_raw() else: buf = array(-1, typecode, 10).as_raw() indices = range(0, len(buf), 2) newtype = datatype.Create_indexed_block(1, indices) newtype.Commit() newbuf = (buf, 1, newtype) self.COMM.Bcast(newbuf, root=root) newtype.Free() if rank != root: for i, value in enumerate(buf): if (i % 2): self.assertEqual(value, -1) else: self.assertEqual(value, i) # if rank == root: buf = array(range(10), typecode).as_raw() else: buf = array(-1, typecode, 10).as_raw() indices = range(1, len(buf), 2) newtype = datatype.Create_indexed_block(1, indices) newtype.Commit() newbuf = (buf, 1, newtype) self.COMM.Bcast(newbuf, root) newtype.Free() if rank != root: for i, value in enumerate(buf): if not (i % 2): self.assertEqual(value, -1) else: self.assertEqual(value, i) class BaseTestCCOBufInplace(object): def testGather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): count = root+3 if rank == root: sbuf = MPI.IN_PLACE buf = array(-1, typecode, (size, count)) #buf.flat[(rank*count):((rank+1)*count)] = \ # array(root, typecode, count) s, e = rank*count, (rank+1)*count for i in range(s, e): buf.flat[i] = root rbuf = buf.as_mpi() else: buf = array(root, typecode, count) sbuf = buf.as_mpi() rbuf = None try: self.COMM.Gather(sbuf, rbuf, root=root) except NotImplementedError: return for value in buf.flat: self.assertEqual(value, root) def testScatter(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(1, 10): if rank == root: buf = array(root, typecode, (size, count)) sbuf = buf.as_mpi() rbuf = MPI.IN_PLACE else: buf = array(-1, typecode, count) sbuf = None rbuf = buf.as_mpi() try: self.COMM.Scatter(sbuf, rbuf, root=root) except NotImplementedError: return for value in buf.flat: self.assertEqual(value, root) def testAllgather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for count in range(1, 10): buf = array(-1, typecode, (size, count)) #buf.flat[(rank*count):((rank+1)*count)] = \ # array(count, typecode, count) s, e = rank*count, (rank+1)*count for i in range(s, e): buf.flat[i] = count try: self.COMM.Allgather(MPI.IN_PLACE, buf.as_mpi()) except NotImplementedError: return for value in buf.flat: self.assertEqual(value, count) def assertAlmostEqual(self, first, second): num = float(float(second-first)) den = float(second+first)/2 or 1.0 if (abs(num/den) > 1e-2): raise self.failureException('%r != %r' % (first, second)) def testReduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): count = size if rank == root: buf = array(range(size), typecode) sbuf = MPI.IN_PLACE rbuf = buf.as_mpi() else: buf = array(range(size), typecode) buf2 = array(range(size), typecode) sbuf = buf.as_mpi() rbuf = buf2.as_mpi() try: self.COMM.Reduce(sbuf, rbuf, op, root) except NotImplementedError: return if rank == root: max_val = maxvalue(buf) for i, value in enumerate(buf): if op == MPI.SUM: if (i * size) < max_val: self.assertAlmostEqual(value, i*size) elif op == MPI.PROD: if (i ** size) < max_val: self.assertAlmostEqual(value, i**size) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def 
testAllreduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): buf = array(range(size), typecode) sbuf = MPI.IN_PLACE rbuf = buf.as_mpi() self.COMM.Allreduce(sbuf, rbuf, op) max_val = maxvalue(buf) for i, value in enumerate(buf): if op == MPI.SUM: if (i * size) < max_val: self.assertAlmostEqual(value, i*size) elif op == MPI.PROD: if (i ** size) < max_val: self.assertAlmostEqual(value, i**size) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testReduceScatterBlock(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): for rcnt in range(size): if op == MPI.PROD: rbuf = array([rank+1]*rcnt*size, typecode) else: rbuf = array([rank]*rcnt*size, typecode) self.COMM.Reduce_scatter_block(MPI.IN_PLACE, rbuf.as_mpi(), op) max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if i >= rcnt: if op == MPI.PROD: self.assertEqual(value, rank+1) else: self.assertEqual(value, rank) else: if op == MPI.SUM: redval = sum(range(size)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.PROD: redval = prod(range(1,size+1)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.MAX: self.assertEqual(value, size-1) elif op == MPI.MIN: self.assertEqual(value, 0) def testReduceScatter(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): rcnt = list(range(1, size+1)) if op == MPI.PROD: rbuf = array([rank+1]*sum(rcnt), typecode) else: rbuf = array([rank]*sum(rcnt), typecode) self.COMM.Reduce_scatter(MPI.IN_PLACE, rbuf.as_mpi(), rcnt, op) max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if i >= rcnt[rank]: if op == MPI.PROD: self.assertEqual(value, rank+1) else: self.assertEqual(value, rank) else: if op == MPI.SUM: redval = sum(range(size)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.PROD: redval = prod(range(1,size+1)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.MAX: self.assertEqual(value, size-1) elif op == MPI.MIN: self.assertEqual(value, 0) class TestCCOBufSelf(BaseTestCCOBuf, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCOBufWorld(BaseTestCCOBuf, unittest.TestCase): COMM = MPI.COMM_WORLD class TestCCOBufInplaceSelf(BaseTestCCOBufInplace, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCOBufInplaceWorld(BaseTestCCOBufInplace, unittest.TestCase): COMM = MPI.COMM_WORLD class TestCCOBufSelfDup(BaseTestCCOBuf, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestCCOBufWorldDup(BaseTestCCOBuf, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() _name, _version = MPI.get_vendor() if (_name == 'MPICH1' or _name == 'LAM/MPI' or MPI.BOTTOM == MPI.IN_PLACE): del BaseTestCCOBufInplace del TestCCOBufInplaceSelf del TestCCOBufInplaceWorld elif _name == 'Open MPI': if _version < (1, 4, 0): if MPI.Query_thread() > MPI.THREAD_SINGLE: del TestCCOBufWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_cco_nb_buf.py0000644000000000000000000006106512211706251017771 0ustar 00000000000000from mpi4py import MPI import mpiunittest 
as unittest import arrayimpl try: _reduce = reduce except NameError: from functools import reduce as _reduce prod = lambda sequence,start=1: _reduce(lambda x, y: x*y, sequence, start) def maxvalue(a): try: typecode = a.typecode except AttributeError: typecode = a.dtype.char if typecode == ('f'): return 1e30 elif typecode == ('d'): return 1e300 else: return 2 ** (a.itemsize * 7) - 1 class BaseTestCCOBuf(object): COMM = MPI.COMM_NULL def testBarrier(self): self.COMM.Ibarrier().Wait() def testBcast(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): if rank == root: buf = array(root, typecode, root) else: buf = array( -1, typecode, root) self.COMM.Ibcast(buf.as_mpi(), root=root).Wait() for value in buf: self.assertEqual(value, root) def testGather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): sbuf = array(root, typecode, root+1) if rank == root: rbuf = array(-1, typecode, (size,root+1)) else: rbuf = array([], typecode) self.COMM.Igather(sbuf.as_mpi(), rbuf.as_mpi(), root=root).Wait() if rank == root: for value in rbuf.flat: self.assertEqual(value, root) def testScatter(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): rbuf = array(-1, typecode, size) if rank == root: sbuf = array(root, typecode, (size, size)) else: sbuf = array([], typecode) self.COMM.Iscatter(sbuf.as_mpi(), rbuf.as_mpi(), root=root).Wait() for value in rbuf: self.assertEqual(value, root) def testAllgather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): sbuf = array(root, typecode, root+1) rbuf = array( -1, typecode, (size, root+1)) self.COMM.Iallgather(sbuf.as_mpi(), rbuf.as_mpi()).Wait() for value in rbuf.flat: self.assertEqual(value, root) def testAlltoall(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): sbuf = array(root, typecode, (size, root+1)) rbuf = array( -1, typecode, (size, root+1)) self.COMM.Ialltoall(sbuf.as_mpi(), rbuf.as_mpi_c(root+1)).Wait() for value in rbuf.flat: self.assertEqual(value, root) def assertAlmostEqual(self, first, second): num = float(float(second-first)) den = float(second+first)/2 or 1.0 if (abs(num/den) > 1e-2): raise self.failureException('%r != %r' % (first, second)) def testReduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): sbuf = array(range(size), typecode) rbuf = array(-1, typecode, size) self.COMM.Ireduce(sbuf.as_mpi(), rbuf.as_mpi(), op, root).Wait() max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if rank != root: self.assertEqual(value, -1) continue if op == MPI.SUM: if (i * size) < max_val: self.assertAlmostEqual(value, i*size) elif op == MPI.PROD: if (i ** size) < max_val: self.assertAlmostEqual(value, i**size) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testAllreduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, 
MPI.MAX, MPI.MIN, MPI.PROD): sbuf = array(range(size), typecode) rbuf = array(0, typecode, size) self.COMM.Iallreduce(sbuf.as_mpi(), rbuf.as_mpi(), op).Wait() max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: if (i * size) < max_val: self.assertAlmostEqual(value, i*size) elif op == MPI.PROD: if (i ** size) < max_val: self.assertAlmostEqual(value, i**size) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testReduceScatter(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): rcnt = list(range(1,size+1)) sbuf = array([rank+1]*sum(rcnt), typecode) rbuf = array(-1, typecode, rank+1) self.COMM.Ireduce_scatter(sbuf.as_mpi(), rbuf.as_mpi(), None, op).Wait() max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: redval = sum(range(size))+size if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.PROD: redval = prod(range(1,size+1)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.MAX: self.assertEqual(value, size) elif op == MPI.MIN: self.assertEqual(value, 1) rbuf = array(-1, typecode, rank+1) self.COMM.Ireduce_scatter(sbuf.as_mpi(), rbuf.as_mpi(), rcnt, op).Wait() max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: redval = sum(range(size))+size if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.PROD: redval = prod(range(1,size+1)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.MAX: self.assertEqual(value, size) elif op == MPI.MIN: self.assertEqual(value, 1) def testReduceScatterBlock(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): for rcnt in range(1,size): sbuf = array([rank]*rcnt*size, typecode) rbuf = array(-1, typecode, rcnt) if op == MPI.PROD: sbuf = array([rank+1]*rcnt*size, typecode) self.COMM.Ireduce_scatter_block(sbuf.as_mpi(), rbuf.as_mpi(), op).Wait() max_val = maxvalue(rbuf) v_sum = (size*(size-1))/2 v_prod = 1 for i in range(1,size+1): v_prod *= i v_max = size-1 v_min = 0 for i, value in enumerate(rbuf): if op == MPI.SUM: if v_sum <= max_val: self.assertAlmostEqual(value, v_sum) elif op == MPI.PROD: if v_prod <= max_val: self.assertAlmostEqual(value, v_prod) elif op == MPI.MAX: self.assertEqual(value, v_max) elif op == MPI.MIN: self.assertEqual(value, v_min) def testScan(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() # -- for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): sbuf = array(range(size), typecode) rbuf = array(0, typecode, size) self.COMM.Iscan(sbuf.as_mpi(), rbuf.as_mpi(), op).Wait() max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: if (i * (rank + 1)) < max_val: self.assertAlmostEqual(value, i * (rank + 1)) elif op == MPI.PROD: if (i ** (rank + 1)) < max_val: self.assertAlmostEqual(value, i ** (rank + 1)) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testExscan(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): sbuf = array(range(size), typecode) rbuf = array(0, typecode, size) self.COMM.Iexscan(sbuf.as_mpi(), 
rbuf.as_mpi(), op).Wait() if rank == 1: for i, value in enumerate(rbuf): self.assertEqual(value, i) elif rank > 1: max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if op == MPI.SUM: if (i * rank) < max_val: self.assertAlmostEqual(value, i * rank) elif op == MPI.PROD: if (i ** rank) < max_val: self.assertAlmostEqual(value, i ** rank) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testBcastTypeIndexed(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode, datatype in arrayimpl.TypeMap.items(): for root in range(size): # if rank == root: buf = array(range(10), typecode).as_raw() else: buf = array(-1, typecode, 10).as_raw() indices = range(0, len(buf), 2) newtype = datatype.Create_indexed_block(1, indices) newtype.Commit() newbuf = (buf, 1, newtype) self.COMM.Ibcast(newbuf, root=root).Wait() newtype.Free() if rank != root: for i, value in enumerate(buf): if (i % 2): self.assertEqual(value, -1) else: self.assertEqual(value, i) # if rank == root: buf = array(range(10), typecode).as_raw() else: buf = array(-1, typecode, 10).as_raw() indices = range(1, len(buf), 2) newtype = datatype.Create_indexed_block(1, indices) newtype.Commit() newbuf = (buf, 1, newtype) self.COMM.Ibcast(newbuf, root).Wait() newtype.Free() if rank != root: for i, value in enumerate(buf): if not (i % 2): self.assertEqual(value, -1) else: self.assertEqual(value, i) class BaseTestCCOBufInplace(object): def testGather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): count = root+3 if rank == root: sbuf = MPI.IN_PLACE buf = array(-1, typecode, (size, count)) #buf.flat[(rank*count):((rank+1)*count)] = \ # array(root, typecode, count) s, e = rank*count, (rank+1)*count for i in range(s, e): buf.flat[i] = root rbuf = buf.as_mpi() else: buf = array(root, typecode, count) sbuf = buf.as_mpi() rbuf = None self.COMM.Igather(sbuf, rbuf, root=root).Wait() for value in buf.flat: self.assertEqual(value, root) def testScatter(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(1, 10): if rank == root: buf = array(root, typecode, (size, count)) sbuf = buf.as_mpi() rbuf = MPI.IN_PLACE else: buf = array(-1, typecode, count) sbuf = None rbuf = buf.as_mpi() self.COMM.Iscatter(sbuf, rbuf, root=root).Wait() for value in buf.flat: self.assertEqual(value, root) def testAllgather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for count in range(1, 10): buf = array(-1, typecode, (size, count)) #buf.flat[(rank*count):((rank+1)*count)] = \ # array(count, typecode, count) s, e = rank*count, (rank+1)*count for i in range(s, e): buf.flat[i] = count self.COMM.Iallgather(MPI.IN_PLACE, buf.as_mpi()).Wait() for value in buf.flat: self.assertEqual(value, count) def assertAlmostEqual(self, first, second): num = float(float(second-first)) den = float(second+first)/2 or 1.0 if (abs(num/den) > 1e-2): raise self.failureException('%r != %r' % (first, second)) def testReduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): count = size if rank == root: buf = array(range(size), typecode) 
sbuf = MPI.IN_PLACE rbuf = buf.as_mpi() else: buf = array(range(size), typecode) buf2 = array(range(size), typecode) sbuf = buf.as_mpi() rbuf = buf2.as_mpi() self.COMM.Ireduce(sbuf, rbuf, op, root).Wait() if rank == root: max_val = maxvalue(buf) for i, value in enumerate(buf): if op == MPI.SUM: if (i * size) < max_val: self.assertAlmostEqual(value, i*size) elif op == MPI.PROD: if (i ** size) < max_val: self.assertAlmostEqual(value, i**size) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testAllreduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): buf = array(range(size), typecode) sbuf = MPI.IN_PLACE rbuf = buf.as_mpi() self.COMM.Iallreduce(sbuf, rbuf, op).Wait() max_val = maxvalue(buf) for i, value in enumerate(buf): if op == MPI.SUM: if (i * size) < max_val: self.assertAlmostEqual(value, i*size) elif op == MPI.PROD: if (i ** size) < max_val: self.assertAlmostEqual(value, i**size) elif op == MPI.MAX: self.assertEqual(value, i) elif op == MPI.MIN: self.assertEqual(value, i) def testReduceScatterBlock(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): for rcnt in range(size): if op == MPI.PROD: rbuf = array([rank+1]*rcnt*size, typecode) else: rbuf = array([rank]*rcnt*size, typecode) self.COMM.Ireduce_scatter_block(MPI.IN_PLACE, rbuf.as_mpi(), op).Wait() max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if i >= rcnt: if op == MPI.PROD: self.assertEqual(value, rank+1) else: self.assertEqual(value, rank) else: if op == MPI.SUM: redval = sum(range(size)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.PROD: redval = prod(range(1,size+1)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.MAX: self.assertEqual(value, size-1) elif op == MPI.MIN: self.assertEqual(value, 0) def testReduceScatter(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): rcnt = list(range(1, size+1)) if op == MPI.PROD: rbuf = array([rank+1]*sum(rcnt), typecode) else: rbuf = array([rank]*sum(rcnt), typecode) self.COMM.Ireduce_scatter(MPI.IN_PLACE, rbuf.as_mpi(), rcnt, op).Wait() max_val = maxvalue(rbuf) for i, value in enumerate(rbuf): if i >= rcnt[rank]: if op == MPI.PROD: self.assertEqual(value, rank+1) else: self.assertEqual(value, rank) else: if op == MPI.SUM: redval = sum(range(size)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.PROD: redval = prod(range(1,size+1)) if redval < max_val: self.assertAlmostEqual(value, redval) elif op == MPI.MAX: self.assertEqual(value, size-1) elif op == MPI.MIN: self.assertEqual(value, 0) class TestCCOBufSelf(BaseTestCCOBuf, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCOBufWorld(BaseTestCCOBuf, unittest.TestCase): COMM = MPI.COMM_WORLD class TestCCOBufInplaceSelf(BaseTestCCOBufInplace, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCOBufInplaceWorld(BaseTestCCOBufInplace, unittest.TestCase): COMM = MPI.COMM_WORLD class TestCCOBufSelfDup(BaseTestCCOBuf, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestCCOBufWorldDup(BaseTestCCOBuf, unittest.TestCase): def setUp(self): self.COMM = 
MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() try: MPI.COMM_SELF.Ibarrier().Wait() except NotImplementedError: del BaseTestCCOBuf del TestCCOBufSelf del TestCCOBufWorld del TestCCOBufInplaceSelf del TestCCOBufInplaceWorld del TestCCOBufSelfDup del TestCCOBufWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_cco_nb_vec.py0000644000000000000000000003457612211706251020001 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import arrayimpl def maxvalue(a): try: typecode = a.typecode except AttributeError: typecode = a.dtype.char if typecode == ('f'): return 1e30 elif typecode == ('d'): return 1e300 else: return 2 ** (a.itemsize * 7) - 1 class BaseTestCCOVec(object): COMM = MPI.COMM_NULL def testGatherv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, count) rbuf = array( -1, typecode, size*size) counts = [count] * size displs = range(0, size*size, size) recvbuf = rbuf.as_mpi_v(counts, displs) if rank != root: recvbuf=None self.COMM.Igatherv(sbuf.as_mpi(), recvbuf, root).Wait() if recvbuf is not None: for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testGatherv2(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, size) rbuf = array( -1, typecode, size*size) sendbuf = sbuf.as_mpi_c(count) recvbuf = rbuf.as_mpi_v(count, size) if rank != root: recvbuf=None self.COMM.Igatherv(sendbuf, recvbuf, root).Wait() if recvbuf is not None: for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testGatherv3(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size+1): # sbuf = array(root, typecode, count).as_raw() rbuf = array( -1, typecode, count*size).as_raw() sendbuf = sbuf recvbuf = [rbuf, count] if rank != root: recvbuf=None self.COMM.Igatherv(sendbuf, recvbuf, root).Wait() if recvbuf is not None: for v in rbuf: self.assertEqual(v, root) # sbuf = array(root, typecode, count).as_raw() if rank == root: rbuf = array( -1, typecode, count*size).as_raw() else: rbuf = None self.COMM.Gatherv(sbuf, rbuf, root) self.COMM.Barrier() if rank == root: for v in rbuf: self.assertEqual(v, root) def testScatterv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): # sbuf = array(root, typecode, size*size) rbuf = array( -1, typecode, count) counts = [count] * size displs = range(0, size*size, size) sendbuf = sbuf.as_mpi_v(counts, displs) if rank != root: sendbuf = None self.COMM.Iscatterv(sendbuf, rbuf.as_mpi(), root).Wait() for vr in rbuf: self.assertEqual(vr, root) def testScatterv2(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, size*size) rbuf = array( -1, typecode, size) sendbuf = 
sbuf.as_mpi_v(count, size) recvbuf = rbuf.as_mpi_c(count) if rank != root: sendbuf = None self.COMM.Iscatterv(sendbuf, recvbuf, root).Wait() a, b = rbuf[:count], rbuf[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testScatterv3(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size+1): # sbuf = array(root, typecode, count*size).as_raw() rbuf = array( -1, typecode, count).as_raw() sendbuf = [sbuf, count] recvbuf = rbuf if rank != root: sendbuf = None self.COMM.Iscatterv(sendbuf, recvbuf, root).Wait() for v in rbuf: self.assertEqual(v, root) # if rank == root: sbuf = array(root, typecode, count*size).as_raw() else: sbuf = None rbuf = array( -1, typecode, count).as_raw() self.COMM.Scatterv(sbuf, rbuf, root) for v in rbuf: self.assertEqual(v, root) def testAllgatherv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, count) rbuf = array( -1, typecode, size*size) counts = [count] * size displs = range(0, size*size, size) sendbuf = sbuf.as_mpi() recvbuf = rbuf.as_mpi_v(counts, displs) self.COMM.Iallgatherv(sendbuf, recvbuf).Wait() for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testAllgatherv2(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, size) rbuf = array( -1, typecode, size*size) sendbuf = sbuf.as_mpi_c(count) recvbuf = rbuf.as_mpi_v(count, size) self.COMM.Iallgatherv(sendbuf, recvbuf).Wait() for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testAllgatherv3(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size+1): # sbuf = array(root, typecode, count).as_raw() rbuf = array( -1, typecode, count*size).as_raw() sendbuf = sbuf recvbuf = [rbuf, count] self.COMM.Iallgatherv(sendbuf, recvbuf).Wait() for v in rbuf: self.assertEqual(v, root) # sbuf = array(root, typecode, count).as_raw() rbuf = array( -1, typecode, count*size).as_raw() self.COMM.Iallgatherv(sbuf, rbuf).Wait() for v in rbuf: self.assertEqual(v, root) def testAlltoallv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, size*size) rbuf = array( -1, typecode, size*size) counts = [count] * size displs = range(0, size*size, size) sendbuf = sbuf.as_mpi_v(counts, displs) recvbuf = rbuf.as_mpi_v(counts, displs) self.COMM.Ialltoallv(sendbuf, recvbuf).Wait() for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testAlltoallv2(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, 
typecode, size*size) rbuf = array( -1, typecode, size*size) sendbuf = sbuf.as_mpi_v(count, size) recvbuf = rbuf.as_mpi_v(count, size) self.COMM.Ialltoallv(sendbuf, recvbuf).Wait() for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testAlltoallv3(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size+1): # sbuf = array(root, typecode, count*size).as_raw() rbuf = array( -1, typecode, count*size).as_raw() sendbuf = [sbuf, count] recvbuf = [rbuf, count] self.COMM.Ialltoallv(sendbuf, recvbuf).Wait() for v in rbuf: self.assertEqual(v, root) # sbuf = array(root, typecode, count*size).as_raw() rbuf = array( -1, typecode, count*size).as_raw() self.COMM.Ialltoallv(sbuf, rbuf).Wait() for v in rbuf: self.assertEqual(v, root) def testAlltoallw(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for n in range(1, size+1): sbuf = array( n, typecode, (size, n)) rbuf = array(-1, typecode, (size, n)) sdt, rdt = sbuf.mpidtype, rbuf.mpidtype sdsp = list(range(0, size*n*sdt.extent, n*sdt.extent)) rdsp = list(range(0, size*n*rdt.extent, n*rdt.extent)) smsg = (sbuf.as_raw(), ([n]*size, sdsp), [sdt]*size) rmsg = (rbuf.as_raw(), ([n]*size, rdsp), [rdt]*size) try: self.COMM.Ialltoallw(smsg, rmsg).Wait() except NotImplementedError: return for v in rbuf.flat: self.assertEqual(v, n) class TestCCOVecSelf(BaseTestCCOVec, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCOVecWorld(BaseTestCCOVec, unittest.TestCase): COMM = MPI.COMM_WORLD class TestCCOVecSelfDup(BaseTestCCOVec, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestCCOVecWorldDup(BaseTestCCOVec, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() try: MPI.COMM_SELF.Ibarrier().Wait() except NotImplementedError: del BaseTestCCOVec del TestCCOVecSelf del TestCCOVecWorld del TestCCOVecSelfDup del TestCCOVecWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_cco_ngh_buf.py0000644000000000000000000001272312211706251020143 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import arrayimpl def create_topo_comms(comm): size = comm.Get_size() rank = comm.Get_rank() # Cartesian n = int(size**1/2.0) m = int(size**1/3.0) if m*m*m == size: dims = [m, m, m] elif n*n == size: dims = [n, n] else: dims = [size] periods = [True] * len(dims) yield comm.Create_cart(dims, periods=periods) # Graph index, edges = [0], [] for i in range(size): pos = index[-1] index.append(pos+2) edges.append((i-1)%size) edges.append((i+1)%size) yield comm.Create_graph(index, edges) # Dist Graph sources = [(rank-2)%size, (rank-1)%size] destinations = [(rank+1)%size, (rank+2)%size] yield comm.Create_dist_graph_adjacent(sources, destinations) def get_neighbors_count(comm): topo = comm.Get_topology() if topo == MPI.CART: ndim = comm.Get_dim() return 2*ndim, 2*ndim if topo == MPI.GRAPH: rank = comm.Get_rank() nneighbors = comm.Get_neighbors_count(rank) return nneighbors, nneighbors if topo == MPI.DIST_GRAPH: indeg, outdeg, w = comm.Get_dist_neighbors_count() return indeg, outdeg return 0, 0 class BaseTestCCONghBuf(object): COMM = MPI.COMM_NULL def testNeighborAllgather(self): for comm in 
create_topo_comms(self.COMM): rsize, ssize = get_neighbors_count(comm) for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for v in range(3): sbuf = array( v, typecode, 3) rbuf = array(-1, typecode, (rsize, 3)) comm.Neighbor_allgather(sbuf.as_mpi(), rbuf.as_mpi()) for value in rbuf.flat: self.assertEqual(value, v) sbuf = array( v, typecode, 3) rbuf = array(-1, typecode, (rsize, 3)) comm.Neighbor_allgatherv(sbuf.as_mpi_c(3), rbuf.as_mpi_c(3)) for value in rbuf.flat: self.assertEqual(value, v) sbuf = array( v, typecode, 3) rbuf = array(-1, typecode, (rsize, 3)) comm.Ineighbor_allgather(sbuf.as_mpi(), rbuf.as_mpi()).Wait() for value in rbuf.flat: self.assertEqual(value, v) sbuf = array( v, typecode, 3) rbuf = array(-1, typecode, (rsize, 3)) comm.Ineighbor_allgatherv(sbuf.as_mpi_c(3), rbuf.as_mpi_c(3)).Wait() for value in rbuf.flat: self.assertEqual(value, v) comm.Free() def testNeighborAlltoall(self): for comm in create_topo_comms(self.COMM): rsize, ssize = get_neighbors_count(comm) for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for v in range(3): sbuf = array( v, typecode, (ssize, 3)) rbuf = array(-1, typecode, (rsize, 3)) comm.Neighbor_alltoall(sbuf.as_mpi(), rbuf.as_mpi_c(3)) for value in rbuf.flat: self.assertEqual(value, v) sbuf = array( v, typecode, (ssize, 3)) rbuf = array(-1, typecode, (rsize, 3)) comm.Neighbor_alltoall(sbuf.as_mpi(), rbuf.as_mpi()) for value in rbuf.flat: self.assertEqual(value, v) sbuf = array( v, typecode, (ssize, 3)) rbuf = array(-1, typecode, (rsize, 3)) comm.Neighbor_alltoallv(sbuf.as_mpi_c(3), rbuf.as_mpi_c(3)) for value in rbuf.flat: self.assertEqual(value, v) sbuf = array( v, typecode, (ssize, 3)) rbuf = array(-1, typecode, (rsize, 3)) comm.Ineighbor_alltoall(sbuf.as_mpi(), rbuf.as_mpi()).Wait() for value in rbuf.flat: self.assertEqual(value, v) sbuf = array( v, typecode, (ssize, 3)) rbuf = array(-1, typecode, (rsize, 3)) comm.Ineighbor_alltoallv(sbuf.as_mpi_c(3), rbuf.as_mpi_c(3)).Wait() for value in rbuf.flat: self.assertEqual(value, v) comm.Free() class TestCCONghBufSelf(BaseTestCCONghBuf, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCONghBufWorld(BaseTestCCONghBuf, unittest.TestCase): COMM = MPI.COMM_WORLD class TestCCONghBufSelfDup(BaseTestCCONghBuf, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestCCONghBufWorldDup(BaseTestCCONghBuf, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() cartcomm = MPI.COMM_SELF.Create_cart([1], periods=[1]) try: cartcomm.neighbor_allgather(None) except NotImplementedError: del BaseTestCCONghBuf del TestCCONghBufSelf del TestCCONghBufWorld del TestCCONghBufSelfDup del TestCCONghBufWorldDup finally: cartcomm.Free() if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_cco_ngh_obj.py0000644000000000000000000000621212211706251020135 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest _basic = [None, True, False, -7, 0, 7, 2**31, -2**63, 2**63-1, -2.17, 0.0, 3.14, 1+2j, 2-3j, 'mpi4py', ] messages = _basic messages += [ list(_basic), tuple(_basic), dict([('k%d' % key, val) for key, val in enumerate(_basic)]) ] messages = messages + [messages] def create_topo_comms(comm): size = comm.Get_size() rank = comm.Get_rank() # Cartesian n = int(size**1/2.0) m = int(size**1/3.0) if m*m*m == size: dims = [m, m, m] elif n*n == size: dims = [n, n] else: dims = [size] periods = [True] * len(dims) yield comm.Create_cart(dims, 
periods=periods) # Graph index, edges = [0], [] for i in range(size): pos = index[-1] index.append(pos+2) edges.append((i-1)%size) edges.append((i+1)%size) yield comm.Create_graph(index, edges) # Dist Graph sources = [(rank-2)%size, (rank-1)%size] destinations = [(rank+1)%size, (rank+2)%size] yield comm.Create_dist_graph_adjacent(sources, destinations) def get_neighbors_count(comm): topo = comm.Get_topology() if topo == MPI.CART: ndim = comm.Get_dim() return 2*ndim, 2*ndim if topo == MPI.GRAPH: rank = comm.Get_rank() nneighbors = comm.Get_neighbors_count(rank) return nneighbors, nneighbors if topo == MPI.DIST_GRAPH: indeg, outdeg, w = comm.Get_dist_neighbors_count() return indeg, outdeg return 0, 0 class BaseTestCCONghObj(object): COMM = MPI.COMM_NULL def testNeighborAllgather(self): for comm in create_topo_comms(self.COMM): rsize, ssize = get_neighbors_count(comm) for smess in messages: rmess = comm.neighbor_allgather(smess) self.assertEqual(rmess, [smess] * rsize) comm.Free() def testNeighborAlltoall(self): for comm in create_topo_comms(self.COMM): rsize, ssize = get_neighbors_count(comm) for smess in messages: rmess = comm.neighbor_alltoall([smess] * ssize) self.assertEqual(rmess, [smess] * rsize) comm.Free() class TestCCONghObjSelf(BaseTestCCONghObj, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCONghObjWorld(BaseTestCCONghObj, unittest.TestCase): COMM = MPI.COMM_WORLD class TestCCONghObjSelfDup(BaseTestCCONghObj, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestCCONghObjWorldDup(BaseTestCCONghObj, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() cartcomm = MPI.COMM_SELF.Create_cart([1], periods=[1]) try: cartcomm.neighbor_allgather(None) except NotImplementedError: del BaseTestCCONghObj del TestCCONghObjSelf del TestCCONghObjWorld del TestCCONghObjSelfDup del TestCCONghObjWorldDup finally: cartcomm.Free() if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_cco_obj.py0000644000000000000000000001413312211706251017302 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest try: _reduce = reduce except NameError: from functools import reduce as _reduce cumsum = lambda seq: _reduce(lambda x, y: x+y, seq, 0) cumprod = lambda seq: _reduce(lambda x, y: x*y, seq, 1) _basic = [None, True, False, -7, 0, 7, 2**31, -2**63, 2**63-1, -2.17, 0.0, 3.14, 1+2j, 2-3j, 'mpi4py', ] messages = _basic messages += [ list(_basic), tuple(_basic), dict([('k%d' % key, val) for key, val in enumerate(_basic)]) ] class BaseTestCCOObj(object): COMM = MPI.COMM_NULL def testBarrier(self): self.COMM.barrier() def testBcast(self): for smess in messages: for root in range(self.COMM.Get_size()): rmess = self.COMM.bcast(smess, root=root) self.assertEqual(smess, rmess) def testGather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages + [messages]: for root in range(size): rmess = self.COMM.gather(smess, root=root) if rank == root: self.assertEqual(rmess, [smess] * size) else: self.assertEqual(rmess, None) def testScatter(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages + [messages]: for root in range(size): if rank == root: rmess = self.COMM.scatter([smess] * size, root=root) else: rmess = self.COMM.scatter(None, root=root) self.assertEqual(rmess, smess) def testAllgather(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages + [messages]: rmess = 
self.COMM.allgather(smess, None) self.assertEqual(rmess, [smess] * size) def testAlltoall(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages + [messages]: rmess = self.COMM.alltoall([smess] * size, None) self.assertEqual(rmess, [smess] * size) def testReduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for root in range(size): for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN, MPI.MAXLOC, MPI.MINLOC): value = self.COMM.reduce(rank, None, op=op, root=root) if rank != root: self.assertTrue(value is None) else: if op == MPI.SUM: self.assertEqual(value, cumsum(range(size))) elif op == MPI.PROD: self.assertEqual(value, cumprod(range(size))) elif op == MPI.MAX: self.assertEqual(value, size-1) elif op == MPI.MIN: self.assertEqual(value, 0) elif op == MPI.MAXLOC: self.assertEqual(value[1], size-1) elif op == MPI.MINLOC: self.assertEqual(value[1], 0) def testAllreduce(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN, MPI.MAXLOC, MPI.MINLOC): value = self.COMM.allreduce(rank, None, op) if op == MPI.SUM: self.assertEqual(value, cumsum(range(size))) elif op == MPI.PROD: self.assertEqual(value, cumprod(range(size))) elif op == MPI.MAX: self.assertEqual(value, size-1) elif op == MPI.MIN: self.assertEqual(value, 0) elif op == MPI.MAXLOC: self.assertEqual(value[1], size-1) elif op == MPI.MINLOC: self.assertEqual(value[1], 0) def testScan(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() # -- sscan = self.COMM.scan(size, op=MPI.SUM) self.assertEqual(sscan, cumsum([size]*(rank+1))) # -- rscan = self.COMM.scan(rank, op=MPI.SUM) self.assertEqual(rscan, cumsum(range(rank+1))) # -- minloc = self.COMM.scan(rank, op=MPI.MINLOC) maxloc = self.COMM.scan(rank, op=MPI.MAXLOC) self.assertEqual(minloc, (0, 0)) self.assertEqual(maxloc, (rank, rank)) def testExscan(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() # -- sscan = self.COMM.exscan(size, op=MPI.SUM) if rank == 0: self.assertTrue(sscan is None) else: self.assertEqual(sscan, cumsum([size]*(rank))) # -- rscan = self.COMM.exscan(rank, op=MPI.SUM) if rank == 0: self.assertTrue(rscan is None) else: self.assertEqual(rscan, cumsum(range(rank))) # minloc = self.COMM.exscan(rank, op=MPI.MINLOC) maxloc = self.COMM.exscan(rank, op=MPI.MAXLOC) if rank == 0: self.assertEqual(minloc, None) self.assertEqual(maxloc, None) else: self.assertEqual(minloc, (0, 0)) self.assertEqual(maxloc, (rank-1, rank-1)) class BaseTestCCOObjDup(BaseTestCCOObj): def setUp(self): self.COMM = self.COMM.Dup() def tearDown(self): self.COMM.Free() del self.COMM class TestCCOObjSelf(BaseTestCCOObj, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCOObjWorld(BaseTestCCOObj, unittest.TestCase): COMM = MPI.COMM_WORLD class TestCCOObjSelfDup(BaseTestCCOObjDup, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCOObjWorldDup(BaseTestCCOObjDup, unittest.TestCase): COMM = MPI.COMM_WORLD _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 4, 0): if MPI.Query_thread() > MPI.THREAD_SINGLE: del TestCCOObjWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_cco_obj_inter.py0000644000000000000000000002275112211706251020510 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest try: _reduce = reduce except NameError: from functools import reduce as _reduce cumsum = lambda seq: _reduce(lambda x, y: x+y, seq, 0) cumprod = lambda seq: _reduce(lambda x, y: x*y, seq, 1) _basic = [None, True, False, 
-7, 0, 7, 2**31, -2**63, 2**63-1, -2.17, 0.0, 3.14, 1+2j, 2-3j, 'mpi4py', ] messages = _basic messages += [ list(_basic), tuple(_basic), dict([('k%d' % key, val) for key, val in enumerate(_basic)]) ] class BaseTestCCOObjInter(object): BASECOMM = MPI.COMM_NULL INTRACOMM = MPI.COMM_NULL INTERCOMM = MPI.COMM_NULL def setUp(self): BASE_SIZE = self.BASECOMM.Get_size() BASE_RANK = self.BASECOMM.Get_rank() if BASE_SIZE < 2: return if BASE_RANK < BASE_SIZE // 2 : self.COLOR = 0 self.LOCAL_LEADER = 0 self.REMOTE_LEADER = BASE_SIZE // 2 else: self.COLOR = 1 self.LOCAL_LEADER = 0 self.REMOTE_LEADER = 0 self.INTRACOMM = self.BASECOMM.Split(self.COLOR, key=0) self.INTERCOMM = self.INTRACOMM.Create_intercomm(self.LOCAL_LEADER, self.BASECOMM, self.REMOTE_LEADER) def tearDown(self): if self.INTRACOMM != MPI.COMM_NULL: self.INTRACOMM.Free() del self.INTRACOMM if self.INTERCOMM != MPI.COMM_NULL: self.INTERCOMM.Free() del self.INTERCOMM def testBarrier(self): if self.INTRACOMM == MPI.COMM_NULL: return if self.INTERCOMM == MPI.COMM_NULL: return self.INTERCOMM.Barrier() def testBcast(self): if self.INTRACOMM == MPI.COMM_NULL: return if self.INTERCOMM == MPI.COMM_NULL: return rank = self.INTERCOMM.Get_rank() size = self.INTERCOMM.Get_size() rsize = self.INTERCOMM.Get_remote_size() for smess in messages + [messages]: for color in [0, 1]: if self.COLOR == color: for root in range(size): if root == rank: rmess = self.INTERCOMM.bcast(smess, root=MPI.ROOT) else: rmess = self.INTERCOMM.bcast(None, root=MPI.PROC_NULL) self.assertEqual(rmess, None) else: for root in range(rsize): rmess = self.INTERCOMM.bcast(None, root=root) self.assertEqual(rmess, smess) def testGather(self): if self.INTRACOMM == MPI.COMM_NULL: return if self.INTERCOMM == MPI.COMM_NULL: return rank = self.INTERCOMM.Get_rank() size = self.INTERCOMM.Get_size() rsize = self.INTERCOMM.Get_remote_size() for smess in messages + [messages]: for color in [0, 1]: if self.COLOR == color: for root in range(size): if root == rank: rmess = self.INTERCOMM.gather(smess, root=MPI.ROOT) self.assertEqual(rmess, [smess] * rsize) else: rmess = self.INTERCOMM.gather(None, root=MPI.PROC_NULL) self.assertEqual(rmess, None) else: for root in range(rsize): rmess = self.INTERCOMM.gather(smess, root=root) self.assertEqual(rmess, None) def testScatter(self): if self.INTRACOMM == MPI.COMM_NULL: return if self.INTERCOMM == MPI.COMM_NULL: return rank = self.INTERCOMM.Get_rank() size = self.INTERCOMM.Get_size() rsize = self.INTERCOMM.Get_remote_size() for smess in messages + [messages]: for color in [0, 1]: if self.COLOR == color: for root in range(size): if root == rank: rmess = self.INTERCOMM.scatter([smess] * rsize, root=MPI.ROOT) else: rmess = self.INTERCOMM.scatter(None, root=MPI.PROC_NULL) self.assertEqual(rmess, None) else: for root in range(rsize): rmess = self.INTERCOMM.scatter(None, root=root) self.assertEqual(rmess, smess) def testAllgather(self): if self.INTRACOMM == MPI.COMM_NULL: return if self.INTERCOMM == MPI.COMM_NULL: return rank = self.INTERCOMM.Get_rank() size = self.INTERCOMM.Get_size() rsize = self.INTERCOMM.Get_remote_size() for smess in messages + [messages]: rmess = self.INTERCOMM.allgather(smess) self.assertEqual(rmess, [smess] * rsize) def testAlltoall(self): if self.INTRACOMM == MPI.COMM_NULL: return if self.INTERCOMM == MPI.COMM_NULL: return rank = self.INTERCOMM.Get_rank() size = self.INTERCOMM.Get_size() rsize = self.INTERCOMM.Get_remote_size() for smess in messages + [messages]: rmess = self.INTERCOMM.alltoall([smess] * rsize) self.assertEqual(rmess, 
[smess] * rsize) def testReduce(self): if self.INTRACOMM == MPI.COMM_NULL: return if self.INTERCOMM == MPI.COMM_NULL: return rank = self.INTERCOMM.Get_rank() size = self.INTERCOMM.Get_size() rsize = self.INTERCOMM.Get_remote_size() for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): for color in [0, 1]: if self.COLOR == color: for root in range(size): if root == rank: value = self.INTERCOMM.reduce(None, op=op, root=MPI.ROOT) if op == MPI.SUM: self.assertEqual(value, cumsum(range(rsize))) elif op == MPI.PROD: self.assertEqual(value, cumprod(range(rsize))) elif op == MPI.MAX: self.assertEqual(value, rsize-1) elif op == MPI.MIN: self.assertEqual(value, 0) else: value = self.INTERCOMM.reduce(None, op=op, root=MPI.PROC_NULL) self.assertEqual(value, None) else: for root in range(rsize): value = self.INTERCOMM.reduce(rank, op=op, root=root) self.assertEqual(value, None) def testAllreduce(self): if self.INTRACOMM == MPI.COMM_NULL: return if self.INTERCOMM == MPI.COMM_NULL: return rank = self.INTERCOMM.Get_rank() size = self.INTERCOMM.Get_size() rsize = self.INTERCOMM.Get_remote_size() for op in (MPI.SUM, MPI.MAX, MPI.MIN, MPI.PROD): value = self.INTERCOMM.allreduce(rank, None, op) if op == MPI.SUM: self.assertEqual(value, cumsum(range(rsize))) elif op == MPI.PROD: self.assertEqual(value, cumprod(range(rsize))) elif op == MPI.MAX: self.assertEqual(value, rsize-1) elif op == MPI.MIN: self.assertEqual(value, 0) class TestCCOObjInter(BaseTestCCOObjInter, unittest.TestCase): BASECOMM = MPI.COMM_WORLD class TestCCOObjInterDup(TestCCOObjInter): def setUp(self): self.BASECOMM = self.BASECOMM.Dup() super(TestCCOObjInterDup, self).setUp() def tearDown(self): self.BASECOMM.Free() del self.BASECOMM super(TestCCOObjInterDup, self).tearDown() class TestCCOObjInterDupDup(TestCCOObjInterDup): BASECOMM = MPI.COMM_WORLD INTERCOMM_ORIG = MPI.COMM_NULL def setUp(self): super(TestCCOObjInterDupDup, self).setUp() if self.INTERCOMM == MPI.COMM_NULL: return self.INTERCOMM_ORIG = self.INTERCOMM self.INTERCOMM = self.INTERCOMM.Dup() def tearDown(self): super(TestCCOObjInterDupDup, self).tearDown() if self.INTERCOMM_ORIG == MPI.COMM_NULL: return self.INTERCOMM_ORIG.Free() _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 3, 0): del TestCCOObjInter del TestCCOObjInterDup del TestCCOObjInterDupDup elif _version < (1, 4, 0): if MPI.Query_thread() > MPI.THREAD_SINGLE: del TestCCOObjInter del TestCCOObjInterDup del TestCCOObjInterDupDup elif _name == "MPICH2" or _name == "Microsoft MPI": if _version <= (1, 0, 7): def _SKIPPED(*args, **kwargs): pass TestCCOObjInterDupDup.testBarrier = _SKIPPED TestCCOObjInterDupDup.testAllgather = _SKIPPED TestCCOObjInterDupDup.testAllreduce = _SKIPPED elif _name == "DeinoMPI": def _SKIPPED(*args, **kwargs): pass TestCCOObjInterDupDup.testBarrier = _SKIPPED TestCCOObjInterDupDup.testAllgather = _SKIPPED TestCCOObjInterDupDup.testAllreduce = _SKIPPED elif _name == "MPICH1": del BaseTestCCOObjInter del TestCCOObjInter del TestCCOObjInterDup del TestCCOObjInterDupDup elif MPI.ROOT == MPI.PROC_NULL: del BaseTestCCOObjInter del TestCCOObjInter del TestCCOObjInterDup del TestCCOObjInterDupDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_cco_vec.py0000644000000000000000000003310212211706251017302 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import arrayimpl def maxvalue(a): try: typecode = a.typecode except AttributeError: typecode = a.dtype.char if typecode == ('f'): return 1e30 elif typecode == ('d'): return 
1e300 else: return 2 ** (a.itemsize * 7) - 1 class BaseTestCCOVec(object): COMM = MPI.COMM_NULL def testGatherv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, count) rbuf = array( -1, typecode, size*size) counts = [count] * size displs = range(0, size*size, size) recvbuf = rbuf.as_mpi_v(counts, displs) if rank != root: recvbuf=None self.COMM.Barrier() self.COMM.Gatherv(sbuf.as_mpi(), recvbuf, root) self.COMM.Barrier() if recvbuf is not None: for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testGatherv2(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, size) rbuf = array( -1, typecode, size*size) sendbuf = sbuf.as_mpi_c(count) recvbuf = rbuf.as_mpi_v(count, size) if rank != root: recvbuf=None self.COMM.Barrier() self.COMM.Gatherv(sendbuf, recvbuf, root) self.COMM.Barrier() if recvbuf is not None: for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testGatherv3(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size+1): # sbuf = array(root, typecode, count).as_raw() rbuf = array( -1, typecode, count*size).as_raw() sendbuf = sbuf recvbuf = [rbuf, count] if rank != root: recvbuf=None self.COMM.Barrier() self.COMM.Gatherv(sendbuf, recvbuf, root) self.COMM.Barrier() if recvbuf is not None: for v in rbuf: self.assertEqual(v, root) # sbuf = array(root, typecode, count).as_raw() if rank == root: rbuf = array( -1, typecode, count*size).as_raw() else: rbuf = None self.COMM.Gatherv(sbuf, rbuf, root) self.COMM.Barrier() if rank == root: for v in rbuf: self.assertEqual(v, root) def testScatterv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): # sbuf = array(root, typecode, size*size) rbuf = array( -1, typecode, count) counts = [count] * size displs = range(0, size*size, size) sendbuf = sbuf.as_mpi_v(counts, displs) if rank != root: sendbuf = None self.COMM.Scatterv(sendbuf, rbuf.as_mpi(), root) for vr in rbuf: self.assertEqual(vr, root) def testScatterv2(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, size*size) rbuf = array( -1, typecode, size) sendbuf = sbuf.as_mpi_v(count, size) recvbuf = rbuf.as_mpi_c(count) if rank != root: sendbuf = None self.COMM.Scatterv(sendbuf, recvbuf, root) a, b = rbuf[:count], rbuf[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testScatterv3(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size+1): # sbuf = array(root, typecode, count*size).as_raw() rbuf = array( -1, typecode, count).as_raw() sendbuf = [sbuf, count] recvbuf = rbuf if rank 
!= root: sendbuf = None self.COMM.Scatterv(sendbuf, recvbuf, root) for v in rbuf: self.assertEqual(v, root) # if rank == root: sbuf = array(root, typecode, count*size).as_raw() else: sbuf = None rbuf = array( -1, typecode, count).as_raw() self.COMM.Scatterv(sbuf, rbuf, root) for v in rbuf: self.assertEqual(v, root) def testAllgatherv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, count) rbuf = array( -1, typecode, size*size) counts = [count] * size displs = range(0, size*size, size) sendbuf = sbuf.as_mpi() recvbuf = rbuf.as_mpi_v(counts, displs) self.COMM.Allgatherv(sendbuf, recvbuf) for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testAllgatherv2(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, size) rbuf = array( -1, typecode, size*size) sendbuf = sbuf.as_mpi_c(count) recvbuf = rbuf.as_mpi_v(count, size) self.COMM.Allgatherv(sendbuf, recvbuf) for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testAllgatherv3(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size+1): # sbuf = array(root, typecode, count).as_raw() rbuf = array( -1, typecode, count*size).as_raw() sendbuf = sbuf recvbuf = [rbuf, count] self.COMM.Allgatherv(sendbuf, recvbuf) for v in rbuf: self.assertEqual(v, root) # sbuf = array(root, typecode, count).as_raw() rbuf = array( -1, typecode, count*size).as_raw() self.COMM.Allgatherv(sbuf, rbuf) for v in rbuf: self.assertEqual(v, root) def testAlltoallv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, size*size) rbuf = array( -1, typecode, size*size) counts = [count] * size displs = range(0, size*size, size) sendbuf = sbuf.as_mpi_v(counts, displs) recvbuf = rbuf.as_mpi_v(counts, displs) self.COMM.Alltoallv(sendbuf, recvbuf) for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testAlltoallv2(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size): sbuf = array(root, typecode, size*size) rbuf = array( -1, typecode, size*size) sendbuf = sbuf.as_mpi_v(count, size) recvbuf = rbuf.as_mpi_v(count, size) self.COMM.Alltoallv(sendbuf, recvbuf) for i in range(size): row = rbuf[i*size:(i+1)*size] a, b = row[:count], row[count:] for va in a: self.assertEqual(va, root) for vb in b: self.assertEqual(vb, -1) def testAlltoallv3(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for root in range(size): for count in range(size+1): # sbuf = array(root, typecode, count*size).as_raw() rbuf = array( -1, typecode, count*size).as_raw() sendbuf = 
[sbuf, count] recvbuf = [rbuf, count] self.COMM.Alltoallv(sendbuf, recvbuf) for v in rbuf: self.assertEqual(v, root) # sbuf = array(root, typecode, count*size).as_raw() rbuf = array( -1, typecode, count*size).as_raw() self.COMM.Alltoallv(sbuf, rbuf) for v in rbuf: self.assertEqual(v, root) class TestCCOVecSelf(BaseTestCCOVec, unittest.TestCase): COMM = MPI.COMM_SELF class TestCCOVecWorld(BaseTestCCOVec, unittest.TestCase): COMM = MPI.COMM_WORLD class TestCCOVecSelfDup(BaseTestCCOVec, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestCCOVecWorldDup(BaseTestCCOVec, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 4, 0): if MPI.Query_thread() > MPI.THREAD_SINGLE: del TestCCOVecWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_comm.py0000644000000000000000000000674212211706251016646 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class TestCommNull(unittest.TestCase): def testContructor(self): comm = MPI.Comm() self.assertFalse(comm is MPI.COMM_NULL) self.assertEqual(comm, MPI.COMM_NULL) def testContructorIntra(self): comm_null = MPI.Intracomm() self.assertFalse(comm_null is MPI.COMM_NULL) self.assertEqual(comm_null, MPI.COMM_NULL) def testContructorInter(self): comm_null = MPI.Intercomm() self.assertFalse(comm_null is MPI.COMM_NULL) self.assertEqual(comm_null, MPI.COMM_NULL) class BaseTestComm(object): def testPyProps(self): comm = self.COMM self.assertEqual(comm.Get_size(), comm.size) self.assertEqual(comm.Get_rank(), comm.rank) self.assertEqual(comm.Is_intra(), comm.is_intra) self.assertEqual(comm.Is_inter(), comm.is_inter) self.assertEqual(comm.Get_topology(), comm.topology) def testGroup(self): comm = self.COMM group = self.COMM.Get_group() self.assertEqual(comm.Get_size(), group.Get_size()) self.assertEqual(comm.Get_rank(), group.Get_rank()) group.Free() self.assertEqual(group, MPI.GROUP_NULL) def testCloneFree(self): comm = self.COMM.Clone() comm.Free() self.assertEqual(comm, MPI.COMM_NULL) def testCompare(self): results = (MPI.IDENT, MPI.CONGRUENT, MPI.SIMILAR, MPI.UNEQUAL) ccmp = MPI.Comm.Compare(self.COMM, MPI.COMM_WORLD) self.assertTrue(ccmp in results) ccmp = MPI.Comm.Compare(self.COMM, self.COMM) self.assertEqual(ccmp, MPI.IDENT) comm = self.COMM.Dup() ccmp = MPI.Comm.Compare(self.COMM, comm) comm.Free() self.assertEqual(ccmp, MPI.CONGRUENT) def testIsInter(self): is_inter = self.COMM.Is_inter() self.assertTrue(type(is_inter) is bool) def testGetSetName(self): try: name = self.COMM.Get_name() except NotImplementedError: return self.COMM.Set_name('comm') self.assertEqual(self.COMM.Get_name(), 'comm') self.COMM.Set_name(name) self.assertEqual(self.COMM.Get_name(), name) def testGetParent(self): try: parent = MPI.Comm.Get_parent() except NotImplementedError: return class TestCommSelf(BaseTestComm, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF def testSize(self): size = self.COMM.Get_size() self.assertEqual(size, 1) def testRank(self): rank = self.COMM.Get_rank() self.assertEqual(rank, 0) class TestCommWorld(BaseTestComm, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_WORLD def testSize(self): size = self.COMM.Get_size() self.assertTrue(size >= 1) def testRank(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() self.assertTrue(rank >= 0 and rank < size) class 
TestCommSelfDup(TestCommSelf): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestCommWorldDup(TestCommWorld): def setUp(self): self.COMM = MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 4, 0): if MPI.Query_thread() > MPI.THREAD_SINGLE: del TestCommWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_datatype.py0000644000000000000000000002522712211706251017525 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest datatypes_c = [ MPI.CHAR, MPI.WCHAR, MPI.SIGNED_CHAR, MPI.SHORT, MPI.INT, MPI.LONG, MPI.UNSIGNED_CHAR, MPI.UNSIGNED_SHORT, MPI.UNSIGNED, MPI.UNSIGNED_LONG, MPI.LONG_LONG, MPI.UNSIGNED_LONG_LONG, MPI.FLOAT, MPI.DOUBLE, MPI.LONG_DOUBLE, ] datatypes_c99 = [ MPI.C_BOOL, MPI.INT8_T, MPI.INT16_T, MPI.INT32_T, MPI.INT64_T, MPI.UINT8_T, MPI.UINT16_T, MPI.UINT32_T, MPI.UINT64_T, MPI.C_COMPLEX, MPI.C_FLOAT_COMPLEX, MPI.C_DOUBLE_COMPLEX, MPI.C_LONG_DOUBLE_COMPLEX, ] datatypes_f = [ MPI.CHARACTER, MPI.LOGICAL, MPI.INTEGER, MPI.REAL, MPI.DOUBLE_PRECISION, MPI.COMPLEX, MPI.DOUBLE_COMPLEX, ] datatypes_f90 = [ MPI.LOGICAL1, MPI.LOGICAL2, MPI.LOGICAL4, MPI.LOGICAL8, MPI.INTEGER1, MPI.INTEGER2, MPI.INTEGER4, MPI.INTEGER8, MPI.INTEGER16, MPI.REAL2, MPI.REAL4, MPI.REAL8, MPI.REAL16, MPI.COMPLEX4, MPI.COMPLEX8, MPI.COMPLEX16, MPI.COMPLEX32, ] datatypes_mpi = [ MPI.PACKED, MPI.BYTE, MPI.AINT, MPI.OFFSET, ] datatypes = [] datatypes += datatypes_c datatypes += datatypes_c99 datatypes += datatypes_f datatypes += datatypes_f90 datatypes += datatypes_mpi datatypes = [t for t in datatypes if t != MPI.DATATYPE_NULL] combiner_map = {} class TestDatatype(unittest.TestCase): def testGetExtent(self): for dtype in datatypes: lb, ext = dtype.Get_extent() def testGetSize(self): for dtype in datatypes: size = dtype.Get_size() def testGetTrueExtent(self): for dtype in datatypes: try: lb, ext = dtype.Get_true_extent() except NotImplementedError: return def testGetEnvelope(self): for dtype in datatypes: try: envelope = dtype.Get_envelope() except NotImplementedError: return if ('LAM/MPI' == MPI.get_vendor()[0] and "COMPLEX" in dtype.name): continue ni, na, nd, combiner = envelope self.assertEqual(combiner, MPI.COMBINER_NAMED) self.assertEqual(ni, 0) self.assertEqual(na, 0) self.assertEqual(nd, 0) def _test_derived_contents(self, oldtype, factory, newtype): try: envelope = newtype.Get_envelope() contents = newtype.Get_contents() except NotImplementedError: return ni, na, nd, combiner = envelope i, a, d = contents self.assertEqual(ni, len(i)) self.assertEqual(na, len(a)) self.assertEqual(nd, len(d)) self.assertTrue(combiner != MPI.COMBINER_NAMED) name = factory.__name__ NAME = name.replace('Create_', '').upper() symbol = getattr(MPI, 'COMBINER_' + NAME) if symbol == MPI.UNDEFINED: return if combiner_map is None: return symbol = combiner_map.get(symbol, symbol) if symbol is None: return self.assertEqual(symbol, combiner) decoded = newtype.decode() oldtype, constructor, kargs = decoded constructor = 'Create_' + constructor.lower() if combiner in [MPI.COMBINER_CONTIGUOUS]: # Cython could optimize one-arg methods newtype2 = getattr(oldtype, constructor)(kargs['count']) else: newtype2 = getattr(oldtype, constructor)(**kargs) decoded2 = newtype2.decode() self.assertEqual(decoded[1], decoded2[1]) self.assertEqual(decoded[2], decoded2[2]) newtype2.Free() def _test_derived(self, oldtype, factory, *args): try: if isinstance(oldtype, 
MPI.Datatype): newtype = factory(oldtype, *args) else: newtype = factory(*args) except NotImplementedError: return self._test_derived_contents(oldtype, factory, newtype) newtype.Commit() self._test_derived_contents(oldtype, factory, newtype) newtype.Free() def testDup(self): for dtype in datatypes: factory = MPI.Datatype.Dup self._test_derived(dtype, factory) def testCreateContiguous(self): for dtype in datatypes: for count in range(5): factory = MPI.Datatype.Create_contiguous args = (count, ) self._test_derived(dtype, factory, *args) def testCreateVector(self): for dtype in datatypes: for count in range(5): for blocklength in range(5): for stride in range(5): factory = MPI.Datatype.Create_vector args = (count, blocklength, stride) self._test_derived(dtype, factory, *args) def testCreateHvector(self): for dtype in datatypes: for count in range(5): for blocklength in range(5): for stride in range(5): factory = MPI.Datatype.Create_hvector args = (count, blocklength, stride) self._test_derived(dtype, factory, *args) def testCreateIndexed(self): for dtype in datatypes: for block in range(5): blocklengths = list(range(block, block+5)) displacements = [0] for b in blocklengths[:-1]: stride = displacements[-1] + b * dtype.extent + 1 displacements.append(stride) factory = MPI.Datatype.Create_indexed args = (blocklengths, displacements) self._test_derived(dtype, factory, *args) #args = (block, displacements) XXX #self._test_derived(dtype, factory, *args) XXX def testCreateIndexedBlock(self): for dtype in datatypes: for block in range(5): blocklengths = list(range(block, block+5)) displacements = [0] for b in blocklengths[:-1]: stride = displacements[-1] + b * dtype.extent + 1 displacements.append(stride) factory = MPI.Datatype.Create_indexed_block args = (block, displacements) self._test_derived(dtype, factory, *args) def testCreateHindexed(self): for dtype in datatypes: for block in range(5): blocklengths = list(range(block, block+5)) displacements = [0] for b in blocklengths[:-1]: stride = displacements[-1] + b * dtype.extent + 1 displacements.append(stride) factory = MPI.Datatype.Create_hindexed args = (blocklengths, displacements) self._test_derived(dtype, factory, *args) #args = (block, displacements) XXX #self._test_derived(dtype, factory, *args) XXX def testCreateHindexedBlock(self): for dtype in datatypes: for block in range(5): displacements = [0] for i in range(5): stride = displacements[-1] + block * dtype.extent + 1 displacements.append(stride) factory = MPI.Datatype.Create_hindexed_block args = (block, displacements) self._test_derived(dtype, factory, *args) def testCreateStruct(self): for dtype1 in datatypes: for dtype2 in datatypes: dtypes = (dtype1, dtype2) blocklengths = (2, 3) displacements = [0] for dtype in dtypes[:-1]: stride = displacements[-1] + dtype.extent displacements.append(stride) factory = MPI.Datatype.Create_struct args = (blocklengths, displacements, dtypes) self._test_derived(dtypes, factory, *args) def testCreateSubarray(self): for dtype in datatypes: for ndim in range(1, 5): for size in range(1, 5): for subsize in range(1, size): for start in range(size-subsize): for order in [MPI.ORDER_C, MPI.ORDER_FORTRAN, MPI.ORDER_F, ]: sizes = [size] * ndim subsizes = [subsize] * ndim starts = [start] * ndim factory = MPI.Datatype.Create_subarray args = sizes, subsizes, starts, order self._test_derived(dtype, factory, *args) def testResized(self): for dtype in datatypes: for lb in range(-10, 10): for extent in range(1, 10): factory = MPI.Datatype.Create_resized args = lb, 
extent self._test_derived(dtype, factory, *args) def testGetSetName(self): for dtype in datatypes: try: name = dtype.Get_name() self.assertTrue(name) dtype.Set_name(name) self.assertEqual(name, dtype.Get_name()) except NotImplementedError: return def testCommit(self): for dtype in datatypes: dtype.Commit() class TestGetAddress(unittest.TestCase): def testGetAddress(self): try: from array import array location = array('i', range(10)) bufptr, _ = location.buffer_info() addr = MPI.Get_address(location) self.assertEqual(addr, bufptr) except ImportError: pass try: from numpy import asarray location = asarray(range(10), dtype='i') bufptr, _ = location.__array_interface__['data'] addr = MPI.Get_address(location) self.assertEqual(addr, bufptr) except ImportError: pass import sys _name, _version = MPI.get_vendor() if _name == 'LAM/MPI': combiner_map[MPI.COMBINER_INDEXED_BLOCK] = MPI.COMBINER_INDEXED elif _name == 'MPICH1': combiner_map[MPI.COMBINER_VECTOR] = None combiner_map[MPI.COMBINER_HVECTOR] = None combiner_map[MPI.COMBINER_INDEXED] = None for t in datatypes_f: datatypes.remove(t) elif MPI.Get_version() < (2, 0): combiner_map = None if _name == 'Open MPI': if _version <= (1, 5, 1): for t in datatypes_f90[-4:]: if t != MPI.DATATYPE_NULL: datatypes.remove(t) if 'win' in sys.platform: del TestDatatype.testCommit del TestDatatype.testDup del TestDatatype.testResized if sys.version_info[0] >=3: del TestGetAddress if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_doc.py0000644000000000000000000000313212211706251016446 0ustar 00000000000000import sys, types from mpi4py import MPI import mpiunittest as unittest ModuleType = type(MPI) ClassType = type(MPI.Comm) FunctionType = type(MPI.Init) MethodDescrType = type(MPI.Comm.Get_rank) GetSetDescrType = type(MPI.Comm.rank) def getdocstr(mc, docstrings, namespace=None): name = getattr(mc, '__name__', None) if name is None: return if name in ('__builtin__', 'builtins'): return if name.startswith('_'): return if namespace: name = '%s.%s' % (namespace, name) if type(mc) in (ModuleType, ClassType): doc = getattr(mc, '__doc__', None) docstrings[name] = doc for k, v in vars(mc).items(): getdocstr(v, docstrings, name) elif type(mc) in (FunctionType, MethodDescrType, GetSetDescrType): doc = getattr(mc, '__doc__', None) docstrings[name] = doc class TestDoc(unittest.TestCase): def testDoc(self): missing = False docs = { } getdocstr(MPI, docs) for k in docs: if not k.startswith('_'): doc = docs[k] if doc is None: print ("'%s': missing docstring" % k) missing = True else: doc = doc.strip() if not doc: print ("'%s': empty docstring" % k) missing = True if 'mpi4py.MPI' in doc: print ("'%s': bad format docstring" % k) self.assertFalse(missing) if hasattr(sys, 'pypy_version_info'): del TestDoc.testDoc if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_environ.py0000644000000000000000000000644512211706251017373 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class TestEnviron(unittest.TestCase): def testIsInitialized(self): flag = MPI.Is_initialized() self.assertTrue(type(flag) is bool) self.assertTrue(flag) def testIsFinalized(self): flag = MPI.Is_finalized() self.assertTrue(type(flag) is bool) self.assertFalse(flag) def testGetVersion(self): version = MPI.Get_version() self.assertEqual(len(version), 2) major, minor = version self.assertTrue(type(major) is int) self.assertTrue(major >= 1) self.assertTrue(type(minor) is int) self.assertTrue(minor >= 0) def testGetLibraryVersion(self): 
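# [Editor's sketch] test_datatype.py above constructs derived datatypes and checks
# Get_envelope()/Get_contents()/decode(). A hedged, editor-added illustration of
# the same API follows; decode() may raise NotImplementedError on MPI-1 builds.
from mpi4py import MPI

newtype = MPI.INT.Create_contiguous(4)        # four contiguous C ints
newtype.Commit()                              # commit before use in communication
lb, extent = newtype.Get_extent()             # lower bound and extent in bytes
assert newtype.Get_size() == 4 * MPI.INT.Get_size()
basetype, combiner, params = newtype.decode() # recover how the type was built
newtype.Free()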
version = MPI.Get_library_version() self.assertTrue(isinstance(version, str)) self.assertTrue(len(version) > 0) def testGetProcessorName(self): procname = MPI.Get_processor_name() self.assertTrue(isinstance(procname, str)) def testWTime(self): time1 = MPI.Wtime() self.assertTrue(type(time1) is float) time2 = MPI.Wtime() self.assertTrue(type(time2) is float) self.assertTrue(time2 >= time1) def testWTick(self): tick = MPI.Wtick() self.assertTrue(type(tick) is float) self.assertTrue(tick > 0.0) class TestWorldAttrs(unittest.TestCase): def testWTimeIsGlobal(self): wtg = MPI.COMM_WORLD.Get_attr(MPI.WTIME_IS_GLOBAL) if wtg is not None: self.assertTrue(wtg in (True, False)) def testWTimeIsGlobal(self): wtg = MPI.COMM_WORLD.Get_attr(MPI.WTIME_IS_GLOBAL) if wtg is not None: self.assertTrue(wtg in (True, False)) def testHostPorcessor(self): size = MPI.COMM_WORLD.Get_size() vals = list(range(size)) + [MPI.PROC_NULL] hostproc = MPI.COMM_WORLD.Get_attr(MPI.HOST) if hostproc is not None: self.assertTrue(hostproc in vals) def testIOProcessor(self): size = MPI.COMM_WORLD.Get_size() vals = list(range(size)) + [MPI.UNDEFINED, MPI.ANY_SOURCE, MPI.PROC_NULL] ioproc = MPI.COMM_WORLD.Get_attr(MPI.IO) if ioproc is not None: self.assertTrue(ioproc in vals) def testAppNum(self): if MPI.APPNUM == MPI.KEYVAL_INVALID: return appnum = MPI.COMM_WORLD.Get_attr(MPI.APPNUM) if appnum is not None: self.assertTrue(appnum == MPI.UNDEFINED or appnum >= 0) def testUniverseSize(self): if MPI.UNIVERSE_SIZE == MPI.KEYVAL_INVALID: return univsz = MPI.COMM_WORLD.Get_attr(MPI.UNIVERSE_SIZE) if univsz is not None: self.assertTrue(univsz == MPI.UNDEFINED or univsz >= 0) def testLastUsedCode(self): if MPI.LASTUSEDCODE == MPI.KEYVAL_INVALID: return lastuc = MPI.COMM_WORLD.Get_attr(MPI.LASTUSEDCODE) self.assertTrue(lastuc >= 0) _name, _version = MPI.get_vendor() if (_name in ('MPICH', 'MPICH2') and _version > (1, 2)): # Up to mpich2-1.3.1 when running under Hydra process manager, # getting the universe size fails for the singleton init case if MPI.COMM_WORLD.Get_attr(MPI.APPNUM) is None: del TestWorldAttrs.testUniverseSize if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_errhandler.py0000644000000000000000000000250112211706251020026 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class TestErrhandler(unittest.TestCase): def testPredefined(self): self.assertFalse(MPI.ERRHANDLER_NULL) self.assertTrue(MPI.ERRORS_ARE_FATAL) self.assertTrue(MPI.ERRORS_RETURN) def testCommGetSetErrhandler(self): for COMM in [MPI.COMM_SELF, MPI.COMM_WORLD]: for ERRHANDLER in [MPI.ERRORS_ARE_FATAL, MPI.ERRORS_RETURN, MPI.ERRORS_ARE_FATAL, MPI.ERRORS_RETURN, ]: errhdl_1 = COMM.Get_errhandler() self.assertNotEqual(errhdl_1, MPI.ERRHANDLER_NULL) COMM.Set_errhandler(ERRHANDLER) errhdl_2 = COMM.Get_errhandler() self.assertEqual(errhdl_2, ERRHANDLER) errhdl_2.Free() self.assertEqual(errhdl_2, MPI.ERRHANDLER_NULL) COMM.Set_errhandler(errhdl_1) errhdl_1.Free() self.assertEqual(errhdl_1, MPI.ERRHANDLER_NULL) def testGetErrhandler(self): errhdls = [] for i in range(100): e = MPI.COMM_WORLD.Get_errhandler() errhdls.append(e) for e in errhdls: e.Free() for e in errhdls: self.assertEqual(e, MPI.ERRHANDLER_NULL) if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_errorcode.py0000644000000000000000000000277312211706251017677 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class TestErrorCode(unittest.TestCase): errorclasses = [item[1] for item in 
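# [Editor's sketch] test_errhandler.py above swaps predefined error handlers on
# communicators. A minimal editor-added illustration of the same pattern, useful
# when failed MPI calls should surface as catchable MPI.Exception errors:
from mpi4py import MPI

comm = MPI.COMM_WORLD
saved = comm.Get_errhandler()            # keep the current handler to restore later
comm.Set_errhandler(MPI.ERRORS_RETURN)   # report errors instead of aborting the run
# ... MPI calls whose failures should be handled from Python go here ...
comm.Set_errhandler(saved)
saved.Free()                             # handles from Get_errhandler() are freed by the caller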
vars(MPI).items() if item[0].startswith('ERR_')] errorclasses.insert(0, MPI.SUCCESS) errorclasses.remove(MPI.ERR_LASTCODE) def testGetErrorClass(self): self.assertEqual(self.errorclasses[0], 0) for ierr in self.errorclasses: errcls = MPI.Get_error_class(ierr) self.assertTrue(errcls >= MPI.SUCCESS) self.assertTrue(errcls < MPI.ERR_LASTCODE) self.assertEqual(errcls, ierr) def testGetErrorStrings(self): for ierr in self.errorclasses: errstr = MPI.Get_error_string(ierr) def testException(self): from sys import version_info as py_version for ierr in self.errorclasses: errstr = MPI.Get_error_string(ierr) errcls = MPI.Get_error_class(ierr) errexc = MPI.Exception(ierr) if py_version >= (2,5): self.assertEqual(errexc.error_code, ierr) self.assertEqual(errexc.error_class, ierr) self.assertEqual(errexc.error_string, errstr) self.assertEqual(str(errexc), errstr) self.assertEqual(int(errexc), ierr) self.assertTrue(errexc == ierr) self.assertTrue(errexc == errexc) self.assertFalse(errexc != ierr) self.assertFalse(errexc != errexc) if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_exceptions.py0000644000000000000000000003050612211706251020067 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest # -------------------------------------------------------------------- class TestExcDatatypeNull(unittest.TestCase): def testDup(self): self.assertRaisesMPI(MPI.ERR_TYPE, MPI.DATATYPE_NULL.Dup) def testCommit(self): self.assertRaisesMPI(MPI.ERR_TYPE, MPI.DATATYPE_NULL.Commit) def testFree(self): self.assertRaisesMPI(MPI.ERR_TYPE, MPI.DATATYPE_NULL.Free) class TestExcDatatype(unittest.TestCase): DATATYPES = (MPI.BYTE, MPI.PACKED, MPI.CHAR, MPI.WCHAR, MPI.SIGNED_CHAR, MPI.UNSIGNED_CHAR, MPI.SHORT, MPI.UNSIGNED_SHORT, MPI.INT, MPI.UNSIGNED, MPI.UNSIGNED_INT, MPI.LONG, MPI.UNSIGNED_LONG, MPI.LONG_LONG, MPI.UNSIGNED_LONG_LONG, MPI.FLOAT, MPI.DOUBLE, MPI.LONG_DOUBLE, MPI.SHORT_INT, MPI.TWOINT, MPI.INT_INT, MPI.LONG_INT, MPI.FLOAT_INT, MPI.DOUBLE_INT, MPI.LONG_DOUBLE_INT, MPI.UB, MPI.LB,) ERR_TYPE = MPI.ERR_TYPE def testFreePredefined(self): for dtype in self.DATATYPES: if dtype != MPI.DATATYPE_NULL: self.assertRaisesMPI(self.ERR_TYPE, dtype.Free) self.assertTrue(dtype != MPI.DATATYPE_NULL) def testKeyvalInvalid(self): for dtype in self.DATATYPES: if dtype != MPI.DATATYPE_NULL: try: self.assertRaisesMPI( [MPI.ERR_KEYVAL, MPI.ERR_OTHER], dtype.Get_attr, MPI.KEYVAL_INVALID) except NotImplementedError: return _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 4, 3): TestExcDatatype.DATATYPES = TestExcDatatype.DATATYPES[1:] TestExcDatatype.ERR_TYPE = MPI.ERR_INTERN # -------------------------------------------------------------------- class TestExcStatus(unittest.TestCase): def testGetCount(self): status = MPI.Status() self.assertRaisesMPI( MPI.ERR_TYPE, status.Get_count, MPI.DATATYPE_NULL) def testGetElements(self): status = MPI.Status() self.assertRaisesMPI( MPI.ERR_TYPE, status.Get_elements, MPI.DATATYPE_NULL) def testSetElements(self): status = MPI.Status() self.assertRaisesMPI( MPI.ERR_TYPE, status.Set_elements, MPI.DATATYPE_NULL, 0) # -------------------------------------------------------------------- class TestExcRequestNull(unittest.TestCase): def testFree(self): self.assertRaisesMPI(MPI.ERR_REQUEST, MPI.REQUEST_NULL.Free) def testCancel(self): self.assertRaisesMPI(MPI.ERR_REQUEST, MPI.REQUEST_NULL.Cancel) # -------------------------------------------------------------------- class TestExcOpNull(unittest.TestCase): def testFree(self): 
self.assertRaisesMPI([MPI.ERR_OP, MPI.ERR_ARG], MPI.OP_NULL.Free) class TestExcOp(unittest.TestCase): def testFreePredefined(self): for op in (MPI.MAX, MPI.MIN, MPI.SUM, MPI.PROD, MPI.LAND, MPI.BAND, MPI.LOR, MPI.BOR, MPI.LXOR, MPI.BXOR, MPI.MAXLOC, MPI.MINLOC): self.assertRaisesMPI([MPI.ERR_OP, MPI.ERR_ARG], op.Free) if MPI.REPLACE != MPI.OP_NULL: self.assertRaisesMPI([MPI.ERR_OP, MPI.ERR_ARG], op.Free) # -------------------------------------------------------------------- class TestExcInfoNull(unittest.TestCase): def testTruth(self): self.assertFalse(bool(MPI.INFO_NULL)) def testDup(self): self.assertRaisesMPI( [MPI.ERR_INFO, MPI.ERR_ARG], MPI.INFO_NULL.Dup) def testFree(self): self.assertRaisesMPI( [MPI.ERR_INFO, MPI.ERR_ARG], MPI.INFO_NULL.Free) def testGet(self): self.assertRaisesMPI( [MPI.ERR_INFO, MPI.ERR_ARG], MPI.INFO_NULL.Get, 'key') def testSet(self): self.assertRaisesMPI( [MPI.ERR_INFO, MPI.ERR_ARG], MPI.INFO_NULL.Set, 'key', 'value') def testDelete(self): self.assertRaisesMPI( [MPI.ERR_INFO, MPI.ERR_ARG], MPI.INFO_NULL.Delete, 'key') def testGetNKeys(self): self.assertRaisesMPI( [MPI.ERR_INFO, MPI.ERR_ARG], MPI.INFO_NULL.Get_nkeys) def testGetNthKey(self): self.assertRaisesMPI( [MPI.ERR_INFO, MPI.ERR_ARG], MPI.INFO_NULL.Get_nthkey, 0) class TestExcInfo(unittest.TestCase): def setUp(self): self.INFO = MPI.Info.Create() def tearDown(self): self.INFO.Free() self.INFO = None def testDelete(self): self.assertRaisesMPI( MPI.ERR_INFO_NOKEY, self.INFO.Delete, 'key') def testGetNthKey(self): self.assertRaisesMPI( [MPI.ERR_INFO_KEY, MPI.ERR_ARG], self.INFO.Get_nthkey, 0) try: MPI.Info.Create().Free() except NotImplementedError: del TestExcInfoNull, TestExcInfo else: if _name == 'Microsoft MPI': # ??? del TestExcInfoNull.testDup # -------------------------------------------------------------------- class TestExcGroupNull(unittest.TestCase): def testCompare(self): self.assertRaisesMPI( MPI.ERR_GROUP, MPI.Group.Compare, MPI.GROUP_NULL, MPI.GROUP_NULL) self.assertRaisesMPI( MPI.ERR_GROUP, MPI.Group.Compare, MPI.GROUP_NULL, MPI.GROUP_EMPTY) self.assertRaisesMPI( MPI.ERR_GROUP, MPI.Group.Compare, MPI.GROUP_EMPTY, MPI.GROUP_NULL) def testAccessors(self): for method in ('Get_size', 'Get_rank'): self.assertRaisesMPI( MPI.ERR_GROUP, getattr(MPI.GROUP_NULL, method)) class TestExcGroup(unittest.TestCase): def testFreeEmpty(self): self.assertRaisesMPI(MPI.ERR_GROUP, MPI.GROUP_EMPTY.Free) # -------------------------------------------------------------------- class TestExcCommNull(unittest.TestCase): def testCompare(self): self.assertRaisesMPI( MPI.ERR_COMM, MPI.Comm.Compare, MPI.COMM_NULL, MPI.COMM_NULL) self.assertRaisesMPI( MPI.ERR_COMM, MPI.Comm.Compare, MPI.COMM_SELF, MPI.COMM_NULL) self.assertRaisesMPI( MPI.ERR_COMM, MPI.Comm.Compare, MPI.COMM_WORLD, MPI.COMM_NULL) self.assertRaisesMPI( MPI.ERR_COMM, MPI.Comm.Compare, MPI.COMM_NULL, MPI.COMM_SELF) self.assertRaisesMPI( MPI.ERR_COMM, MPI.Comm.Compare, MPI.COMM_NULL, MPI.COMM_WORLD) def testAccessors(self): for method in ('Get_size', 'Get_rank', 'Is_inter', 'Is_intra', 'Get_group', 'Get_topology'): self.assertRaisesMPI(MPI.ERR_COMM, getattr(MPI.COMM_NULL, method)) def testFree(self): self.assertRaisesMPI(MPI.ERR_COMM, MPI.COMM_NULL.Free) def testDisconnect(self): try: self.assertRaisesMPI(MPI.ERR_COMM, MPI.COMM_NULL.Disconnect) except NotImplementedError: return def testGetAttr(self): self.assertRaisesMPI( MPI.ERR_COMM, MPI.COMM_NULL.Get_attr, MPI.TAG_UB) def testGetErrhandler(self): self.assertRaisesMPI( [MPI.ERR_COMM, MPI.ERR_ARG], 
MPI.COMM_NULL.Get_errhandler) def testSetErrhandler(self): self.assertRaisesMPI( MPI.ERR_COMM, MPI.COMM_NULL.Set_errhandler, MPI.ERRORS_RETURN) def testIntraNull(self): comm_null = MPI.Intracomm() self.assertRaisesMPI(MPI.ERR_COMM, comm_null.Dup) self.assertRaisesMPI(MPI.ERR_COMM, comm_null.Create, MPI.GROUP_EMPTY) self.assertRaisesMPI(MPI.ERR_COMM, comm_null.Split, color=0, key=0) def testInterNull(self): comm_null = MPI.Intercomm() self.assertRaisesMPI(MPI.ERR_COMM, comm_null.Get_remote_group) self.assertRaisesMPI(MPI.ERR_COMM, comm_null.Get_remote_size) self.assertRaisesMPI(MPI.ERR_COMM, comm_null.Dup) self.assertRaisesMPI(MPI.ERR_COMM, comm_null.Create, MPI.GROUP_EMPTY) self.assertRaisesMPI(MPI.ERR_COMM, comm_null.Split, color=0, key=0) self.assertRaisesMPI(MPI.ERR_COMM, comm_null.Merge, high=True) class TestExcComm(unittest.TestCase): def testFreeSelf(self): self.assertRaisesMPI( [MPI.ERR_COMM, MPI.ERR_ARG], MPI.COMM_SELF.Free) def testFreeWorld(self): self.assertRaisesMPI( [MPI.ERR_COMM, MPI.ERR_ARG], MPI.COMM_WORLD.Free) def testKeyvalInvalid(self): self.assertRaisesMPI( [MPI.ERR_KEYVAL, MPI.ERR_OTHER], MPI.COMM_SELF.Get_attr, MPI.KEYVAL_INVALID) _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 4, 2): del TestExcCommNull.testGetAttr if _version < (1, 4, 1): del TestExcCommNull.testGetErrhandler # -------------------------------------------------------------------- class TestExcWinNull(unittest.TestCase): def testFree(self): self.assertRaisesMPI( [MPI.ERR_WIN, MPI.ERR_ARG], MPI.WIN_NULL.Free) def testGetErrhandler(self): self.assertRaisesMPI( [MPI.ERR_WIN, MPI.ERR_ARG], MPI.WIN_NULL.Get_errhandler) def testSetErrhandler(self): self.assertRaisesMPI( [MPI.ERR_WIN, MPI.ERR_ARG], MPI.WIN_NULL.Set_errhandler, MPI.ERRORS_RETURN) def testCallErrhandler(self): self.assertRaisesMPI([MPI.ERR_WIN, MPI.ERR_ARG], MPI.WIN_NULL.Call_errhandler, 0) class TestExcWin(unittest.TestCase): def setUp(self): self.WIN = MPI.Win.Create(None, 1, MPI.INFO_NULL, MPI.COMM_SELF) def tearDown(self): self.WIN.Free() self.WIN = None def testKeyvalInvalid(self): self.assertRaisesMPI( [MPI.ERR_KEYVAL, MPI.ERR_OTHER], self.WIN.Get_attr, MPI.KEYVAL_INVALID) try: w = MPI.Win.Create(None, 1, MPI.INFO_NULL, MPI.COMM_SELF) w.Free() except NotImplementedError: del TestExcWinNull, TestExcWin # -------------------------------------------------------------------- class TestExcErrhandlerNull(unittest.TestCase): def testFree(self): self.assertRaisesMPI(MPI.ERR_ARG, MPI.ERRHANDLER_NULL.Free) def testCommSetErrhandler(self): self.assertRaisesMPI( MPI.ERR_ARG, MPI.COMM_SELF.Set_errhandler, MPI.ERRHANDLER_NULL) self.assertRaisesMPI( MPI.ERR_ARG, MPI.COMM_WORLD.Set_errhandler, MPI.ERRHANDLER_NULL) class TestExcErrhandler(unittest.TestCase): def testFreePredefined(self): #self.assertRaisesMPI(MPI.ERR_ARG, MPI.ERRORS_ARE_FATAL.Free) #self.assertRaisesMPI(MPI.ERR_ARG, MPI.ERRORS_RETURN.Free) pass # -------------------------------------------------------------------- import sys name, version = MPI.get_vendor() if name == 'MPICH2': try: MPI.DATATYPE_NULL.Get_size() except MPI.Exception: pass else: del TestExcDatatypeNull del TestExcDatatype del TestExcStatus del TestExcRequestNull del TestExcOpNull del TestExcOp del TestExcInfoNull del TestExcInfo del TestExcGroupNull del TestExcGroup del TestExcCommNull del TestExcComm del TestExcWinNull del TestExcWin del TestExcErrhandlerNull del TestExcErrhandler elif name == 'Open MPI': if 'win' in sys.platform: del TestExcDatatypeNull del TestExcDatatype del TestExcStatus 
del TestExcRequestNull del TestExcOpNull del TestExcOp del TestExcInfoNull del TestExcInfo del TestExcGroupNull del TestExcGroup del TestExcCommNull del TestExcComm del TestExcWinNull del TestExcWin del TestExcErrhandlerNull del TestExcErrhandler elif name == 'HP MPI': del TestExcDatatypeNull del TestExcDatatype del TestExcStatus del TestExcRequestNull del TestExcOpNull del TestExcOp del TestExcInfoNull del TestExcInfo del TestExcGroupNull del TestExcGroup del TestExcCommNull del TestExcComm del TestExcWinNull del TestExcWin del TestExcErrhandlerNull del TestExcErrhandler # -------------------------------------------------------------------- if __name__ == '__main__': unittest.main() # -------------------------------------------------------------------- mpi4py_1.3.1+hg20131106.orig/test/test_file.py0000644000000000000000000001514312211706251016625 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import os, tempfile class BaseTestFile(object): COMM = MPI.COMM_NULL FILE = MPI.FILE_NULL prefix = 'mpi4py' def setUp(self): fd, self.fname = tempfile.mkstemp(prefix=self.prefix) os.close(fd) self.amode = MPI.MODE_RDWR | MPI.MODE_CREATE #self.amode |= MPI.MODE_DELETE_ON_CLOSE try: self.FILE = MPI.File.Open(self.COMM, self.fname, self.amode, MPI.INFO_NULL) #self.fname=None except Exception: os.remove(self.fname) raise def tearDown(self): if self.FILE == MPI.FILE_NULL: return amode = self.FILE.amode self.FILE.Close() if not (amode & MPI.MODE_DELETE_ON_CLOSE): MPI.File.Delete(self.fname, MPI.INFO_NULL) def testPreallocate(self): ## MPICH2 1.0.x emits a nesting level warning ## when preallocating zero size. name, ver = MPI.get_vendor() if not (name == 'MPICH2' and ver < (1, 1, 0)): self.FILE.Preallocate(0) size = self.FILE.Get_size() self.assertEqual(size, 0) self.FILE.Preallocate(1) size = self.FILE.Get_size() self.assertEqual(size, 1) self.FILE.Preallocate(100) size = self.FILE.Get_size() self.assertEqual(size, 100) self.FILE.Preallocate(10) size = self.FILE.Get_size() self.assertEqual(size, 100) self.FILE.Preallocate(200) size = self.FILE.Get_size() self.assertEqual(size, 200) def testGetSetSize(self): size = self.FILE.Get_size() self.assertEqual(size, 0) size = self.FILE.size self.assertEqual(size, 0) self.FILE.Set_size(100) size = self.FILE.Get_size() self.assertEqual(size, 100) size = self.FILE.size self.assertEqual(size, 100) def testGetGroup(self): fgroup = self.FILE.Get_group() cgroup = self.COMM.Get_group() gcomp = MPI.Group.Compare(fgroup, cgroup) self.assertEqual(gcomp, MPI.IDENT) fgroup.Free() cgroup.Free() def testGetAmode(self): amode = self.FILE.Get_amode() self.assertEqual(self.amode, amode) self.assertEqual(self.FILE.amode, self.amode) def testGetSetInfo(self): info = self.FILE.Get_info() self.FILE.Set_info(info) info.Free() def testGetSetView(self): fsize = 100 * MPI.DOUBLE.size self.FILE.Set_size(fsize) displacements = range(100) datatypes = [MPI.SHORT, MPI.INT, MPI.LONG, MPI.FLOAT, MPI.DOUBLE] datareps = ['native'] #['native', 'internal', 'external32'] for disp in displacements: for dtype in datatypes: for datarep in datareps: etype, ftype = dtype, dtype self.FILE.Set_view(disp, etype, ftype, datarep, MPI.INFO_NULL) of, et, ft, dr = self.FILE.Get_view() self.assertEqual(disp, of) self.assertEqual(etype, et) self.assertEqual(ftype, ft) self.assertEqual(datarep, dr) #try: et.Free() #except MPI.Exception: pass #try: ft.Free() #except MPI.Exception: pass def testGetSetAtomicity(self): atom = self.FILE.Get_atomicity() self.assertFalse(atom) for atomicity in 
[True, False] * 4: self.FILE.Set_atomicity(atomicity) atom = self.FILE.Get_atomicity() self.assertEqual(atom, atomicity) def testSync(self): self.FILE.Sync() def testSeekGetPosition(self): offset = 0 self.FILE.Seek(offset, MPI.SEEK_END) self.FILE.Seek(offset, MPI.SEEK_CUR) self.FILE.Seek(offset, MPI.SEEK_SET) pos = self.FILE.Get_position() self.assertEqual(pos, offset) def testSeekGetPositionShared(self): offset = 0 self.FILE.Seek_shared(offset, MPI.SEEK_END) self.FILE.Seek_shared(offset, MPI.SEEK_CUR) self.FILE.Seek_shared(offset, MPI.SEEK_SET) pos = self.FILE.Get_position_shared() self.assertEqual(pos, offset) def testGetByteOffset(self): for offset in range(10): disp = self.FILE.Get_byte_offset(offset) self.assertEqual(disp, offset) def testGetTypeExtent(self): extent = self.FILE.Get_type_extent(MPI.BYTE) self.assertEqual(extent, 1) def testGetErrhandler(self): eh = self.FILE.Get_errhandler() self.assertEqual(eh, MPI.ERRORS_RETURN) eh.Free() class TestFileNull(unittest.TestCase): def setUp(self): self.eh_save = MPI.FILE_NULL.Get_errhandler() def tearDown(self): MPI.FILE_NULL.Set_errhandler(self.eh_save) self.eh_save.Free() def testGetSetErrhandler(self): eh = MPI.FILE_NULL.Get_errhandler() self.assertEqual(eh, MPI.ERRORS_RETURN) eh.Free() MPI.FILE_NULL.Set_errhandler(MPI.ERRORS_ARE_FATAL) eh = MPI.FILE_NULL.Get_errhandler() self.assertEqual(eh, MPI.ERRORS_ARE_FATAL) eh.Free() MPI.FILE_NULL.Set_errhandler(MPI.ERRORS_RETURN) eh = MPI.FILE_NULL.Get_errhandler() self.assertEqual(eh, MPI.ERRORS_RETURN) eh.Free() class TestFileSelf(BaseTestFile, unittest.TestCase): COMM = MPI.COMM_SELF prefix = BaseTestFile.prefix + ('-%d' % MPI.COMM_WORLD.Get_rank()) import sys _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version <= (1, 2, 8): if MPI.Query_thread() > MPI.THREAD_SINGLE: del BaseTestFile.testPreallocate del BaseTestFile.testGetSetInfo del BaseTestFile.testGetSetAtomicity del BaseTestFile.testSync del BaseTestFile.testGetAmode del BaseTestFile.testGetSetSize del BaseTestFile.testGetSetView del BaseTestFile.testGetByteOffset del BaseTestFile.testGetTypeExtent del BaseTestFile.testSeekGetPosition del BaseTestFile.testSeekGetPositionShared if sys.platform.startswith('win'): del TestFileNull del TestFileSelf try: dummy = BaseTestFile() dummy.COMM = MPI.COMM_SELF dummy.setUp() dummy.tearDown() del dummy except NotImplementedError: try: del TestFileNull except NameError: pass try: del TestFileSelf except NameError: pass if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_fortran.py0000644000000000000000000000400712211706251017356 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class BaseTestFortran(object): HANDLES = [] def testFortran(self): for handle1 in self.HANDLES: try: fint = handle1.py2f() except NotImplementedError: continue handle2 = type(handle1).f2py(fint) self.assertEqual(handle1, handle2) class TestFortranDatatype(BaseTestFortran, unittest.TestCase): HANDLES = [MPI.DATATYPE_NULL, MPI.CHAR, MPI.SHORT, MPI.INT, MPI.LONG, MPI.FLOAT, MPI.DOUBLE, ] class TestFortranOp(BaseTestFortran, unittest.TestCase): HANDLES = [MPI.OP_NULL, MPI.MAX, MPI.MIN, MPI.SUM, MPI.PROD, MPI.LAND, MPI.BAND, MPI.LOR, MPI.BOR, MPI.LXOR, MPI.BXOR, MPI.MAXLOC, MPI.MINLOC, ] class TestFortranRequest(BaseTestFortran, unittest.TestCase): HANDLES = [MPI.REQUEST_NULL, ] class TestFortranMessage(BaseTestFortran, unittest.TestCase): HANDLES = [MPI.MESSAGE_NULL, MPI.MESSAGE_NO_PROC, ] class TestFortranErrhandler(BaseTestFortran, unittest.TestCase): 
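# [Editor's sketch] test_file.py above drives MPI-IO (File.Open, sizes, views, seeks).
# A hedged, editor-added illustration; the filename is hypothetical and the code
# tolerates builds without MPI-IO support (File.Open raising NotImplementedError).
from array import array
from mpi4py import MPI

fname = "mpi4py-io-sketch.bin"           # hypothetical scratch file name
amode = MPI.MODE_WRONLY | MPI.MODE_CREATE | MPI.MODE_DELETE_ON_CLOSE
try:
    fh = MPI.File.Open(MPI.COMM_SELF, fname, amode, MPI.INFO_NULL)
except NotImplementedError:
    fh = None                            # build without MPI-IO support
if fh is not None:
    data = array('d', [1.0, 2.0, 3.0])
    fh.Write([data, MPI.DOUBLE])         # explicit (buffer, datatype) message
    fh.Close()                           # MODE_DELETE_ON_CLOSE removes the file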
HANDLES = [MPI.ERRHANDLER_NULL, MPI.ERRORS_RETURN, MPI.ERRORS_ARE_FATAL, ] class TestFortranInfo(BaseTestFortran, unittest.TestCase): HANDLES = [MPI.INFO_NULL, ] class TestFortranGroup(BaseTestFortran, unittest.TestCase): HANDLES = [MPI.GROUP_NULL, MPI.GROUP_EMPTY, ] class TestFortranComm(BaseTestFortran, unittest.TestCase): HANDLES = [MPI.COMM_NULL, MPI.COMM_SELF, MPI.COMM_WORLD, ] class TestFortranWin(BaseTestFortran, unittest.TestCase): HANDLES = [MPI.WIN_NULL, ] class TestFortranFile(BaseTestFortran, unittest.TestCase): HANDLES = [MPI.FILE_NULL, ] if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_grequest.py0000644000000000000000000000215312211706251017542 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class GReqCtx(object): source = 1 tag = 7 completed = False free_called = False def query(self, status): status.Set_source(self.source) status.Set_tag(self.tag) def free(self): self.free_called = True def cancel(self, completed): if completed is not self.completed: raise MPI.Exception(MPI.ERR_PENDING) class TestGrequest(unittest.TestCase): def testAll(self): ctx = GReqCtx() greq = MPI.Grequest.Start(ctx.query, ctx.free, ctx.cancel) self.assertFalse(greq.Test()) self.assertFalse(ctx.free_called) greq.Cancel() greq.Complete() ctx.completed = True greq.Cancel() status = MPI.Status() self.assertTrue(greq.Test(status)) self.assertEqual(status.Get_source(), ctx.source) self.assertEqual(status.Get_tag(), ctx.tag) greq.Wait() self.assertTrue(ctx.free_called) if MPI.Get_version() < (2, 0): del GReqCtx del TestGrequest if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_group.py0000644000000000000000000001371212211706251017042 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class BaseTestGroup(object): def testProperties(self): group = self.GROUP self.assertEqual(group.Get_size(), group.size) self.assertEqual(group.Get_rank(), group.rank) def testCompare(self): results = (MPI.IDENT, MPI.SIMILAR, MPI.UNEQUAL) group = MPI.COMM_WORLD.Get_group() gcmp = MPI.Group.Compare(self.GROUP, group) group.Free() self.assertTrue(gcmp in results) gcmp = MPI.Group.Compare(self.GROUP, self.GROUP) self.assertEqual(gcmp, MPI.IDENT) def testUnion(self): group = MPI.Group.Union(MPI.GROUP_EMPTY, self.GROUP) self.assertEqual(MPI.Group.Compare(group, self.GROUP), MPI.IDENT) group.Free() group = MPI.Group.Union(self.GROUP, MPI.GROUP_EMPTY) self.assertEqual(MPI.Group.Compare(group, self.GROUP), MPI.IDENT) group.Free() group = MPI.Group.Union(self.GROUP, self.GROUP) self.assertEqual(MPI.Group.Compare(group, self.GROUP), MPI.IDENT) group.Free() def testDifference(self): group = MPI.Group.Difference(MPI.GROUP_EMPTY, self.GROUP) self.assertEqual(MPI.Group.Compare(group, MPI.GROUP_EMPTY), MPI.IDENT) group.Free() group = MPI.Group.Difference(self.GROUP, MPI.GROUP_EMPTY) self.assertEqual(MPI.Group.Compare(group, self.GROUP), MPI.IDENT) group.Free() group = MPI.Group.Difference(self.GROUP, self.GROUP) self.assertEqual(MPI.Group.Compare(group, MPI.GROUP_EMPTY), MPI.IDENT) group.Free() def testIntersect(self): group = MPI.Group.Intersect(MPI.GROUP_EMPTY, self.GROUP) self.assertEqual(MPI.Group.Compare(group, MPI.GROUP_EMPTY), MPI.IDENT) group.Free() group = MPI.Group.Intersect(self.GROUP, MPI.GROUP_EMPTY) self.assertEqual(MPI.Group.Compare(group, MPI.GROUP_EMPTY), MPI.IDENT) group.Free() group = MPI.Group.Intersect(self.GROUP, self.GROUP) self.assertEqual(MPI.Group.Compare(group, self.GROUP), MPI.IDENT) group.Free() def 
testIncl(self): group = self.GROUP.Incl([]) self.assertEqual(MPI.Group.Compare(group, MPI.GROUP_EMPTY), MPI.IDENT) group.Free() def testExcl(self): group = self.GROUP.Excl([]) self.assertEqual(MPI.Group.Compare(group, self.GROUP), MPI.IDENT) group.Free() def testRangeIncl(self): if self.GROUP == MPI.GROUP_EMPTY: return group = self.GROUP.Range_incl([]) self.assertEqual(MPI.Group.Compare(group, MPI.GROUP_EMPTY), MPI.IDENT) group.Free() ranges = [ (0, self.GROUP.Get_size()-1, 1), ] group = self.GROUP.Range_incl(ranges) self.assertEqual(MPI.Group.Compare(group, self.GROUP), MPI.IDENT) group.Free() def testRangeExcl(self): if self.GROUP == MPI.GROUP_EMPTY: return group = self.GROUP.Range_excl([]) self.assertEqual(MPI.Group.Compare(group, self.GROUP), MPI.IDENT) group.Free() ranges = [ (0, self.GROUP.Get_size()-1, 1), ] group = self.GROUP.Range_excl(ranges) self.assertEqual(MPI.Group.Compare(group, MPI.GROUP_EMPTY), MPI.IDENT) group.Free() def testTranslRanks(self): group1 = self.GROUP group2 = self.GROUP ranks1 = list(range(group1.Get_size())) * 3 ranks2 = MPI.Group.Translate_ranks(group1, ranks1) ranks2 = MPI.Group.Translate_ranks(group1, ranks1, group2) self.assertEqual(list(ranks1), list(ranks2)) def testTranslRanksProcNull(self): if self.GROUP == MPI.GROUP_EMPTY: return group1 = self.GROUP group2 = self.GROUP ranks1 = [MPI.PROC_NULL] * 10 ranks2 = MPI.Group.Translate_ranks(group1, ranks1, group2) self.assertEqual(list(ranks1), list(ranks2)) def testTranslRanksGroupEmpty(self): if self.GROUP == MPI.GROUP_EMPTY: return group1 = self.GROUP group2 = MPI.GROUP_EMPTY ranks1 = list(range(group1.Get_size())) * 2 ranks2 = MPI.Group.Translate_ranks(group1, ranks1, group2) for rank in ranks2: self.assertEqual(rank, MPI.UNDEFINED) class TestGroupNull(unittest.TestCase): def testContructor(self): group = MPI.Group() self.assertFalse(group is MPI.GROUP_NULL) self.assertEqual(group, MPI.GROUP_NULL) def testNull(self): GROUP_NULL = MPI.GROUP_NULL group_null = MPI.Group() self.assertFalse(GROUP_NULL) self.assertFalse(group_null) self.assertEqual(group_null, GROUP_NULL) class TestGroupEmpty(BaseTestGroup, unittest.TestCase): def setUp(self): self.GROUP = MPI.GROUP_EMPTY def testEmpty(self): self.assertTrue(self.GROUP) def testSize(self): size = self.GROUP.Get_size() self.assertEqual(size, 0) def testRank(self): rank = self.GROUP.Get_rank() self.assertEqual(rank, MPI.UNDEFINED) class TestGroupSelf(BaseTestGroup, unittest.TestCase): def setUp(self): self.GROUP = MPI.COMM_SELF.Get_group() def tearDown(self): self.GROUP.Free() def testSize(self): size = self.GROUP.Get_size() self.assertEqual(size, 1) def testRank(self): rank = self.GROUP.Get_rank() self.assertEqual(rank, 0) class TestGroupWorld(BaseTestGroup, unittest.TestCase): def setUp(self): self.GROUP = MPI.COMM_WORLD.Get_group() def tearDown(self): self.GROUP.Free() def testSize(self): size = self.GROUP.Get_size() self.assertTrue(size >= 1) def testRank(self): size = self.GROUP.Get_size() rank = self.GROUP.Get_rank() self.assertTrue(rank >= 0 and rank < size) _name, _version = MPI.get_vendor() if _name == 'MPICH1': del BaseTestGroup.testTranslRanksProcNull TestGroupEmpty.testTranslRanks = lambda self: None if _name == 'LAM/MPI': del BaseTestGroup.testTranslRanksProcNull if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_info.py0000644000000000000000000001240512211706251016637 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class TestInfoNull(unittest.TestCase): def testTruth(self): 
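# [Editor's sketch] test_group.py above checks group set operations and rank
# translation. A minimal editor-added illustration of the same calls:
from mpi4py import MPI

world_group = MPI.COMM_WORLD.Get_group()
sub = world_group.Incl([0])              # subgroup containing only world rank 0
union = MPI.Group.Union(sub, MPI.GROUP_EMPTY)
assert MPI.Group.Compare(union, sub) == MPI.IDENT
# Translate ranks of `sub` into their numbering within the world group.
translated = MPI.Group.Translate_ranks(sub, [0], world_group)
assert list(translated) == [0]
for g in (union, sub, world_group):
    g.Free()                             # groups obtained by the user must be freed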
        self.assertFalse(bool(MPI.INFO_NULL))
    def testPyMethods(self):
        inull = MPI.INFO_NULL
        def getitem(): return inull['k']
        def setitem(): inull['k'] = 'v'
        def delitem(): del inull['k']
        self.assertEqual(len(inull), 0)
        self.assertFalse('key' in inull)
        self.assertRaises(KeyError, getitem)
        self.assertRaises(KeyError, setitem)
        self.assertRaises(KeyError, delitem)
        self.assertEqual(inull.keys(), [])
        self.assertEqual(inull.values(), [])
        self.assertEqual(inull.items(), [])

class TestInfoEnv(unittest.TestCase):
    def testTruth(self):
        self.assertTrue(bool(MPI.INFO_ENV))
    def testPyMethods(self):
        env = MPI.INFO_ENV
        if env == MPI.INFO_NULL: return
        for key in ("command", "argv", "maxprocs", "soft",
                    "host", "arch", "wdir", "file", "thread_level"):
            v = env.Get(key)

class TestInfo(unittest.TestCase):
    def setUp(self):
        self.INFO = MPI.Info.Create()
    def tearDown(self):
        self.INFO.Free()
        self.assertEqual(self.INFO, MPI.INFO_NULL)
        self.INFO = None
    def testTruth(self):
        self.assertTrue(bool(self.INFO))
    def testDup(self):
        info = self.INFO.Dup()
        self.assertNotEqual(self.INFO, info)
        self.assertEqual(info.Get_nkeys(), 0)
        info.Free()
        self.assertFalse(info)
    def testGet(self):
        value = self.INFO.Get('key')
        self.assertEqual(value, None)
    def testGetNKeys(self):
        self.assertEqual(self.INFO.Get_nkeys(), 0)
    def testGetSetDelete(self):
        INFO = self.INFO
        self.assertEqual(INFO.Get_nkeys(), 0)
        INFO.Set('key', 'value')
        nkeys = INFO.Get_nkeys()
        self.assertEqual(nkeys, 1)
        key = INFO.Get_nthkey(0)
        self.assertEqual(key, 'key')
        value = INFO.Get('key')
        self.assertEqual(value, 'value')
        INFO.Delete('key')
        nkeys = INFO.Get_nkeys()
        self.assertEqual(nkeys, 0)
        value = INFO.Get('key')
        self.assertEqual(value, None)
    def testPyMethods(self):
        INFO = self.INFO
        self.assertEqual(len(INFO), 0)
        self.assertTrue('key' not in INFO)
        self.assertEqual(INFO.keys(), [])
        self.assertEqual(INFO.values(), [])
        self.assertEqual(INFO.items(), [])
        INFO['key'] = 'value'
        self.assertEqual(len(INFO), 1)
        self.assertTrue('key' in INFO)
        self.assertEqual(INFO['key'], 'value')
        for key in INFO:
            self.assertEqual(key, 'key')
        self.assertEqual(INFO.keys(), ['key'])
        self.assertEqual(INFO.values(), ['value'])
        self.assertEqual(INFO.items(), [('key', 'value')])
        self.assertEqual(key, 'key')
        del INFO['key']
        self.assertEqual(len(INFO), 0)
        self.assertTrue('key' not in INFO)
        self.assertEqual(INFO.keys(), [])
        self.assertEqual(INFO.values(), [])
        self.assertEqual(INFO.items(), [])
        def getitem(): INFO['key']
        self.assertRaises(KeyError, getitem)
        def delitem(): del INFO['key']
        self.assertRaises(KeyError, delitem)
        INFO.clear()
        INFO.update([('key1','value1')])
        self.assertEqual(len(INFO), 1)
        self.assertEqual(INFO['key1'], 'value1')
        self.assertEqual(INFO.get('key1'), 'value1')
        self.assertEqual(INFO.get('key2'), None)
        self.assertEqual(INFO.get('key2', 'value2'), 'value2')
        INFO.update(key2='value2')
        self.assertEqual(len(INFO), 2)
        self.assertEqual(INFO['key1'], 'value1')
        self.assertEqual(INFO['key2'], 'value2')
        self.assertEqual(INFO.get('key1'), 'value1')
        self.assertEqual(INFO.get('key2'), 'value2')
        self.assertEqual(INFO.get('key3'), None)
        self.assertEqual(INFO.get('key3', 'value3'), 'value3')
        INFO.update([('key1', 'newval1')], key2='newval2')
        self.assertEqual(len(INFO), 2)
        self.assertEqual(INFO['key1'], 'newval1')
        self.assertEqual(INFO['key2'], 'newval2')
        self.assertEqual(INFO.get('key1'), 'newval1')
        self.assertEqual(INFO.get('key2'), 'newval2')
        self.assertEqual(INFO.get('key3'), None)
        self.assertEqual(INFO.get('key3', 'newval3'), 'newval3')
        INFO.update(dict(key1='val1', key2='val2', key3='val3'))
self.assertEqual(len(INFO), 3) self.assertEqual(INFO['key1'], 'val1') self.assertEqual(INFO['key2'], 'val2') self.assertEqual(INFO['key3'], 'val3') INFO.clear() self.assertEqual(len(INFO), 0) self.assertEqual(INFO.get('key1'), None) self.assertEqual(INFO.get('key2'), None) self.assertEqual(INFO.get('key3'), None) self.assertEqual(INFO.get('key1', 'value1'), 'value1') self.assertEqual(INFO.get('key2', 'value2'), 'value2') self.assertEqual(INFO.get('key3', 'value3'), 'value3') try: MPI.Info.Create().Free() except NotImplementedError: del TestInfoNull del TestInfo if (MPI.VERSION < 3 and MPI.INFO_ENV == MPI.INFO_NULL): del TestInfoEnv if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_mem.py0000644000000000000000000000137612211706251016467 0ustar 00000000000000import unittest from mpi4py import MPI class TestMemory(unittest.TestCase): def testMemory1(self): for size in range(0, 10000, 100): if size == 0: size = 1 try: mem1 = MPI.Alloc_mem(size) self.assertEqual(len(mem1), size) MPI.Free_mem(mem1) except NotImplementedError: return def testMemory2(self): for size in range(0, 10000, 100): if size == 0: size = 1 try: mem2 = MPI.Alloc_mem(size, MPI.INFO_NULL) self.assertEqual(len(mem2), size) MPI.Free_mem(mem2) except NotImplementedError: return if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_message.py0000644000000000000000000001020512211706251017324 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest from arrayimpl import allclose try: import array HAVE_ARRAY = True except ImportError: HAVE_ARRAY = False try: import numpy HAVE_NUMPY = True except ImportError: HAVE_NUMPY = False def Sendrecv(smsg, rmsg): comm = MPI.COMM_SELF sts = MPI.Status() comm.Sendrecv(sendbuf=smsg, recvbuf=rmsg, status=sts) class TestMessage(unittest.TestCase): TYPECODES = "hil"+"HIL"+"fd" def _test1(self, equal, zero, s, r, t): r[:] = zero Sendrecv(s, r) self.assertTrue(equal(s, r)) def _test2(self, equal, zero, s, r, typecode): datatype = MPI.__TypeDict__[typecode] for type in (None, typecode, datatype): r[:] = zero Sendrecv([s, type], [r, type]) self.assertTrue(equal(s, r)) def _test31(self, equal, z, s, r, typecode): datatype = MPI.__TypeDict__[typecode] for count in (None, len(s)): for type in (None, typecode, datatype): r[:] = z Sendrecv([s, count, type], [r, count, type]) self.assertTrue(equal(s, r)) def _test32(self, equal, z, s, r, typecode): datatype = MPI.__TypeDict__[typecode] for type in (None, typecode, datatype): for p in range(0, len(s)): r[:] = z Sendrecv([s, (p, None), type], [r, (p, None), type]) self.assertTrue(equal(s[:p], r[:p])) for q in range(p, len(s)): count, displ = q-p, p r[:] = z Sendrecv([s, (count, displ), type], [r, (count, displ), type]) self.assertTrue(equal(s[p:q], r[p:q])) self.assertTrue(equal(z[:p], r[:p])) self.assertTrue(equal(z[q:], r[q:])) def _test4(self, equal, z, s, r, typecode): datatype = MPI.__TypeDict__[typecode] for type in (None, typecode, datatype): for p in range(0, len(s)): r[:] = z Sendrecv([s, p, None, type], [r, p, None, type]) self.assertTrue(equal(s[:p], r[:p])) for q in range(p, len(s)): count, displ = q-p, p r[:] = z Sendrecv([s, count, displ, type], [r, count, displ, type]) self.assertTrue(equal(s[p:q], r[p:q])) self.assertTrue(equal(z[:p], r[:p])) self.assertTrue(equal(z[q:], r[q:])) if HAVE_ARRAY: def _testArray(self, test): from array import array from operator import eq as equal for t in tuple(self.TYPECODES): for n in range(1, 10): z = array(t, [0]*n) s = array(t, 
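# [Editor's sketch] test_info.py above exercises MPI.Info both through the raw MPI
# calls and the dict-like Python layer. A short editor-added illustration; the
# key names are standard MPI-IO hints used purely as examples.
from mpi4py import MPI

info = MPI.Info.Create()
info.Set('access_style', 'read_once')    # raw MPI-style setter
info['collective_buffering'] = 'true'    # dict-style access works as well
assert info.Get_nkeys() == len(info) == 2
assert info.Get('access_style') == 'read_once'
info.Free()                              # Info objects must be freed explicitly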
list(range(n))) r = array(t, [0]*n) test(equal, z, s, r, t) def testArray1(self): self._testArray(self._test1) def testArray2(self): self._testArray(self._test2) def testArray31(self): self._testArray(self._test31) def testArray32(self): self._testArray(self._test32) def testArray4(self): self._testArray(self._test4) if HAVE_NUMPY: def _testNumPy(self, test): from numpy import zeros, arange, empty for t in tuple(self.TYPECODES): for n in range(10): z = zeros (n, dtype=t) s = arange(n, dtype=t) r = empty (n, dtype=t) test(allclose, z, s, r, t) def testNumPy1(self): self._testNumPy(self._test1) def testNumPy2(self): self._testNumPy(self._test2) def testNumPy31(self): self._testNumPy(self._test31) def testNumPy32(self): self._testNumPy(self._test32) def testNumPy4(self): self._testNumPy(self._test4) if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_msgzero.py0000644000000000000000000000346412211706251017377 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class BaseTestMessageZero(object): null_b = [None, MPI.INT] null_v = [None, (0, None), MPI.INT] def testPointToPoint(self): comm = self.COMM comm.Sendrecv(sendbuf=self.null_b, dest=comm.rank, recvbuf=self.null_b, source=comm.rank) r2 = comm.Irecv(self.null_b, comm.rank) r1 = comm.Isend(self.null_b, comm.rank) MPI.Request.Waitall([r1, r2]) def testCollectivesBlock(self): comm = self.COMM comm.Bcast(self.null_b) comm.Gather(self.null_b, self.null_b) comm.Scatter(self.null_b, self.null_b) comm.Allgather(self.null_b, self.null_b) comm.Alltoall(self.null_b, self.null_b) def testCollectivesVector(self): comm = self.COMM comm.Gatherv(self.null_b, self.null_v) comm.Scatterv(self.null_v, self.null_b) comm.Allgatherv(self.null_b, self.null_v) comm.Alltoallv(self.null_v, self.null_v) def testReductions(self): comm = self.COMM comm.Reduce(self.null_b, self.null_b) comm.Allreduce(self.null_b, self.null_b) comm.Reduce_scatter_block(self.null_b, self.null_b) rcnt = [0]*comm.Get_size() comm.Reduce_scatter(self.null_b, self.null_b, rcnt) try: comm.Scan(self.null_b, self.null_b) except NotImplementedError: pass try: comm.Exscan(self.null_b, self.null_b) except NotImplementedError: pass class TestMessageZeroSelf(BaseTestMessageZero, unittest.TestCase): COMM = MPI.COMM_SELF class TestMessageZeroWorld(BaseTestMessageZero, unittest.TestCase): COMM = MPI.COMM_WORLD _name, _version = MPI.get_vendor() if (_name == 'Open MPI'): del BaseTestMessageZero.testReductions if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_op.py0000644000000000000000000000722612211706251016327 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest MPI_ERR_OP = MPI.ERR_OP try: import array except ImportError: array = None try: bytes except NameError: bytes = str if array: try: tobytes = array.array.tobytes except AttributeError: tobytes = array.array.tostring def frombytes(typecode, data): a = array.array(typecode,[]) try: data = data.tobytes() except AttributeError: pass try: _frombytes = array.array.frombytes except AttributeError: _frombytes = array.array.fromstring _frombytes(a, data) return a def mysum_py(a, b): for i in range(len(a)): b[i] = a[i] + b[i] return b def mysum(ba, bb, dt): if dt is None: return mysum_py(ba, bb) assert dt == MPI.INT assert len(ba) == len(bb) a = frombytes('i', ba) b = frombytes('i', bb) b = mysum_py(a, b) bb[:] = tobytes(b) class TestOp(unittest.TestCase): def testConstructor(self): op = MPI.Op() self.assertFalse(op) self.assertEqual(op, MPI.OP_NULL) 
def testCreate(self): for comm in [MPI.COMM_SELF, MPI.COMM_WORLD]: for commute in [True, False]: for N in range(4): # buffer(empty_array) returns # the same non-NULL pointer !!! if N == 0: continue size = comm.Get_size() rank = comm.Get_rank() myop = MPI.Op.Create(mysum, commute) a = array.array('i', [i*(rank+1) for i in range(N)]) b = array.array('i', [0]*len(a)) comm.Allreduce([a, MPI.INT], [b, MPI.INT], myop) scale = sum(range(1,size+1)) for i in range(N): self.assertEqual(b[i], scale*i) ret = myop(a, b) self.assertTrue(ret is b) for i in range(N): self.assertEqual(b[i], a[i]+scale*i) myop.Free() def testCreateMany(self): N = 16 # max user-defined operations # ops = [] for i in range(N): o = MPI.Op.Create(mysum) ops.append(o) self.assertRaises(RuntimeError, MPI.Op.Create, mysum) for o in ops: o.Free() # cleanup # other round ops = [] for i in range(N): o = MPI.Op.Create(mysum) ops.append(o) self.assertRaises(RuntimeError, MPI.Op.Create, mysum) for o in ops: o.Free() # cleanup def _test_call(self, op, args, res): self.assertEqual(op(*args), res) def testCall(self): self._test_call(MPI.MIN, (2,3), 2) self._test_call(MPI.MAX, (2,3), 3) self._test_call(MPI.SUM, (2,3), 5) self._test_call(MPI.PROD, (2,3), 6) def xor(x,y): return bool(x) ^ bool(y) for x, y in ((0, 0), (0, 1), (1, 0), (1, 1)): self._test_call(MPI.LAND, (x,y), x and y) self._test_call(MPI.LOR, (x,y), x or y) self._test_call(MPI.LXOR, (x,y), xor(x, y)) self._test_call(MPI.BAND, (x,y), x & y) self._test_call(MPI.BOR, (x,y), x | y) self._test_call(MPI.LXOR, (x,y), x ^ y) if MPI.REPLACE: self._test_call(MPI.REPLACE, (2,3), 3) self._test_call(MPI.REPLACE, (3,2), 2) if MPI.NO_OP: self._test_call(MPI.NO_OP, (2,3), 2) self._test_call(MPI.NO_OP, (3,2), 3) if not array: del TestOp.testCreate if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_p2p_buf.py0000644000000000000000000001457212211706251017250 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import arrayimpl class BaseTestP2PBuf(object): COMM = MPI.COMM_NULL def testSendrecv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() dest = (rank + 1) % size source = (rank - 1) % size for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for s in range(0, size): sbuf = array( s, typecode, s) rbuf = array(-1, typecode, s+1) self.COMM.Sendrecv(sbuf.as_mpi(), dest, 0, rbuf.as_mpi(), source, 0) for value in rbuf[:-1]: self.assertEqual(value, s) self.assertEqual(rbuf[-1], -1) def testSendRecv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for s in range(0, size): sbuf = array( s, typecode, s) rbuf = array(-1, typecode, s) mem = array( 0, typecode, s+MPI.BSEND_OVERHEAD).as_raw() if size == 1: MPI.Attach_buffer(mem) rbuf = sbuf MPI.Detach_buffer() elif rank == 0: MPI.Attach_buffer(mem) self.COMM.Bsend(sbuf.as_mpi(), 1, 0) MPI.Detach_buffer() self.COMM.Send(sbuf.as_mpi(), 1, 0) self.COMM.Ssend(sbuf.as_mpi(), 1, 0) self.COMM.Recv(rbuf.as_mpi(), 1, 0) self.COMM.Recv(rbuf.as_mpi(), 1, 0) self.COMM.Recv(rbuf.as_mpi(), 1, 0) elif rank == 1: self.COMM.Recv(rbuf.as_mpi(), 0, 0) self.COMM.Recv(rbuf.as_mpi(), 0, 0) self.COMM.Recv(rbuf.as_mpi(), 0, 0) MPI.Attach_buffer(mem) self.COMM.Bsend(sbuf.as_mpi(), 0, 0) MPI.Detach_buffer() self.COMM.Send(sbuf.as_mpi(), 0, 0) self.COMM.Ssend(sbuf.as_mpi(), 0, 0) else: rbuf = sbuf for value in rbuf: self.assertEqual(value, s) def testPersistent(self): size = self.COMM.Get_size() rank = 
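# [Editor's sketch] test_op.py above registers user-defined reduction operations
# with MPI.Op.Create and also invokes Op objects directly from Python. A hedged,
# editor-added illustration; note that a callback used inside Allreduce receives
# raw buffers plus a datatype and must interpret them itself (as mysum does above).
from mpi4py import MPI

def mymax(a, b, datatype):
    # When the Op is called directly from Python, datatype is None and a, b
    # are plain Python objects.
    return max(a, b)

op = MPI.Op.Create(mymax, True)          # True marks the operation as commutative
assert op(3, 7) == 7                     # Op instances are callable from Python
assert MPI.SUM(2, 3) == 5                # predefined ops can be called the same way
op.Free()                                # free user-defined ops when done (their number is limited)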
self.COMM.Get_rank() dest = (rank + 1) % size source = (rank - 1) % size for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for s in range(size): for xs in range(3): # sbuf = array( s, typecode, s) rbuf = array(-1, typecode, s+xs) sendreq = self.COMM.Send_init(sbuf.as_mpi(), dest, 0) recvreq = self.COMM.Recv_init(rbuf.as_mpi(), source, 0) sendreq.Start() recvreq.Start() sendreq.Wait() recvreq.Wait() self.assertNotEqual(sendreq, MPI.REQUEST_NULL) self.assertNotEqual(recvreq, MPI.REQUEST_NULL) sendreq.Free() recvreq.Free() for value in rbuf[:s]: self.assertEqual(value, s) for value in rbuf[s:]: self.assertEqual(value, -1) # sbuf = array(s, typecode, s) rbuf = array(-1, typecode, s+xs) sendreq = self.COMM.Send_init(sbuf.as_mpi(), dest, 0) recvreq = self.COMM.Recv_init(rbuf.as_mpi(), source, 0) reqlist = [sendreq, recvreq] MPI.Prequest.Startall(reqlist) index1 = MPI.Prequest.Waitany(reqlist) self.assertTrue(index1 in [0, 1]) self.assertNotEqual(reqlist[index1], MPI.REQUEST_NULL) index2 = MPI.Prequest.Waitany(reqlist) self.assertTrue(index2 in [0, 1]) self.assertNotEqual(reqlist[index2], MPI.REQUEST_NULL) self.assertTrue(index1 != index2) index3 = MPI.Prequest.Waitany(reqlist) self.assertEqual(index3, MPI.UNDEFINED) for preq in reqlist: self.assertNotEqual(preq, MPI.REQUEST_NULL) preq.Free() self.assertEqual(preq, MPI.REQUEST_NULL) for value in rbuf[:s]: self.assertEqual(value, s) for value in rbuf[s:]: self.assertEqual(value, -1) def testIProbe(self): comm = self.COMM.Dup() try: f = comm.Iprobe() self.assertFalse(f) f = comm.Iprobe(MPI.ANY_SOURCE) self.assertFalse(f) f = comm.Iprobe(MPI.ANY_SOURCE, MPI.ANY_TAG) self.assertFalse(f) status = MPI.Status() f = comm.Iprobe(MPI.ANY_SOURCE, MPI.ANY_TAG, status) self.assertFalse(f) self.assertEqual(status.source, MPI.ANY_SOURCE) self.assertEqual(status.tag, MPI.ANY_TAG) self.assertEqual(status.error, MPI.SUCCESS) finally: comm.Free() class TestP2PBufSelf(BaseTestP2PBuf, unittest.TestCase): COMM = MPI.COMM_SELF class TestP2PBufWorld(BaseTestP2PBuf, unittest.TestCase): COMM = MPI.COMM_WORLD class TestP2PBufSelfDup(BaseTestP2PBuf, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestP2PBufWorldDup(BaseTestP2PBuf, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() name, version = MPI.get_vendor() if name == 'Open MPI': if version < (1, 4, 0): if MPI.Query_thread() > MPI.THREAD_SINGLE: del TestP2PBufWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_p2p_buf_matched.py0000644000000000000000000001344312211706251020731 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import arrayimpl class TestMessage(unittest.TestCase): def testMessageNull(self): null = MPI.MESSAGE_NULL self.assertFalse(null) null2 = MPI.Message() self.assertEqual(null, null2) null3 = MPI.Message(null) self.assertEqual(null, null3) def testMessageNoProc(self): # noproc = MPI.MESSAGE_NO_PROC self.assertTrue(noproc) noproc.Recv(None) self.assertTrue(noproc) noproc.Irecv(None).Wait() self.assertTrue(noproc) # noproc2 = MPI.Message(MPI.MESSAGE_NO_PROC) self.assertTrue(noproc2) self.assertEqual(noproc2, noproc) self.assertNotEqual(noproc, MPI.MESSAGE_NULL) # message = MPI.Message(MPI.MESSAGE_NO_PROC) message.Recv(None) self.assertEqual(message, MPI.MESSAGE_NULL) # message = MPI.Message(MPI.MESSAGE_NO_PROC) request = message.Irecv(None) self.assertEqual(message, MPI.MESSAGE_NULL) 
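# [Editor's sketch] test_p2p_buf.py above covers typed-buffer point-to-point
# communication (Send/Recv, Sendrecv, persistent requests). A small editor-added
# ring exchange, assuming a run such as `mpiexec -n 2 python p2p_sketch.py`:
from array import array
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
sbuf = array('i', [rank] * 4)
rbuf = array('i', [-1] * 4)
# Positional arguments: sendbuf, dest, sendtag, recvbuf, source, recvtag.
comm.Sendrecv([sbuf, MPI.INT], (rank + 1) % size, 0,
              [rbuf, MPI.INT], (rank - 1) % size, 0)
assert list(rbuf) == [(rank - 1) % size] * 4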
self.assertNotEqual(request, MPI.REQUEST_NULL) request.Wait() self.assertEqual(request, MPI.REQUEST_NULL) class BaseTestP2PMatched(object): COMM = MPI.COMM_NULL def testIMProbe(self): comm = self.COMM.Dup() try: m = comm.Improbe() self.assertEqual(m, None) m = comm.Improbe(MPI.ANY_SOURCE) self.assertEqual(m, None) m = comm.Improbe(MPI.ANY_SOURCE, MPI.ANY_TAG) self.assertEqual(m, None) status = MPI.Status() m = comm.Improbe(MPI.ANY_SOURCE, MPI.ANY_TAG, status) self.assertEqual(m, None) self.assertEqual(status.source, MPI.ANY_SOURCE) self.assertEqual(status.tag, MPI.ANY_TAG) self.assertEqual(status.error, MPI.SUCCESS) m = MPI.Message.Iprobe(comm) self.assertEqual(m, None) finally: comm.Free() def testProbeRecv(self): comm = self.COMM size = comm.Get_size() rank = comm.Get_rank() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for s in range(0, size+1): sbuf = array( s, typecode, s) rbuf = array(-1, typecode, s) if size == 1: m = comm.Improbe(0, 0) self.assertEqual(m, None) sr = comm.Isend(sbuf.as_mpi(), 0, 0) m = comm.Mprobe(0, 0) self.assertTrue(isinstance(m, MPI.Message)) self.assertTrue(m) n = comm.Improbe(0, 0) self.assertEqual(n, None) rr = m.Irecv(rbuf.as_raw()) self.assertFalse(m) self.assertTrue(rr) MPI.Request.Waitall([sr,rr]) self.assertFalse(rr) # r = comm.Isend(sbuf.as_mpi(), 0, 0) m = MPI.Message.Probe(comm, 0, 0) self.assertTrue(isinstance(m, MPI.Message)) self.assertTrue(m) n = MPI.Message.Iprobe(comm, 0, 0) self.assertEqual(n, None) m.Recv(rbuf.as_raw()) self.assertFalse(m) r.Wait() elif rank == 0: comm.Send(sbuf.as_mpi(), 1, 0) m = comm.Mprobe(1, 0) self.assertTrue(m) n = comm.Improbe(0, 0) self.assertEqual(n, None) m.Recv(rbuf.as_raw()) self.assertFalse(m) # comm.Send(sbuf.as_mpi(), 1, 1) m = None while not m: m = comm.Improbe(1, 1) m.Irecv(rbuf.as_raw()).Wait() n = comm.Improbe(1, 1) self.assertEqual(n, None) elif rank == 1: m = comm.Mprobe(0, 0) self.assertTrue(m) n = comm.Improbe(1, 0) self.assertEqual(n, None) m.Recv(rbuf.as_raw()) self.assertFalse(m) comm.Send(sbuf.as_mpi(), 0, 0) # m = None while not m: m = comm.Improbe(0, 1) m.Irecv(rbuf.as_mpi()).Wait() comm.Send(sbuf.as_mpi(), 0, 1) n = comm.Improbe(0, 1) self.assertEqual(n, None) else: rbuf = sbuf for value in rbuf: self.assertEqual(value, s) class TestP2PMatchedSelf(BaseTestP2PMatched, unittest.TestCase): COMM = MPI.COMM_SELF class TestP2PMatchedWorld(BaseTestP2PMatched, unittest.TestCase): COMM = MPI.COMM_WORLD class TestP2PMatchedSelfDup(BaseTestP2PMatched, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestP2PMatchedWorldDup(BaseTestP2PMatched, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() if MPI.MESSAGE_NULL == MPI.MESSAGE_NO_PROC: del TestMessage del BaseTestP2PMatched del TestP2PMatchedSelf del TestP2PMatchedWorld del TestP2PMatchedSelfDup del TestP2PMatchedWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_p2p_obj.py0000644000000000000000000002724412211706251017246 0ustar 00000000000000import sys from mpi4py import MPI import mpiunittest as unittest def allocate(n): if not hasattr(sys, 'pypy_version_info'): try: return bytearray(n) except NameError: pass from array import array return array('B', [0]) * n _basic = [None, True, False, -7, 0, 7, -2**63, 2**63-1, -2.17, 0.0, 3.14, 1+2j, 2-3j, 'mpi4py', ] messages = list(_basic) messages += [ list(_basic), tuple(_basic), dict([('k%d' % key, val) for key, val in 
enumerate(_basic)]) ] class BaseTestP2PObj(object): COMM = MPI.COMM_NULL def testSendAndRecv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages: self.COMM.send(smess, MPI.PROC_NULL) rmess = self.COMM.recv(None, MPI.PROC_NULL, 0) self.assertEqual(rmess, None) if size == 1: return for smess in messages: if rank == 0: self.COMM.send(smess, rank+1, 0) rmess = smess elif rank == size - 1: rmess = self.COMM.recv(None, rank-1, 0) else: rmess = self.COMM.recv(None, rank-1, 0) self.COMM.send(rmess, rank+1, 0) self.assertEqual(rmess, smess) def testISendAndRecv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages: req = self.COMM.isend(smess, MPI.PROC_NULL) self.assertTrue(req) req.Wait() self.assertFalse(req) rmess = self.COMM.recv(None, MPI.PROC_NULL, 0) self.assertEqual(rmess, None) for smess in messages: req = self.COMM.isend(smess, rank, 0) self.assertTrue(req) #flag = req.Test() #self.assertFalse(flag) #self.assertTrue(req) rmess = self.COMM.recv(None, rank, 0) self.assertTrue(req) flag = req.Test() self.assertTrue(flag) self.assertFalse(req) self.assertEqual(rmess, smess) for smess in messages: dst = (rank+1)%size src = (rank-1)%size req = self.COMM.isend(smess, dst, 0) self.assertTrue(req) rmess = self.COMM.recv(None, src, 0) req.Wait() self.assertFalse(req) self.assertEqual(rmess, smess) def testIRecvAndSend(self): comm = self.COMM size = comm.Get_size() rank = comm.Get_rank() for smess in messages: req = comm.irecv(0, MPI.PROC_NULL) self.assertTrue(req) comm.send(smess, MPI.PROC_NULL) rmess = req.wait() self.assertFalse(req) self.assertEqual(rmess, None) for smess in messages: buf = allocate(512) req = comm.irecv(buf, rank, 0) self.assertTrue(req) flag, rmess = req.test() self.assertTrue(req) self.assertFalse(flag) self.assertEqual(rmess, None) comm.send(smess, rank, 0) self.assertTrue(req) flag, rmess = req.test() self.assertTrue(flag) self.assertFalse(req) self.assertEqual(rmess, smess) tmp = allocate(1024) for buf in (None, tmp): for smess in messages: dst = (rank+1)%size src = (rank-1)%size req = comm.irecv(buf, src, 0) self.assertTrue(req) comm.send(smess, dst, 0) rmess = req.wait() self.assertFalse(req) self.assertEqual(rmess, smess) def testIRecvAndISend(self): comm = self.COMM size = comm.Get_size() rank = comm.Get_rank() tmp = allocate(512) for buf in (None, tmp): for smess in messages: dst = (rank+1)%size src = (rank-1)%size rreq = comm.irecv(buf, src, 0) self.assertTrue(rreq) sreq = comm.isend(smess, dst, 0) self.assertTrue(sreq) dummy, rmess = MPI.Request.waitall([sreq,rreq]) self.assertFalse(sreq) self.assertFalse(rreq) self.assertEqual(dummy, None) self.assertEqual(rmess, smess) for buf in (None, tmp): for smess in messages: src = dst = rank rreq = comm.irecv(buf, src, 1) flag, msg = MPI.Request.testall([rreq]) self.assertEqual(flag, False) self.assertEqual(msg, None) sreq = comm.isend(smess, dst, 1) while True: flag, msg = MPI.Request.testall([sreq,rreq]) if not flag: self.assertEqual(msg, None) continue (dummy, rmess) = msg self.assertEqual(dummy, None) self.assertEqual(rmess, smess) break def testManyISendAndRecv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages: reqs = [] for k in range(6): r = self.COMM.isend(smess, rank, 0) reqs.append(r) flag = MPI.Request.Testall(reqs) if not flag: index, flag = MPI.Request.Testany(reqs) count, indices = MPI.Request.Testsome(reqs) self.assertTrue(count in [0, MPI.UNDEFINED]) for k in range(3): rmess = self.COMM.recv(None, rank, 0) 
self.assertEqual(rmess, smess) flag = MPI.Request.Testall(reqs) if not flag: index, flag = MPI.Request.Testany(reqs) self.assertEqual(index, 0) self.assertTrue(flag) count, indices = MPI.Request.Testsome(reqs) self.assertTrue(count >= 2) indices = list(indices) indices.sort() self.assertTrue(indices[:2] == [1, 2]) for k in range(3): rmess = self.COMM.recv(None, rank, 0) self.assertEqual(rmess, smess) flag = MPI.Request.Testall(reqs) self.assertTrue(flag) for smess in messages: reqs = [] for k in range(6): r = self.COMM.isend(smess, rank, 0) reqs.append(r) for k in range(3): rmess = self.COMM.recv(None, rank, 0) self.assertEqual(rmess, smess) index = MPI.Request.Waitany(reqs) self.assertTrue(index == 0) self.assertTrue(flag) count1, indices1 = MPI.Request.Waitsome(reqs) for k in range(3): rmess = self.COMM.recv(None, rank, 0) self.assertEqual(rmess, smess) count2, indices2 = MPI.Request.Waitsome(reqs) if count1 == MPI.UNDEFINED: count1 = 0 if count2 == MPI.UNDEFINED: count2 = 0 self.assertEqual(6, 1+count1+count2) indices = [0]+list(indices1)+list(indices2) indices.sort() self.assertEqual(indices, list(range(6))) MPI.Request.Waitall(reqs) def testSSendAndRecv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages: self.COMM.ssend(smess, MPI.PROC_NULL) rmess = self.COMM.recv(None, MPI.PROC_NULL, 0) self.assertEqual(rmess, None) if size == 1: return for smess in messages: if rank == 0: self.COMM.ssend(smess, rank+1, 0) rmess = smess elif rank == size - 1: rmess = self.COMM.recv(None, rank-1, 0) else: rmess = self.COMM.recv(None, rank-1, 0) self.COMM.ssend(rmess, rank+1, 0) self.assertEqual(rmess, smess) def testISSendAndRecv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages: req = self.COMM.issend(smess, MPI.PROC_NULL) self.assertTrue(req) req.Wait() self.assertFalse(req) rmess = self.COMM.recv(None, MPI.PROC_NULL, 0) self.assertEqual(rmess, None) for smess in messages: req = self.COMM.issend(smess, rank, 0) self.assertTrue(req) flag = req.Test() self.assertFalse(flag) self.assertTrue(req) rmess = self.COMM.recv(None, rank, 0) self.assertTrue(req) flag = req.Test() self.assertTrue(flag) self.assertFalse(req) self.assertEqual(rmess, smess) for smess in messages: dst = (rank+1)%size src = (rank-1)%size req = self.COMM.issend(smess, dst, 0) self.assertTrue(req) rmess = self.COMM.recv(None, src, 0) req.Wait() self.assertFalse(req) self.assertEqual(rmess, smess) def testSendrecv(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages: dest = (rank + 1) % size source = (rank - 1) % size rmess = self.COMM.sendrecv(smess, dest, 0, None, source, 0) continue self.assertEqual(rmess, smess) rmess = self.COMM.sendrecv(None, dest, 0, None, source, 0) self.assertEqual(rmess, None) rmess = self.COMM.sendrecv(smess, MPI.PROC_NULL, 0, None, MPI.PROC_NULL, 0) self.assertEqual(rmess, None) def testPingPong01(self): size = self.COMM.Get_size() rank = self.COMM.Get_rank() for smess in messages: self.COMM.send(smess, MPI.PROC_NULL) rmess = self.COMM.recv(None, MPI.PROC_NULL, 0) self.assertEqual(rmess, None) if size == 1: return smess = None if rank == 0: self.COMM.send(smess, rank+1, 0) rmess = self.COMM.recv(None, rank+1, 0) elif rank == 1: rmess = self.COMM.recv(None, rank-1, 0) self.COMM.send(smess, rank-1, 0) else: rmess = smess self.assertEqual(rmess, smess) for smess in messages: if rank == 0: self.COMM.send(smess, rank+1, 0) rmess = self.COMM.recv(None, rank+1, 0) elif rank == 1: rmess = self.COMM.recv(None, rank-1, 
0) self.COMM.send(smess, rank-1, 0) else: rmess = smess self.assertEqual(rmess, smess) class BaseTestP2PObjDup(BaseTestP2PObj): def setUp(self): self.COMM = self.COMM.Dup() def tearDown(self): self.COMM.Free() del self.COMM class TestP2PObjSelf(BaseTestP2PObj, unittest.TestCase): COMM = MPI.COMM_SELF class TestP2PObjWorld(BaseTestP2PObj, unittest.TestCase): COMM = MPI.COMM_WORLD class TestP2PObjSelfDup(BaseTestP2PObjDup, unittest.TestCase): COMM = MPI.COMM_SELF class TestP2PObjWorldDup(BaseTestP2PObjDup, unittest.TestCase): COMM = MPI.COMM_WORLD _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 4, 0): if MPI.Query_thread() > MPI.THREAD_SINGLE: del TestP2PObjWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_p2p_obj_matched.py0000644000000000000000000001254612211706251020732 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest _basic = [None, True, False, -7, 0, 7, -2**63, 2**63-1, -2.17, 0.0, 3.14, 1+2j, 2-3j, 'mpi4py', ] messages = list(_basic) messages += [ list(_basic), tuple(_basic), dict([('k%d' % key, val) for key, val in enumerate(_basic)]) ] class TestMessage(unittest.TestCase): def testMessageNull(self): null = MPI.MESSAGE_NULL self.assertFalse(null) null2 = MPI.Message() self.assertEqual(null, null2) null3 = MPI.Message(null) self.assertEqual(null, null3) def testMessageNoProc(self): # noproc = MPI.MESSAGE_NO_PROC self.assertTrue(noproc) noproc.recv(None) self.assertTrue(noproc) noproc.irecv(None).wait() self.assertTrue(noproc) # noproc2 = MPI.Message(MPI.MESSAGE_NO_PROC) self.assertTrue(noproc2) self.assertEqual(noproc2, noproc) self.assertNotEqual(noproc, MPI.MESSAGE_NULL) # message = MPI.Message(MPI.MESSAGE_NO_PROC) message.recv(None) self.assertEqual(message, MPI.MESSAGE_NULL) # message = MPI.Message(MPI.MESSAGE_NO_PROC) request = message.irecv(None) self.assertEqual(message, MPI.MESSAGE_NULL) self.assertNotEqual(request, MPI.REQUEST_NULL) request.wait() self.assertEqual(request, MPI.REQUEST_NULL) class BaseTestP2PMatched(object): COMM = MPI.COMM_NULL def testIMProbe(self): comm = self.COMM.Dup() try: m = comm.improbe() self.assertEqual(m, None) m = comm.improbe(MPI.ANY_SOURCE) self.assertEqual(m, None) m = comm.improbe(MPI.ANY_SOURCE, MPI.ANY_TAG) self.assertEqual(m, None) status = MPI.Status() m = comm.improbe(MPI.ANY_SOURCE, MPI.ANY_TAG, status) self.assertEqual(m, None) self.assertEqual(status.source, MPI.ANY_SOURCE) self.assertEqual(status.tag, MPI.ANY_TAG) self.assertEqual(status.error, MPI.SUCCESS) m = MPI.Message.iprobe(comm) self.assertEqual(m, None) finally: comm.Free() def testProbeRecv(self): comm = self.COMM size = comm.Get_size() rank = comm.Get_rank() for smsg in messages: if size == 1: m = comm.improbe(0, 0) self.assertEqual(m, None) sr = comm.isend(smsg, 0, 0) m = comm.mprobe(0, 0) self.assertTrue(isinstance(m, MPI.Message)) self.assertTrue(m) n = comm.improbe(0, 0) self.assertEqual(n, None) rr = m.irecv() self.assertFalse(m) self.assertTrue(rr) MPI.Request.Waitall([sr,rr]) self.assertFalse(rr) # r = comm.isend(smsg, 0, 0) m = MPI.Message.probe(comm, 0, 0) self.assertTrue(isinstance(m, MPI.Message)) self.assertTrue(m) n = MPI.Message.iprobe(comm, 0, 0) self.assertEqual(n, None) rmsg = m.recv() self.assertFalse(m) r.wait() elif rank == 0: comm.send(smsg, 1, 0) m = comm.mprobe(1, 0) self.assertTrue(m) n = comm.improbe(0, 0) self.assertEqual(n, None) rmsg = m.recv() self.assertFalse(m) # comm.send(smsg, 1, 1) m = None while not m: m = MPI.Message.iprobe(comm, 1, 1) rmsg = 
m.irecv().wait() n = comm.improbe(1, 1) self.assertEqual(n, None) elif rank == 1: m = comm.mprobe(0, 0) self.assertTrue(m) n = comm.improbe(1, 0) self.assertEqual(n, None) rmsg = m.recv() self.assertFalse(m) comm.send(rmsg, 0, 0) # m = None while not m: m = MPI.Message.iprobe(comm, 0, 1) rmsg = m.irecv().wait() comm.send(smsg, 0, 1) n = comm.improbe(0, 1) self.assertEqual(n, None) else: rmsg = smsg self.assertEqual(smsg, rmsg) class TestP2PMatchedSelf(BaseTestP2PMatched, unittest.TestCase): COMM = MPI.COMM_SELF class TestP2PMatchedWorld(BaseTestP2PMatched, unittest.TestCase): COMM = MPI.COMM_WORLD class TestP2PMatchedSelfDup(BaseTestP2PMatched, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_SELF.Dup() def tearDown(self): self.COMM.Free() class TestP2PMatchedWorldDup(BaseTestP2PMatched, unittest.TestCase): def setUp(self): self.COMM = MPI.COMM_WORLD.Dup() def tearDown(self): self.COMM.Free() if MPI.MESSAGE_NULL == MPI.MESSAGE_NO_PROC: del TestMessage del BaseTestP2PMatched del TestP2PMatchedSelf del TestP2PMatchedWorld del TestP2PMatchedSelfDup del TestP2PMatchedWorldDup if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_pack.py0000644000000000000000000001235012211706251016621 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import arrayimpl import struct class BaseTestPack(object): COMM = MPI.COMM_NULL def testPackSize(self): for array in arrayimpl.ArrayTypes: for typecode, datatype in arrayimpl.TypeMap.items(): itemsize = struct.calcsize(typecode) overhead = datatype.Pack_size(0, self.COMM) for count in range(10): pack_size = datatype.Pack_size(count, self.COMM) self.assertEqual(pack_size - overhead, count*itemsize) def testPackUnpack(self): for array in arrayimpl.ArrayTypes: for typecode1, datatype1 in arrayimpl.TypeMap.items(): for typecode2, datatype2 in arrayimpl.TypeMap.items(): for items in range(10): # input and output arrays iarray1 = array(range(items), typecode1).as_raw() iarray2 = array(range(items), typecode2).as_raw() oarray1 = array(items, typecode1, items).as_raw() oarray2 = array(items, typecode2, items).as_raw() # temp array for packing size1 = datatype1.Pack_size(len(iarray1), self.COMM) size2 = datatype2.Pack_size(len(iarray2), self.COMM) tmpbuf = array(0, 'b', size1 + size2 + 1).as_raw() # pack input arrays position = 0 position = datatype1.Pack(iarray1, tmpbuf, position, self.COMM) position = datatype2.Pack(iarray2, tmpbuf, position, self.COMM) # unpack output arrays position = 0 position = datatype1.Unpack(tmpbuf, position, oarray1, self.COMM) position = datatype2.Unpack(tmpbuf, position, oarray2, self.COMM) # test equal = arrayimpl.allclose self.assertTrue(equal(iarray1, oarray1)) self.assertTrue(equal(iarray2, oarray2)) EXT32 = 'external32' class BaseTestPackExternal(object): skipdtype = [] def testPackSize(self): for array in arrayimpl.ArrayTypes: for typecode, datatype in arrayimpl.TypeMap.items(): itemsize = struct.calcsize(typecode) overhead = datatype.Pack_external_size(EXT32, 0) for count in range(10): pack_size = datatype.Pack_external_size(EXT32, count) real_size = pack_size - overhead def testPackUnpackExternal(self): for array in arrayimpl.ArrayTypes: for typecode1, datatype1 in arrayimpl.TypeMap.items(): for typecode2, datatype2 in arrayimpl.TypeMap.items(): for items in range(1, 10): if typecode1 in self.skipdtype: continue if typecode2 in self.skipdtype: continue # input and output arrays if typecode1 == 'b': iarray1 = array(127, typecode1, items).as_raw() else: iarray1 = array(255, 
typecode1, items).as_raw() iarray2 = array(range(items), typecode2).as_raw() oarray1 = array(-1, typecode1, items).as_raw() oarray2 = array(-1, typecode2, items).as_raw() # temp array for packing size1 = datatype1.Pack_external_size(EXT32, iarray1.size) size2 = datatype2.Pack_external_size(EXT32, iarray2.size) tmpbuf = array(0, 'b', size1 + size2 + 1).as_raw() # pack input arrays position = 0 position = datatype1.Pack_external(EXT32, iarray1, tmpbuf, position) position = datatype2.Pack_external(EXT32, iarray2, tmpbuf, position) # unpack output arrays position = 0 position = datatype1.Unpack_external(EXT32, tmpbuf, position, oarray1) position = datatype2.Unpack_external(EXT32, tmpbuf, position, oarray2) # test result equal = arrayimpl.allclose self.assertTrue(equal(iarray1, oarray1)) self.assertTrue(equal(iarray2, oarray2)) class TestPackSelf(BaseTestPack, unittest.TestCase): COMM = MPI.COMM_SELF class TestPackWorld(BaseTestPack, unittest.TestCase): COMM = MPI.COMM_SELF class TestPackExternal(BaseTestPackExternal, unittest.TestCase): pass _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 5, 0): del BaseTestPackExternal del TestPackExternal elif _name in ('MPICH', 'MPICH2', 'DeinoMPI'): BaseTestPackExternal.skipdtype += ['l'] BaseTestPackExternal.skipdtype += ['d'] else: try: MPI.BYTE.Pack_external_size(EXT32, 0) except NotImplementedError: del BaseTestPackExternal del TestPackExternal if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_pickle.py0000644000000000000000000000502612211706251017154 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import sys try: import marshal except ImportError: marshal = None try: import json except ImportError: try: import simplejson as json except ImportError: json = None OBJS = [ None, True, False, 7, 1<<32, 3.14, 1+2j, 'qwerty', (0, 1, 2), [0, 1, 2], {'a':0, 'b': 1}, ] class TestPickle(unittest.TestCase): def setUp(self): pickle = MPI._p_pickle self._backup = (pickle.dumps, pickle.loads, pickle.PROTOCOL) def tearDown(self): pickle = MPI._p_pickle (pickle.dumps, pickle.loads, pickle.PROTOCOL) = self._backup def do_pickle(self, obj, pickle): comm = MPI.COMM_SELF o = comm.sendrecv(obj) self.assertEqual(obj, o) s = pickle.dumps(obj, pickle.PROTOCOL) o = pickle.loads(s) self.assertEqual(obj, o) def testPickle(self): pickle = MPI._p_pickle protocols = [0, 1, 2] if sys.version_info[0] > 2: protocols.append(3) protocols.append(-1) for protocol in protocols: pickle.PROTOCOL = protocol for obj in OBJS: self.do_pickle(obj, pickle) self.do_pickle(OBJS, pickle) if marshal is not None: def testMarshal(self): pickle = MPI._p_pickle pickle.dumps = marshal.dumps pickle.loads = marshal.loads protocols = [0] if sys.version_info[:2] < (2, 4): pickle.dumps = lambda o,p: marshal.dumps(o) if sys.version_info[:2] >= (2, 4): protocols.append(1) if sys.version_info[:2] >= (2, 5): protocols.append(2) for protocol in protocols: pickle.PROTOCOL = protocol for obj in OBJS: self.do_pickle(obj, pickle) self.do_pickle(OBJS, pickle) if json is not None: def testJson(self): pickle = MPI._p_pickle pickle.dumps = lambda o,p: json.dumps(o).encode() pickle.loads = lambda s: json.loads(s.decode()) OBJS2 = [o for o in OBJS if not isinstance(o, (complex, tuple))] for obj in OBJS2: self.do_pickle(obj, pickle) self.do_pickle(OBJS2, pickle) if __name__ == '__main__': try: unittest.main() except SystemExit: pass mpi4py_1.3.1+hg20131106.orig/test/test_request.py0000644000000000000000000000647212211706251017403 0ustar 
00000000000000from mpi4py import MPI
import mpiunittest as unittest

class TestRequest(unittest.TestCase):

    def setUp(self):
        # a freshly constructed Request is MPI.REQUEST_NULL;
        # waiting/testing on a null request must complete immediately
        self.REQUEST = MPI.Request()
        self.STATUS = MPI.Status()

    def testWait(self):
        self.REQUEST.Wait()
        self.REQUEST.Wait(None)
        self.REQUEST.Wait(self.STATUS)

    def testTest(self):
        self.REQUEST.Test()
        self.REQUEST.Test(None)
        self.REQUEST.Test(self.STATUS)

    def testGetStatus(self):
        try:
            flag = self.REQUEST.Get_status()
        except NotImplementedError:
            return
        self.assertTrue(flag)
        flag = self.REQUEST.Get_status(self.STATUS)
        self.assertTrue(flag)
        self.assertEqual(self.STATUS.Get_source(), MPI.ANY_SOURCE)
        self.assertEqual(self.STATUS.Get_tag(), MPI.ANY_TAG)
        self.assertEqual(self.STATUS.Get_error(), MPI.SUCCESS)
        self.assertEqual(self.STATUS.Get_count(MPI.BYTE), 0)
        self.assertEqual(self.STATUS.Get_elements(MPI.BYTE), 0)
        try:
            self.assertFalse(self.STATUS.Is_cancelled())
        except NotImplementedError:
            pass

class TestRequestArray(unittest.TestCase):

    def setUp(self):
        self.REQUESTS = [MPI.Request() for i in range(5)]
        self.STATUSES = [MPI.Status() for i in range(5)]

    def testWaitany(self):
        MPI.Request.Waitany(self.REQUESTS)
        MPI.Request.Waitany(self.REQUESTS, None)
        MPI.Request.Waitany(self.REQUESTS, self.STATUSES[0])

    def testTestany(self):
        MPI.Request.Testany(self.REQUESTS)
        MPI.Request.Testany(self.REQUESTS, None)
        MPI.Request.Testany(self.REQUESTS, self.STATUSES[0])

    def testWaitall(self):
        MPI.Request.Waitall(self.REQUESTS)
        MPI.Request.Waitall(self.REQUESTS, None)
        for statuses in (self.STATUSES, []):
            MPI.Request.Waitall(self.REQUESTS, statuses)
            self.assertEqual(len(statuses), len(self.REQUESTS))

    def testTestall(self):
        MPI.Request.Testall(self.REQUESTS)
        MPI.Request.Testall(self.REQUESTS, None)
        for statuses in (self.STATUSES, []):
            MPI.Request.Testall(self.REQUESTS, statuses)
            self.assertEqual(len(statuses), len(self.REQUESTS))

    def testWaitsome(self):
        out = (MPI.UNDEFINED, [])
        ret = MPI.Request.Waitsome(self.REQUESTS)
        self.assertEqual((ret[0], list(ret[1])), out)
        ret = MPI.Request.Waitsome(self.REQUESTS, None)
        self.assertEqual((ret[0], list(ret[1])), out)
        for statuses in (self.STATUSES, []):
            ret = MPI.Request.Waitsome(self.REQUESTS, statuses)
            self.assertEqual((ret[0], list(ret[1])), out)
            self.assertEqual(len(statuses), len(self.REQUESTS))

    def testTestsome(self):
        out = (MPI.UNDEFINED, [])
        ret = MPI.Request.Testsome(self.REQUESTS)
        self.assertEqual((ret[0], list(ret[1])), out)
        ret = MPI.Request.Testsome(self.REQUESTS, None)
        self.assertEqual((ret[0], list(ret[1])), out)
        for statuses in (self.STATUSES, []):
            ret = MPI.Request.Testsome(self.REQUESTS, statuses)
            self.assertEqual((ret[0], list(ret[1])), out)
            self.assertEqual(len(statuses), len(self.REQUESTS))

_name, _version = MPI.get_vendor()
if (_name == 'MPICH1' or _name == 'LAM/MPI'):
    del TestRequest.testGetStatus

if __name__ == '__main__':
    unittest.main()
mpi4py_1.3.1+hg20131106.orig/test/test_rma.py0000644000000000000000000001156512211706251016471 0ustar 00000000000000from mpi4py import MPI
import mpiunittest as unittest
import arrayimpl

def mkzeros(n):
    import sys
    if not hasattr(sys, 'pypy_version_info'):
        try:
            return bytearray([0]) * n
        except NameError:
            return str('\0') * n
    return str('\0') * n

def memzero(m):
    n = len(m)
    if n == 0:
        return
    try:
        zero = '\0'.encode('ascii')
        m[0] = zero
    except TypeError:
        zero = 0
        m[0] = zero
    for i in range(n):
        m[i] = zero

class BaseTestRMA(object):

    COMM = MPI.COMM_NULL
    INFO = MPI.INFO_NULL
    COUNT_MIN = 0

    def setUp(self):
        nbytes = 100*MPI.DOUBLE.size
        try:
            self.mpi_memory = MPI.Alloc_mem(nbytes)
            self.memory = self.mpi_memory
memzero(self.memory) except MPI.Exception: from array import array self.mpi_memory = None self.memory = array('B',[0]*nbytes) self.WIN = MPI.Win.Create(self.memory, 1, self.INFO, self.COMM) def tearDown(self): self.WIN.Free() if self.mpi_memory: MPI.Free_mem(self.mpi_memory) def testPutGet(self): group = self.WIN.Get_group() size = group.Get_size() group.Free() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for count in range(self.COUNT_MIN, 10): for rank in range(size): sbuf = array(range(count), typecode) rbuf = array(-1, typecode, count+1) self.WIN.Fence() self.WIN.Put(sbuf.as_mpi(), rank) self.WIN.Fence() self.WIN.Get(rbuf.as_mpi_c(count), rank) self.WIN.Fence() for i in range(count): self.assertEqual(sbuf[i], i) self.assertNotEqual(rbuf[i], -1) self.assertEqual(rbuf[-1], -1) def testAccumulate(self): group = self.WIN.Get_group() size = group.Get_size() group.Free() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for count in range(self.COUNT_MIN, 10): for rank in range(size): sbuf = array(range(count), typecode) rbuf = array(-1, typecode, count+1) for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): self.WIN.Fence() self.WIN.Accumulate(sbuf.as_mpi(), rank, op=op) self.WIN.Fence() self.WIN.Get(rbuf.as_mpi_c(count), rank) self.WIN.Fence() # for i in range(count): self.assertEqual(sbuf[i], i) self.assertNotEqual(rbuf[i], -1) self.assertEqual(rbuf[-1], -1) def testPutProcNull(self): self.WIN.Fence() self.WIN.Put(None, MPI.PROC_NULL, None) self.WIN.Fence() def testGetProcNull(self): self.WIN.Fence() self.WIN.Get(None, MPI.PROC_NULL, None) self.WIN.Fence() def testAccumulateProcNullReplace(self): self.WIN.Fence() zeros = mkzeros(8) self.WIN.Fence() self.WIN.Accumulate([zeros, MPI.INT], MPI.PROC_NULL, None, MPI.REPLACE) self.WIN.Fence() self.WIN.Accumulate([zeros, MPI.INT], MPI.PROC_NULL, None, MPI.REPLACE) self.WIN.Fence() def testAccumulateProcNullSum(self): self.WIN.Fence() zeros = mkzeros(8) self.WIN.Fence() self.WIN.Accumulate([zeros, MPI.INT], MPI.PROC_NULL, None, MPI.SUM) self.WIN.Fence() self.WIN.Accumulate([None, MPI.INT], MPI.PROC_NULL, None, MPI.SUM) self.WIN.Fence() def testFence(self): self.WIN.Fence() assertion = 0 modes = [0, MPI.MODE_NOSTORE, MPI.MODE_NOPUT, MPI.MODE_NOPRECEDE, MPI.MODE_NOSUCCEED] self.WIN.Fence() for mode in modes: self.WIN.Fence(mode) assertion |= mode self.WIN.Fence(assertion) self.WIN.Fence() class TestRMASelf(BaseTestRMA, unittest.TestCase): COMM = MPI.COMM_SELF class TestRMAWorld(BaseTestRMA, unittest.TestCase): COMM = MPI.COMM_WORLD try: w = MPI.Win.Create(None, 1, MPI.INFO_NULL, MPI.COMM_SELF).Free() except NotImplementedError: del TestRMASelf, TestRMAWorld _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 4, 0): if MPI.Query_thread() > MPI.THREAD_SINGLE: del TestRMAWorld elif _name == 'HP MPI': BaseTestRMA.COUNT_MIN = 1 if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_rma_nb.py0000644000000000000000000001134112211706251017140 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest import arrayimpl def mkzeros(n): import sys if not hasattr(sys, 'pypy_version_info'): try: return bytearray([0]) * n except NameError: return str('\0') * n return str('\0') * n def memzero(m): n = len(m) if n == 0: return try: zero = '\0'.encode('ascii') m[0] = zero except TypeError: zero = 0 m[0] = zero for i in range(n): m[i] = zero class BaseTestRMA(object): COMM = MPI.COMM_NULL INFO = MPI.INFO_NULL COUNT_MIN = 0 def setUp(self): nbytes = 
100*MPI.DOUBLE.size try: self.mpi_memory = MPI.Alloc_mem(nbytes) self.memory = self.mpi_memory memzero(self.memory) except MPI.Exception: from array import array self.mpi_memory = None self.memory = array('B',[0]*nbytes) self.WIN = MPI.Win.Create(self.memory, 1, self.INFO, self.COMM) def tearDown(self): self.WIN.Free() if self.mpi_memory: MPI.Free_mem(self.mpi_memory) def testPutGet(self): group = self.WIN.Get_group() size = group.Get_size() group.Free() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for count in range(self.COUNT_MIN, 10): for rank in range(size): sbuf = array(range(count), typecode) rbuf = array(-1, typecode, count+1) self.WIN.Fence() r = self.WIN.Rput(sbuf.as_mpi(), rank) r.Wait() self.WIN.Fence() r = self.WIN.Rget(rbuf.as_mpi_c(count), rank) r.Wait() self.WIN.Fence() for i in range(count): self.assertEqual(sbuf[i], i) self.assertNotEqual(rbuf[i], -1) self.assertEqual(rbuf[-1], -1) def testAccumulate(self): group = self.WIN.Get_group() size = group.Get_size() group.Free() for array in arrayimpl.ArrayTypes: for typecode in arrayimpl.TypeMap: for count in range(self.COUNT_MIN, 10): for rank in range(size): sbuf = array(range(count), typecode) rbuf = array(-1, typecode, count+1) for op in (MPI.SUM, MPI.PROD, MPI.MAX, MPI.MIN): self.WIN.Fence() r = self.WIN.Raccumulate(sbuf.as_mpi(), rank, op=op) r.Wait() self.WIN.Fence() self.WIN.Get(rbuf.as_mpi_c(count), rank) self.WIN.Fence() # for i in range(count): self.assertEqual(sbuf[i], i) self.assertNotEqual(rbuf[i], -1) self.assertEqual(rbuf[-1], -1) def testPutProcNull(self): self.WIN.Fence() r = self.WIN.Rput(None, MPI.PROC_NULL, None) r.Wait() self.WIN.Fence() def testGetProcNull(self): self.WIN.Fence() r = self.WIN.Rget(None, MPI.PROC_NULL, None) r.Wait() self.WIN.Fence() def testAccumulateProcNullReplace(self): self.WIN.Fence() zeros = mkzeros(8) self.WIN.Fence() r = self.WIN.Raccumulate([zeros, MPI.INT], MPI.PROC_NULL, None, MPI.REPLACE) r.Wait() self.WIN.Fence() r = self.WIN.Raccumulate([zeros, MPI.INT], MPI.PROC_NULL, None, MPI.REPLACE) r.Wait() self.WIN.Fence() def testAccumulateProcNullSum(self): self.WIN.Fence() zeros = mkzeros(8) self.WIN.Fence() r = self.WIN.Raccumulate([zeros, MPI.INT], MPI.PROC_NULL, None, MPI.SUM) r.Wait() self.WIN.Fence() r = self.WIN.Raccumulate([None, MPI.INT], MPI.PROC_NULL, None, MPI.SUM) r.Wait() self.WIN.Fence() class TestRMASelf(BaseTestRMA, unittest.TestCase): COMM = MPI.COMM_SELF class TestRMAWorld(BaseTestRMA, unittest.TestCase): COMM = MPI.COMM_WORLD try: w = MPI.Win.Create(None, 1, MPI.INFO_NULL, MPI.COMM_SELF).Free() except NotImplementedError: del TestRMASelf, TestRMAWorld _name, _version = MPI.get_vendor() if _name == 'Open MPI': del TestRMASelf, TestRMAWorld if _name == 'MPICH2': del TestRMASelf, TestRMAWorld else: try: del TestRMASelf, TestRMAWorld except NameError: pass if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_spawn.py0000644000000000000000000001214112211706251017031 0ustar 00000000000000import sys, os, mpi4py from mpi4py import MPI import mpiunittest as unittest MPI4PYPATH = os.path.abspath(os.path.dirname(mpi4py.__path__[0])) CHILDSCRIPT = os.path.abspath( os.path.join(os.path.dirname(__file__), 'spawn_child.py') ) class BaseTestSpawn(object): COMM = MPI.COMM_NULL COMMAND = sys.executable ARGS = [CHILDSCRIPT, MPI4PYPATH] MAXPROCS = 1 INFO = MPI.INFO_NULL ROOT = 0 def testCommSpawn(self): child = self.COMM.Spawn(self.COMMAND, self.ARGS, self.MAXPROCS, info=self.INFO, root=self.ROOT) local_size = child.Get_size() 
remote_size = child.Get_remote_size() child.Barrier() child.Disconnect() self.assertEqual(local_size, self.COMM.Get_size()) self.assertEqual(remote_size, self.MAXPROCS) def testReturnedErrcodes(self): errcodes = [] child = self.COMM.Spawn(self.COMMAND, self.ARGS, self.MAXPROCS, info=self.INFO, root=self.ROOT, errcodes=errcodes) child.Barrier() child.Disconnect() rank = self.COMM.Get_rank() self.assertEqual(len(errcodes), self.MAXPROCS) for errcode in errcodes: self.assertEqual(errcode, MPI.SUCCESS) def testArgsOnlyAtRoot(self): self.COMM.Barrier() rank = self.COMM.Get_rank() if rank == self.ROOT: child = self.COMM.Spawn(self.COMMAND, self.ARGS, self.MAXPROCS, info=self.INFO, root=self.ROOT) else: child = self.COMM.Spawn(None, None, -1, info=None, root=self.ROOT) child.Barrier() child.Disconnect() self.COMM.Barrier() def testCommSpawnMultiple(self): COMMAND = [self.COMMAND] * 3 ARGS = [self.ARGS] * len(COMMAND) MAXPROCS = [self.MAXPROCS] * len(COMMAND) INFO = [self.INFO] * len(COMMAND) child = self.COMM.Spawn_multiple( COMMAND, ARGS, MAXPROCS, info=INFO, root=self.ROOT) local_size = child.Get_size() remote_size = child.Get_remote_size() child.Barrier() child.Disconnect() self.assertEqual(local_size, self.COMM.Get_size()) self.assertEqual(remote_size, sum(MAXPROCS)) def testReturnedErrcodesMultiple(self): COMMAND = [self.COMMAND]*3 ARGS = [self.ARGS]*len(COMMAND) MAXPROCS = range(1, len(COMMAND)+1) INFO = MPI.INFO_NULL errcodelist = [] child = self.COMM.Spawn_multiple( COMMAND, ARGS, MAXPROCS, info=INFO, root=self.ROOT, errcodes=errcodelist) child.Barrier() child.Disconnect() rank = self.COMM.Get_rank() self.assertEqual(len(errcodelist), len(COMMAND)) for i, errcodes in enumerate(errcodelist): self.assertEqual(len(errcodes), MAXPROCS[i]) for errcode in errcodes: self.assertEqual(errcode, MPI.SUCCESS) def testArgsOnlyAtRootMultiple(self): self.COMM.Barrier() rank = self.COMM.Get_rank() if rank == self.ROOT: COMMAND = [self.COMMAND] * 3 ARGS = [self.ARGS] * len(COMMAND) MAXPROCS = range(2, len(COMMAND)+2) INFO = [MPI.INFO_NULL] * len(COMMAND) child = self.COMM.Spawn_multiple( COMMAND, ARGS, MAXPROCS, info=INFO, root=self.ROOT) else: child = self.COMM.Spawn_multiple( None, None, -1, info=None, root=self.ROOT) child.Barrier() child.Disconnect() self.COMM.Barrier() class TestSpawnSelf(BaseTestSpawn, unittest.TestCase): COMM = MPI.COMM_SELF class TestSpawnWorld(BaseTestSpawn, unittest.TestCase): COMM = MPI.COMM_WORLD class TestSpawnSelfMany(BaseTestSpawn, unittest.TestCase): COMM = MPI.COMM_SELF MAXPROCS = MPI.COMM_WORLD.Get_size() class TestSpawnWorldMany(BaseTestSpawn, unittest.TestCase): COMM = MPI.COMM_WORLD MAXPROCS = MPI.COMM_WORLD.Get_size() _SKIP_TEST = False _name, _version = MPI.get_vendor() if _name == 'Open MPI': if _version < (1, 5, 0): _SKIP_TEST = True elif _version < (1, 4, 0): _SKIP_TEST = True if 'win' in sys.platform: _SKIP_TEST = True elif _name == 'MPICH2': if _version < (1, 0, 6): _SKIP_TEST = True if 'win' in sys.platform: _SKIP_TEST = True elif _name == 'Microsoft MPI': _SKIP_TEST = True elif _name == 'HP MPI': _SKIP_TEST = True elif MPI.Get_version() < (2, 0): _SKIP_TEST = True if _SKIP_TEST: del BaseTestSpawn del TestSpawnSelf del TestSpawnWorld del TestSpawnSelfMany del TestSpawnWorldMany elif (_name in ('MPICH', 'MPICH2') and _version > (1, 2)): # Up to mpich2-1.3.1 when running under Hydra process manager, # spawn fails for the singleton init case if MPI.COMM_WORLD.Get_attr(MPI.APPNUM) is None: del TestSpawnSelf del TestSpawnWorld del TestSpawnSelfMany del 
TestSpawnWorldMany if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_status.py0000644000000000000000000000554412211706251017235 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest class TestStatus(unittest.TestCase): def setUp(self): self.STATUS = MPI.Status() def tearDown(self): self.STATUS = None def testDefaultFieldValues(self): self.assertEqual(self.STATUS.Get_source(), MPI.ANY_SOURCE) self.assertEqual(self.STATUS.Get_tag(), MPI.ANY_TAG) self.assertEqual(self.STATUS.Get_error(), MPI.SUCCESS) def testGetCount(self): count = self.STATUS.Get_count(MPI.BYTE) self.assertEqual(count, 0) def testGetElements(self): elements = self.STATUS.Get_elements(MPI.BYTE) self.assertEqual(elements, 0) def testSetElements(self): try: self.STATUS.Set_elements(MPI.BYTE, 7) count = self.STATUS.Get_count(MPI.BYTE) self.assertEqual(count, 7) elements = self.STATUS.Get_elements(MPI.BYTE) self.assertEqual(elements, 7) except NotImplementedError: if MPI.Get_version() >= (2,0): raise def testIsCancelled(self): flag = self.STATUS.Is_cancelled() self.assertTrue(type(flag) is bool) self.assertFalse(flag) def testSetCancelled(self): try: self.STATUS.Set_cancelled(True) flag = self.STATUS.Is_cancelled() self.assertTrue(flag) except NotImplementedError: if MPI.Get_version() >= (2,0): raise def testPyProps(self): self.assertEqual(self.STATUS.Get_source(), self.STATUS.source) self.assertEqual(self.STATUS.Get_tag(), self.STATUS.tag) self.assertEqual(self.STATUS.Get_error(), self.STATUS.error) self.STATUS.source = 1 self.STATUS.tag = 2 self.STATUS.error = MPI.ERR_ARG self.assertEqual(self.STATUS.source, 1) self.assertEqual(self.STATUS.tag, 2) self.assertEqual(self.STATUS.error, MPI.ERR_ARG) def testCopyConstructor(self): self.STATUS.source = 1 self.STATUS.tag = 2 self.STATUS.error = MPI.ERR_ARG try: self.STATUS.Set_elements(MPI.BYTE, 7) except NotImplementedError: pass try: self.STATUS.Set_cancelled(True) except NotImplementedError: pass status = MPI.Status(self.STATUS) self.assertEqual(status.source, 1) self.assertEqual(status.tag, 2) self.assertEqual(status.error, MPI.ERR_ARG) try: count = status.Get_count(MPI.BYTE) elems = status.Get_elements(MPI.BYTE) self.assertEqual(count, 7) self.assertEqual(elems, 7) except NotImplementedError: pass try: flag = status.Is_cancelled() self.assertTrue(flag) except NotImplementedError: pass if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_subclass.py0000644000000000000000000001751712211706251017534 0ustar 00000000000000from mpi4py import MPI import mpiunittest as unittest # --- class MyBaseComm(object): def free(self): if self != MPI.COMM_NULL: super(type(self), self).Free() class BaseTestBaseComm(object): def setUp(self): self.comm = self.CommType(self.COMM_BASE) def testSubType(self): self.assertTrue(type(self.comm) not in [ MPI.Comm, MPI.Intracomm, MPI.Cartcomm, MPI.Graphcomm, MPI.Distgraphcomm, MPI.Intercomm]) self.assertTrue(isinstance(self.comm, self.CommType)) def testCloneFree(self): if self.COMM_BASE != MPI.COMM_NULL: comm = self.comm.Clone() else: comm = self.CommType() self.assertTrue(isinstance(comm, MPI.Comm)) self.assertTrue(isinstance(comm, self.CommType)) comm.free() def tearDown(self): self.comm.free() # --- class MyComm(MPI.Comm, MyBaseComm): def __new__(cls, comm=None): if comm is not None: if comm != MPI.COMM_NULL: comm = comm.Clone() return super(MyComm, cls).__new__(cls, comm) class BaseTestMyComm(BaseTestBaseComm): CommType = MyComm class TestMyCommNULL(BaseTestMyComm, 
unittest.TestCase): COMM_BASE = MPI.COMM_NULL class TestMyCommSELF(BaseTestMyComm, unittest.TestCase): COMM_BASE = MPI.COMM_SELF class TestMyCommWORLD(BaseTestMyComm, unittest.TestCase): COMM_BASE = MPI.COMM_WORLD # --- class MyIntracomm(MPI.Intracomm, MyBaseComm): def __new__(cls, comm=None): if comm is not None: if comm != MPI.COMM_NULL: comm = comm.Dup() return super(MyIntracomm, cls).__new__(cls, comm) class BaseTestMyIntracomm(BaseTestBaseComm): CommType = MyIntracomm class TestMyIntracommNULL(BaseTestMyIntracomm, unittest.TestCase): COMM_BASE = MPI.COMM_NULL class TestMyIntracommSELF(BaseTestMyIntracomm, unittest.TestCase): COMM_BASE = MPI.COMM_SELF class TestMyIntracommWORLD(BaseTestMyIntracomm, unittest.TestCase): COMM_BASE = MPI.COMM_WORLD # --- class MyCartcomm(MPI.Cartcomm, MyBaseComm): def __new__(cls, comm=None): if comm is not None: if comm != MPI.COMM_NULL: dims = [comm.size] comm = comm.Create_cart(dims) return super(MyCartcomm, cls).__new__(cls, comm) class BaseTestMyCartcomm(BaseTestBaseComm): CommType = MyCartcomm class TestMyCartcommNULL(BaseTestMyCartcomm, unittest.TestCase): COMM_BASE = MPI.COMM_NULL class TestMyCartcommSELF(BaseTestMyCartcomm, unittest.TestCase): COMM_BASE = MPI.COMM_SELF class TestMyCartcommWORLD(BaseTestMyCartcomm, unittest.TestCase): COMM_BASE = MPI.COMM_WORLD # --- class MyGraphcomm(MPI.Graphcomm, MyBaseComm): def __new__(cls, comm=None): if comm is not None: if comm != MPI.COMM_NULL: index = list(range(0, comm.size+1)) edges = list(range(0, comm.size)) comm = comm.Create_graph(index, edges) return super(MyGraphcomm, cls).__new__(cls, comm) class BaseTestMyGraphcomm(BaseTestBaseComm): CommType = MyGraphcomm class TestMyGraphcommNULL(BaseTestMyGraphcomm, unittest.TestCase): COMM_BASE = MPI.COMM_NULL class TestMyGraphcommSELF(BaseTestMyGraphcomm, unittest.TestCase): COMM_BASE = MPI.COMM_SELF class TestMyGraphcommWORLD(BaseTestMyGraphcomm, unittest.TestCase): COMM_BASE = MPI.COMM_WORLD # --- class MyRequest(MPI.Request): def __new__(cls, request=None): return super(MyRequest, cls).__new__(cls, request) def test(self): return super(type(self), self).Test() def wait(self): return super(type(self), self).Wait() class MyPrequest(MPI.Prequest): def __new__(cls, request=None): return super(MyPrequest, cls).__new__(cls, request) def test(self): return super(type(self), self).Test() def wait(self): return super(type(self), self).Wait() def start(self): return super(type(self), self).Start() class MyGrequest(MPI.Grequest): def __new__(cls, request=None): return super(MyGrequest, cls).__new__(cls, request) def test(self): return super(type(self), self).Test() def wait(self): return super(type(self), self).Wait() class BaseTestMyRequest(object): def setUp(self): self.req = self.MyRequestType(MPI.REQUEST_NULL) def testSubType(self): self.assertTrue(type(self.req) is not self.MPIRequestType) self.assertTrue(isinstance(self.req, self.MPIRequestType)) self.assertTrue(isinstance(self.req, self.MyRequestType)) self.req.test() class TestMyRequest(BaseTestMyRequest, unittest.TestCase): MPIRequestType = MPI.Request MyRequestType = MyRequest class TestMyPrequest(BaseTestMyRequest, unittest.TestCase): MPIRequestType = MPI.Prequest MyRequestType = MyPrequest class TestMyGrequest(BaseTestMyRequest, unittest.TestCase): MPIRequestType = MPI.Grequest MyRequestType = MyGrequest class TestMyRequest2(TestMyRequest): def setUp(self): req = MPI.COMM_SELF.Isend( [MPI.BOTTOM, 0, MPI.BYTE], dest=MPI.PROC_NULL, tag=0) self.req = MyRequest(req) class TestMyPrequest2(TestMyPrequest): 
def setUp(self): req = MPI.COMM_SELF.Send_init( [MPI.BOTTOM, 0, MPI.BYTE], dest=MPI.PROC_NULL, tag=0) self.req = MyPrequest(req) def tearDown(self): self.req.Free() def testStart(self): for i in range(5): self.req.start() self.req.test() self.req.start() self.req.wait() # --- class MyWin(MPI.Win): def __new__(cls, win=None): return MPI.Win.__new__(cls, win) def free(self): MPI.Win.Free(self) class BaseTestMyWin(object): def setUp(self): w = MPI.Win.Create(MPI.BOTTOM) self.win = MyWin(w) def tearDown(self): if self.win: self.win.Free() def testSubType(self): self.assertTrue(type(self.win) is not MPI.Win) self.assertTrue(isinstance(self.win, MPI.Win)) self.assertTrue(isinstance(self.win, MyWin)) def testFree(self): self.assertTrue(self.win) self.win.free() self.assertFalse(self.win) class TestMyWin(BaseTestMyWin, unittest.TestCase): pass try: w = MPI.Win.Create(MPI.BOTTOM).Free() except NotImplementedError: del TestMyWin # --- import os, tempfile class MyFile(MPI.File): def __new__(cls, file=None): return MPI.File.__new__(cls, file) def close(self): MPI.File.Close(self) class BaseTestMyFile(object): def setUp(self): fd, fname = tempfile.mkstemp(prefix='mpi4py') os.close(fd) amode = MPI.MODE_RDWR | MPI.MODE_CREATE | MPI.MODE_DELETE_ON_CLOSE try: f = MPI.File.Open(MPI.COMM_SELF, fname, amode, MPI.INFO_NULL) self.file = MyFile(f) except Exception: os.remove(fname) raise def tearDown(self): if self.file: self.file.Close() def testSubType(self): self.assertTrue(type(self.file) is not MPI.File) self.assertTrue(isinstance(self.file, MPI.File)) self.assertTrue(isinstance(self.file, MyFile)) def testFree(self): self.assertTrue(self.file) self.file.close() self.assertFalse(self.file) class TestMyFile(BaseTestMyFile, unittest.TestCase): pass import sys _name, _version = MPI.get_vendor() if _name == 'Open MPI': if sys.platform.startswith('win'): del TestMyFile try: dummy = BaseTestMyFile() dummy.setUp() dummy.tearDown() del dummy except NotImplementedError: try: del TestMyFile except NameError: pass # --- if __name__ == '__main__': unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_threads.py0000644000000000000000000000454012211706251017337 0ustar 00000000000000try: import threading as _threading _HAS_THREADING = True except ImportError: import dummy_threading as _threading _HAS_THREADING = False Thread = _threading.Thread try: current_thread = _threading.current_thread # Py 3.X except AttributeError: current_thread = _threading.currentThread # Py 2.X import mpi4py.rc mpi4py.rc.thread_level = 'multiple' from mpi4py import MPI import mpiunittest as unittest class TestMPIThreads(unittest.TestCase): REQUIRED = MPI.THREAD_SERIALIZED def testThreadLevels(self): levels = [MPI.THREAD_SINGLE, MPI.THREAD_FUNNELED, MPI.THREAD_SERIALIZED, MPI.THREAD_MULTIPLE] if None in levels: return for i in range(len(levels)-1): self.assertTrue(levels[i] < levels[i+1]) try: provided = MPI.Query_thread() self.assertTrue(provided in levels) except NotImplementedError: pass def _test_is(self, main=False): try: flag = MPI.Is_thread_main() except NotImplementedError: return self.assertEqual(flag, main) if _VERBOSE: from sys import stderr thread = current_thread() name = thread.getName() log = lambda m: stderr.write(m+'\n') log("%s: MPI.Is_thread_main() -> %s" % (name, flag)) def testIsThreadMain(self): self._test_is(main=True) try: provided = MPI.Query_thread() except NotImplementedError: return if provided < self.REQUIRED: return T = [] for i in range(5): t = Thread(target=self._test_is, args = (not _HAS_THREADING,)) 
T.append(t) if provided == MPI.THREAD_SERIALIZED: for t in T: t.start() t.join() elif provided == MPI.THREAD_MULTIPLE: for t in T: t.start() for t in T: t.join() _name, _version = MPI.get_vendor() if _name == 'Open MPI': TestMPIThreads.REQUIRED = MPI.THREAD_MULTIPLE if _name == 'LAM/MPI': TestMPIThreads.REQUIRED = MPI.THREAD_MULTIPLE _VERBOSE = False #_VERBOSE = True if __name__ == '__main__': import sys if '-v' in sys.argv: _VERBOSE = True unittest.main() mpi4py_1.3.1+hg20131106.orig/test/test_win.py0000644000000000000000000001072312211706251016502 0ustar 00000000000000import sys from mpi4py import MPI import mpiunittest as unittest try: from sys import getrefcount except ImportError: class getrefcount(object): def __init__(self, arg): pass def __eq__(self, other): return True def __add__(self, other): return self def __sub__(self, other): return self def memzero(m): n = len(m) if n == 0: return try: zero = '\0'.encode('ascii') m[0] = zero except TypeError: zero = 0 m[0] = zero for i in range(n): m[i] = zero class BaseTestWin(object): COMM = MPI.COMM_NULL INFO = MPI.INFO_NULL def setUp(self): try: self.mpi_memory = MPI.Alloc_mem(10) self.memory = self.mpi_memory memzero(self.memory) except MPI.Exception: from array import array self.mpi_memory = None self.memory = array('B',[0]*10) refcnt = getrefcount(self.memory) self.WIN = MPI.Win.Create(self.memory, 1, self.INFO, self.COMM) if type(self.memory).__name__ == 'buffer': self.assertEqual(getrefcount(self.memory), refcnt+1) else: if sys.version_info[:3] < (3, 3): self.assertEqual(getrefcount(self.memory), refcnt) else: self.assertEqual(getrefcount(self.memory), refcnt+1) def tearDown(self): refcnt = getrefcount(self.memory) self.WIN.Free() if type(self.memory).__name__ == 'buffer': self.assertEqual(getrefcount(self.memory), refcnt-1) else: if sys.version_info[:3] < (3, 3): self.assertEqual(getrefcount(self.memory), refcnt) else: self.assertEqual(getrefcount(self.memory), refcnt-1) if self.mpi_memory: MPI.Free_mem(self.mpi_memory) def testGetMemory(self): memory = self.WIN.memory pointer = MPI.Get_address(memory) length = len(memory) base, size, dunit = self.WIN.attrs self.assertEqual(size, length) self.assertEqual(dunit, 1) self.assertEqual(base, pointer) def testAttributes(self): cgroup = self.COMM.Get_group() wgroup = self.WIN.Get_group() grpcmp = MPI.Group.Compare(cgroup, wgroup) cgroup.Free() wgroup.Free() self.assertEqual(grpcmp, MPI.IDENT) base, size, unit = self.WIN.attrs self.assertEqual(size, len(self.memory)) self.assertEqual(unit, 1) self.assertEqual(base, MPI.Get_address(self.memory)) def testGetAttr(self): base = MPI.Get_address(self.memory) size = len(self.memory) unit = 1 self.assertEqual(size, self.WIN.Get_attr(MPI.WIN_SIZE)) self.assertEqual(unit, self.WIN.Get_attr(MPI.WIN_DISP_UNIT)) self.assertEqual(base, self.WIN.Get_attr(MPI.WIN_BASE)) def testGetSetErrhandler(self): for ERRHANDLER in [MPI.ERRORS_ARE_FATAL, MPI.ERRORS_RETURN, MPI.ERRORS_ARE_FATAL, MPI.ERRORS_RETURN,]: errhdl_1 = self.WIN.Get_errhandler() self.assertNotEqual(errhdl_1, MPI.ERRHANDLER_NULL) self.WIN.Set_errhandler(ERRHANDLER) errhdl_2 = self.WIN.Get_errhandler() self.assertEqual(errhdl_2, ERRHANDLER) errhdl_2.Free() self.assertEqual(errhdl_2, MPI.ERRHANDLER_NULL) self.WIN.Set_errhandler(errhdl_1) errhdl_1.Free() self.assertEqual(errhdl_1, MPI.ERRHANDLER_NULL) def testGetSetName(self): try: name = self.WIN.Get_name() self.WIN.Set_name('mywin') self.assertEqual(self.WIN.Get_name(), 'mywin') self.WIN.Set_name(name) self.assertEqual(self.WIN.Get_name(), name) 
except NotImplementedError: pass

class TestWinSelf(BaseTestWin, unittest.TestCase):
    COMM = MPI.COMM_SELF

class TestWinWorld(BaseTestWin, unittest.TestCase):
    COMM = MPI.COMM_WORLD

try:
    w = MPI.Win.Create(MPI.BOTTOM, 1, MPI.INFO_NULL, MPI.COMM_SELF).Free()
except NotImplementedError:
    del BaseTestWin, TestWinSelf, TestWinWorld

_name, _version = MPI.get_vendor()
if _name == 'Open MPI':
    if _version < (1, 4, 0):
        if MPI.Query_thread() > MPI.THREAD_SINGLE:
            del TestWinWorld
elif _name == 'MPICH2':
    if 'win' in sys.platform:
        del BaseTestWin.testAttributes

if __name__ == '__main__':
    unittest.main()
mpi4py_1.3.1+hg20131106.orig/tox.ini0000644000000000000000000000065312211706251014631 0ustar 00000000000000# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it,
# "pip install tox" and then run "tox" from this directory.
[tox]
envlist = py24, py25, py26, py27, py31, py32, py33,

[testenv]
deps =
commands = {envpython} {toxinidir}/test/runtests.py --quiet --no-builddir []
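
The ``[testenv]`` command above drives the whole suite through ``test/runtests.py``.
A minimal usage sketch for running that script directly, without tox — assuming an
in-place build and an MPI launcher named ``mpiexec`` on the PATH (both assumptions,
not recorded in this tarball):

    $ python test/runtests.py --quiet --no-builddir
    $ mpiexec -n 2 python test/runtests.py --quiet --no-builddir   # example 2-process run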