pysph-master/.coveragerc:

[run]
plugins = Cython.Coverage
source = pysph, pyzoltan
omit =
    */tests/*
    */examples/*
branch = True
parallel = True

pysph-master/.gitignore:

*.pyc
*.o
*.c
*.cpp
*~
*.so
*.orig
*.npz
*.log
*.pyd
*.pdf
test.pyx
PySPH.egg-info/
build/
dist/
.tox/
*.out
*_output

pysph-master/.travis.yml:

os: ubuntu
dist: xenial
language: python
python:
  - 2.7
  - 3.7
install:
  - sudo apt-get update
  - if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O miniconda.sh; else wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; fi
  - bash miniconda.sh -b -p $HOME/miniconda
  - export PATH="$HOME/miniconda/bin:$PATH"
  - hash -r
  - conda config --set always_yes yes --set changeps1 no
  - conda update -q conda
  - conda info -a
  - conda config --add channels conda-forge
  - conda config --add channels defaults
  - conda install -c conda-forge pocl==1.2 pyopencl
  - conda install -c defaults virtualenv cython
  - python -c 'import pyopencl as cl'
  - pip install beaker tox tox-travis
script:
  - tox

pysph-master/CHANGES.rst:

1.0b1
-----

* Release date: still under development.
* Move pyzoltan and cyarray into their own packages on PyPI.

1.0a6
-----

90 pull requests were merged for this release.
Thanks to the following who contributed to this release (in alphabetical
order): A Dinesh, Abhinav Muta, Aditya Bhosale, Ananyo Sen, Deep Tavker,
Prabhu Ramachandran, Vikas Kurapati, nilsmeyerkit, Rahul Govind, Sanka Suraj.

* Release date: 26th November, 2018.

* Enhancements:

  * Initial support for transparently running PySPH on a GPU via OpenCL.
  * Changed the API for how adaptive DT is computed; this is now to be set
    in the particle array properties called ``dt_cfl, dt_force, dt_visc``.
  * Support for non-pairwise particle interactions via the ``loop_all``
    method. This is useful for MD simulations.
  * Add support for ``py_stage1, py_stage2, ...`` methods in the integrator.
  * Add support for ``py_initialize`` and ``initialize_pair`` in equations.
  * Support for using different sets of equations for different stages of
    the integration.
  * Support to call arbitrary Python code from a ``Group`` via the
    ``pre/post`` callback arguments.
  * Pass ``t, dt`` to the reduce method.
  * Allow particle array properties to have strides; this allows us to
    define properties with multiple components. For example, if you need 3
    values per particle, you can set the stride to 3.
  * Mayavi viewer can now show non-real particles also if saved in the output.
  * Some improvements to the simple remesher of particles.
  * Add a simple STL importer to import geometries.
  * Allow the user to specify the OpenMP schedule.
  * Better documentation on equations and using a different compiler.
  * Print a convenient warning when particles are diverging or if ``h, m``
    are zero.
  * Abstract the code generation into a common core which supports Cython,
    OpenCL and CUDA. This will be pulled into a separate package in the
    next release.
  * New GPU NNPS algorithms including a very fast oct-tree.
  * Added several SPHysics test cases to the examples.

* Schemes:

  * Add a working Implicit Incompressible SPH scheme (of Ihmsen et al., 2014).
  * Add the GSPH scheme from SPH2D and all the approximate Riemann solvers
    from there.
  * Add code for Shepard and MLS-based density corrections.
  * Add kernel corrections proposed by Bonet and Lok (1999).
  * Add corrections from the CRKSPH paper (2017).
  * Add basic equations of Parshikov (2002) and Zhang, Hu, Adams (2017).

* Bug fixes:

  * Ensure that the order of equations is preserved.
  * Fix bug with dumping VTK files.
  * Fix bug in the Adami, Hu, Adams scheme in the continuity equation.
  * Fix mistake in the WCSPH scheme for solid bodies.
  * Fix bug with periodicity along the z-axis.

1.0a5
-----

* Release date: 17th September, 2017.
* Mayavi viewer now supports empty particle arrays.
* Fix error in scheme chooser which caused problems with default scheme
  property values.
* Add StarCluster support/documentation so PySPH can be easily used on EC2.
* Improve the particle array so it automatically ravels the passed arrays
  and also accepts constant values without needing an array each time.
* Add a few new examples.
* Added 2D and 3D viewers for Jupyter notebooks.
* Add several new Wendland Quintic kernels.
* Add an option to measure coverage of Cython code.
* Add the EDAC scheme.
* Move the project to github.
* Improve documentation and reference section.
* Fix various bugs.
* Switch to using pytest instead of nosetests.
* Add a convenient geometry creation module in ``pysph.tools.geometry``.
* Add support to script the viewer with a Python file, see ``pysph view -h``.
* Add several new NNPS schemes like extended spatial hashing, SFC,
  oct-trees, etc.
* Improve the Mayavi viewer so one can view the velocity vectors and any
  other vectors.
* Viewer now has a button to edit the visualization properties easily.
* Add simple tests for all available kernels. Add a ``SuperGaussian`` kernel.
* Add a basic Dockerfile for PySPH to help with the CI testing.
* Update the build so PySPH can be built with a system Zoltan installation
  that is part of Trilinos using the ``USE_TRILINOS`` environment variable.
* Wrap the ``Zoltan_Comm_Resize`` function in ``pyzoltan``.
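The strided properties introduced in 1.0a6 above pack multiple components per particle into a single flat (raveled) array. The sketch below illustrates the indexing convention only — it is plain Python, not PySPH's actual ``ParticleArray`` implementation (the real API takes a ``stride`` argument when adding a property):

```python
# Conceptual illustration of a strided particle property: with stride 3,
# component j of particle i lives at flat index i * stride + j.
stride = 3
n_particles = 4
prop = [0.0] * (n_particles * stride)


def set_component(prop, i, j, value, stride=3):
    """Set component j of particle i in the flat array."""
    prop[i * stride + j] = value


def get_component(prop, i, j, stride=3):
    """Read component j of particle i from the flat array."""
    return prop[i * stride + j]


# Give particle 2 the vector (1.0, 2.0, 3.0):
for j, v in enumerate((1.0, 2.0, 3.0)):
    set_component(prop, 2, j, v)

print([get_component(prop, 2, j) for j in range(3)])  # [1.0, 2.0, 3.0]
```

The same flat layout is what makes it possible to pass a raveled array straight through to generated Cython or OpenCL code without repacking.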
1.0a4
-----

* Release date: 14th July, 2016.
* Improve many examples to make it easier to make comparisons.
* Many equation parameters no longer have defaults to prevent accidental
  errors from not specifying important parameters.
* Added support for ``Scheme`` classes that manage the generation of
  equations and solvers. A user simply needs to create the particles and
  set up a scheme with the appropriate parameters to simulate a problem.
* Add support to easily handle multiple rigid bodies.
* Add support to dump HDF5 files if h5py_ is installed.
* Add support to directly dump VTK files using either Mayavi_ or
  PyVisfile_, see ``pysph dump_vtk``.
* Improved the nearest neighbor code, which gives about a 30% increase in
  performance in 3D.
* Remove the need for the ``windows_env.bat`` script on Windows. This is
  automatically set up internally.
* Add a test that checks if all examples run.
* Remove unused command line options and add a ``--max-steps`` option to
  allow a user to run a specified number of iterations.
* Added Ghia et al.'s results for lid-driven-cavity flow for easy comparison.
* Added some experimental results for the dam break problem.
* Use argparse instead of optparse as the latter is deprecated in Python 3.x.
* Add ``pysph.tools.automation`` to facilitate easier automation and
  reproducibility of PySPH simulations.
* Add spatial hash and extended spatial hash NNPS algorithms for comparison.
* Refactor and clean up the NNPS related code.
* Add several gas-dynamics examples and the ``ADEKEScheme``.
* Work with mpi4py_ version 2.0.0 and older versions.
* Fixed a major bug with the TVF implementation and add support for 3D
  simulations with the TVF.
* Fix bug with uploaded tarballs that broke ``pip install pysph`` on Windows.
* Fix the viewer UI to continue playing files when refresh is pushed.
* Fix bugs with the timestep values dumped in the outputs.
* Fix floating point issues with timesteps, where examples would run a
  final extremely tiny timestep in order to exactly hit the final time.

.. _h5py: http://www.h5py.org
.. _PyVisfile: http://github.com/inducer/pyvisfile
.. _Mayavi: http://code.enthought.com/projects/mayavi/

1.0a3
-----

* Release date: 18th August, 2015.
* Fix bug with the ``output_at_times`` specification for the solver.
* Put generated sources and extensions into a platform specific directory
  in ``~/.pysph/sources/`` to avoid problems with multiple Python
  versions, operating systems, etc.
* Use locking while creating extension modules to prevent problems when
  multiple processes generate the same extension.
* Improve the ``Application`` class so users can subclass it to create
  examples. Users can also add their own command line arguments and add
  pre/post step/stage callbacks by creating appropriate methods.
* Moved examples into ``pysph.examples``. This makes the examples reusable
  and easier to run, as installation of PySPH will also make the examples
  available. The examples also perform the post-processing to make them
  completely self-contained.
* Add support to write compressed output.
* Add support to set the kernel from the command line.
* Add a new ``pysph`` script that supports ``view``, ``run``, and ``test``
  sub-commands. The ``pysph_viewer`` is now removed; use ``pysph view``
  instead.
* Add a simple remeshing tool in ``pysph.solver.tools.SimpleRemesher``.
* Clean up the symmetric eigenvalue computing routines used for solid
  mechanics problems and allow them to be used with OpenMP.
* The viewer can now view the velocity magnitude (``vmag``) even if it is
  not present in the data.
* Port all examples to use the new ``Application`` API.
* Do not display unnecessary compiler warnings when there are no errors,
  but display verbose details when there is an error.
1.0a2
-----

* Release date: 12th June, 2015.
* Support for tox_; this makes it trivial to test PySPH on py26, py27 and
  py34 (and potentially more if needed).
* Fix bug in the code generator where it was unable to import pysph before
  it is installed.
* Support installation via ``pip`` by allowing ``egg_info`` to be run
  without cython or numpy.
* Added a `Codeship CI build `_ using tox for py27 and py34.
* CI builds for Python 2.7.x and 3.4.x.
* Support for Python-3.4.x.
* Support for Python-2.6.x.

.. _tox: https://pypi.python.org/pypi/tox

1.0a1
-----

* Release date: 3rd June, 2015.
* First public release of the new PySPH code which uses code-generation
  and is hosted on `bitbucket `_.
* OpenMP support.
* MPI support using `Zoltan `_.
* Automatic code generation from high-level Python code.
* Support for various multi-step integrators.
* Added an interpolator utility module that interpolates the particle data
  onto a desired set of points (or grids).
* Support for inlets and outlets.
* Support for basic `Gmsh `_ input/output.
* Plenty of examples for various SPH formulations.
* Improved documentation.
* Continuous integration builds on `Shippable `_, `Drone.io `_, and
  `AppVeyor `_.

pysph-master/LICENSE.txt:

Unless otherwise specified by LICENSE.txt files in individual directories,
all code is Copyright (c) 2009-2015, the PySPH developers.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.

3.
   Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

pysph-master/MANIFEST.in:

include MANIFEST.in Makefile *.bat *.py *.rst *.sh *.txt *.yml *.toml
recursive-include docs *.*
recursive-include pysph *.pxd *.pyx *.mako *.txt.gz *.h
recursive-exclude pysph *.cpp
recursive-include pysph/examples *.py *.gz ndspmhd*.npz *.rst

pysph-master/Makefile:

ROOT = $(shell pwd)
MAKEFILE = $(ROOT)/Makefile
SRC = $(ROOT)
PKG1 = $(SRC)/pysph
SUBPKG1 = base sph sph/solid_mech parallel
DIRS := $(foreach dir,$(SUBPKG1),$(PKG1)/$(dir))

# this is used for cython files on recursive call to make
PYX = $(wildcard *.pyx)
MPI4PY_INCL = $(shell python -c "import mpi4py; print mpi4py.get_include()")

# the default target to make
all : build

.PHONY : $(DIRS) bench build

build :
	python setup.py build_ext --inplace

$(DIRS) :
	cd $@; python $(ROOT)/pyzoltan/core/generator.py
	$(MAKE) -f $(MAKEFILE) -C $@ cythoncpp ROOT=$(ROOT)

%.c : %.pyx
	python `which cython` -I$(SRC) \
	  -I$(MPI4PY_INCL) $<

%.cpp : %.pyx
	python `which cython` --cplus -I$(SRC) -I$(MPI4PY_INCL) $<

%.html : %.pyx
	python `which cython` -I$(SRC) -I$(MPI4PY_INCL) -a $<

cython : $(PYX:.pyx=.c)

cythoncpp : $(PYX:.pyx=.cpp)

_annotate : $(PYX:.pyx=.html)

annotate :
	for f in $(DIRS); do $(MAKE) -f $(MAKEFILE) -C $${f} _annotate ROOT=$(ROOT); done

clean :
	python setup.py clean
	-for dir in $(DIRS); do rm -f $$dir/*.c; done
	-for dir in $(DIRS); do rm -f $$dir/*.cpp; done

cleanall : clean
	-for dir in $(DIRS); do rm -f $$dir/*.so; done
#	-rm $(patsubst %.pyx,%.c,$(wildcard $(PKG)/*/*.pyx))

test :
	python `which pytest` -m 'not slow' pysph

testall :
	python `which pytest` pysph

epydoc :
	python cython-epydoc.py --config epydoc.cfg pysph

doc :
	cd docs; make html

develop :
	python setup.py develop

install :
	python setup.py install

pysph-master/README.rst:

PySPH: a Python-based SPH framework
-----------------------------------

|Travis Status| |Shippable Status| |Appveyor Status| |Codeship Status|

PySPH is an open source framework for Smoothed Particle Hydrodynamics
(SPH) simulations. It is implemented in `Python `_ and the performance
critical parts are implemented in `Cython `_ and PyOpenCL_.

PySPH allows users to write their high-level code in pure Python. This
Python code is automatically converted to high-performance Cython or
OpenCL, which is compiled and executed. PySPH can also be configured to
work seamlessly with OpenMP, OpenCL, and MPI.

The latest documentation for PySPH is available at
`pysph.readthedocs.org `_.

.. |Travis Status| image:: https://travis-ci.org/pypr/pysph.svg?branch=master
   :target: https://travis-ci.org/pypr/pysph
.. |Shippable Status| image:: https://api.shippable.com/projects/59272c73b2b3a60800b215d7/badge?branch=master
   :target: https://app.shippable.com/github/pypr/pysph
.. |Codeship Status| image:: https://app.codeship.com/projects/37370120-23ab-0135-b8f4-5ed227e7b019/status?branch=master
   :target: https://codeship.com/projects/222098
.. |Appveyor Status| image:: https://ci.appveyor.com/api/projects/status/q7ujoef1xbguk4wx
   :target: https://ci.appveyor.com/project/prabhuramachandran/pysph-00bq8

Here are `videos `_ of some example problems solved using PySPH.

.. _PyOpenCL: https://documen.tician.de/pyopencl/
.. _PyZoltan: https://github.com/pypr/pyzoltan

Features
--------

- Flexibility to define arbitrary SPH equations operating on particles
  in pure Python.
- Define your own multi-step integrators in pure Python.
- High performance: our performance is comparable to hand-written solvers
  implemented in FORTRAN.
- Seamless multi-core support with OpenMP.
- Seamless GPU support with PyOpenCL_.
- Seamless parallel support using `Zoltan `_ and PyZoltan_.

SPH formulations
----------------

PySPH ships with a variety of standard SPH formulations along with basic
examples. Some of the formulations available are:

- `Weakly Compressible SPH (WCSPH) `_ for free-surface flows (Gesteira et
  al. 2010, Journal of Hydraulic Research, 48, pp. 6--27)
- `Transport Velocity Formulation `_ for incompressible fluids (Adami et
  al. 2013, JCP, 241, pp. 292--307)
- `SPH for elastic dynamics `_ (Gray et al. 2001, CMAME, Vol. 190,
  pp. 6641--6662)
- `Compressible SPH `_ (Puri et al. 2014, JCP, Vol. 256, pp. 308--333)
- `Generalized Transport Velocity Formulation (GTVF) `_ (Zhang et al.
  2017, JCP, 337, pp. 216--232)
- `Entropically Damped Artificial Compressibility (EDAC) `_ (Ramachandran
  et al. 2019, Computers and Fluids, 179, pp. 579--594)
- `delta-SPH `_ (Marrone et al. CMAME, 2011, 200, pp. 1526--1542)
- `Dual Time SPH (DTSPH) `_ (Ramachandran et al. arXiv preprint)
- `Incompressible SPH (ISPH) `_ (Cummins et al. JCP, 1999, 152,
  pp. 584--607)
- `Simple Iterative SPH (SISPH) `_ (Muta et al. arXiv preprint)
- `Implicit Incompressible SPH (IISPH) `_ (Ihmsen et al.
  2014, IEEE Trans. Vis. Comput. Graph., 20, pp. 426--435)
- `Godunov SPH (GSPH) `_ (Inutsuka et al. JCP, 2002, 179, pp. 238--267)
- `Conservative Reproducing Kernel SPH (CRKSPH) `_ (Frontiere et al. JCP,
  2017, 332, pp. 160--209)
- `Approximate Godunov SPH (AGSPH) `_ (Puri et al. JCP, 2014,
  pp. 432--458)
- `Adaptive Density Kernel Estimate (ADKE) `_ (Sigalotti et al. JCP,
  2006, pp. 124--149)
- `Akinci `_ (Akinci et al. ACM Trans. Graph., 2012, pp. 62:1--62:8)

Boundary conditions from the following papers are implemented:

- `Generalized Wall BCs `_ (Adami et al. JCP, 2012, pp. 7057--7075)
- `Do nothing type outlet BC `_ (Federico et al. European Journal of
  Mechanics - B/Fluids, 2012, pp. 35--46)
- `Outlet Mirror BC `_ (Tafuni et al. CMAME, 2018, pp. 604--624)
- `Method of Characteristics BC `_ (Lastiwka et al. International Journal
  for Numerical Methods in Fluids, 2012, pp. 35--46)
- `Hybrid BC `_ (Negi et al. arXiv preprint)

Corrections proposed in the following papers are also part of PySPH:

- `Corrected SPH `_ (Bonet et al. CMAME, 1999, pp. 97--115)
- `hg-correction `_ (Hughes et al. Journal of Hydraulic Research,
  pp. 105--117)
- `Tensile instability correction `_ (Monaghan J. J. JCP, 2000,
  pp. 290--311)
- Particle shift algorithms (`Xu et al `_. JCP, 2009, pp. 6703--6725),
  (`Skillen et al `_. CMAME, 2013, pp. 163--173)

Surface tension models are implemented from:

- `Morris surface tension`_ (Morris et al. International Journal for
  Numerical Methods in Fluids, 2000, pp. 333--353)
- `Adami Surface tension formulation `_ (Adami et al. JCP, 2010,
  pp. 5011--5021)

.. _Morris surface tension: https://dx.doi.org/10.1002/1097-0363(20000615)33:3<333::AID-FLD11>3.0.CO;2-7

Installation
------------

Up-to-date details on how to install PySPH on Linux/OS X and Windows are
available from `here `_.

If you wish to see a working build/test script please see our
`shippable.yml `_. For Windows platforms see the `appveyor.yml `_.
Running the examples
--------------------

You can verify the installation by exploring some examples. A fairly
quick running example (taking about 20 seconds) would be the following::

    $ pysph run elliptical_drop

This requires that Mayavi be installed. The saved output data can be
viewed by running::

    $ pysph view elliptical_drop_output/

A more interesting example would be a 2D dam-break example (this takes
about 30 minutes in total to run)::

    $ pysph run dam_break_2d

The solution can be viewed live by running (on another shell)::

    $ pysph view

The generated output can also be viewed, and the newly generated output
files can be refreshed on the viewer UI.

A 3D version of the dam-break problem is also available, and may be run
as::

    $ pysph run dam_break_3d

This runs the 3D dam-break problem, which is also a SPHERIC benchmark
`Test 2 `_.

.. figure:: https://github.com/pypr/pysph/raw/master/docs/Images/db3d.png
   :width: 550px
   :alt: Three-dimensional dam-break example

PySPH is more than a tool for wave-body interactions::

    $ pysph run cavity

This runs the driven cavity problem using the transport velocity
formulation of Adami et al. The output directory ``cavity_output`` will
also contain streamlines and other post-processed results after the
simulation completes. For example, the streamlines look like the
following image:

.. figure:: https://github.com/pypr/pysph/raw/master/docs/Images/ldc-streamlines.png
   :width: 550px
   :alt: Lid-driven-cavity example

If you want to use PySPH for elastic dynamics, you can try some of the
examples from the ``pysph.examples.solid_mech`` package::

    $ pysph run solid_mech.rings

which runs the problem of the collision of two elastic rings:

.. figure:: https://github.com/pypr/pysph/raw/master/docs/Images/rings-collision.png
   :width: 550px
   :alt: Collision of two steel rings

The auto-generated code for the example resides in the directory
``~/.pysph/source``. A note of caution, however: it's not for the
faint-hearted.
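The high-level code that gets translated into that generated source is written as plain Python classes with methods like ``initialize`` and ``loop``. The snippet below is a self-contained schematic of that style only: a real equation subclasses ``pysph.sph.equation.Equation`` and is compiled by PySPH's code generator, while here the framework's neighbour loop and kernel value ``WIJ`` are mimicked by a hand-rolled driver.

```python
# Schematic of the pure-Python style in which SPH equations are written.
# NOT the real PySPH API: no pysph import, and the driver loop below is a
# stand-in for the generated neighbour iteration.
class SummationDensity:
    """Accumulate rho[d_idx] += m[s_idx] * W_ij over all neighbours."""

    def initialize(self, d_idx, d_rho):
        # Reset the destination particle's density before the pair loop.
        d_rho[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_rho, s_m, WIJ):
        # WIJ would be the kernel value for this pair, supplied by PySPH.
        d_rho[d_idx] += s_m[s_idx] * WIJ


# Drive it by hand with two particles and a fixed fake kernel value:
rho = [0.0, 0.0]
m = [1.0, 1.0]
eq = SummationDensity()
for d in range(2):
    eq.initialize(d, rho)
    for s in range(2):
        eq.loop(d, s, rho, m, WIJ=0.5)
print(rho)  # [1.0, 1.0]: each particle sees two neighbours with W = 0.5
```

In actual use, PySPH inspects the argument names (``d_*`` for destination arrays, ``s_*`` for source arrays) and emits Cython or OpenCL that runs this loop over real neighbour lists.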
There are many more examples; they can be listed by simply running::

    $ pysph run

Credits
-------

PySPH is primarily developed at the `Department of Aerospace Engineering,
IIT Bombay `_. We are grateful to IIT Bombay for their support. Our
primary goal is to build a powerful SPH-based tool for both application
and research. We hope that this makes it easy to perform reproducible
computational research.

To see the list of contributors, see the `github contributors page `_.

Some earlier developers not listed above are:

- Pankaj Pandey (stress solver and improved load balancing, 2011)
- Chandrashekhar Kaushik (original parallel and serial implementation in
  2009)

Support
-------

If you have any questions or are running into any difficulties with
PySPH, please email or post your questions on the pysph-users mailing
list here: https://groups.google.com/d/forum/pysph-users

Please also take a look at the `PySPH issue tracker `_.

pysph-master/appveyor.yml:

build: false
platform: x64

environment:
  distutils_use_sdk: 1
  sdkver: "v7.0"

cache:
  - C:\Users\appveyor\.cache

init:
  - ps: $Env:sdkbin = "C:\Program Files\Microsoft SDKs\Windows\" + $Env:sdkver + "\Bin"
  - ps: $Env:sdkverpath = "C:/Program Files/Microsoft SDKs/Windows/" + $Env:sdkver + "/Setup/WindowsSdkVer.exe"
  - ps: $Env:path = "C:\Enthought\edm;" + $Env:sdkbin + ";" + $Env:path

install:
  # Install edm, needed so we can quickly install numpy.
  - ps: Start-FileDownload "https://package-data.enthought.com/edm/win_x86_64/1.6/edm_1.6.1_x86_64.msi"
  - start /wait msiexec /a edm_1.6.1_x86_64.msi /qn /log install.log TARGETDIR=c:\
  - edm info
  - edm install -y numpy cython pytest mock h5py
  # Install pysph related dependencies.
  - edm run -- pip install -r requirements.txt -r requirements-test.txt
  # Build pysph.
  - edm run -- python setup.py develop

test_script:
  # Run the tests.
  - edm run -- python -m pytest -v -m "not slow or slow" --junit-xml=pytest.xml

pysph-master/docker/README.md:

## Docker related files

The docker files are available at https://hub.docker.com/u/pysph/

The `base` sub-directory contains a Dockerfile that is used to make the
base image that can be easily used to test PySPH on both Python-2.7 and
Python-3.5. This is the base image for any other PySPH related docker
images.

The base image only contains the packages necessary to run *all* the
tests. It therefore includes all the dependencies like mpi4py, Zoltan,
Mayavi, and h5py so as to exercise all the tests.

If you update the Dockerfile, build a new image using:

    $ cd base
    $ docker build -t pysph/base:v3 .

Push it to dockerhub (if you have the permissions) and tag it as latest:

    $ docker push pysph/base:v3
    $ docker tag pysph/base:v3 pysph/base:latest
    $ docker push pysph/base:latest

pysph-master/docker/base/Dockerfile:

FROM ubuntu:16.04
MAINTAINER Prabhu Ramachandran

# Install the necessary packages
RUN apt-get update && \
    apt-get install -y \
    build-essential \
    cython \
    cython3 \
    g++ \
    git \
    ipython \
    ipython3 \
    libgomp1 \
    libopenmpi-dev \
    libtrilinos-zoltan-dev \
    mayavi2 \
    python \
    python-dev \
    python-execnet \
    python-h5py \
    python-mako \
    python-matplotlib \
    python-mock \
    python-mpi4py \
    python-nose \
    python-numpy \
    python-pip \
    python-psutil \
    python-qt4 \
    python-setuptools \
    python-unittest2 \
    python3 \
    python3-h5py \
    python3-mako \
    python3-matplotlib \
    python3-mpi4py \
    python3-nose \
    python3-numpy \
    python3-pip \
    python3-psutil \
    sudo \
    tox \
    vim \
    wget \
    && rm -rf /var/lib/apt/lists/*

# Sudo and the new user
# are needed as one should not run mpiexec as root.
RUN adduser --disabled-password --gecos '' pysph && \
    adduser pysph sudo && \
    echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

ENV HOME=/home/pysph \
    ZOLTAN_INCLUDE=/usr/include/trilinos \
    ZOLTAN_LIBRARY=/usr/lib/x86_64-linux-gnu \
    USE_TRILINOS=1

USER pysph
VOLUME /home/pysph
WORKDIR /home/pysph

pysph-master/docs/Images/dam-break-schematic.png: (binary PNG image, omitted)

pysph-master/docs/Images/db3d.png: (binary PNG image, omitted)
CufM'7HIjhI7-&Z`((\)|Ƭ覿^'/wǝmQ~kÂj=$j͵d뵁$Iα<&BO]9K%Igi㦮I?{LұJaL$M%~FV7&0@heD j!~WQ[eMK+8n.V>q&+&U,omoԚaǫ/2=z狲 b\&Ϧzӛ:tlI_5JG3ysNow5{'w3S`NmL75+78SGr~ml/\#YR`Z!-œza% hU>ZWYFxO##eotWI3,G2]6s}NWMuZZ+^пq{tNfbΧ k8[/90L3fom"|J7Y; slF@(Ru-$[ۍJ'^.lW`z{+ϒk jK41g컎~##NQ)q^ΤιyYN&V{u]uKөkI9=Mb$e@rLd0G#ThJX slF8O0Ԟv)&)RuȞ\z\CE˂K\U|/4Yͨ"9*6 FA(9>^6U58k+H|58Οd]/]5W͡>8/sbҼW܍+٬I6Hv[.ɦHҖV$DִFu sF8֯лڰ>=Xz\dZ=+4{5 L$ $IjZV9INYx*Mk\rhO}k*5 l'X>.խuu7N'*ۧQ~ PAhAɓ2jiL+ǣG_)pa{Mި٦>/jf)I`1%ڨJI Mǿ _8"SVϛ^c)/a7WYJCM?;~f}=QX D|z^Ş1EFI_W+x\k)V)pݢqz>/Ӎu1uE_p#o@SN˝0#8 )F^ JIf1QklXMAy/uY.RdcL.pIo[㗪<9lNuޥe9FZ7趷mxC_~X:zn),{%Dq]^K$-ֈ1yTgmۘx+-hLɨ*$yKa g꽻uݒcOzr=cbNerg}] y3.@RꂍR8m3FqTm-VSX ~ё? (7҉' 8/)T~7ڻuXP7^PovC>ml- HG!K(}7꘠.m(M&~~(p> nvjjN5w97ܗnXI*4/(%I]INfͮ^.k6iݒR&^Fz (FuyK5xK.'Қ#yԿEfM de ]]/9yA w0HbشIuuKNN6L lv WZaNs.=I9zlٮAh%VpoG FVX2]Tƨ1;K1ƨ6Qφv^VU,0X]q7V{.k4!')r95*=|RiTv_>F'_X;`d7֨>u[]u;9 ҈4!Ak&gQ;^yJ,̙|m5K4faܗlXaIh/\%RXwiKj`UW[Q0؈UӅ%edX[jQ~[(ZHRf]uy7Nѧ[]orCх99B#,a%:d+sOҩ^ܔWNFo@X9ҔX#h_((Nzn]^2N> aʍ#.?g.,5torD7nxMoё]=K_Ȟǖ sV|ca]i^*'r6V4JN*TSI:xDݤeW-8>` C F늓M6!nT6¶LԩtrI:4E^Ecg.(ftVGGC=ϗ3ӽh\9ܦ|}2}KWrLvq_ =a>Y~>*0 ae8ɩ4wJE^XnS9i t"-1%~pB;_R!4Qps!N6T{_q{"zJQ8]H7 %[OC$ۊjc{ڊ4FMF~zujbeĬ miC;8em/P!(͎sN<cm1ŔUg\[kFNle)l"lΣJͪT-\+);ȌP/SV|3:ۍM?;F=hm-l(5sŞCb mF %t$0^& ׁtjVAys׭I΄ ź՘}B+uԌ7 *@hQpIuTɁBd- ]I^ u܎qgJzEᬂ䬑O蔼E)V&+ XKkTj1axCk+,qOGcR0*S4TP^*-|FvբBKھ%T=2zdz{3^ox:i^Yb9e=s*-Q0szf; kG%)v)V$ Y%1JNI&f:VF$YLyo|o/DcFLU6NTnH|\SRB6fAԴ`F|bB="%q$NdlwqRJ(R}^l0pyYઌ,([j>y0dJFe{-=ވeE?<VtKt3n毑Vߛ.\s9l\txeo\/%{5ߓ ZWtX( ~ݤff0X, np:nfǣ(4#,567+n.rq۩6ESL͂sNQVz/ɷd P4EC.XFA,tݘ6OMGLG=OuW7XǕ$N3sCҜj}̝0'ڲHޫ5ת4Pã[ae f%<1koW5*/Ǎ˙۳gJ]z-`Mg[9?ֻ飡E!8%IQ+Y4 ȂU@R^%I"「zZ^XCxֻ,_W),Yو`{N{J#cɻяK^^NZN&74E+_lv\SXx\0{_Xq7.FVMyT5I_8tAi>*LWu>V Ѻ ؋EF Fk78S n^TiLKg53# 3|*0UG؎!=͝aA1Yh[gF&4p(cgeOcOq7:/V{P66嵐s4}Kt4*,Ky?~VSN2wI8Oވiuz5;5&{~>m¤d4^cՂc 34aävﺼM ԰zc1u%%nRִU 2RǥͩXɎڊflǨ ctǘb]b|bmlA/ĕGzT\b)DJ.yO.)^[BhFr+?.пt7og8rdZX6US\i sGF0(ZV6",ܽڴ6qJ5ڡh0ʚ`H;211(wϔ~)餁.RMx(k4Y{B@djZ_Xdтۣ e"PKT*@7k$ɘ053Jr7gVgyby$KaA9[TҨcok)A N :ְڻti}@{y֯\b s)#=D:e*iЏ-ޗJX 
s]Q֛Q0d:2:wՇ`U[\O#t:պ^Χ?h{_Ay!~Z$I$[uu$i}ퟖZHWRFkHXQ$M4ix5G#F$Qlzf`E_/ ~1bG/ӵs6,ɯm[uE9^^IVXœM"ZS` kWXaNZ\z >|qg%sҌkm sJ:&9yV/B#aXG?qe26qI4y:vnfگ}=-W> ?WEiDMA(_S潂 , HcF#5|,}\ AXTg)t:kZ!iW..k8&Bia md4uV-}+s*7V`8JV#;œ|rJfrplͯ;?W>jwI?~A.)}f(znkkgS轔,jaiEh" +i66Brw(n獩\(Nz[ 2ao6Wޛ;_#B#eC .FpXd>$;YܜOWo;W[DnިLo[t違ːQ |d~ 2UN_xcLoi~\yt/gvy1e+QtaO`*߲o´_߷Q qԔ՜ry=ROաe9*iV+KMͅ}AMĩЮ+EgY UW>yB# Omj"zr_ץk%)yP+4=+S }R7+c ڥXH<i=H(Pa2k(Pmw|6bBd? [3BY NlTΛb?!/|JyYѶMV]#m$ON$zqܑ& Bfua7H3ymZ-~#+saavNI7/J}J_A_9kmq-9|49IdS2bvئ-[6Xc. npZ/46;{4SXPШHLJ&$kfd(eM]*TLEMIG `6 k=5(e̘aEפ3'}_j]|}_G;dM^12Y g]hݸ%^=[xM78|ZlmDSQ%-ZZ{՜W{WTc@^SoXI"|)Juz'Y?ŋ༢VJXF JCSUQ3ߌɪR5iH5 ݭ5jr+Wh Ja-/&Ukdz5pCf6_maz+{C[M4~gziS7%1X~~nX(=$>UrxY^wy-@dAjhciu*2e-? IDATLG,"߿ c)k*_ځ.*F7:nfGS}C R%TAw͈ P}<"j#5hQTJ^E!MGxǾN:v^YKCiy H_7+I{c嵤gPSLu}+Z_jϕFGATF%E۪ž.fF7XVŏɦ{⽲IøFIJr&O85ַMF6M56:6M[60TVtյ;nMa//[dݰi"P׾2T'CD3sN}Mٟd`[lj=[<}Yfۂx@FΤ0' ǥlz.wGoj' k$6(]cuD;sq^d09C VVSNccz6_o;t HC_%E5? PARzlư$MRÓQz=e7)k/ܐ`0ίz_fsWjҳqVkc*B@qL8s^OVS\-%{Ms0t虌>[,F`)MXS|-nb[.zz{0Jq!6/Ppƚj}̦T`٣-#RUyotX[dg0G(wj(xvyAp2z瑤 8:#2ז F[7Z´J|?ߏ{eFkdgP뤡nÊge|'U.{+ .DaN"|.@vZ=1=b=QVʤb} zKJrޛFq\wWUIP EzDj Ւ-іl,˚9Rȟh|O>3cmjQp<%el/"i&HH$dḞȈW@&Q/3"22_xw_'ҹ-0 X42 s s9L7r M8뎵t!- B@n?XGg#tOHML n9F_'U*II17KSv_c|nu:5&[==5ovCAx&B᫕.ZU很B~Zdǥ6)F\MROU1(ž f! Ae$<z@LJYё9 \Ohd`Jh3OJt|8bv ?҅˽Qr0vc9ZA0V0gbx6^wCU2qR'MW˘Ga Ì 0W#- kzjhE$b z7hND#nd7P4fiթu}XtH-ΩVE'[qW{jA:0 jH-¹l瞩Ly٣=:T~l|ė᧧jsz7AmcOHUAZ1` (>jh XX42 36afR=U@G h JzEM[(XMmc{s<Eg`#U+r^Q^Jg('_*vIl^ܼpD7 ak ?Mʄg<Ü:'S!X:WS3YgcߍQ9U{MpݥWdiAo`|`^=dGސZ)c|qYbrY pG`a\P cC(uyc/Q0̩'UͿPZD {Fe\l3Wddkp)0Fٷn{}|Ac'܆&P؈ hHeQ"jUlz݄,!lEn9)W4cy h5{kN+)5u^Qf=T9?qa?:j5,'w}P2))ΘǦ͎0s+?ujbM7@ Ola׺&Bm~SP]4hSRa߳ạMqmbE#\ZM]{nTC/Aw'ڏnwp\$^jj2A\6!FHV(@lhA[L 81su-$\9K}VnpVKjSnġ]t:k VkT=Aߞ鍫= /kxS'ufP&)V%%- J`E#\GXf3w@{>ìRǽI Jb @m\}T*Rh  &?_jwX}\T97Lo~$o H"n"ܴI`?$X0ܨFz3[]e*?UsZE#Qi3/SU9u-7𳵋T͜J5Jn/5 DDj MJ)(r8Zk)5b$44M3XE#\Tܜ[bnnroa.9My ha6$iK駁H!hhM49O:81 ^HXZh~%&[ǞM*k R(5|hix4*#|jmڠu ,G٤{!|qWu/*U8{3Yu-!$IDլ ݹ?X \hdk*4/ua oJv &tRtZd,UJ!w7 AhȚ+Za5~V{]^oAYajiPmv~. 
1Z,v$#MVi)HWQZDPi:Z@9"#ԫۋ"ݵk)X2 jX425 0+8cu톢{(3&E\bBBe$DqYDc Ef@e⤪F]ƻo2k. ȣdu&2M~ǩUhS``bRS%I1mbglcH VKz@R <G `r/F4 -`$aF3?痮[[k8ǿaf9Una'1c#ciΤ ) $ iN[mvPI @5 In':0h, FJa:C*4(BsNОl@Fa:ERR hT)GM=h AAC >v7L)j/sx!k)Vw7)k>D,s<1ɯ1ܔGFj|L Ȣy5Upȉ0}Ui̟CO#PV;K@~&hQ#2o Q4@ebI,(*ӊ!c`>pV \hd6a ?  _[Hf;ĔIaM6a⅒ (7@f}z"J|x&4UgR։úUQ9Lb yc5ص5M Y:6ZB@xID )7c&BC Z\AL<[Jx #|S׾*#pS݇960Gg* ^m^Ouk`fS1V},_G+L 4ƹ|i]Vi4Mz4&gX)MC+@F^#}ӛ"WdE#\n57fwG{>7 Si(&-tJGJHfڮF(zФѐh4[ E{&9,Y(hd9BVbaK¶MدڙӮmĔv5,rGN̎2Ǥ sUE:GX3bD.B]z+=ȿl0ǫ$ Zg(&l0Qk$$tښpo鸿i/ q0 3 s컗 nf5$b'[,z&Mbyq~JF7gꩩVT:K3%ܴe5B$vӌc-K]ŠIo-|wZCeBw-G༬LO\V3 Pmiqm?Y;3jR h3;=OrGPi _WFZE4mQ!K?ľ87a0Wlp0k B0"Sm@qCT/blJ[gIX8DD.;l`j:Bd0ԻD { -¿o~˽\iԯ1c{8cFeXQkD+4M 7OG&iL*SՂuoJXaq`0Wlp0F] hj@aZ)Pid34!B {|i2Trm3W5 3'P^vܾ};aFg'F*Az.~1V<ْc5GXҌj&UU"Nù=\/B0(cA}"$/F rދ5~r2~Myd暂E#\Fa. BSl755b&&!Zv[ E&l1*PD4%ǵEnw'Dq5͢U9΁zA%fmgѼex][0',G=Ǻ[J g/<"!ӛay~ 2AF&Ҩ*|JyTJ)nlY׳ 8NZj,"0LFyٽkL0%@JD ZU6[1<`bB4b@k %DN`j/$ܰ|&iL?ųvi#x*YuqEnybXueq_39nm݇n؀AoF"E1A]e31QYi,uRX42@E({nlp0k mEicQ#YQd"@. -DV3t)mp:"3Jجm#; OTc)yyi_iZoPSIYu{Y➪{/AD@*#ҐJU6hRyjPIxcEѪ}KűE`8bDa. :pm;!>Ň nf t1ј~fø"`rRL?W)qȉPܾ8zowƬF_F!(Lnb}],z3ba.Bfp5Nz'^[w؉"OjZS(/. 
rk"0k F|`q9_5Ń90klϢ,"?:q,\C#T@#ii)y$ЄDشoXn+Ǎ!5(T0mi gcȣf95Slc%EբYtH$EHUpjMnGIdu疊F4C:3,hL|{^8(gQ޵?w}‹@aUX42x{C/c''bkM1&@3!lJi1 [Oqb"yHWFUZC mL*9siRk17n*=wl A%7e'4ݩU3Zf/as(_83uN!jVNQ, RVzB.6zHÒQeb$ m̋'^Lam/>_25 FY#ͦM|[8߼2% Zh7&(d( 1 UQ_unXgj̘?8*eUr3z~ ec(XZZgS5vԸgp1Zn?POw&SAWq<7)Gky{Ѐu)VT=n5/dE#\U7?|e\l3eZA!"`j: n,&!gp* IEamh4:qh5RUcs `>'ix [ziZ>?chQ&8i=4i"R4lps)QGUkDRQHaNAH)FDa1-SٔLfͯod>2 ì,fLa./M}{^@6cBBF8"48?ǑM7E~ ݸ]zF0 ?B/QˏAE~*1t5ޜa9m9Q5GNt[n yRȤ "$7Nc7`Y`İ]ڤ9tohT24bq0Z¢aF7 seИ܌ͻ^,UHB#!e!d~Jd D[9elMZ F a#grsDj'lr] 6(l䌪痢cŸK~w~VWqKf)a d$!DdjTAm#E Qx74/ujh] ca 0E#T7 sBa!LwyŏRa0(4OpR,yvC0U(Q}j`I]B.-:y4%ߏ/cn~_"^0߽`-P`\W \ s1X񋰂+;g7UA@\(4fQLH4)R%FRyj(s5B Tp/as:պzȴh kO9o ) yshۣSy̢^}@ǟTTx|G.Lhd alFoz냈 S "4\N !HCܲdءUn4o$lHHqTi̯iNS3l)R}դfV61t S \Eldq 4 s{0<sV7clR~w,y±|Φpb.ieFG\aI|9 ARbGVQa-H =^ 8NYcFao~ a5 0ft^-'f23Dca-6R)5BBnY%y@kg$23UTT_ghܸJD^YL}]&y;_/rm._@†'z"iY{8By]ar#3j:Df>ٳϵu@(^ 1i#bR,Q  ~pǞXYUeڂE#sM2u3V6aə-xÝ?^S#܊5EXm'\X?IX!D8rFșR ^y򈙿1 icnWKsIεUfaէk!K3YAcea/՝eMxz2^4#L<]0)o®oW "AÉ?*gUמs*P4)ԭ%uĉ8< |әu3XY9yk2 s昜l߽k/n\0W2QD5r@Y*LlN6p~/ 8Tn"쾑kEWF֪ 3Q^.y4,g6š*_9,XdRmF|.hU"riZt^0M&Z}zsxW6Xg)Iy,h^_,0vHDHhs )(ECL |_ a5]a+5ǯ}A[³q)ҹ厷7yFEh7”F8RNy-aQޱ]`?LO3GC2+F;>'ȣ4cWy"VfRghG,γU;guWx{L yq}cΘfU#Rd)^܄ 7t4$.o|SǺ)=~50W,kNg?xYY0OY1OdF4QDYZi$&ʹ:e.Yw8Ei1r(7F:آ#ܟ}vVt |i4RL7t[S/!'Hs'efۈ9?! (KM%AΙUBq,4!2R^Fk]Y Ti4w豓l0 ,kG{[0=~4 < s9i7q~D:jLL0Fݴ02͹3 K=;/O}17l)$9 T/qne5J#MկZ_Rxp7Qij/'g/-k:f+A mz p.mVfHEbnz#]$K͋ Z3!  q Vt​&#~.vNlڴ½:E#s]~>g 1HXaB DcZ*$)7q.,頥)")1iD$-řR;ZԹ~ȄT0~t&#:80Ua3mLsi^Z7߮a3&'1Zc_;v!lD!@CEA?V&9T~*w/Bʄ#HDRBQƏ/V9j%b8&;&{+j5j513~/}ݵ`E#s>g|O 2a#3`v %i޼I l9+8b_wb#JeSi<E@փf7?,az5-<eY7ƛ7<ynp4ܩ|ѺiH @aZΩBpgTzǘOU\q|PWx)x߇E Gq[ a\7t=<܋k[NLOMǟ;ck 7>4ZDbϳ&5@J0flq/kJK\nĚSED ЅÔ] Hn*v3 aUD)<+ְO^tVs"h`PXT܊Z_Ƌr8> 7W8HBF YlNB1s=8`A0#)H>{ _?HpI daX42$߇oqW?7<,~3;>‘a֊oS`jfܸM4ʢ<CV+}E+姥RJ9TV%3)F\'_4\oJU C yJ)>vgŵ}. 
sÙsH AEk1'8{|x>Cy)=tKԤfD>3dظb|Cډ|HKA;~ yzevf8a5ɳl/?QOb޻qۭ;1??ߦ2E d-vⶻ߉7Ȏ6bGV9P#)B$ xV#{@A\ϻ{) uq//-H/zw iX=s 2RSCh=0 5p^aiQ "SІ#{N+ y#gⰊ1"Ɓv">Xh/٭qםb-p*aa\<_XG~}sv{l%0D`{?U"f&4n&ш/J//%""aTx.B~MMuuU!q;V' /;o!Y"7̑$65t ɏSVuv/e՞d$Dae!)#atU0]0jh#0(ACL cDIw=aFŚ^ak us!LOuaKqY%[~vU4yPF:HBcCp6g${ ܸ˅|îwһZ8% @H#Z*w**hm9fO?<BM@*5" ӏgc͢UZ:sj qDxdzOX73/<`ڂE#s3$8z$>Gعs;~.dӍc{ߝ@cb쀪/`\InCl0Xk<4RO(Vk>?+k Ie5TekMTQI=;Clݧ'(0?(mo5Fyzj (OeuC~?UlyOBHBS' 9o87UG=lŴgR3NzkQ5aDI hFGw!{7fg<'1 36,ks Amu-5?{Μ= s=!E*OA:!13!Јam'̯J(Kw|Fg}6隰jY喪QV[x˪$B}cEYTVK[ZZ'Աrٓs8Ln2nvJ^; dmZ2 dԵz 0 1)QG]=-yMyKG]֝شi=|K0Y5,k㯾~?_k1O@A ,-\ 3<25hL)S @&L#v nRS/ZLpҐ^an+0*E.?GdfTW2F܋lAbtx*`ٯp=Uw0&CZIRThVhP.k`Lo1Ɖ= At I`Xwnx̻OwR ñk_j5j513~/}e0 ì S~w k?ülq s^Rb\tb_k.x-1jBXJq$f#RbDcw|΍/*m@P?}$:0t)@rz0F d$ ZGBE1Z6"#a("Mڋ%=vrͯ0FAk|GGM㓟 >4n2Դ{=HRE%sFPl^*n\G]/*)5pWiM F)mZsF#L&[}9䣴"JYjyem2\Z%iuaIU~:),WhD'4-/ֶ/޼S;X_;RJQ#vc}2"xU ?Z#yjK3.WSFK!ED5|k ;}(X42l7nt{;cOSyg #HvQFu{F )j_HSBv9W'+RU=a w.]>W>eP*~:._%qK;Gdf E > ?E~I+G/sNDטnC8i@;0=h:UA4UH=[ODV%!Wg|oÊ\ -y bq n݌}^?`M0 Ff0ϼMcvzǎ=ڇ]wފ٭[p~nJ2@fa-@35a2bAh7f&8~^ykaMe޻UŌO=.W!|rAU!<ڵ{F ӛrOVpZkdΦGr=VP3b+7/UT IDATϱ M5Ta^j~+BF$$El\0ko4( RFIHcl]ho>1W [foO{W鸌"LLM__hU&e3;iKldR,(]T'tLC֪긟Glz֋ 8=_橖6S;l/Vh -v;* ö4φ0¢QG;G.e5Q#pmD'_ϼa2 [63rZ&&:_>_ynH՞P/DL1?sk3VRrj%vq6f(KT wPvUGtatn dÞ 6:Dj1LDPP~&7JbոRp,I@3#o 65^'pX<EHF j$Fq!Ʊf{^,yŇ˜02 K&L\t=t=,.,_:wbzjSmoy{}MLՎ?jcK K+).$$*0A!4_?f٫/ 2YQ[NublҶI묽cޙſ4h.v_7ʥ:yO̟I,[fFVMՅƪ?2#XZqCUg6mF$#G4bjǎ1C'!t&'?Ng^:o>Ν_0 ì s $ ~LOMbvv {7}#a&6Zgu}ϙ D0WN7Cxֈ!܉)IL+w/.Ѭ]Yb1 &{KY.)6BKg>'ު"~Ү߃QLZ{cT}~_#8Xbb^hH垍F@mįdpSFc.:Z+8 o 8qI"n݌}ރ3xvvf技E#\B}I{7nu'+Ńy]غm6lVAvj!hHi &[&[I.T\8Y:'Qȅ5| t b$ht> R/BW]!\G@"l|ܡKO-TvO/%ՊuXl7J4suY s˘?d x(ʚԸi!8ՅkB9 _1&v[l;q4>5./ݻv`6mZ-7s_xe0 \ hd+=zLw0t ǟ"~W_xOM::o6jĎ/А--R.^|G)mVwO ~,5%A91ݸboV~j~yؒhDFYѻnf^]E޼]zUb7,TNdS_5 AjI=uS*m2r4:QM:_DaŜj9NvFK„\t"$}s.?tu 0 F}Ϝo';gk7|W ͇yݮ0I!DE6ʹp&TcKHRS&=h] s´A6ʋ:ZU30a++#EG[|TQjK7-2N EkS($jK}#N9sKNh 9D8m6Z41 z..EEӟ8_|>0 se¢nv{X73^/|Gp[iׯ~usombkX>D[76!>5hݦ1:9%ѨjM >_2؆.v1,G/3 ҶƱ&,~2By i˯$J#{VWͯ 5HX^J ̗ 
ɦIDۓmis;六VX hc&@]*Evm544a,%uQ^_wp0W4,uh6cn~cOSygƆq]oq盱aF'XR e4ahZLJAZt"uSr됗6YR21L]t8_h V;[JYH ,ſ./܁Yl1ӅqsK jC}UƜ@UaYQk+@MDS)ZE"~_z=EC&6ȶL2*4?\V\ZEW4 (#XY .W`*E#\n ? _Xl6$fmþQ[yl+ ,Z2ьEH"M :6;xjSgު0*fů/ "ºp4=2nkcH[L*Z[ 6eUTe&r"vXڪ5DZ"Uޞ4dn״\ISNzƝ#+UJCrk: {uuufAvӾK*P O4* 1Q L፻vnu'bb檂E#\&:|trw\o{-((ϖmt8tl D@$Mff<7P@ƀx~-A#O0i:,f3g-lYȭCd;w3͎ d9ʑU^E < Myݏ~H3H3EiMp,q#FkmɫWXꯅgY":KjvGAzA]LEAkkcn|OY{SaK F|WgAdRocǍScpJ)MK--"_y$3 2K/U!kϓF[k 6 u ~,w5TM]I3c1V"d} Hq{ q : 7ZeBm>Z Z+K"w];"`OjoWjdf;'7~j+T7jр1xvr;IH I:]vꊫSrUܪ[cRIjo\1!A@ {' 4ZO}g5{yyndͺӼ WTK+ h" -7"YYuk<`Vix-nfT=52Jk#)MNOg&0@:c5O=<^aXZ/[o[yx ]g([m(LnvHi͘32K(` _Eșr=Cf3k$_B5A޻UhF= 4XiƘdrʙ"gt;j"[Ӂ+PjC|~yYQ)!IT>c$3Ǝ/3߿8YE8;ʢ03=ɭG'_-x w=|S|w^&~_Q06o\E*ybq&YCGiݬ?DOLGD*!V$4B82RڪNF3Q:.NN2[R|gFKk7(TNMEM8r<,IƘ둸K6)9]Î ^3|E!>O33S$M<<;|“FW0=|⣷nqN5s=g+ī:ncȉE KeVO8N/lJ̜S5ys2tTڣñQ޷\Kҽ2s& fꈡ)m-J(Lq]Ի$0JRZ.tjᨆ'l@F=%ˮ8&jm&Z$3I.&$PM%uB(R| +ǭ}[Ƕm18e H3NsM-`Iǫ ;xE>ƁO_ p7u9G$g1@P݈Kֹ`7d\>`Vӗ;Ec5\U2+# JñiB::SUBl&h4Z!X#;7r`g+NyeհN/^H~h&+;V*L4,I&HFQk5Jbo]͔bB˪(! TH%H}ԃ(鴹z6>qm yt O=<^x||_w׿^^%˸n_G-G 3"S'aq,Smz1{= ǘT{/6;d\JQl#4A$Dc6 j2į47NqPr= kDY1E`7xmч™e-5XB(Zˠ7 QTJDUqn<-#ݸwF|۶n3,.Vw-֯`qybXNa)D&?TK :ٰƥ b&>帕-wl Zt*)Je;\v]1tih|Y4u+ij)e!nGd a-S w6U 6Yzj`e1VK ͌r8H*Q8ĉ.>:XXVuZ[y:$uOLJ7%ajrqSS<Գs}?>x5 M?kӓ^=g_8ߡ61wיӪ)AJ:Xئɲ:U>VXh5bDrZ8>ݰҪZoKq)-6rH)bwhZLt!F`~Z )UW'뎣םIy1[_JJ[uX!'+#%`̳<Ϣ3BAh]:'|%@UEKMa)ȲCjyP")_.mĪi֭[͆kwSQ=<<<YfUަ|+?s:U3Y5ֆGki09g0i5KS6#ymGU(3{{+#Ry=lE3'nd׭% ݉8fI"m1!l#iY*kڄCL`lff]+54zx`enn{k߶o̗}9Ͻ:֫6WJx,A 1/>a\9MJQ}-]_ hh}|2ɨ׬r}i>\R(ϯVùº&XGS"k,}Bg}iTjņHZX`&8f)uHcPz$R[lJ99\ aԉ&Yc(μM;w}jx}i[~'^8|zm"37ogݺ54iN(`vK$ ,, Q5$*-n?Za*%_Hpd 1:63z,-(R\YIt==57((XC]֢Xgdϱ|/Ҫc$`  ZxI(vT1bvb?f3e 8칕pݵlܸ'z^oN14z\8p3S+y|>خ~+|9减ڵx!ڭ0>dF/K{^t:GmD,6_C9Rs|)/NpU%k N4O?wpl.Fkp[ܺY NgdhӼB9E5?@VX=>چ#+ŗ"ʕ9љB:mHe:eJlQR4k'&6ݳ~0?R?$l|<ڒ R(AҢHUZ jݠذ0NPũ >vmLMMSrϽq٫x“F ^8́Yf׾m;vv=ߡyel|V7]ŢPh"垶Y[phA0v?{;?))oK}0yj(D)VD*DG)2٘ԬgaӔTQ٤J '#%u̙䤵lZqU6Uڸc}A-:k>$Srձʕq~5S|ޕ)3j).L2VcabF-$Ipj% Ac ohʈ)^&9 8wBⶭz6le-<<<<^6F8:V&?|9ŭĶ_ezabp ߙKo`yU\7,,wcbN[fx[_'#vJ @؊Pr 
j*&ytJems~eQr֬Q}rdXS{quC☵:ZHJ;`lApqUjݕf\pJֵ╱*c,6,S!(A8RύKREZˠo*J`dʸ$eNw责ʹP~aOm1ukaߴ(8qKV7>da+"lE*pD5;iɩ'Q{*:2t3V<=D&885+M^{2ett+Z$g[(IЛ_`R=UUKUWmӟ0w}nvhYB'8?ٵ?o3︆|n/n& UׯǗ"8 8r}\-`ڐN5R€P^u^`2dpjs8ѵ#09pajvS J aXUqdP޺eYM9ŴE_jTB*gˣSdTj\?wtR)YTz%T- r̊yl*ir7u|“FNƍk_1 ضz"BLrt[7C+ڲYRϲY .҃cl6 ZfsA5u%v=RR Z5Yy7bK$r~9`~ o4qJP)3h|E`ӑSa?K)23RmiM]O면19*+ƍr=@Z:qd3 ZKAy5%%/VU)VRÛ-+DJU?C.łF,\LbC3quqptjaX!`3_'yfwy+-8ã<s?ǯǩ v9aa}ˑ6ǖl.Y#D<὚ܰ:I*crRD,1p$CD,I#5_C*UDdYOT$\w #J8#eհ |Mє/h^*Nt4bs=("lOSEŴquB)" 5iP/)&Wlfuf_2TWq$p84R${nL^aezlO}6\QfgymlۺvŎwx%F\MvN}6x!G~/|k/^u i!|o?@*@ ծ 6J,ӹAHY1M)ڜ$qb^ mX_b9 ME-h(!ir_kDm_NI,^Sa&+_`RY%p-&XUjX{tb uIɑYiKwjt[J%6%# i ϑO~%bty3Նb" ~BIcs{'_Vy09%"_~ ܮ=<<<^ _XsO|v֮Y=uw _;߾W}o׽)mi֊6]6vD,K-DD<8&6,/WH2WAF!ZA<T0^cS Zt&౐ t7GǎJ7^c|s2V︝\s%vssDž O=6Q;N׫J{qTgimcVX8Begצޒq`'9]Ю֚]<&'w3<91Fs O=<<"NF?۸?=OU}ۛ`KGg 4QԢC>A <}̧E);mXZ֕E\ _;Cc\0T Z0a+rJFDQm2*1atTciY3lSiw3asc(AneqQSIkﻒ>&I d~L [Т~cfs:4MU*qd鿣41uS3WcN)%ZHrJ^AۢN1vGN$~ObM<術"𿢗: {ɩ n&񟟓=<<<NlRU`H^I q܁!$bjBk1 )hUJ~EWJ1#7AV9dkRcKAz\As)|lFURcHIl"Fc=ff:.)-:0ym;Ҙ{/&=zvTsi޴}۶nnxsF_7>̇n{?U IDAT_/,5;=[!Wou@fn"`] ffQDP{}y1끔 60 8H8=i:`'>{.P׆Cֆ$I'E{lV;p."m/dZ^J0{WJaA hˈݙIDd=}Y.{]!"vHFꩨ*%hKj(j/3U13 ֔\Y,3MɜMlf[N!-iiU氞l+N?⼱z,J$V)0Li'[fJu7t2ӉecrKElj+{.zxxx I×z7Q+dy m|7VwU'9؋0TH~"R BS 0"-&pZk(csd T@Eyz!@oi(HJrOݘDBrak7 JZ&7#z@Z #U0933ڑ"A43ɺK3Ԋ8u- 4JJ[G<ka2cJՏ/mm$ `J8LaSI$ݷt(3b4XSY+{gm,f=-"SbCg5ib`9=߭.eLMN;ncgxsbn+"xQÁsm'jt#zann~kVq۶nxlSa4?s\i}Aqb AM{z7 Hy,N|^ EHA2i]XDmKuc%UXh #n',QI2@檔iQeBZki^lF(::WWg8[)4^a8*R Wi q@[Q3<'v6i%WMέ%*m= jǞbB7[NMFRSM[Hc,ր5 ǟĚ>g۶nز2gnxxxxjIYwo K|_CIB(&;,ؾ" 0u9#aDwńSO0#*`;FE4WHUŬijlkJ{VHX#9-(vHH[tqqieϹ]:C 0X BZ c,S3Hj& <_7LqUӴێ4: t[ظ" j6QIdyܩ3'dxeb޴^ݮ` $QMb),)%bBRV؅"Y;B4;u} 3 zKf2X`CE28wuQ2߷r{g4zxxu|dfz+_<+_NU ;j"aַ?/<G+U%oJ:14!LJSԿnċKHڰPei8R׭+6 sˊc',/ldi#VoXBx:5Đ 5\@imdqžγ86/KI u"hKW@^Ns4IWDlbHڴkbU峔A>CeRNM]i"MY#Ia<-4[dpn믺j+ԇkwGg4zx4`z7m/å׿ϼf'Y|YVO9cđ<( :L&Z}ٵ#Y&1V6lU+'U0RF@Y$1*5aUej-ЊUm^8r@Rś\Y(88SIڳv'$Raֆ~ު#wLF]o;4 #,\HJJSQRUk8NJiY)[<9GWGd7'rX;dPYl8MS[p$h6nZx/ K,g4zx4`nn]58Ix=LOq.z3EH>?+k@Z^ F/+q䤧&<0 
)hT2gڒB@Y0֐`U pn5moLEVO)&  Ta͚8 GE@5#0r6DV'”N4ֺ\X*p)e7]Z "I( Bb dV)j"P6u,nD%||5a{g5=N){EېvZOXi$G 2u13@vӦ"YTUG{0ϭL}7_G7u8޽I~йoqvI9C(,| [sop0%$P,8LUz$EVTT;B2DiрbCs}ao7EvB3qJ0O2 o<$~m1h6c jzDX (mm~)ٴ(.)ռ96Y \ :@d.#gHTU $qq$W⊑uzbm)0'gF\{RhJE5M Xcl7>[6sW͉vChb/,$14oھm[7nqNv~+ٵ{ϟ;wVO=<<*nlxnىi~Cr6g$ yE /M*TjlHZmaBr EohyYb)A"0Kn߬z+5ܱ%́=z-7M1=x',vm NGuҨUR^ y1i@?VHbi6¢NԠ:ԢB&UWNj*>SRʪ3Tr j٤SHuC[i Ƅ3W*4#πw859<̉0zxxx\ c ߾>l^c^r o۲WMq!Ft#"F$WSdQ4ZFtO MHlQrqJqi=ZGvh0^xp#)qlA7ofť8]IR28"EA,&z:NW4:9 CT2paPd% SŒSC]}6S@)5E'Sߪ)s%hGYawni9# Q1TC5ڱ&Nn8=m۷ee,..sWfqܫ4zxx1LCnzF>!d5(b@auZc,8쒄_31$e,Yˍ(=Z}Pi, cUj|RhUGqq:ZAahbY}qq,5y>?ɀիBn`<3 #A $um5A* Pa.#Qz81tBdE0Be %6gOS#HtVcNL)C;jGʦ287s1Q˵6mT7ƲMsҝCyD/ !SzMݷ???/xxxxx2IÇ;ULa)vJĪOioDuW{fQ%I1%B(ƹ_*9'1!}=~mT#p*Ғ0_2lu"v..[_7aC>@;:O%q<ZuGFtJ@j&WzUJZ CE;д8oiQyK4ʳfbNT.WF[EB0vLlf ͊6Ne;/:AW]O]%W,.9vGh(dbSN*ZL\ "W+#aN.kk,`("K,J\ˇDzq bS٬αHC.7nf~ԦҔaZ͐LE7 б&Zv[ AV+oo_e JejXkvC׫VRB,.mV)K İhSЊRQ2G*"rVg)OS*(Rfۙfd_4}0. t\Kib G!))O~T&$sO:6:] {ϓN2B''?x/x8 ٺ~_"nJHQ@rd,X2ñ?1QhU+$3Z!!knRrYVSѮ 0b{dQM ,=Ͼ]{_r{8"آ7dq8v".$Nxg6Ûd]bܑ"]qf&bfC**U\űei8nr@KlN#X%Ux32U9t_z*72PG6͚gpz{6&NJ@۪S## ld=/@bM-vܿ\yxxxx“FS/8} Z:*m2G%HL2+p~),1%&p dDa%!X_\fۥ87Oa]#czd(h[/,13=|Qhm8|xc/_} oϱ5YբXñczq9I*AYN e! =C#t=Պ͔(;wI0 OJA41NLH-XZ ů[LO{{;~_y6'|($ ܝR ֬is8 ΨĩYww5h]_ry8_{z>qT s1gΩJ---\s\3︝C{/<<<<<^k7eä8&eR*17Aw:h9abdFtAR0p`IXĦ ?H.m},Ⲗ>㇬[O|ԯnM|_?΃?xs 8p|[-[.C%cB 7$Äxc{eC? Y! 0 cMgJ%*PiRa;Hj(e:4W$gE24Ēgѩ:u7ϬQ=n#&OO:&k6%;I`yiG<}NmKGp3xFF\<溭Z޵e67g9q;xYEB1 wKHnwyֽ}[j9[UOU=n~ߟҘ8hCe| O)ZfuS:veM]bPO,C~!2qOƉȡ|+].1QYO:;۹>72sf-d}l޲~Tp}BVX7P.DA`PjhBFf5k1,T^q5Z+(h2XB}B5-P 5DZ+AcqF;Nm\:Qc}Mܻ&W9.dn4KEMV!:NQBt.\߷  0M7.b|z nťwN+ZbD-0fSI6nR'J)5!,q7Ql8 CYKNap'H f#Zy{/r)_}n_M IDAT?C+ͼy)'l~u[?J+s1**C9GwFY`j]V3*urYuϨF1J#ii ăC{)bX,NCEuY7tcH  "a ߰RŸ? 
Euϯ^Β]}bNz-6HE QY!ysdD8F9d#D =g#gCJe{_`NJƗz]}~kybWImk2X2ѿ6p%ըĘHe#IcG鹊g)<7JYŀΚdQi=hsQFzIT3!x .[&O}m$j;s> <ץ{7]u>ZAA R疒-\CőEn8X;ʄ(kOUW<&xxk/|<^ЃSrLuVqR>U5+>n[{}([Y(k~:]:g r|N12]Yn4 4KwPҖQ!קu&2)Ƞt?^- $UvQ{iOKsQAx(8^; f˽[`bZ)MˬaD)a*"X׫ 8ڟ=eѦ֟6$dqK~_ʄ!R 'lkƣ?˒k]u' {С|:) |UIzbD@Tu[?Tw⠂AEJ$ Af,Ͱ>3³ƑN-=X~ Ġ)Z`WQxrk֥ΎDICY̋|ShoH+*{ɳ `kj1*{b|QMf[gΏ2^c*E"jq A΍Lo6uiZYT܂#I Dժ.Hlb:4$i˩mbYX-QƯVx㕗9t%ϱw #݂ p*h)>#E:;.=9qbMHUMLo]먖YXǭ5ɶF1D8TU|qyy|rm|;M$?7~/b)k\b9+W,x^%Sјp]w|Y3G]j_¦]ZTMyEW;t}63Q#kM&E33m9&oqb[h3chU54h̄HmRCB֚ 9W[+_ kjATp?_A8rtm?wށ=f՝tw<&I(h1(LԋiIE1RNEG| T +G"Tg[ǟ׷S,ggW7STOjS_&=)*;P孔A (v1cؿ0?}(viyCˤWQOƄ튶L䔣mmU:9FGo"P8hLܳUa'O=&Fo-RV ':VB?t1V3Ogy͍O}L_}K|;?şشEAx<4 T1:;y?[`ry0Da.i1oc6}ԉSKcCk+룭5ST݇rRgpX)&~S|7r'>J;yMQͱ FfZs]]̞BqٴspB{.8hV)&*DdXpuQ$RG$TFTޱyhC]$0F~JoKՔMKK,[d1ؼe'mEkY[ .Q2TS=Jizz|9t kaf}nq5Z+ \UQmMS"@9T*UQ沊ĚQDic K16{̬8ͦͲcyY46\߷]#g[vfJ p6(1:<~Y+^BM-D46OSSX6Sa2qM_(J ('QNfX7?eKpݢS2ɒYb9mǭ?y-wcϝFϼ߰bjp3<2F\}?c|º{,m}9`0^P0cF~eJZ^rV1UmL?--n5 "Gk>GjeӤU[78ASKH};ɦU:v0VS,{ZP!sxlgcuE  K=}ފ L |UM#A ?UtcO+ڽ_1uq4z@.6&zK띆X4Z׫ CqF;u(tPNC=¶̼z;G'\zr?^=w]/olٙ:y:ϱqvY|dI/m}hAeKR\y7|Po !kWMj4N3i jr\uC`FcE?>wo>ԿA3E{[Dd2<%1*8lBHJ`>y\'Ԟҭqp96iN'X~ul.xoq:d n{W9zz:~̞=?<'+N^/Y*O~.JwѨKr.oTKMA&Yj-SEZYt,C5aV6|֜I^}p;Fnl5t_:;LTJgǕw*$>:ySNAFAHF/kq6Y؉髩kiƣ&} UT(S͡lo8^40Z?V1$7rN&KҀ{[eklWI(+зãQ -x+RFQ댜[9ʐw8Qdk&mYXeLx$x31l0I?11(`ჼw=Ob^gB!/EkͮKqp#=S#ڌM>!'Fe&:a;\}]-.* ʆM%yau[?Ju7 d3/O=cãL,읟6R]KRepC1#p (́ϱu.cgDc pTmB h´1 QqUa^ؘ -22-8 F7ӉئmE^I;qC5ɘxǣK熞9t%ϱw>OY7ADD \@ -8R9>{eMIbeˁ?IDLz%փ%qXɧoITQ|Pwr5\kTñ\7wáFN&X,o?{ϝl>w1FF<ŽoՉ3:X<">񇴶wu2Ω^3e5quMSS?5GYջzj֋-4l8=FQߺ7UZ[[j,FQ/ kմ&[ FA7Vr5)qem,\e|yBfgՆ1Q:_e4uцBHFTXD&h_Y|w%"rdWm>*f]63jFvΟ=S6˹]˗z||V+7.bێo122Z+})*'=qJ0]6hep539(eSs}{ޞ)#ߧ8] :Oj 5  hSǟe}UɏPhvY=\ǭkމi:jc q^䒚;6iI Uv}̟,}mlq1Mn8ԌS52V" |Ul1^ycãM#Pyֿn\0ϱq BVSGk^ܷ!TRJFɶc>zo[ 7R,QfWԅwoټe'GLAxhS`/Knc|ogv+s(M%q2_Bqu\lB?zONRXN;L0ñȜL)\+;9$e?VbO<M#ãl>?y?] 
~iK5sЮD6ZYrNthBr:kR~\XK p#q9i[bT>b&Z̴olӔPEb2넪hmmÇVT+ӛ|}B[ȮؼeI 0h \#>]~5q"NXKGQŬI /ѹ ;%T n4GGJlH n{5}\vTӾgWqh/96 Ui{B< tVJe%IkWH#O1J}<"Xꨇc kmcI$g6nbq064P#Eu>{RE p!FA8^+s#N+\QPU:&Ũnq̛0]UT[xrܰC̚}Ylz;ZδȢZd\6հnz;̨$ݴ,9"[7r뭷Q(hٵ{ؼezk*At()D5]Q|WtvPmK21%ZpJ lde9M ZEe"DeM`m\\uܻN[HK!Oؾc.>ol33æsG~&7M?k4iL3QKe[h7ňWR/+!v1&cS3IKMH&uoȡC9|(S,77AOzX;GAa:( ~Biyp ^?tghm#xhBfr7bC5׎G}aƠ*xTщ!:yPqM$JMlKty\}>FF7~ظ%1/|vg1ǡ-5ֱquH + 7f{pքc DM3lmm&#{g|> ψm['?izsI[[ n;o_ʱ1oUAA8h6;C]|/7{m(6f3BE5!@wʣEY);1NLH~=&d?Ʒ_ړ.h<=>nHgG;'G:T+Ou'2ɩ .f3೨~q4*̱md Kh5>gOj 5XFFƦ{J pF(i;GZXVv< ^\bV}d*M@(I&JK͚@0Tԉ'6K7 (2\ 86.fgrB\alt=tv0c[WvΜ. Ge)?uO Cߤa&zYHz3)f5iAݚ4RyrđÇ'lw>M^c;ёi  MD4 i08toۗ卓֝5{~m-8p0[ט1Q!$]+`:,3[4U/#grB{zw%S Jl_dh~aU6oIgG==s{?ATa%&+]GgWddmq|, E#a IDAT)A%*  qR3Ѵ5GIAGNFRA&93&N&Dmޜ j[۷SN;} o!v`'o_A.4D4 i0 kصkߔgFWgdWϦ@ u@sPG(R ژvѪJ^K-V)Br-( î"å]>$)v6lv JG1,FwQUCRM?SkqTP8S r9R֪fb4۰X*I9 xv今}U*'ņ֩O۳W3VӅ,^KsY*1t{itEH0vw嶺kMA\"Qm|xOW_KObQzo؄m媴UFeI ]M$…h|?={ysnV޳n"&vrHՋF|ǵqt(bmH.PX)?NO·۳Տ=Ni||J;,^׿#cJ- \ZhL6)W+ y2wFH`jc*CkJ&ȤZX ~eW:d{q7222c_D ~-\DLBjS~j8̅eҾJcZ /b!DTROŃmQ[N06JgZޛbruG_~%ybql#AաCG#O18t  \*hӤ\z:,RmaIKKLnʥ _F<(`-gpsP"U C龡Bh*-۶{xkυ:8t{ϝx9WG3r9FW]5W̓@FRZ fA+ &m}]%Ә m6ɏ?[ԌwTꦚ y7غiuEò T+AA(I\5/|9'ϱ'l0 | N\K+mm^uEXR/{xkρS|¹&%tvu(Us"uM5u@!H̦9I[ iêH`}iaDޫ& c3 F2ȡt݋'&g9t㢦U p)#Q Xr=7-_귦:?[Or#l*چ8a>C)P\ee qh.8pb nj2ZRqџ|*slj9U`m$ [VʥP..*5njjP% 2" oo`uc9{-+{g7?`~qmȐ mu-4K1A6qQ)P{(fɶl_@1&kJ`lzvw/;fa|VXΓR28t1ZAAD !績o!+W,gNLPCA8;9-72g fϞg*hEb0*1A$m$5q m2`3^6M+^L*Sҟz̚'Ѫԅ`ySFgeK3ݞA  \jh3mU>rl5N 5 trtb5d/SJTBu4Wc+/N&4P &`jNQˍ\x6Nh6H~&Fb02&m4H|UA޷![-o:M{Ng{+&Mmθ:7Aq`$㕊e3c)RUtF%cU֫[EяMpʥAYؓӚZ()tvwG p{U*pj(l3`/OK4<=Jv ZG,, ֢ŪA4F[0@mmH8q*i5q09v7N{$$jTGӏ?2>>\;+rrFF7ap|  g4 Z(OYǒ/|8 18|?`xdΎvrgs:[Q&r7U6J&+jT2GlvHcTg%QcbI=K/8Jcg=dw՝h`ASz'(_\r\>2F7F9io(8a5N1JmXsWM"CRu.Yräi:ʢ0!NTV_ȡçqeN9,ty~n …FA8$mm-|SE"/!鹌ζ 4G(cpŒ鍎LgT6Ҩ@>_ JY~(USRTU̪IkϢj)Jy4!jdSС#J%6SJx'IOnq Z읶 ŀFA8G%v<"bi%Zv=+HRiC!/ /?Վ_xIYsmQFiqvuܦ *YƆŢqKqƆ~2䌢 5n⯾6AA(瘇wG鞎p bz1BiQ:1>>#1ccE*Ɨ 3:6 ņFA8njYFnqF>n&55t8VP&n֢\625Z#q:Fe *ӦQ$ Q-%Jafz2+Ї?WħfÛYb9O~J$ g0 
Zls)R.WXf \E\ M˯| =]9?JEJa=l䐚uIDB 2X*lތ*nCATH2XYXtR08i͢/~g_@R㲥K DSR?TAAD4 ´1 kyQqH\5/|vϠ5@[K^GkqE66v֐H;'LD5LHZiE!AY0Zrhe| 4Sa6 .5NU.Ma-=AȖ;I٢PS(h{<-;{Z pI0 AsNbVML3'l84[b͂ j [p蠂 UպxJc8M{6|bۗ3IGؾc`$  i1ǹ0[ǂUmMFY m$!iN^#|ڄp㩊EUDb0hPq:Hb\FՋKaM$ CT3z;eBtu4SAQ'|~Ɨ _?jP  L?>?; 0}A ;w}CrtOkZ!l~4lEsI‰JUFeQ0a2i*<ֶ+Ľ!JKUkEd"FX(Z*+O{M79/J0y<+wp3셽߿;9|ҾA|!$=Uw6f]6%7.iC,W=_?B.s9of QD1e)Pu5&J%M]OT$[!jk#8R,ڎ}R{wxS geKRek(+'KAsFA(+v& qH \`}~=WmG{J;_k]-ɟH9q&DR'T[s^Iäo#/NAel愚域{mߛ wܾ+32:׿C{J  LpQ.Wغ35$U 7w`niwav`Y/9"qb5Ժ 'KL䰖[3±Zёش bXUw#~|.  L"Dq< ,]8 [̠{FTƚ(JrJZi{nTIifLc©2abSUM8geb-#ònInTA.@D4 J9NK!Oؾc@"8@wWU"ZF"u$ЌbPFł01;ք]v܉_FE鰖rރ/=tTY{7ݸ^?+qAA@(8>+h<\±\PTlgd$ (8X坉B0/ P~-NLT+FTdEǭk+W,QT: QA.D4 El޲Ύ6zz%'C|3)x5Vf1#D32XAalh娦TW_J[θk0O=tO˖.g.{6G ph6l[%oS4wJ@59MIDl*52DHp#E^4>=e^[ٳg?kMAA8 h612:=9I}_lfTg"&u-YgrSGڎ!^1tz Q()tvw_  h;v7%asd^קf]Z_'BMd6dlƭ:&18߿Ki3/e匌o>Ё$ FA9d,<6Oէ&ZOeMZlĤR5 (ҔtUcOAMhkkӟZE{{+v#~ЁK:YA.D4 Ej38toۗ E1:\nn7&SP &JF2C*qOƆc[{4R'$~ S#:tt pòn8xٺ% 9DD \"\l8CYƮ]0th-r+h;. Z+, i.RGԆYT@i4 u֭?Vbhb%`}+acL {3ٳgrY<'  \hKFsᑱέêίRŧhf^70t8A5bI; gTHfpڸ Gx{Olgm\Žy'NAAFD4 %FkE*!6qb)陈K܎wFZf)]zƗ O~~F pf0jA8e89ew՝<#::n1f?`Gabx' k:&k{]G_ ݏ\{fb%,*O~73<<:A<@D \$8rrr>IU{Y.ګG v< }кVGIRZ\n造 hލ^0*E7%Wv=ӫ(͆]xaCnw.b%e}K6킹AA8hKT\Gwy3ZS5=qǭ<>x5#f$RTר5d CTRZED.=RbѬ<9G 34}>uۓfîl?p0vuuިE رs%׎EA!Q|/? tE>vn5h *# Q&d$54ú a?c S:Z˯ r?yvE4FK~ĩu)\߷|?FAA(#EoV19αBگE ge6Qi~wCkˁk NbR 9q,oμ`E2-a!}) {1i! 
{Upɱw>Kn38~׸Wnٳus!ZmJ]W3 `cWe0n 1K6Qi5XqێTjHw&8h|bvIDATCGZVʮ6ΏTYAA_Z$(B={s9NW0+MeTJEAk}m-8nfTql֤5֩ǰId$q 3th̆]ذ{#S~g9,<́ٺ_  SFD 8qBE1sQu=潿tE;SUZL| <̓(Rp"jc Tyg㭟 {3ٳgrY<'+J*  'FArX|rEʛ;kZMo,Qc!բPnCTU(ŶyYiݏr<58P*e,r5ȁnҺ|@)EgG+W,oq/ol~=-AA.PD4 0)nl|R*&QJ#o(EԸ9N"T Ů+R<(Ͽ1DWfxg鍟m|xOWf  L LQ^%7_,űs,JZ(h\R<[RQЂjC;zjUV]lTߙWMlSMVM"Zx6KJAQriR)TjnFenմ +yQ4Ijܲb<`cx^Ι9{a0IـxyycJ2hOtVHޫg7ReMjް^tF|h ǹfz_癳@fΥueիTVj%yWg\ /0Gu>-\6\ʹJZ~CkTQY=XZ۰e.YZ.Þ|1\uv ~l"ЧT2SEt&ۈF73Ug$:ut=$)gvg<[ٜd.%-uiXFbBI2uoL2203uw.IKYɭ|RJWߑ:AyyVHV׭TWyxU`!ܒD"SP(GQ"ΐYK)[f,[YۯӫDڥK*esףҖBqu@mօ2I\l[l;tƼ į߹CЙ GD#i/_-jPɜuw^։sJ|=o1'U2V⦮H'/I75)\[grż~[li7а*[e; wfm8ӒE5r{]uɩ19EU2$V6df esx%leʹMj"=)keaCA?oՖLҏP,*yex)h0ںA-8ҀdcFmF8!8.ÖiS-`[KH۲s2t٪<Գ٭HW̌#u@)9>ǶK,3Ozz]C_$tAgi'F5oXL:bB/ F3q$3Ug$^_U)zUޅ)R"HkbP˓gY[C]jnjԲetBRɴN(JzYG4q .$әNDUZ~Ty^Ƣ*,,$9K=sۍV_ּViUm$iuJuuWk[{Wp /{۷mml; 欤:sf\PΌį߹CЙ ೈFyw6 _w}9+><"u8[93{ު͛3{$ ŇhwTZg#QْV>lRv6@tt1!̊Xl\#B}8NYZkphXCDh0kR>}6ҳԤGQk[,+;5>F4-p۶?{675juJ[tv-*0qJJڼi*6d{`a9%_qx_µ5Zbbb+@`pX|[~MZ٬s"z^O>PYPӡn:vuuthas`Npף+-WR٢hE5g8rtypXCŋ*Ub. Ȳ,"޾4ԇ+V}I;١sHP_h|^:#Q(\[ի$y ֨>+kqu~}ݩx*y#ӫ+}wnٚLmm-qw(o/Os9;n?AAaJ## ضwMmm]?OL  GabΝwq_t)mS__?'%  G }QVVvAA݆Ä   'imm=h}c#nHcca=7AA&iӦ1mڴ#r۷sF?aۿCabܹlڴ7oF4jkkzkײ~I8CAA&O?{wEY`a?[o%"D0D&.֬Y9H${|ɦ׳bŊBv63y4a ;ѣ/QYf4NvSt [nS.@2#{:ﲎ#]FWG >T!`W]i+#Jwh' )},F8J0pn oF$%aT&9JUi$BJ.TT v3aJ#T3 9 ]TBJ`PЯ`DTF"t(ؕЫ`W*]//xY ЌvpqT0$j(]0 |'-J[ޠcx'e.K-sQ8IP}+Ep U.rv 8<<k.fΜeYX5$Waꫯ榛n+W~zeq͝wA*tn*PfIᡗ}#B#/&ι*a*!@S̽jv*`)O2ߛ~bGVn٩}tn<~U??74}><0Zt@x=40ںa7*m^UCg,0M=55DIg]:=dC!h;ryh廳$OAY&i羙N3NB:X[MI@ڗ[#@^*y?%B]rڬ[u> q뭷Ao5~Yp~7fYO#MD?ӈ [k)xxNnч-ct^Mfպǃnې&TYL9\@G)ep/3|xֺ`q1s.M?"q3k]?mm6p>ξ);Cwۺؖ刓T +{MͶ3ÊCeOM:'WӰYg}?#30ۺ yso2kރ4X5i~4`1>({P-"Ġ( F! 
H8ui'vUxlsJ->Pv9ffFA4q43G*9"Dx|&b0uuapI Lx-jkm(Pw|L!`E#lA2'pV;^w[ "G!P/l />zȲ'0Y'%IH)wZVQmgDaGچTfyA, vt1 o$ vL2.hL90K~+P9-6xmTua;ε1dD.u&=i~SJۢ96&&H4{1:=ll7矩g~.({A8297t) n,1pRsSH#:q] `{4awV>JJCDuZ4щA#F`p80i*64OOm'k/h)OSrAIĞ@&Zv:Y"< 1@付Bw1Y4 *C-Ҧ3m0݈McKpG{Qgj^A1FM Z*#J/ dҩSI{>خ_@;DlݹKî?cSL{BhBY56 NSx39S']x,=^g ,GZP9phhJR=^^5t~5敏CrhB_o[^8!v%]A3u||]^>tt̄S@D'ثcjL9:x9dd;Y 5npݟ5X2o}=[;1g9A-Ftka5ή(9[س3Q;*UYw$4Aihol M '8RAvᑬzK!nM ;r-]mh\gF1 ixҳၙ~p~'Zsj-REq|`c$72S F; DO ¯A ;?"DDLq6eO8~} x#>Z㰪~w6L/">&R hNE;V|X]ж #"11tv>NY >'tbsEC(Y W&PvxЁQaUWV Bç7XcTޫvTCB кmq/ $F6؝9Х Xg,7O#˂Sc|N-[ 5$;ܼU^BLc9+wm4 ~T\N?i:Q)D824t ?{Egt!T˝cB0ƤvL:hEc 6tcSzڑޓ{\ grQ k Cq_ӡځVA|ri\Vc6V]xxp|v)=rc1"A7tcPœt]QDYA芷ːG 䔪\TG,&NUFM3L8`Ȇ TY_8@[,s˰K\!R;P{5hZɨf߈d#o'vr*38eb*t 5`k+&dGr]MQÇOOja/)">11饇顛t8="N7 㽐r%-D(0>VL(_GjDc( 'ZXL1g0S픅D) OU(THSFXv;\4F K89VvZN ;؉̴H Sf/. ?5 M//M/Ao $Ͱ.\i/ހn'>Hfha+C=t>ϣ4r/ITtG:b ;3Hꙸi((1=Yn=l@ć;JCh#ךؔ׌Ԟn;9_20ӹHkI(}ri~32aCj(*q3 *=Cm p6~Y0c6,Y^|~u^l8}\8~}ʣv-pVO8mKC_
  • ΩuCܶy>idɤ I bZeۘFpF)pK40a4 R?47"|~ǝ"s 2gfJ2$|ee9V<׶M'zG᤽/ 635ut}dtr8?87x# .Ї9G'䬳b|;aϞ='?aʕw|?t&Sy%w: ¥KSnjY&b&7y&i~BK2 YtxIQF$1z觗=Lg.~Wt (J@pVz]9޺I&-SL5gsTOiiLaL£p^5nL=-H)$@/뮻g>>k W]uhh˪U?oA8B4mNT>1ס>smcpG`ᇯC6OVU>u)c1Sc(>Ҥ]n馟tLWp8uk>Tsp]܌뼜^c5r?xn8߷=xe lA ,z Ly-S4bѝyy¯gj.nj;ۋųxZ҉ l&L.|,r CiV '#rw Xf͐\r gy9ڮ͛7s+WRUUźuD2E6z!6lm%M{EGx|l}* _% |p)9onwl70TaI ,GpuNj<`S)v6-4= 6ۍ5"";qOU&v#qCk_wt];b!wK:Qvf?D+I$&BЫsͅ'vVR ~q MAJH`pAdn@mAuȩ0$"Vsn 6iH@?O˲lyF*I!1L\tB0ޮ Si ֑`Ν̛7`phҥKصk׈$H`6;h] ~ vk} `c6"WK6 A3fɹ0!(P S$[7WX 鎱mA/pKSCб`wT ZkN z liXʙdbL@7k@ IDATųt=|Y\@y4D|q^x96`}QYyptG*yn喁۶mchFCCw0"@ zKlէp8X5~xZH{G.BJ)FܶaelM69ٮS!u[PŬCyS$͘c&E@;E lbLl8t_ a7r 6 ީvVx2OP#BWJ.^9QNʧ=\1)5g QJ,ݥ0hhV hbxwbY٥L\G7ܕ C Of,9*чI;57g+1J 1BG4?R-wRt\s5|+_.K_{oNxNF)H>S0tQm϶5^X|&X=ذSC ˶ꊌ,mVlzbŤ|&>447WkPH|Oȓ$M.K?$h$A)I#I԰$F-*L¢EL*0;j8@aj_W*%+W U}+^WQQS*j׀dm݇M Ԏ su7a3gWh"3M@c&W)x$u,Fv;Qhf/ ncZ$.84fadݞ;(D$"_`ucN>ƃַ >_җ[wuaf-Zĺu7o^/=]=w i ,WK5pI6kĄ78kYmF? A? w?-#fY0 T t.8ʦIM[z:}t$eh$O)njvQ2 7d'Tybo$#@ ^k?䌴ek*?Ccp lo;vHSReOC|k4pk u0X%rF^N:]'yGltK߰qt2:*P ^vUv>'=A(tL;tql6\;-`I(Ԛ̔ jxĨa¶D; Jm҃um&(]Vjdz+06v |Sb9S>YsC1GQh㼹扟uZ؅e0׿s}Mĩ S Svz?wDe=ptIpXp W~5w[aoB,UE  u5_h8)Z襅>٭[Ǖ MIX=stuBÇԢG6 [{F硻z:0jm@GYvix!v5?Zk:C FQ =CNMw:nbf]@χmw݁MS@v8K(39n#Y~/]47kTxVPgX<"r"\K>l'|#>T᎒bӇExZ#TϳN r$8G?W\AGG ,W< ݥ^=Ν;` ,[fNz).".#IB$N(WDT^ؚyhlGކos50w0l^io߿ ~xa`U1,d30<\tf'IvmOApkg9JG/e}VwHCAqt]K|! 5!B@M+y_ 4y%V=IHCmB?sPV8la'6rNrif+N)fU8+[0%@zFOSdZD߱x;ihhG}>{Ʋ,,r|>~aoiuuu~\ve]S  SMY`w${/j*+@ AweEpp1EP] uPztΚ¶m^"kZ?;I$;I1” A R@uwuiBџ#">w<)gX9,ɤـ3JE#ne+SN1)zhhE ~F;Jhb'1la+Xg'ZDqC zsKP͟"K݌E8T G 0GYn֭n<쳇)) {nan[^xMi[]a=Cnk/oWX{|0f~Gar/;L:H6R1$zcΜɫΜ[)"Hʇ0D)%]ĀAJFPʲO8@HdL=> 0^亚TBJQEnPB[[L3ڮ0j4 o[/2C|pc퇡>y\s>;#ѓ#I'+4^KB|oFj13s3?AL'» `BCh'~W;˫0[^J^lO9~RN2|8[xMd|&!csɯ슐Rq 1`p@o4k6_PpjM-bCs;ldSr#5KƃH * ݰGEHQysD >qcgkaFošS¢ȁ״.x 2(,ADg6>f?S83}؊# ( p+M$ G)p&^PN)3Ic$x$F#J/ k'%O/<4a`"9|$dcN'ЫJ>4'Ođ:9fw-㻴+4lt,4l4tTa",i:HӉ#F'MOc@ 蠉mkXNJ*. 
8~+)66{uoy?:s>g>MO iX׫W-N=8%M=):1)1i(6D2Y|[5͙fdU5<]6)59 xg?7>>j gC:/~ ^*~_YgN7IJ#8-a76ͤKrq;ؖ3ax(5$:M^ZIwjf9ӞAvAd5'c(vˉw,X^yyқc,x/!M9Շ ^4@_^n1ұR^'9OHmd M!'W2uwr]MAρ_ޑ6 K;헔Æ08at4 yzaG|T4 sp<# {{_pl)SS}ͦ_㊍FXzQq %M$>YLc'E I)W. lc@pG/fT8PbRtU `2\a}T_)kx)KYo8M#Z-N;ϠlĤ4q!NTsΦj(cyxG@R4N\0%2 !?鲰~<$SVCi3|G@<&> [7O?ųWZ^'ই[A4w\>hZgu_> [tENܶٔ6/'m^JlM,_kQbVĢ$$!ᶝ]b7 ȈʩI8ŔЮ0ThڥGa 0!BT >W.YFmXqg:mu}:ȃ~%:!ac86iB(%! lR饅Z襏ZlHhkAYIաOBc>̧X\C3[I҃54=Da4䪰Lb(JFe*H# q &Lz'$D!> ф\WSIVa4 )_zqF_0Jwdxy=057a(׾~+` ߕ` p:hh+>ӛ?x;č9Ƒ ~0~6/'l6yFAEg&~f-3ݺ KƦ-$iv'4ȎYB s=Rn$+,t %.%JG!DL44Ć~XTKbj(9iҴLM0tQjSs&{9՜Dp\.74.|qFf1}#5:1e8U@ /@t$m۴)$@hB`5Ypح(@N{F |`;bqwò 㹭p sx͙jk迪Gjln遰q|aTz`#mv"t-\b9pOD >m&x(8qJ>^b/-$H! )v$a73#  x ŏoH;RR?n?B#bA8Lx0j h3l.|h\E)*x$53jpMU|H"ǃ&ZH N"I@QZ&@V/o]6|֯§Q&O }=Ae釾ߑx~9* +3$mCCH–$lI87P2in˹FR:nrT%.ǩt2ק:iҤH$I)RJʵp)LҤ$=PҤ1^sj W^n01]aAqyqm16{8m ~xovUjqѐk`C?$`u q7+}f &亚3pǟ NGw@}}$VS-ν %wH|g* Nksm 'lw]9s'ᶿ·|g.(VtW%iC㇗ޠg90}`KR'K9$2#혇b30x5̤zxciIMinI & Rn;EIt, Bo N@dM$;Li1;)DвL"H2 ?~D!Jd}O{NqJ We-! D..1~`M JՕ5RHt\hQ)> 0.DLQjkW8q\kգt yjd;9<#3 k!@Ъ @5鼩 gxR;˷C;D ~|QhgG;UX ZѦE(ǃ<ӂDSn rFuXc}ptv4gG36}441cYbnv!I=ؽ)O!< kQ{r98tB_Pj8!>u<$G4&1z i%@`@T0B"SLS0"!ˏ^l`&W۝LWᓄ aMIÂqEH.άqLheD(fA[M>OO\/Y{g 2FP ,,ך!P;;{w]# ݹcP0awou@+|:S:t?_?_-mݗ9#pF; O;> |,ǔۆ7{IWpo t$,1~(b#8`g69E R4>:HE.Rt4nR@}9O<~ ?9syheS6$ <zdLL9mg VrfWfZW7;8G1Nya0( )wKav`/N+(C&_6F'[x: :S4MQNo6Y\HߌɰjL |`#M氈Q"@A'L@h`U/vWԶdJ #)kh 2ýowGΉ'<vA 52_[WG*OwPR  -][;&,4甬?a;)UZ/jAhDK4ϖ4/h6UAMۍHZՈu|Rddj**Ҵ!!QI!e4xL :RQ3hI2$BI"n] \c9gXP10.!aB@Icp~̦j6VX tM$xxxJ9 QCnB/; h Hf7@QmpF?-@1~~2(C¬ LJdIǁ ~a b5ݶsereG{[ǭ^`,x}sU|-DÛWlQ?Oǻa  ]΁BG |gEA/jW DMִmhBKIpiRq󇄄_6ۉURJME!Fڷq|@d RY DB6R@06RP9^Dr>-d/x `Y5,,,pG jSꡛ{SXoqzbT7r5tJ7.|1*|p3 {CjtS`4YώަG8\ HY3t[\'ۖwpk5CZCN |OĊ̼cO\X6 z_2| %UpZ|[ )EuDz!8vA՚sG hHDCgIC,>"HruDO2|fH%uRC%ps?RVxtf EEOIL&lMbr%e({32ާGV"~*!8] XmY,`l*(5¼rm}G$_ԻRRJ1ɄBD ~NUՠi[ڈigK18%@$BN9lNQ.@|؞E5us*Gw Ap}U0.~Jy0Nu %, 0MF w+`pw0w) Z=saР-@GɆfc0 |z'x2 %MzkBxsu-0Z jp5<0THg ;~ :NIJHfַ jɠ9/j."3/m$PH)d=,1IL <,B8-B`[- cюDC , c@lQAt.Āe OhT 
0вzu$'`рY$Ib(f]g<=y(?y'f͜i ^AˠA~ޝ$1fLյյ|agsHÇ{T\ڷpɓ 2őqwRq̊&f2ѱ*%I8,լ / ) :u+'"&śOJ,.\7nl4n,6DI\&V0rL߾jVrǧb>g^}5 XֿBNgdl3^<^0 |9'' _KBNQ߳ ^h, {;.@A _nv"#oVȈC!l77-yyʗ]Zo,'0hP2⋡̝K/U}e`3=ڟ| 3ztkW/T+LVEqAY|\^x!ٳ}qw\fԬ[¹sF^Y)Sl_L͚s標5Kd^HB"l ۋ +W`qwK9 VH^C''sM!DNNգw9ɲ̹s27xy W܅bc̚EppIIE,ZԩUJmj.LӦԨtΜ"22}Ϝɠ~ ܯJzz~Z;?Sdg8q͚5cϞtN>yyzL&wwWsegiР4BRRSGYŋ٨l]ddMZ `A}la.@!, y*,6?s&(J:t(֭0ٓd#V1FhjY?;JK+$&&矯SY$6uU+eļyEӦWzܹ=NXBϮ]zHCT_NM=lu}׷BZxYY+o~>0$6ޢ/_k IDATKɛñ-^(DǞ=&j֔8PMǎ*_,|u>k1x+z9ߑ[22twh&zn׮d4)jH\A $s<<F Tf$ }_;l+EFFk.̙CڵYz597n$22ölOOO"##UdŊdffrqWnb.@!D=HVqtTvTʬ0L[&_~{rr>yyjP6)OO/\7W^]W 2277rr |A>Zn}Ɇ SY,|i*"#/8t(>gh޼W]Ac$&F":ZK;2ySt޼Dx:B&̉2;v{#ի0@M/:9tϡsg' q  ] ę39G,l M4i,}+$'Sn`0!ގbrsps+?’[[*@>L6maذaSNrRLmym۶7wyǦOν=l9Wv#CFѨL:>>KZ4(umR(a0.#5`@TYؒ$wM2z[vٻ7iή\J[7 ׻0o#?dun!v ÇW_ !" 53Q`GF̆  +k"6n4Ү;q{w/92} _w#9s36.qB#+Kϕ+y|ć`H$Fʕ+qqqaСC9<ǎ+u@֭'66VS r0 L> MΝٹsgm޼;?0K.%11\m lE u ]xN!{,[^vmׯ K/߅?NR,Ժv{wOfώ'=eҿe"-G,m666jՔ+kn[(+:ӕ`w).BVz*8;&Af /JÆ NUGn# `Ӧ\y&3| d,4޽%9z{LG* $.8{HVj 'J%,_6)SDgԭ+;S5z!8~CFUSf2|M>GF̛獇ǝ[2pa}xӤ)eRΝiak_'ѳrK[e|ڶUV#xzEŘs挵NN8;ߙ@||| ?^SO=?ƍ5ڹ7 28|0۷o[nc=HLL_.S|ҥ ]vm۶6m,Qt+jO v!!!*UCB\HL,|Ξ=YU[м[$R3WJOfM"yyF7/1,$đ_ iރ-nnjf e4Nc@e$Iwvu720l?+@92n#.y|5VBh ""*zE]7oNG@L Cj5k\j۹s8x?*4jm[5O> L^޽̲e~w#Grٱ#ɓqsvf͛x0_O^=,[H8u*ƍս&!/;P ժ)(0`2wSmxM&#Flٲ__jCHűch޼yZۨѨhF*dy BcƤN q;שݘ>=ŋ %]3QڝdBby$wܹc[DkodvMYgM8{րD6 2{v*%-ҥ9r%:v.SJ5ti2j91su=J"/5 m8w.5Ee +W(۝ђk=?ȶmspErww$'wJfu]QV )1<55Ͽd"::>իWӫW/N.@@Inci$mv"3/iw+.fBREժ\KX_WW Z"W@DBLu m^޽8r$>J ::"H_Ν1o^<""SPP_7gΜFeNeFʕ+aРAe>ο)ʔh$SQa0hJ8H` ǎ* :]";[׀HU1lXUVATTZm&-[zP  ::z\ZՁwI`@?""OsqQ+|M-J,8tBlْ˰a C&c 9QT$쩥gO-z̏?YXkOCݺ/oGK`܊$%d^{#!A|aaBٮ$%,BwWebcebcM\*+zHԯ/F~aaJe'[#5D׮N,\艫IBbbRÓ3"#˗`tL_?˯1)IG@ŕ+YRLff? 
K3pvֲ~}$O<[y HQ;Ceɒ%lذC5_r% 2FbŊ|'o翁]*Cnch'OҴi+]!!Z>._2+^OgШuIj 7Z2&N OiʃMK0}z(˗`4ԮL˥K̘4h`~m$INn\MTT2>Ifw$I,]j%vеQf~#Kq񢉮]5oE 9- /=7IEX!P]M]] t±$ŭ%<`,@Qp>WDk>ܢE ó>KFFjbڵ8p7#Fj*^J?:>,[(իǡC|#M4'c^ވ wNbb"O?N-ذas:J8qժU^bq3^os~РAe[Td$?߀!GpF_~5u cRiܼGoᆷ\3bD͋&:-OO( QOoa`ܸM,]E|}]iY6d5W5gO ?޸FȲLVp&VLVl NqqG6m 8XJW[7֨{>>BHoyuvOE<؅ O?ewٌOhhťX*#¬,޺5nqu\ \=ptTe{;IIy=_2==\|=ZZBպuXNIѣ&++}FW;PzE̙3/iӦL:nݺÇj*bcc  <<ׯ^:W^ss` e0vXVXAZZrWXATTG-7͊+)7x{'բ"#zWWe9ii:,OknʤIMx6h4%HNQz<=ׯ*,ڕH׮A6eeټ9ë|buB;v3UprU"}7n*!әX* y1q77vꥡn]U,Ö-0a/ ,[&zLfd}I''9;u_o2GWZUM^NkxO|/_.`tZpgϊ^5zu֥S*nTb~ѣin|qܹtSU P$RSuܸqwBo_dynv(˹۹;؎w|yA3(=E3ݧM?J3x4<=ĪUq jIl:8ItMZ u୷B80^ሟLvl\\T<#G|#h‘aZN턄9ҁ#(,ٻGIJ2ᡡS';RO0m$A~ch:Ԯ-jL,v{w!2Z9Jx3%'4L\ᶾz>,#ìJ7=v, 'i4k<83*Vnnv݉E 'Μ)i;vν}Rr4hˠAxj,j,_ޜ=AWݵxz:#4I=zysUa̘*lܘѣ9P;8fvf{AvRUTrDږɒⴳ;r8ĹsDV,ɦ(g[eYՔl E $X@t`fD] r>σg…-l`2 l`oKy?ssYXR+c:zTU]]MM_$? e`B$sg7ߌW]7;4+c2\uOJu'2gJY}0ZZ27݉1$!?Lgg gd?AEBx1ĬɾOYbpf<<#=ܘFUUQUuLs= <#<#cFY6(*&Sfkٰva fgXi[+tSYi3e|Y-8z4? zu\ŋ98f:+Y9r$wGફlll;}N2iWGc\/YLo%[̷-^[oر#FO&vn,i,'K(+8U7's[=~<gӦgUz7.KUD$M;[?lL #>04S!SS:D &peJ:DBeǎ^."MD8;krҷ(=MiQz7( ?y?С'E45e'U~:~e\?c'sટvwd$E"+W X!2wwAL/ N9R狤%;xt1]p. <6pSg#d" !M(s8z[|7AZE80qǵ%njg*VL[ɞ=ANWdoVG"!ISD$C^|_SY9vwsq`I֭vl*{F;Wq?x]?Iknp5"ųWiקgO={<$Аe%V7=\yn}E>pm|S+J]|x >BIvc֩+cԯӚa2M=S90zhVH$v! 
#safHtvɡCI7|JX_dCfǒZE$_^JYY1-s2obR']4y(6XА8h_SX8y2>XH2KE/vg>`Kgfu7a 5SyV=r//bZZǓߟ1F5UhkS8rDWS()Y̕WZ7fdRg?r`[n)3{)*'*?7f{o+9OH$Ԭql2^o5?&d|@ 2@1VhrJfN7i)Ϛ$voC]͖1ZU̎)sE|+ Np3P[o >Wlh<_z== )(0d$ \{k30_7LQȧ>e墋,ͳceK.{^kFo/j*:b|z9We@uHM@MHu~;[J,եѡޮpHGayL45XZBYY~QB"S^qmqjO547OZi?Il"py: \(ZVF"d VXheh(6rihB7E,f 퓧m6/:,H{{>< ==a퓮55y?m`Ӧ x.6m*͚ xc5u7‚{dƍXǺru|=;G|g&=@sE}ȲUWYjkީ CyM|>P9yRc^_8yR% 0e'PZ*/)эi4@TTѡ|?f*=+ &,&SyI/Ǐg}1 %v-t*H,sOo6(jF<Ω3bzY8|fo-P\,PX(P^.P[+nHuHAAK`''O}[ w\2ᥗٰ8O(*{:|x{T IDAT 5-޳P0%s030AgQ?Hd@rOscfpկu,eevNSW7deH$9aiUY.pa!괂 JW2n{mq%|q[*s6Z…zA~<2we\}x77;[mlj#ر#SĥZb ˎ"*Ju ~F8,HAD"g`ӁE(a6r,ճf3lW0Eؾ݋!r%9q"½e,Y9#H>QQ1VJQTiq8vj~#յF__xd1 {YOm㷿 .x234Ϊ۝\:䣥۔ַ/-ꫫu꫽j/;v},Z䢼̏~t;fM$pG۹b-~htbE㗿tvo IeYK\z~)Z!Ȗr 6XZ 1=Iz@@-_4v>HK}yf{Ux.l2uy\Kz1TU;C^\pA U>_4lSZj7( 9+ I d# H\Ӽ̛wZdI";LAneҕ+}}JKv]'$&ڵ%;++y i~2JJ,}w-OxtkͳF`v+gǹryQj{J;wpS$,_tcH莇 |S%B ?I7EE27\:-9X@spիmt+~+qnSIz+γ j_of3uu~TUcǎ P`˖BsX뿮7w۹RjkZVzPHg<KXZ]SPkl\s>YI$4>0ή] cqE͓ii1su rw*w $Y/Y1P_J,<'<z>>`+җ*QP0xL& %(/ϭpx@M y\e`,0c2 $jL&D(Tʼn&=^Y Za6j^| /,lygm:檜qLVߟ?jRdyzs蓎8۶9y2AmM7HSdf 7 (n^I$nXTbrK5l˖I,[vmaNnUUHZ`POn1XtaI8t(]>PR:=ZW(z0iTDOW@EJ&.4QP Nʰh|A7Ȳ5v$KTU׿%UʬBQԌԩ9xp ]zӷ%,4gzԜ;Q2|@ fF2 4 MKBrYRPR"'O& @f&bf3Jv6DQkx.-4%W0oK.)!=Lp%?a'fndFY|z ??Jk,^y&kJW^y 4>0'O MM",\(dInhnhn|[M!'vVepP]*ɤ@ӍG&^M&amI [DQۭ{tjC $E;[Ǔ'IxA r5eYq4$~CD,|!gp0Fg]p tt1Yi&JR.JJ>iә'̛Yi^u4WrR „ù0H`1@:/BaaG$In \B%K~VH_lT޽>y'lWOeK%%#YScK_(d+7P2e 552$CQ4v#~8"_lcf[^Ȳȼy"IxEQ}W1|>}u:aA\w4כE Ǐy0EQ=hf*;7TWU%x^-gc4wPP`yjvf͙H$I"ﵩ1Jʽ{`3-PZj Zt0Y&T4.}*x<zzBX N3CCq"oEQ`޼p(,)'o~eUd=⋽*__>mQsX-f^[<6I`Z+k%po7lYYJqqϘL%SGcUMr¯-I$OrJKEVZө?Z( NiD"*8p FkkXL.xn(țo_SX(sΝYih|g":gc_FWNy6ʣdH&?`a IDQRϵZ,PDBIIa$U#Q]ȑARǎϼ1ڸ2 /ʕE9 Ek-'H}{sذ!{zh_5¿K'nĭrOz'_O^ߎ*,Zdfzrv,[f**L|tu%LՕKh޴?oEp5G@U5^~8 :Ꜿif/MM.և 46ymoR[Ǔ.1*E2d*v>,dW&~jk'/(-' SV6&oI*v짷7ʢE'u$>jB&.5d)]KWŪH3a+VXNgg^@Ua^=+G\zieVw옟rYt7%gL*]XJ XPȲDB%W.)A z7ǏRXcҬ UW%Y 7NFWDl]X"W_]6m+Oh7ntqcǢwk\|+\fYէFE8r$޽q~C?==z9.h̒%f-B}k[nqpMv{,82E3I8#I^zPHmUΈ{ rɬZY9' i5HholYn Hr ^` 6[z*WvC[]]g.-yҒaDQ`byի(][B(䥗hjrMYr" zkPA4 
czV*TU7;p:MlRLe[Y4ZZdZZdntG(o_{l"Vb eϗq:D'wߟ`س'W}A:;c8&d4M׻Y쨛 D  wy2PRڱ W~VkzjӇ#,bI,[n霚hANOOhAF[ .9U2ʕ˪UY+ǂِÇ7\xav{~č7V׽b*W^YJA(p%\rI@_Kwwz __Jjh3.#tvx$ 3 pK.>#$mm~.`ki8< %R^Di_m@7 #9ǰZuQSj8fB8fĆr %&w!W{,XP-Aؼ7Ƌ/vpLW_]iwZ?MMV佡2zsl___}j3\⤩ɒw0vܹn9/k r|筷|=cTy^u`A“I>zuYٷǢE-pJһ>_*#5r~b#9q8bIAHyu0'PU-(.lؘ& -)/OCV,eϞ~ܸ[(*pUkx‹-a]kgR6_*w&ʕn֯$ 7O!3LkMʲZfiǚ5g>"3Z tv;0~=_襡Z.|̮YAX,ɤJ$jXj]&Ien8UUNSW:1x<"~<`bٲ>hj')[7ͼyn+;hj}Sosf/+,[fP\]m;N;qwv&ص+H$4DϷvF1Y0ȈP(0HOO W~}AޝSXƻcK\tљSa8HEPU5:;C^I-{/l άPI@= 3uM @a$IDD"t,hl6X,9G:ֺ8y28zEG+[}BIjjrW*ܬ+d;W3o^Ml6>!!~n.+IK2ߨٲÖ-zh8w zVq%K47[ ;1I>0ѣ4Gu mOo~s/{O6k+VQP݅XL˗L|gf8PXhlo'x+E&O=DQ@UF|@l6DBI9aH @ k( X'Ujh:ܹ=ϛWHwwgٲ✖X!^~JeN{".xs=$v֮Ϳt1-hHDe(;wyAEDK,Ybe+gIW_ q(mhF{{={ 6$p:M,]dzO~́O/oxp0FWWxҬFOOǔ2(tJC]m͟5~|?,)%InT Fadلɤ|6ML~ =s7a He` Bq]=:DCCzvJK|55埍Z:;üjlbi1(+(0sMzѣam`Js; 6ȊvV[:ߟd(۷ݝjFU LXhiPW'A{ǁ1:4oFhT)q8˥Q&˥(M) IDATb*ǎEikGUgcTUٴsCN_E^&SGRR=խ‚7qI_7]|q5SDhZy<z{C~b`00~(u6Dn Px\԰hAMR}+Vpx{,YyIATU٩*ыh,YR@yyna44Oy%Z&SZznN.dK"|'y0)2]443ǜTZEͿJOO$IӓDQ WޥKҍ [W<- \~y鴼W>in T6ovyb*'N$hk::$PS#3gL]:yZ3(HeHeevMP]VDQRF7®]:ش)7ɤ{Y0%c̙sve\ĩL4Nzj6f@ %ܠ^P`24? \ ,͝!Jd|SW(*3@0pHYS3J"O=5D˂ܹs[n^z|y믿~[u橧b\wu?=/#1Q(,'"X3pz#jTT8z螵SK2҃njkSZ夥яz )|KsdR+QX4xq𠏊 G=nqh4$LZΕLcq۷@ছ#ܹ_<#}\s5tuu0袋w?tA0;y.cI FXl)EY),pAf9TVڨ3:y5ۣISӹ#{6n.rsEz%V@"bcBuuB$YX4#H!+L+9q"PlBϦa|(ӿprɓ.Z}Z˗Pf"Vm6{ݻwjժq_IQ #12gC$>_ے=I 44xH$t9O给?_>෿Y,].7ټtPFٹ3%Yhi&wd0EhosHM+ѣ$I`x/gѢ^wlt%M.m6@ ; vQױrew`LxA[[lcˣ,YѣGAְZ%V="I"NgjeDS`~ g.u8އjHUE\[X,"YxJk<&wկGhA71llhbsMSOusYݯituEim $<'6֯/y10c!**ll0:mmC82MM)uv)/GRj(Q+ie.WC3v VZZ:xA)(ЍxN`d@ZMf&2L%H(n̪U4ۘ"YR85K/  {ՙP[k]8th|IQUHHtʰЄ-aƴ0Sַm[,$*1x\w;U PTdai-TUAjji jqSRU-`f$ J;jl,<r@.Tob`&FbfӴNYDPUql"I"֝~#+r3[MfM@_Mc>ʨATcO"@I +?j\26Lo1L0+LKKA?4CʝBx̙0)*,]\lc` rdγπmx֭:񾾾 @ )l6su A,sQ,ZZZx'y뭷xG{yikkvgyմyv=Ofb %YyE.I`b xmʲe?mL&鉳^K+cXDTUPDT-.Xίs<I/]`(*̚Q+x큌E8qeV|OfyPfbx<35پ};<?0'OdŊC@QLQLKK4`Pap0`8mmaB!Hat/UdYLM3>g6nLe%K f›MUꦦE2g~1eDBA3K?2c`ƭ4\$@Sؚqd^ir(-ds\@\. 
KbT)4d2RM<ĩrf&l6+X,1(($}?TzI '?yG][;d;CxWV9ijyO8#Dm3-vq]fh(QbFY죑$X,qPt1\ٜ12 #1pCDjKbIPaԶRrJEXEhT㏃%N)SY>q/Á1Rp,m4*N3p }|^#H4P^n<3)TXrfiZZx\X,n t3>s= #9#TFhٜ?`XL4M̞У1MSh-H0xᐨQ\IN$?:R+Vkn'hdRYX7K`P/Ǖ-p8#m # gTeH|F ytPMiPCCQѱ7%c|EOB$ǎٷohz'EE3/~ $M6 sdM 2cWkkc` ӾIn2N3>_4#'Q2n A٤=G٘1z@ fFr{q#PvyF`XllaIx<l<>6ld޼G"I::B紳̝bɎH!h=ǏAu55vb[zDB=6IX̱1gP: #JMÿ2vw5ß5MEsձ00b1`cd=O6 1ix +b) 蒔3qw{ۄ-[iz@Ɋ_6p:(FVI=pˊ,46il-JrD4_+556,LS qXLɎjtuEpe)o*3!PH|nqEf,cK9f(o~ƶmii)_+&dF M su.Y&̾^$V3H9 >3% OOnˌO4M~^8>N &H]3Wb5M7O<fx-QG:Dl݇iEBޟ#I*Ǐ308\/]Ir V$ʎǰ3}zcG;L&HqnˆN (MYPWgG]]{!=xp:exVI|Bx6TUYQQa9+bc*RQ+SwwД1c*y5k=~v3|PuVA%jZ骨=LZKz= / %NQI;g+]!Q?ͪՏP(*5He0A{U:˭4߿`~=OQ+a.45oqp.7^y|7qE?B(ho!?:"Ńa׆J+**,rjjџj6m{f.7 #ԥ؂q=S\.KVG<:;4! 0&a] z?̓8Ja¸q)J~;!J< aH ) I6 %ImmIu&_A{{GkWt.QzkYwwȐԫf45%ۆ dYS=& !bym0ϪJT8$iS˥킼:;>pp$rZ׿|y-I3e479Y]w'dʒV>܅FgAhUf9|~]IA5-B,;AΉ~x} ^TT'0$*5 @JP4* ɤI$itt0zF뾈9stNl*a?V(,(+3 55O b"g6}2DxliBq8, BQrmY-7eX,z|* @JDaj?PDexO[P__K/muB8 BZXdY#O=vl^MM)+LOW>gOg2nH؟JLDy >TU9u`(9n DaRb ufE͌J;dY† 5Q(KIDATah:F`2 !ޮwuu%[KڔqpWՄhU &ҵrq5H7rĉ^TU6dLk@xXB% Y>E hB3/-sN|:Ǝ"gtt1tv} ҡ(Q;@FE$LVF2eTUQ^^<6 $Izmz?YT2kWz{O]U~#(iFuee~4(Flǐ!q͏ {Z)f$ޤLؕIpX/xяB]_ [EgjjQ[[+'N%L띆yBp$YRQaCM#Hm@mm'Meâ.Ç)Ia_ϐXQYi7[0nWz#e k}f5["mUƲXL#xY!׀P`R"ᨮ eYwl֚64 *~8A[[ovxZM:ԅCS]B<'t*hxӯv;f Lʭ1G(@[M0ۛLOFӌӟ9B;#G|}:j>~੧vcOHBHk$gWv3D2+]@{{/nkZ'f2f31g?u~9;Bݪ)̬{:B hv5A,)vTV>|a>?WCsK7[o uRP^nĉɯ*J--5hZ?p-hltDE7ID&Ƌa=ړ-T 7ī#~OcAO(ʮf)ŚZ &3 zvD2jRVfMi ȹ4p2ٔ./ek@x0)rCÇ;/߇'kF 'OrnpGD8EE= `)+3Wڃ>P;+&Վ3v쨩I/7ԓFfuu'] 'B&X&u v{ƋV.6 Vҥuu_P(0V3*亏db *+8y2!DZl63QK$d2nkt(G4[SĄDL(Q,[6z*җt{- ,9E,Y,2a1jW40?w},K^/{sbܸ'jP}}Y2 s J2& &kGcժ=i=FMo)Eb1Um60efAkE: zN/K*KMV8g@xʷ@=w^=*+8rA]_OO==ᬪ(JLF+ScljSQUz&)S5˹z'<VhJj5aOҿ;G(؝ |^LfP '{7[?RLGLj5i6s=d]]H:w 2@yyee @H{?rPVa}\uX|a#c#NZ,+f:a2qC N_ ɤ"ViOUVP[J_lf |y$EK`bLJU.Wz߻'{wE"Q~n>d] /nTk6dYҴ,K&fv)u""D!I8]eVjܲ,iVa%P(áPaEbXU?:M=O} cժ**IYg%_s!@[[ ^N0ckHؼ3'SH$URQs͵n6twNNJ-xl;XZ),Sx0)2FHe̘*ͧcu}( >0MSqIW4*px/>#ׅ$I$US$$ wnó/8ɤ{[%ҮJ!9IAHOO( !(/3QRH "y];z貰P5 l6l/Ie uuN? 
,J$IJR®2oUaÆx;bԨ!}Yuw%^ѕ$씢Sh${zxǺ̈́Īii٨P c!:R(o )nzZ#D2+Y8ͬiLuwtZ :?;v|(EQ(!$,K:0pٳ`ʔgO[&Qi/.%ok|jBM~W%IBECN̄H-]vjR  ~E"6_(? Ѩ@ d+,K۬VS3,&<>E8HЧr,M %BIɤ.OOOv%lw:-|I$I5p:!ĩY*@ !& !Jz#Iۭ, I>~QL˙EÒיyRoT ]w|M<;v,}Y\uUXf f͚qK.Cl2'ԩSi&\x>ra9ٞƂ-dAE'V)aBHl$A7RDdZ*@Y$Lj0iG~?jn|{1cƨ_ ʬvz8x faȨs^-IT lv[5(+ScfSBQZMz ׀Pn^~e lP_bPB3dY5O[rm RgRv=9KIdPqaիt:1w>ϟo۷oy睗MMMl6͛KCJt_u:wۙ'HP$0NCBf'钔ıTp8H%n7tv_p8 ÂczNU~Sx ԵdwI(%RPYig "6#r֡en$f+ERғ5 -BeӴ~5 GEu&L8p c'N}„ B)_7ހnb2Y-j5@u wq /b1n7Pz~a"U jj\xK\K/mĈՕօ:6o~uu.UUe8[Zz҃c| RgAСeѣ=[{{oN"V+U9rzGsX&(Jdy'p+?)S{MXȋ/ bƌxWЀK(_J6GAUUUő#Gty, %nKM$bQ2ˁ!wIpXtZO$B I, 9s)h4> >yssyuش먭uٞl,L&Nח uuklj %v"sKKπ%]8f=ړu}PB B i,IR<k V(B U*\5 ZߒcReٳ۷oOd"oZ[[3xTl y.h:,Rli7pzR(/~/噣3S"hJ_x=Xoع1fkfCϾ$ƕ>!r$'4t8̚S,%қcX"U%8Rҽ_&ʏetE-N:'~hhh@KKKXDАmmm y,PQlHQi djcBӍ^;wr{iK٨bv[v'?Q _lʚ6_#*Un qlVSwөvrY4(ZMf5 GAmmmtEL'OtgH0IbԨQ@ Ŵ;w1"cG?0aΝ;!IRݻ1cƌ>͛yeLf?u$I¤Iظx饏n8],8k@=g ($i"z饵 oFvb)Sz_|^M^z5ƏD1w\lݺSNA +R>|8~%lCwon}<'=X;EXoر {OS^&fmh#1#&"T$1yd̜9qI3=yY&~ g={6&O9s@uu5/_GgI:u*ncQ88cQ88aԩ7u?Sv3[$\iIqHK,+o_~y>w36lX|ǢEi&Dze0iҤ"""*\y @ڰa| _5n7F+.<}E.R8q#fڵ;w.6o'[dĉ3f ^}U߀eex7lٲxڵkf̜9@"H]t<'[wMpmΘEDDDDYYv-,Y+WƳE~FF}ʛL&l޼-C=ٺu"T8BDDDDDa """""2 (ŋc„ e]7Xχ[ncǎEuu5JرC=.]َźu0g477vcX|9 ȲKu,~m\s5hllDYYƏ+Vǥ+xs`w2Ӄ _~9!2c7Qr @ ru ϣW]u֯_qK.uh"\HSNŶm ҒXɓ'qwb2e ~a?4fKLcqo/Z6A.cO+_ ^/{1꫸[ѕe;[nŔ)Se|xGۋ9s{1hKG[[VXp8YfHyDm۶ IO?giĸqR>$Il޼9-1cƈ+R-műcB$qkB.c9#nv1m41a26Mw} XtMBeo߾>>lQ__*mmmB$quR V^ Ӊs>|ٳ۷oOئ&L6-fa޼yx7C)=Emmm¶ /8pZr|XlSLrX|8lق .. 
_HD.Eŷmn?;pA<ضmn6]ⱛ(5 8rWWW^R"-=0o?5"D"q_n8X2CKK obxpgcɒ%[uRX=v¶m0yd1w}7^xj2D1)p\T[(,XxꩧzK#<}G z /Ư~+̘1֭W_7O󽋃Ǝ;pEO>$^z%\3g/_Tx&JmkhhHضKrh4n ?|K,}ٌttt@QD"tvv"&(I0s>gϞ !vڥΖ\⡇‰'qF|{̙3zjL>?O uon`ԨQطo_IΝ;#FHؑ#G?LؾsNH&M2q…Xz5V\y鶯.۱ؿ?n6TVVo_w^TTT`ź){1z ty%83śo.,aW_.0nנ IĪUۢhZeC׬Y#$I[loꪫtRXDQpB!˲Xr޻Zu>-[s9G1BlݺU4-\ׯ$|kFX~WSrŗ%Q^^.|>_^{eߋfTnd֬YrGyDkX,bxl6ÇǷ)"?|QSS#VX!6l O.ĶmV^cq뭷 IĂ ĶmĻ}x+E/۱ԩS咕XDQq %k׊3f$|X,СC}|>q7ѣG + c#wd;MMMBe!IR­Qr^sXAqw -=\b #w2{ŤIDCCXpذaF?{2' ]DDDDDd .B'"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""0!"""""?_EIENDB`pysph-master/docs/Images/local-remote-particles.png000066400000000000000000001072071356347341600227470ustar00rootroot00000000000000PNG  IHDRq37bKGD IDATxy\T?י}AdS5p251IsjVn]KZL5]7̮Jm7[rI4q ADdUa}Μ<2n 3xpfs>g7~s  ;jx" htqqqPTLBisdBΝ;ӧO;>WP`Ȑ!یWJ!mG3qQMM +|0NٳgcP*nBB!" 
!DMM2 Gll,!`Zqu?.\f;&22˖-C%BHBA!mشi+up}GŰaw㫪.\`d2,ZӧOwBH[EA!mbŋٳ'.\A9u.A_(,,dO2/v5BCA!mXRRvO6 /2d˖-ln<7|qUUUrss\TUUy!-ul ٿ?+%2b 2c~W<{*ʡɓ'o˗Q[[GbȑM~-AQQq!deewވG||<cT*̛7qԠ;vlFYYYXx1cJ<q# 4 >t̩Ԯ"++ K,a{z?=zhy|ݻ7y,Y!v(# Iҡc-[^{ o&!q1Ʈ?3Tވy`E;v5kv[cb͚5lh4bvI8Bڐ._ׯ_$@bb""##oFҌv½q%u3VBpp]q`rpp0}]6;vBZ< iCtfv***p;>d2>g}w}3f@ll, 8GaY <>ݟ5kuvc~WaHKKCRR{޽;z)v?99Oi5ZBC8èQe޽{q=J.w޽>sz҉'PQQh߾=fϞ}\0y3f~iϸq***pa5ɯҲL!mѣo#+Wz`ڴi f6`J۳gLi(#۷/t0 شi[{RѾ}{n6 dw"L^ti &~322הvr*!mD"… /v؁x-M^z%L6 RsL&|'ꫯ uWkl2$%%aڵv9ܹs=}yvA_~dM.c9r$ pjL&t:\|Fzz:k0駟"55+WDN<i8AMKiV<#;;qA y@A"E>>>A~\ ///Faa!{,::1f9ٌm۶]k>}bŊP߁ @]ͻ>Xpc999N{l5kߝ`eː̙3?~~~'33l?$#))LH[FA!L|>ujkkaZaXp\B\TX 8Щ%;jXl-M0l0Ev H`Zqudee!-- ǎcDSL/ B&W\O<.aΝw|dB>}o8}4._5k8<~xTWWn-cƍ_W6X|9'JrJ$QGH32 HIIAQQV+z=k o,\A*B"8pZ R Zx%A/?j]_Ĉ#|MfaҤIlom"L2@]_]v,T$%%a׮]~.\'|.--ūʺT*lܸzj i(fru㏸vjkkQSSfF#L&V+xfc<^FZo8qq4in݊D$|׭6n-X\;_:tKKxO,$|vI4-3L!͠;wlFmm-xbapΒH$lYUVCRG5jڵkHOOGVV hPDGGO>Ghhm l68|;_|έ` NfV^x%t:u*[}^cPcժUZ~!!!M~ UҤ$O_!رcZ-ZK8KGcY6(//.]`drIWlBND"Çe׎*.._~%֮]ݻwp߾}طoR)Z J2*H0n8+n4?;h+VUo`-[T*A+!9.4D\7UgKVc3TTT ##O˗fQ?s~?B qƶD1k,DGG7z|qYpUUUN;˸, R)L& X@FT=cJ%rss=fee%.^:6xTb7D"q3JKKCJJ fϞhXVkhcI$T*Vby wR lb3sAӝՈ#!Ρ 7h׮¦T?pۇQFݻw;*%Jj]`Z2[fY2۷gw :^ii)6o!mQ3ر#d2 V+8^  v R AK/95T*qdСCS_׫f:kJ]>1afQ-~i+۷cʔ)nۧ~zV;166ۣ8B@.C2݌y=z>(ƍ޽{]vX,3fSX9T*ebbbBdjTws])S3FW]1du?_QQQq ݻ_hKL!nҥK\zr)JmqJBhh5Zf._Sӟ^rذaT*KΟ7'NĠA\rnB237ҥ XQUw&H$d2([f ©".ߨQ8_OV1?&&o61><{9\vI_5^uZBA!n#pB*BTB&qY)8cÝnfR3Ksb]=z4.IF6#G7ş'lٲcsƥK+>`3ݺuի4!i(#čwDWLÇ7jv% cfdd`޽5kSD"aN[=cXnːjx0m4?AAA]7LHOOǛoٳg۵;v,>65MHsְ/f4m6TWW$''sHRL4 < G%Mh&xѣG?GyE(o7n@d2VQ<7q@\2d____(J9pÇ󨮮f}M&##;w{~]qg =zux+q6mT &`۶m^ fP^^*h46P(닠 DFF²O?ZeY\.?1qD%xqM9y$?=d\paaaҥKVׯ_Gxx.8سgt:m"JYJڷo &!4LE  du& 8<{<%%8ǎbAvvvs_2qB#""a + |'=_}All,y !^fEaݺuصk0h DEEa̘1Xh5R<ϳvZbّ{l6-$];-JR RaÆ{iA!wCA!N(//o>f%)A@UUW25558qX q\rk]0r9z~^DBW ضmNQFAVcՈzF <W\Aaa!+r;r:t@tt4bbbh(#l6~>}O IDATϟǚ5kХK1yd|8^x+aXV=ym 󨩩a58R?*kB8Bl6СCذaRSSq1tyf={p9mBVS2BicdB2BW_} E׮]ѹsgl6^Ê+bDFFk𞙙 ngBp  4GH $6na׏石禤e[ǏO?+V}f O ~: QQQFL___"22:u}҆l۶ G/rssqF)ԩ 
wyF+--3aڻLF#N>\ `(,[DT*eпDGGߵ1!TBڐɓ'cɷܴi0d8qv3͸|2_]]nt:DEE++we vixGff&Ξ=  rKR) QVVcذaOIih&$z7oF`` ڷoh}  @MM 󑙙`0;v{ݻ0L0 vp [f〺=j2  iVV8BHذayT*ܹ0j(̟?&M/+"%%lbܱH8T*\.qPPTABB%JPGH `2P]]ͺ?ob{(R6Q.\ڵk? ##ݺuPt+JdɔWb޽0ͨfcm8r燰0L82! /ep \~& @e4qLR} mKg<l}.]#FڵkСC^a󨬬Ķm`4QSStod,رc[!ΤIIIIB \|հX,=je<{\3jQYY L&nnsh\t ;wرcRSrJt:tڵ&<;v@Aպ4` +z=J%\r~BgL!^y\rn |M?eVD`DGGŋ_~ly`\M&ߜP( JY'x,BnEA!^h4"//&OyCH$lyU\FST֭[y6 ػw/yŋ[d–-[,8VSN aaao0XWTd D>}0|pW BH~ !L&.^2326-ZVV"''fMyjf̘\ٳCW|fzp?(((@jj*~X,y9:!PGY,\t-56xY`l6j"//љضm{Kbove[,Z9s`ʔ)ڱ5DS)^zIO񜖵@H+TPPPDWr8cҥKM#2CwU8q?ecL&RGwV r[HHCCw!,4GiZh4psW_j :Υcx'N`Μ99r$JKK=}9vnI`qDPP͛QF{2 vx1XZ^5!88BNO`0߿QjlqMM L&, J%|||['S(#Csd2a߾}زe `'Ⱦm7!^`0p~8 :CšCwz&E|sPЭ[7^1Nz[hh(yGţ> " ǏCK2 !!!:ep1={9;zywF"@*BX,( l6HHH@Ni߰{שpPVL=/… >Cff}'Rٶߖ9yݻQTT^oW3cbvݻ1l0ǝ/A;B )5 n.&֭ƍ˗/u<йsgrp`SW)X*,<"t:/v)iV+[j0LߑW@ ߬URRb;]mgO"`ٸx"֮]91պv P( =d8* ڵsCS=zeDl6+֕rׯ_eUqBG塺i(So|مbɒ% .}* !!!Ͷ5ٳg1tP7==YYYt0L0͍+v;b'JZɓ'7Rjee%{ hVVũSPVV07oP.C&!00!!!>}:Ր#e!h4H$9s&fΜ٤18Ԥsy AY1A،JJriq8~8Ю]Fwذah4(**kPeGT*!HB|‰cEQQQq 0bĈ'>4 h4(++CDDD_!ވ8BK5x8q/_ ZđVl6zp/о}{;<,T*ExxxpFb ~b, P=z4:tF*f~Z-K!dZ(..nО8jf8qiii8z(oJСC1rH 4ȣAjaX^!W탪O*DB޽{ bʕ+j .,Lqt-_DEE&Osdggbhdmڀ`|\?QF.CVC&!<<>`wEjj*4 w (++VE=m6={ ,@pppNjKj>>>vƄ&đd2a֭ꫯ~ӦMXfEE^?:8qc{.]z#&zyyyX,lÿ8HٛLƂ]{_UUV ___Wbwof{^Ldggy3rsxAB`= ԩS8~8u;سg?k׮'[PB h{ڵkQQQqfjjjPRR[D3|;(--ba-\ȉfZjfįd2t޽Y^|}̙3 ̱-Zyyy(,,Dyy]_ZFNt>v233QYY٨$ ߿O>$:VRAT>(œ:8"YVغu]vEBBF[l6q!9ь37WWr *+++PP=M#''}MඩM\.GϞ=ZhWt ;QQQ8|Ӊ& lߘ JC`` GS4c֬YСV+Ν;8t| nʔ)M&[QGZj,^챮]GBBF %%ɸr {|XbG_^>qeÛ 8T:łlf%rRJjiZ;v,{L֚;wGAuu5[ wāǣO>/8qCPPT̙3]B7&%%%y"qb_9s@Y;8]t)S`4qY@ii)Ν; &4fh???b֟]!g}QrJݺukq{%fV 3Z&~_D bccQUU#FNf.]baٵ ޽{>r~waƌ%"J%|||p}5(#- xqau.]Yf5iD"`СġCrx]rP*h߾=$ &9x_6D&=^r:t@LLL F# XjCe5\A"@#44ٗ#CCCqu$''#33fjϟj&O?aܸqJfNرc0" JN:{x9xZN%-ƾ}믳 ,9s\:_| 6+VH 'ZFuu5Zm˩ Dppp.pUTTT*(RT*=gDDD_~~7\|-:Z)}1bz1 *KZٌ?G\9sPPP;v֯_#FxL&Chh(BCCYco* 5OoA@uu5Vk y6 ǁyTVVz,WNNbcc=r-ҥK\r2 eY>qvZTB*:U'm¶mX,\Ql8v]pB1XTTԨB H닐#""oMPo2믿bΜ9طo_`.TRRA!99ӗ$;wf ^5wیT*t*ʭI>in?sNe0 OѫW/l޼ ij>{KVh4pYիذaf̘իWÇw~555]DFF/ /F2  
۳?82 {[![PG^EE?YYYHLLDNs!''&MjS"00@]렫W6 ]K)G RXV FԩSk.hfa.={DPP|||XeWS{r}m6c47 KIIaoOW\bUՎoX\￟?rH#4dj*q:t(.\ʄGS?~<>c-ZGz芜'H0fj=k R) r9|}}ѡC 8%&ěQGީSmg~1s?X~=+;j(KnӇ>㒦qǏ@Ľq`0@bذa8k׮%9$$$= !JREq+Y׊zEgg( +,,d6~ a 5.qXи) 11,{#ZO?N¿/H$L& H$6 * ?#{swgRz°aZW zCpppLRbK,gp}u >0`z~!!!1b|||p5tӗ`ٳgT*YC❈bkPPYzBZ2 @II b˓;dS[[nS[ DUU$S%FQ\\QQQ_Fm =z>|yCE@@z)̟?ڵU֑Jc?#%K $$NjhR)$I ڥR)N:-[`q}u^YY:Tu3Æ }݇r& V @DDIHkDSYc{Ǭ\me|Al޼w=GRRv HLLO<ш'q%h4FgxJ%BCCZZu8XZZH,X%PG^EE&M޽o6'NdnJqH //5L J\rk{?K/ni9h#zڵcUz={uVƶN!;;سgۇcǎId2gJRȅ^|ETTT`˖-֭qIGH[G3qEzju}kj՘6mKSe,o"-- /^W@@~wk^GNNV+f[U QTB*O>-+㭷'|B8"fDQQ`ҤIx72ֲe믿i7ߴڽBΝæMp1 ٳ1}t`8(,,jbqi '&1*vJJ=FAAA8999O<˗GDB< Hc,X9st/6l`W\ Z-V\Ɋ קRЫW/AVyTUU!??׮]ڵÒ%KzͺRqq1kl/kJkYrR:uB]|孋Fufʕ+68Ң[صk7%K`ҤI.9&Le\rno j5~a<,}Y*--EII xl { IDAT7b1CP@" ::+ꦵ$l&nXx1z!̘1:B HbXpB{|ȑxwuA~~>xX?nt!!Xή]D^j fU*|}}綸۷oO?>}6>!mqũkf`޼y=zíA}l~РAxwkc,k߾=V\޽{7fVc&L@RR[ZYV-wrsSNje[TW,` W*++CxxxGH[GAixǺu.] !!#GDϞ=oYڳl¡C+W}~̙x饗Z̛JJJ0w\TUUK裏\gaӦM377b.b`8{,`ĈXviV+j2rř8G~e3rRBT*Eǎ ~g^oz HaXpIGQQ-{R):taÆ!>> r&pApȑ#tf-ϋC6Ȃ p q|rs$> zrxbANNf3f3[:u& ەSh{عs'***0c i HeXPVVZ?\'ߏd\zթcJ%M~mI* |ll6N8!C9o6~SO= 6zy\xe~ ' >S 7osg̘n4j(??p\AG},\!-IH"OXfݒiv">>s\4:tĉXl\vgO>_~%***׿zL\\"##QRRbPݰh4صs" 8f׮]C\\[J8CX~=_\\ϳ=!!!l) 8{,ϟYf{uM(# )i=~MSs˵,̈́ Eí1DDzfefl}3hs뜳y3:իWCP 77fBLL 6mZk-((`D"1@ @HHBBBp@~*nߺuK9B#)) gϞŝ;w*L@qwwGΝ;7ؠG<]k.??qUqQT޽{ ???罽! QZZ<`/Ig -={:h0f̞=ƽBx b֬Y()){0w\l߾F%Imkk-뵳n5\.޽{V[ 8ܺu n¾}P أGX cPuvU:JPR \5f} gy}LJ=w;~ kVZZjqCnn.֯_s1Y^x+V`U_SBMWiz$H16lx&spp@6mХKt[~&x֯_#G{OxGQQ4$qL8QF40% Č30qD^yl߲\.Ə2 !zxжm[l۶,{ك5kP/!Fz1 @edT7ox^_|8iUbVSP+>qao(..^ܹs bXYY۷7=44ǡe˖ł tҺ$B*E3qTի,裏y᭷b_ ۷o7\7*ױ8vO>z?zv0E]ϴSNFT :Er /`r],^` J*buZ_~a`8qǙ8qN0aB) 4]1kб15ICA!U.Tiбb6l`}S-ۢE v;//FD }ܻw~)_HfbZo߾~lcAf. }JR#Bѿ\BTPupp`_(Yf:ƷlقW̙3ss5k|GGGL2ݷ NsDaåKXof͚eeez%)h ppp0 !u8BjrxwY0l2:u$.))_\~-icر5:ou0j(8::P2[FB8::}fvލcbΜ9mo/WݸB}~C!vǏu7o3^|E)))a{ׄBuC"`Ŋ:u*rss!0o&Mb?:Lސ8pc>jܸ1߿@]Y$8D H$ TB, & c;3X.8<u?LMW_ž}?~<>D"v¢EX S04W. 
B"p}pM6IM;4GH5ْc0f.qjy֭c30 qRCUR)ƌѣGիHJJŋZK* [F^УGg67jԈu06 0<ݶr>>>~:P( q5{43pb0BqTcp+r!^ZgM(]v۷/:vf͚ǡYYY8s X ___\Ƴ(..L&L&c3sBRVVVppp{#9@A!Fؿ?֮]SAAA DN*\ɓ'q \pAݻcٲeRP`ر}6`СX`YZd 9ݻkdP(1T#G"44M4<ŋغuNh{{{,^5FBѠ #aŊx9+++xzzO<ݻwg^kccɓ'c„ F0OODFF>&Nh1v܉M6߿q"""pIXϞ='}),[ ˞+WB@A!5Rm۶[U*bĈeee777v_o=z(@aaa0`y[4ܹ3>zz_`` M)SѣGI&a&| b6/?xVп}STHKKɓ',ǍpZ̝Q{a,]z9}YJR?PG,^aa!Ξ=7o"//̵;Zj_~;$Xf ݻsvvvpwwg?~;wɓ'ϼYfM[o9s`رFGRUST4i7eʔZkFix(# WVSR(ȑ#-BC!q|7lIQ_Θ4iFi ~'ņu#,++C\\V^^{ k׮8pqOyQG,Jyy98m08#j!.\@RRΜ9۷o8׈D"xxxGӧ:wlULL m@=#7o}=V sΈ\t8BQ(#ԩSXhk[ƻヒ^zU<ڧO?2 IIIq @zz:&O%KW^>ȳ$Iҙ춯  $$-OAۿMXK[GżyXe˖!66 k5 oK.eهŘ;w.+LK2 쾱/}vYcBq^;w/_ڶmo3e)! ;wm۶˗/ǹsLz~S*:E{&0`h!ˍB(#VAA"""X???lڴ ͛77oM6z_'|6܊JT͛_|QDz7x\B(#͛fwggg]֤_xXz5[F+**–-[Lv~Rbw5ǎ{ip^^ݬY3%8RO]~f.\hX`/ ##}k>.''7+4]QܧigXk !Pđzi߾}l\Ϟ=UTHII5{f.T*Xz-["((qF7ިvI.]sBA8É'3f|2_l֬Y!LfgΜA~~~DDD@Rgpss%3tY'SB AAw.^.i[ش4|hѢNt :СCޑё̈́) l޼٬᫯bCBB:!a ;III>5@*tSGhh(}Qm͛7zՖ#QGMmڴ8M 6%*CkW*Ϗnݺ!00Hܻw8x ?sLHRCy~PGM41Twuue߿oԸ2}|ͽ{0gUܹsO=z ((d'<(#v{[[[aL};;;v/-Z@tt4KrqMØ3gNj퀎B% i˟ڽ-e*c% K׮]hp7n`ҤI6mFDbe͛wX$ IDAT77h5qHKK˗q- ֬Y]vaܸq۷/+=8?8v~g¿ݺuç~ GGj#55{Err2=zT띝ѳgO[B,qc%>2220p@Aff3U`eeULʍ7tGq`D"f͚cܻwƆ ؿ@*Axu}ݻwԦbܺu 999Nqp +++ ^^^ppp+%(#/mbʕUǝ;wJ$lܸ;v5R$&&PϚm߾,Y>zH[ ƍøqPXXdff"//E;|||еkW8;;U M瑝;w@.C$C^ТE\ΝܹsYv#.\hp_[P^{ bcc}vpׯ#,, ޽{vMdPwwЗ ߿oRpgfL5< PRRP'N`'I5JFB||<о}{w[.PG,3^y啺z͛ERRR Y9cǚnܸj k֬1D"&Ov!222 ܹsi&/jˬR&0`HRvjQPPRpځ\ii) Wydffs[ƨQݸq .ݻqUy> ŋo۷cĈ}&pEYUWWW_F={b嬎Xjj*K*ځR47oĿ gggz 4RJ% JeU78Νc%,󈉉K/T*[bӑѣGm۶ؽ{7,2`h&$11K.œ'OtB߾}ѱcG4oޜe!''8}4 _"!!}ɂ'}:t7n8ѴiS?W߿oY7kc4;ӧAKK,M!//(++FR*D(++X,ƭ[вeKi*٘:u*`F… }6`РA&6mDhh(вeKdffo߾,aB4GHq6oތHD" 2{޽{@ǫ%K"4ikd2lݺݟ8q"^xoSNԟƍ21hY3''lۛ,9WF.PVZ͛,xta0'XPT=&776l~gaĈ(//\.իѳgO5 KSNEDD ł ׮]Cxx8ufK1auCQQ1w\DGG#00#99iڴ)ƏofϞɓ'CR!)) 3LUFK/… (//Grr^u⢣lܸ7nDPPV:#yl\m-mrDrܹsuxڣGpBlݺZxבƞ8nnnرc8/ɓ'cΝy3f acc@L3g"&&"!%%O<ܹs1apj K.EΝƍc.]1?Yp_!!!~2Ӵgkj{Wp888b OpvvѨQ#N:!>>l<<<)))lk֬YdY!!!?YkEJJ X`/8lٲ:̙އP(_|瑟OIIh۶mw޽`ҤIU~III E4(#Bquֱ/]b&4쌵kײ/ƚO|2kPִ4|hѢNt :Tc_z%vʕ+f]suuexڵkpǾ,\2VMhSUy͸ecA.ٳWO,ѣ@7oFhh(N8 9s<ϳG/^pttĚ5kC–=㣳o߾e! 
cXuh=Mϵql_hU=Ϡy/Fԣ(#Blo5,Yb^^^:ڵK'єn޼nydeeVggg8::PpՆ giii50֭[@]$<<,<N8BnO.~ܹW\G} HA,m-ZKZ}ӴlR߱cG888Ν;o9s0p@6;Il>r_ԳyĴ(#B;cA!!!B2ǏmͲ>BBBa899PgRMsW\<<<0n8vΝؿI|k?ڋJG],oW AWrРA:]ܹ@Tcٲe 3O[`r uֈd,++qXr%`l[CBBu靨(nϟ:* ⴻ~ܽ{+ rd2 ̀S @waU1a' 1sL6sQKW{9YEcZR==^ml Cff&Xf d2YqyǮ]i&X`` E"QrB$؊h^ ## pttdgΜŋPĵk/̞:{#ѡC\v 'O^d\rVqȑ cϼRfǴõmiii_gP?)))ߣ ~g4k vvvٳg5#iitME٭[7Vz#//1%;eeeFØTXw9D",_mڴ޿~zDFFHlذk߾NwT0bX[[WHMME۶mu` 66۷D"A޽qq,^ǏJBpp0Zh~ zsg޸qcD"ի8<  #G૯b3 ,3f@(Vz/^Dii)t-|_gիWc(--ŤIo ++8O 5QVV8^Z :w6r_x 1 Ն0%UyyNPammm۶aٲe?DRRF#G7033?Stuk5%вeKyʮ"2>'NysΕ_g>ҽxĴ(#eff>>>;v 66wPT!<_Rioƶ`3W,6O??6o̺ٳ{쁇zvbx^SNP3fرc-QVXz3[U @,C"@(VDޘ^O'TTڥ|:t2eP8B, 2eAYM4a5)5k ۷GZZ?~sU ''ϴrppUkk@ &wزe ?۷o [[[jUs,-Z0XIbb"N>[$D{02L! 5\ ʵ;۷GGGe˖ b{7nwwweee: )0l*ZBtt4+ 2§9;;cȐ!믱rJ uA׮]!!J!Ͷ7N(ӭ[ZۇWΞ=.] ƒ%K0mڴ,b4GXrJihũ@+DE M6y.^~˗/7&̞=l۶- b17n HRC$ʪP(D"!Hп4nDWNH  ʘ,NB/]v5x\}!,,?zNښR(`eL$ , 5jX +++HRg4oVVVϧ ooo3Bj-bX Rg޽{oUjyV͚53͛qm֎JS%22Ff0ϟۛ争i  ddD8* U8C+   YkkkVwРAfmSGHm_; @5b3a BYdqb۶m~m-;Ι3}e>wy2\.Ǯ]8PٳgWXË=9[-amm DD3+++GD"T*5`kk֭[cĈC"!lRRR0sLꙇ fkok߿VZSv_5o߮S:S,dzz:N:tehԨ/^lt R p9P)+//gqΜ9e˖<׺uk^2 <+Wp޽g^/bŵ^Ǐܿ%h+ТE 888b~bnݺI&Uۼyzeg^z%lٲd2~|w(..6-[",, AAA D&*~b 1 pNG}fڶmq;y$/^DOOOرC.TRRş={eee{F>}ЫW/F!8BbժUlϘ#}-}X柳36o\/9)Jdee!77(//5k-Zݝf! q4OƢEt[hǣo߾:+xٳGgf&kG!f(#qUDžB!ڵk.] 
vvv8%%%ٳgq]cFٳgO*!!_'H IDATQGHq>g3}uL|uRDqq1ʠP(؞PM.^8Br9rǏǹs*,ŠI&ӧѥKZJBq>GGGDEE!$$Fq%_^8M`_!!!Rz&->>͚5B"yL8111x7PXXFN81c 99( \|ݺuCLL3^gϞyfuqFtJ?ѣG?M0>}:lmm֭[!U BH+--i*W^yW\a5墢b OpvvѨQ#올,r@xx8RRR ֬Y {{{K,?Yk___ 8k .\CN0gv=B_|x{U9~+ &MTg} gΜQ*APG!)JZK|zO6777r={0x`sGc Ǐqe<ggg]/^pttĚ5kCܸq8g@@||| ˑ ۷/ >> 1M} 'N]'99zSN=ztfivѭ[7 PXXXgMGA!:Wp^É'P(₎;5Y...ؿ?v܉+W_DA, -Z7od rbd믿:vܹsx> ˙3gH$bW7~ex1a̙3FƍWƲe˪~(#Rº:4ٖ y;w`,[ ` 4<~@֭6mݻaʕӧm +++DEE/{+|/ N3Q}T*f̘ Xf <%K?F=*<nBs(**bkkfN(B$ UnСRSSw}=ަMddd~z<٭k׮m۶ppp`...رc>A(M6v&O-[@*yB&gaڵ,cUs -¢E遪T*Rdffx@y/s|y*.D?u!瞍M BT۷m۲5Xo{ѿ<`h?gϞhӦ ~4H0\zÆ ÷~Bȑ#ꫯXV@  0cƌŋWŋ(--E@@N 4qA߿ǎg}cǢC/?\ Bs?FFF JɛWF,8___8::VtgϞFOKJJ:t8z(|}}ufq-шaܹzgPPP&MRI5(#Rx˗!Y g'@X DT_|^3޽{#55YYY7&m%R 899A$A J0% Y \p8}4VZE\B,X b16mjٳgѥK`Xd MVחELzBlllиqc8eieU@HĂ&MT`ɲ;w ;wo]חELB7J%P^^\Je$L$ !Ӝ image/svg+xml 1 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 remote for 1 remote for 2 pysph-master/docs/Images/periodic-domain-ghost-particle-tags.png000066400000000000000000003233301356347341600253170ustar00rootroot00000000000000PNG  IHDR XvpsBIT|d pHYsaa?i IDATxiU; (橘yHTDcbv&tGIҝnoNɇbFLF;11ѴG& zڜMCUu.Ngk{BJ)a1c1@ 0c1|x 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1 1c1Өj*ko~zOnϮ]ظq#{+1c=n:~qF6ns7p_~9?Oϔc1PH)6??sϽv---,^UV1n8@'1seƍ;5c9s?[[ZZ\l53s֮]˦MN+(c1ݕ~s=^͛7wac Yv1~0c1d֬Ț/ZaNO@:eɒ%i1ƘV~ H'qFx ֮]զc1Ɯ\l5twqWo߾]m1cLqռW| qw}Lss7c1e.^cglL4Onby ssw8شi---̙3珶mnnf޼ylݺ4caĈϫ'!N7zjn=lذqƱzjqhnnf֭|{cڴibw%|o]mFksz6rz6ksj^z%>' ٕg`;s*b1L$իWn:n&L ի{Np 7o}~ OsM63t$K.ksz6rz6k}8S[l5|3gλvm']8Uk:c1ƘRl5n|b1ST=~ORj1c1|T 7xcWmڜͩt_83I]ا1b*9=^Su9=^1Ɣ_1cpc/b1c4|bk1SSI 1c1c1Ƙ JW@1c1Ol1c*b* c1T8`J,c1cL[c1Ƙ 39Q4g_1c1t>5cp|$|c1ic1SI 1c11cLJW@1c1c1T8Tb1c4|bk1SSI 1c1c1Ƙ J¿Wc1%Xc1i 1cLc $|c1ic1©$uN͙W@1c1Ol1c*b* _1c1t>5cpSI 1c1c1Ƙ JW@1c1Ol1c*_1c1c: c1T8e* _1g6vE{{;{cǎk.?ٿ?Ǐg׮];v V @{{;v CBJwȑ#>|ݻw۳g:mѣdہNk۾}Nѣ8UУj3A2݆5k$ YMyߴ+K@2hPؘ &>}&wblqi`~ H{ǧB41 4(6M81UWU% 5*5 
TӣG4qb#iDCC=rdRuUU4iRM@:xp;fLRPH'LH{' ?;d߾ H} JԻ.M81U HcFNÇM@YS&NzTW' lhHFmiĉ&iСqb&Nzꕀ4hl0aBW_ԯo4acǦAt„T njIÆ 9&NLaۨ#Ȇa[ϰmaįQ HUb8qbkZ.mlLCo,wﺺt]w֮_?".gUtٱt#Ά˗b1]i1"H@?_ijH!]ibyT+ E 4HA e.q>iPy1K!¦k!_ HU-c H!M9ΉBcö>aXⸯtI FÞ.;Ǻ4ƶÞSbضg2H1GoHbclR/q)b1ƆĚ3clvfOľkŴb_p~QS~,2ec͚5 | u,7bEV`0Y`tXLw¶sb-w^n@Za˂8qak='$iXlc/bU,`Qj p,7|$#_Ou^vpյP67s뀣}{Fv}1֮]1ÁO@LoKP`z6JzI@-X3zľvQp<?DI@_p K69ornlZ|v|6#76艒Q=al$JZQ`Ϗ Fcc=QR4luuִ%(x%K1@?d06%EG5m;=m{P2ow4X(b?EQUW3Es)=%Z:@*ty׻F7.PW?׊~P5qlOqlJ@z\ZD:?ֆbmubwO|=.t8޽(9!O~,AY]w1cw]yhSbc>Ĕ /"_ W EU*1|߯eOPl`l c}P1J8zBr#(υ QP/({EQ" 3 *ߟZ F( _"Lj8PU< JH|;m?BID_uQE o/c5ِ^/77G¶obQq,?B_=..0o>[MkOu{Sv&SoB &n/'!~:w{~@?維ʽ &>/becsc]lϋ o4۟ӯicy[{qcѶb|x5-Oemgľ{>`WMw >~=ۇ.;6sRo|nAss3w~; ]mΟ3JTY|釯DUZUGfts6T5UGvۑ6T\nj܍wR$.@7CCƕ-ltAT2:f:kQp1n.%$u[kbfLWѰ@UgN.T|3 UZRGŚ{ϟF&C7kͬ;;H*r(=?lk^Ngk׎XK> +RK.e޼yT"]TXv#Tlnb'`h"`G"` Hq2$ ֢g } %%^x֠b! o : F{ɸ桠{X~:nEawUg6 xDd VLD>/@7ٮE7^ׁsy,-^^^?:LF^{F&b\Q3{@5Fyϣ=C%kP"v.m^C̵h"1)Q'|2g 4@zW%yq?~&ia* ŐC $E|ٱC/'6 y@޶ F<" P̶8"곟/1?#z@&Tqek:cd|6Q "z@x?ҳ5ol'|;H ` )?b);92Ͳ|j8 U@cqZ}ERKP5b$8CQų U#IDf]nR݃*Q/0G(j^Q\TVn'T)5ocY1g;>1edd[(@b髛P𷱏](ٍJ`ۀϠ`܊]HЊ4!(齇 M8Bk\V@PЬB ☿JnF>x]0D| %;s17pZض,%>G(Ѹ3lnGKb ¶Z}=0JzDj9/?J}htb؈T3r: ̇1k,Q'FA [@aZS F@ȁ1`Uՠϡ@P%iF0Jעb"j7)lۆZs)|RODUё8aH mG̿xٶ U kbEvo5aۣq,YGbzPض?X(y[Z]=Q)tm}0~i뱿ol\f!5`D$B?Lh}GPjYIha95H睽H5#%M#sԚr??2iÉφwmXjr?Ƚ3?{ߓRkʎc&ؖiOC($ҸQjq,3˯w~~^x:o= IiL_kef^܆dڵ\O|zc[ׯgڵZŋvJoûn:~#܅4PǙcM$!AOT$04g+Ӑ`'" 0T3J$X4נ;16'Ŝ26# -8b>mKAQ˨ H#I Z z ơtJ--"d$)ET=uy U4*Q*3 Xk0V"JTv71FҖU9/|w!mg ~&*І~P~yjn )/`G֭[w7M6ns}Q,[[9s;ΰ]ǒ%K=x,5._~ٞBIZ eP1(a=s 3Q@ ?%"P1=QvƇmϣb$ c͒{a[J6:JHf$eJcz5aJl, Žg}oXBk"Jj.BISDm|̽k¶!(iF֜XэsPr>{`RintH/Zώ}*b1C@W_4~4w޽{OP(vJ`='IzƆSxP P sM;U?&}OH}6>/.m̶l'ϫbR1N\TfcⳚx͍ CS $踦#k=ٝ 탤blnM:6:TkZ.co9G0Ѥ45-2W@LRJi7B߾}O<g´.g˖- nB6T}[n]$Ӑ|@]* 1>Ps(E+֕{Q5zTlC꘿>j]K\f]a*)+̨ms(Ʈ*U0=?Lds.nh ([P^ִ!q0Ʈ<k<=qPwBbTm n@ࣱfm\>cWDZľo[l' gɴȦM:ٚa鴧D #z!vmXPagQ+DžHuZA!2#}gP@ ,iCAs8+\/1~:lF򄬅g+ .{N_$#ılCH~%P͗"-vJ>u15]ۂQ%7-(k1#A%X7W}c-oBH~MUnW=W!mgo}W)1}tmxMQ#FP@m v#((=`QJ`wlGU6{(A,@bZT]_@A!r 
squfJD(Q[caaEjYwQZ]֠Ĉ^fz!Aζ7P+11viBW h[9>cO\╭)=(Ŗme*EA7>BJzotR:>1bƘ ]4VZyOyDŽ T= {o҄w'ZbhƦR7v84;|/kQs=RkʎIܚrrnl\OeHJm#L>RkJ M67'اRjMYEI^ [H'sr+)ql8ڶ!ʏUzNn7ڳ5ɭHτ};?ǽa}-HJZ3.;6sb )z$ D$ אDxx5%K%Wc_=QUdDɡ10>lWmH|w%!_Ti2vc1H6{5t6><.T U{ WD.aE]ێBU;)NblR%JOX>אOڑ<.Tc~clF_}_g1s3OݻO;V\ve"9(y荞3~ހ$3P<]_?P7(ME0[, zzp& {><7m@<~'F F;?A-.Gi$QD8֙H7&֦o;$wSA-Dz$_A7nivOj>҄A:qMXVNn H^Jf#Ig? IDAT2{| $)9ߣb?xd@Lќ )ǶN~o~(ЃXaQHn<@1p #q0g[6P "q%OCsc(pޜmX TmJ)P.kڗĶk@ C(sXXl8J:@Ğmmw пc#ΡzJZZZxꩧXfM?L1L3piA*]rl77zjns=<|_tr}}߮;wX(p{J=$l2C|tk+LBUH PE Vx9>G!:Ts\A+oʳ:fօrh`cϰ#mC)yËw8۬Aȅ 9NOf- 󗳗  -Ƒ_~d?jε Oߡ5ag6hb/`s1]Gȋ8NU p-j`?[hDnjZk:D3C^avEa;/z28ggHa[p {x5~#. +lko{ G?W|;w8UA0; R.XgbƔ_X{䟄O^{d'??k֬__:Σ=w9Ծ]ֿ6eHްI Cߎ qL}^0#O|A| ]{YBt Þa9EeBMEH\?  )!} {>TQs9">Xc6MZ ; ɧS¶*Jςk16W-@M? $8҉jj9,öj8)cww{wPcwKn=ٷa:\SRJO]_JH>M $T&8?A!A]bOpQHpqc %Xc&JpE'9 f}\&Ʀw z|clb1V1qm 4XC̑6?pۚ†bqL163 aX?L(:/J/ƆXBb 41!%cb*fǺ/Եk6ODȞ^/nk)_ךYtٱt#*;N*@Z;!䶙Y+qƲc($13R)9/:dI),9:7bR3JpjcljXx?.ll~$%\rc%%Z$%LsC}9# FtgM%?V11c}Ȥ/owx? lMύu&dN$"x?~j¶ F¶bR3X>dw9ϧy;NWO@˔@jY=w7H(p=Z3D"}$chE2E攋ȴpH@3W Ăn  J鹾#Uaİ|ZXwj, +L+j7H5$z"_ǐ`(=r6JJnm@Sjj9% ⓨf!.$!9d%wǚաc1'vJҖrz|e o{(ƞ磳c>e*ZqR~εH-^<ЭFjŸ3~ލZCbl- ~BLnF!8zoxJC>֭s}uQʬQ/!HG-j %*HzFnmDPQC6H;t縰Gѱ cŜqtgҏ?^۱#a[n(j m]v v/H̴` /߄nnۍ8JB[}[n'J(A( 8)5odOJ-Pp:i(h1&-#?VuXﰨ*b(gƶ!?=w6>;j%Zev#;vJT{H80 P5Tsۆp;uJ9lHb/Ԣ6W ynDIHq {хQ_QWe-T%Bې$1,$ĶD2($-Hr#C=>.B(TDse(1(hCe|5RXeƢEHisщ ֶ $GzoD)B5?3^Q [L}HEG_i~%’(hZL@OEAg PB3%514n.APQ$mC-ES3\sJ !)1{/<ҀOF{=ax)JDwGX1%&Ś>$C=%kP"v.mwSvo/BI-Z1CBWoJTz'|ab_HM9}dԮ R/Ԫ1]VzFo:kx_)'3(; 1>3NU۔NL0U]f^nlBt1r]e$-&uJ0d5cHؓ6;NRӒU6sscb8cþb;/:f6BR ]ֹw?gs{:tr*P޶qI]ek:*w| ]"'0]. 
%w%>"FX$SM mYB:ۀ2F+)q&끅gUN[TGWWJktֶj{y~}}U"A,@Mlײ,X{E>8իeYZT!ѨiY{۶ܥYN.˺WQ %MwƎ+o}-$e-MEO8aN]U3]'UF٣.ǓoY 4x4IDaƶfe=>^*]S)wxszӁXLҴӉO/7fc4Y(줕+G}_P8QENGPOgLTQ]ٳ ڙgn[ rLU߯p`0Z3 4%dT2>*D,2SU2 pIb%UΝʕ#)Ð4De|D<EE05LiBo/q^D"xr]D3YfD|D<|3GfdL.9~/aSʆA!R$A Ppw4f<Qg;?Zo/ҢOoQv:3gλҕvo~#*<,Ȭg(vqc⩧N窷zҲ0| 4!4Jc$Eq9Zpg]Lu:\G n2skΑmҧp]rIpFi2DL /qLsC]l'1ƒV`cjS݁ KULC(L4>"*CbpC?76WY/< $'2ƒIoGee1٥kY&jֿy){KYu~Hax\vѾ\pݢ$:5eP0\&f5kp::>*rr= yzϚnxX:iqREHpzGZüyuO7my ;\* eh]\Zs IDATrqdt[Z+trLW@$ \Βd&qbv㨣cID]+4LI%s֜XY_IHQz0ڜE7KuCś(,Ӽy ots\hB@ٖ!Aӄp.N2c FBpYX$>>8kM.j8ҡM4}0%SrpF<Dz~2LSp'(V/f\A ?Uo99 ]Cʢl=+>xr90a` |N0|gb8QTZz Etq,؂%h r,:J0%rsTAV(ʇe#HDTyOΙK!x9W,ʟ[~Q_鯦)N_XO*-U$9>/\r[Y3qx6c,4z41|b|]Z==e3om`91F$Vf[mPď)t"ף?pCY #iLtV#U?gȉ~%XXHA'1g[>da$2\PZ:TO+'O׿Itr86Rg.J$N4ΞXc/IPMe7͜F;H&Orr.ͽyv2y+%J o54^ݗLNdLqۉpӁ ʁ+ʽI(dk-<0vM 16dgB߬^fO/1H52V7],4 DcY 8'"Xˇv%P]Rx\Is[_z6D1PWU[ aHYMI.èb gޯUDc,_Q'B^s[ƏGB Z[fcBL4]zf=\h% ñhl)uvMX-DD]yyOFUUU&*y| IՕQ˲@:<^C(?/`VhD`X0Rё>AQdovHx=m͚w:!3&Ř?NX%//J I, y4ebWKsa48'<::UԬŠƍkך ! + 0s&9 x}MhժH[8D̚#pB/NSk+-Y"1u*|>b=+aYO<cry.]$xyy3{+⊛3}fMiQ(T72"5/?^aw5KW^1Cp*++v\V0O$č7ⷿUeYI|z61"mTB{cN8+ޅ#J$;:c޼Pe7;NVUYQNK{p'cY6|Ǭ^_=~G*q1px9-䔓iN<mK `H)9@LW4|dÛJmU6UdX2{hm.+ ctقEznzz;{YwR.7k]0~k|qSkJ4M+?$ZxX\BEeiڡNM @g;,㥌ls23yppts_۹2}>ߢE]s==Z\gVsO˞YlueϚ‘ +NW4~]kSۯ_mԮR>{iNk0c`PAUQ9F?cvOeg2%k KrDZc^1fJ@@lX$FDI.iߟw1 :R~*;GW.t&I:]D"so ey#tn*)KJs;<@yyᇟXH˕6~QYȪjcsٲU`]wW T +ўu4%-V|~z4 2 }xJp+59Di,s2q>=|pg/QKrs͵!LOK4675QNׯN,j[(>(Ioi@WJ1"#bPX(lVtt$ 5H a"reYx}s^ `d-bۋwu?iw;Hi@ p9dNU}0͒hnw*yr_J"Uݷc98O-La\Sq1LlV?ley{g2`Gx_Jf:=DQ3 ė!IootpM!z[/ ~z WQo4?2p9@n h qD-DEӣxeA:$,xir޼ϝNeӯ7צּPo^oimܝ;ӵ8$47NXȹ}c؈F_s%$:Z[+5 'Hi &|ϛz<ؾӟ‡UVAN0,7hhݦZUI* $Yo9=R|u~˭<( -_o"'k@@O6Q+.cؚ eeF`8G,S1s&ntt/` $8`sLj ғ'd2fWWwvZ!r }۶NM[~9rJx":L0 uao-Ug^axjSF#uٲK/EM TUXmق;;:kq1Ŕc}uw]1"%|}9d t{^dy4h 4 1_pp3S r=G={49w3,}PbǎEJm1uL66hot³>lq_e.oЈCY{Wkn[H3S)M鴉G*)sQ>}ի73fn5Cgw2X>\~~YT=C @sqN;w0\Vc "RQ"9ٓHw0VV: Zlɔd8 \3-JZ6_ %i dITO:i=w#oa\JJ_U' WŹĘbi!Luqel)Gꆜ)ΩX>\Xnݸ9mn*z ee>@3W꾪d$_y_HE\ֻna3׊0}|f AJK >y,^՘q9C NZZnź(WV>JsQ#%o`Y)+Nڵ-~/[99vpuuءf. 
ՏS;X<1ߵ*,PՅuV_a䦝hxGwY.'*X+%ճ~nzċv5_ܯm@9 k`o:JDBc1]Nsڛn6-jʆz!i: ,^GY͙eVn޴Sq-!oǺ/L!ܯxyt;Q#~ T9js&9Ɛ Ǣ_RŹ7섦l㒦Psu"Rٮ-IpHxנN}<".IQYBȆaN44%cdR٣Q#%ð/x4޴&n2U.=|ĉ_1#''{ M^$$Iw$F2[+WXq8OYQ4&';*[v2Y}9SD5.777M}J!2LT[8nii50p$Eą0آ(EL36|`e_ MM>oqc$% 0XnԸݭieHTP d;v`0]&zzD<,D֛Ás,]j\~9$ 0tt8D&It:\~ ]]U6S[oQ{;$ J&v6-1jnn߹sǢEfot8X,|K0a Hɸ"Z4yerVj2,˵ʌŋ/9gf._4{klu5|S߱i'fh 4PdҭFy=f7)N=q,s(3\Lq?2rcsr=0]*\y6~ɖۚ w;E/:J2DE6D_frƧeA0Dk)ʪ{+kB&cWU[B|IQyԴ$i"r+[Ν NutY,FFEsuO&9ͯ14M3_ԫ/߲dy[,!ˤ( pa 1!,[ɗ!1clfɊıcHe@E֮X #ѸtWmͿ=9tZK[u}2'ŽfXOA^J)^U2%*8 Bp_(˲o~YEDOWϴo ۶jebh<|84a>EL+P|ڲ%t[{ AT? ?@]:@5s8n]oJ}}yr<Gtw R䥉`Z4z,{+Eόͅ@dZ \m|[CK wg>\Ms ;mU2U(zUs)ipW_e Q*0MCBlhxӹ\U9pp2f2`i ^=ܾ=/bB}@pc $Ҳݴ香3m/fT(cɤDDAMk'rȲ.eY.,-DUwm1ց{BalAR`>0 Ё" 2%t:@ "Q4*伛[牞t@0✕c*/`u$z'ʀr.ggk7?o,' þeÆQHL,eSmucC @cㇵ~C],9"d )OQL GNglpi6 JK3{y;02|5v$f2 Iw17PX'g`D8݇\4;ɂ-[oILSLy!IQ=jnnm=bţܳZףgeo? -hmMOܒ7v.DhTjj]z1f ֬I'*+a[Y _{iZ_}ecBYa!h`gXI|0: @a!B1qt/8X)EciXhbd^;;;׬:;ߏ2y'NÔePU;MKFϷ\.cGZe̜)fdqŒ%[O=U)/YYJJOX, mD RELNa[iǎu6[a*6d#<"4(IdibZ܇\2 ;=g8?1oG"BcyMɤ3!H=#%>oˣѨi<-'ĀոbWq1eJh%e T785?} w+| +4:'x<"]𑺺Gu\v3G8lګ/a 8s+C[x =Mx2w\9r==m70;3!ymmmcjv1DRF =Y"hT7Cۺ[.W:7]QMfJcUa$?TvYE ev6ۨK`⻗:2iÆӷobe˖5԰0|zB\mV>옟_2%7RP}8ɤ-I9]VLTxTbflnYDŽD#;ZҖx)QrHIv َ=3=l|E'TKo0͜yI /TVӄI`ֆ_XFX)$%˽;E\$¤\إQT6Wf.ØX!|:GbW[Fٝ65{b#miiD"MMWy<ɤ-D U'?ˀ۩;o޵+W.ܵ: nʀWUGW_=ʚ'cnڴTgoM&muFfݭ#} c_D1:mv`. DI@E4Kʁls^y2+k#~"Ia%ie+u:38'Nܶq(@c_I&D q82(."lЁh9|bxx=\0Jʰ5 Z6A4FN2 ,6 w`[CC֭_F ;[44X̒eb8 ( }f M R)0@la.C@4DXvP X,{{5q~}170 qsGeee%?ؾF"H$UULtUPUIe1qiz0`YYP1%px̛]F,"Z[9srpNs"+ o_^vE9pz^2*+eap>l܈@, w**Bܹ:aENƘ1r:h{a{W65SΟ&taĺ4`SCK!P-Ui)[LqwUic`û3+*u!EW\Ij/ o兜8T՟Je0Et_D I;wr$rp'ȶidC. 
PX@~/D:` Q?7yS~F$$D>Qa) I!Z\?ËɡhR#|[akTN"9>Z"^z*PVd@lPU\ʕ;(!%6m"פG{.e-([<’ɳZuŊ'&.׹LɊl> sQј3l[gJOll$]ūibdS-\((@:=@bF4*M'Z$ o4}H]i7߼'^߷^&Y`U׏W6fL TW~v˩`MSaT 5'$)M33T0eoe(_ƒP^KgF$q_+>9C3: \+>kNY>^~>,+|J'??B:k}f1dt!ʙ?N{<37o%kּs?=\ L#K\2ˊG`hn:Θ;f3JImsG:TӌN:i?V~ɯw-moI@ gyǯ9߿QʽXg̗id&jvOGQ0`FpMv\200MZ͕ۖ̓z)oZ(/Lg*Jw6 |4Eʷm0Ea@*ޫ}_ኊ._;Mr}hzdv6>l3R0[-@FGHA@4p-(%a^ ˄؇R˺ULL3l( \G}uuuG.ZGӆ1a`4=k.ho۶noE8qyUF2Uykqy#&P Ntz[i~]C(mT"C)ੳڪ:duv.5˅ۀuܐiZ.9 95G 3WU]%%BQ0:;DŽB*tu^/,_I'##(. s|'71okUU3|4q啧'wiG;yŅ_V|aOwws"0K UEE9O#yygܹVUIɹvkaRH ttn7FFvڸtUUd LQFGYq0W~t_qu>I'"D«٢"SOK/ % +(+CCpO^~R)߭6Xj 3m[2Bi)2Gdß~u=n_3Wz'!HN!+'+5{짺,E"s=ႂ]Z;r;WvTA~s4Yfg~/~*/X%+:l8\1v JBRus$fm_/H$gy٩ii*mmƫwa郝F~u"XTiO7T]R 5EdL#&e|K6lB oZr]fnwktYC^BmU4m+WYҾ ؛rr('uY &,kVZ,FNw!DgK-)]jBm/G9 eU5rp塷<шT;gP-lI)W>{AurZ*5|Kov$]]7~Ӛfղ`yEc͟n.a}6mmifia vL%|2 SYPәjECW4pV %Hd]4LJ|i4:㑶5ɹenXt [ RfSu&95X]%l,[r4 P@Kq*^LFUO}]w8%nWZEQ4wb޼d eY,VZwiP/:\ޠO &;ծJ(K+'g+)œ \QMZ럣YWMYb[UxP@KR߼=m_D}(GD)OCH߅z`gmesg{$tO?w}hW;Q&k~_bYA""3I]Q>ee^<) TU]^SA`T, 5448¿%tmR;ytGrX2ůsi!D$ 6 n6 aI) fa8߿UI躾{D"_}9p/1!xbD8 !;Ƌ۶ؚN/:oKzvXļl3 CDʤ"gwJ9.EDWLz܎'zvhٲ UgIGN{fNhZO{ζ[aۦ~ۣ(Z2}>S6tԶ)) UUd+bvSn{)>_(`s*ueiΚuxDd*usX 0 'p$0  ` l: /4\VۀLs-.Lv^jY{B+(X]QqWXr7}ϣ3ǣ({?|wׯiHDϝVEHy lFjgYLmav\2|:zϠe 2n='ԃD[ >x -v֬n4kT;iĚ,̛c^yy ^|oD.G!% 'G DdֆGQD 8$zmmذA;OR~vXGD~,s]V&r9_YQ8(@g'/Gcc޴e^>\zt4n7.yyƼ!O9uuN6~xL. dL&%(n䓫l}w;lӌ]wKeEqCUr9BZ2i9/{E.@Goߏ9sJB(kٲiӠ(dۼ}\LZu˄3hOO9BR2Ƽ!%lʺuRӄ(-U??wmۻ]%%ǃZtw* 'fqO3f(_"oh\h_H&qi !wSayE^U}X: H6۞n *;42KNdA]fHho@Ҳ装. 
::ht_|  ;?'=n{  2;޲2Ϝ9xt4jiˮ\)$V{_9P߰ GtnU7ag?1TT*@ܺ#i$h,ϦbԠZ@(!TsV[6>{O}Or!wJ͵R\Q"hU^+LBAU }!?l!}.FeuJPTG6vcddL:nëXxZ*`1̑jَ zyJKQW$Æ *@?bA\0TTTw U|q< 7rW ;Y<<<<0*oi)-*rg*zFᲛbq#-Җ]R#?e2٬)eNe%Th3ymAJAk) 8B& Y3Ɣ<\G4gDU [UU^W6z;wf wE҄C}sJHBmCyxD7u&կg2XrPC=KX~И*l,^iNW({.C'bVHoiNXB=ߏÑՅ^e-^j^{٪\+JҲ,!W}WB+;w8iA@4-==Ni ewbYwF9b+)ޞ*f˥IUU3nn>t'e| 1aMlT8`7xe ݑ]^3P\ɯ{F̿,}um޼W5wTɯwdҟ)0 @ןX}ޡs_riܹiT`V3 v}` ,0.P[J/PB:Q@QQ0؟N-C4/ْ2v0l\nH0ρ[ 4fy%MD Lg,zm]$_0'zol8(Ҵ:Uϋo--d  IDATU1$ZNw&@8hnN~ ,~h~(p.0sr BJyLt/GR8l,[&3NW?o_|o|\Eo4c Q֖=OAw̉H4KNŨ|m5_z5 -B7܉n-wasYܓ3 M4D{l}K (ku1nj;m}>rRWsض `P_O/""4)B(Yd Ŋ m"'?244U:OGZEUݩTRUm)e_˥[Zx|T:c4!c9+KRWWQΘ1cϯ˓yy8|{uRi)=h񸯿?sqVXwg{/r Km8l EH@MCa!'f~ݍ\pFGbN; LO=sԩc(E>𰮪,/bӦORl;abrN)ypq@4QT"'N?55Y\c`gQtýb:"\ $P|Z\`opm̜o)n"&lIk/,n˲UYM%0>14!HU!TJP`[Q*_-i:.Pج #efL~D @g`Y c>ą)3^qo͕[^QW)œjگLd58u,>-h,V*nLSuC]7U"mWK)ݴj / IU}?`*rmN>[8 {x){óww\r+?9%@{BydV>rb@yudP拺/:Jh^@<3p\n{Ǟ-[f~5`rKfx:e8e1n}m]]n+ܴit _ B<nMo{),,TLId1OZ !C]/Ue6:z)XR^WeY@@e.&en¬nOUUOM흝][,+RX'o eSih:݀ͲsAԉʉ,˔xy 0=ò,˒̦%@$⸹n2nKp A49" muDI<[=.T Hq `DpDˁ@!P-N`ĶbGhZni 9WQQQ݁nܱE0ܪy0K6 =&0we(nw̙y䓛7wtu feD>RAsuJcUV2yef5JvB[;ql\`ŋ B5(g'. fVU??zwXA.-׮_BevWg\1|QK`(,ļyJKKUU-,=2:'l#Ymu/eeB~q9LDl5."#)¶!n̚7HXV]]G~MYc:<0-ZZǸBO](d'-˨ǣ;w/9y_xdIsEzz75QR44@UTB$6TWF"awt+`"&6lyhh@.X Jr 6n4gδTUu=u!ՇZKy=)/WaILDqp:MFmwa4mˣ /_Cg4M~xAss0?_]E_K''!]8 ,h}]Ų?m~N!vW4u]ᦻs=`ގ?t).I`hΉ.76j55s"s2k`Q:oeBx}OaZ:d¥+DE5-Z"&>Xuub)S^9p- \B?=3J1螵eu=lR M/;L\kd=5Xq B W?i[bT_\mq^r帵;͂ H]Ԩ'-=KcXEUiKբsaX Hz"J+TG\E9 e[ˢuWUA Xq94 G bX CQ@LTI75jk[?wV[[.+vwG תV(*TY@1ۖd8hћlQRv Eg뱥 'msW +bư>TWQ]ܪ{h<o]{ kL^",6)O_69HqlYʫ1x&)0(ח;vcedh`4FW=ZlCNASXJe|gMEP5$Zq@ WG~2M>^{PoRyyʙgI뮻=(ϝܹfۋ_Ə~zMcZxA WWaYDpH$+ꂦAQPQ!vf^^`t~HooFaߜ9μ^#YSc* kЀYf, T(Dr wvr"qnw3>u N; mm_PS;v7~kkb,kKk$&5 D4:u_qȿ ڴmR8w4YJ\&3eشmGeʴ"tْ#yڪ<<'m|03"Fϫy3eu Xr׶$bAtΤ)RgwnehӴ'SOM9a Bp i!,3TpOb2QgCQ=†ٶ,bmk:l(*(XdZ]Uŗ(ë|Ng6QW KeL#n2͖[Ma$ |I#~wky6;OȳmXe^t)#Vw٣GM A nU5-~'ֳnޞI--ﶷmLA"gG",kSdL+_Z"BRŒi%MG|#~6MLN BPc#εԩ[S,JgvCvfhjdoॷGa;S4UJ~[ujr"0VUM$΢|rٺzi 'hriG#2zXLrR.w35O? 
ĽqQEEfM\_zϒZWtB\^9sO.T|6rwcC)@l Qfۮ]l4otcz<IW W$㣽"ҊIT7-mݨ0KRWFh9] \D/)~H>i3adsG9ÉMZR 1ǀ?T q;_G|+-c1g?xn+I%`nnrJc)e|hFUWM3RJ!'J (YQ*cB j`'!R#R - l)u.A=s Zges3E  m; 0FQ [VR)\!bP <t 3Ox_% )oNBR00a@'D7P !yB79Fgbutt*b^!L!mN#Em@ yk/Fpl>|Tj/FeX,>0pr1y @X1a, `nRz[ omݼ`?qnϽe@6QpX A> wUTd3t:ƣZnߐ+W*++a۾ٰ~2پ=q"?_<־뮂ߖZmya8Or%mTTGH\⋋0͛7s Er@EkJɤ=m2`EEU ?ITVR,fơ'{Jx|Έeh(񘁀t`ɤH]]׿O>[nif.:!%6ng"pH8hjTW_ܹVCRvW^m-WOEhk}vY{OKW^>Lه.s戡!0[9A)L$==x̛sA!Յ`z\nۺO=b掎eix9O0\,} ) X@q-[Sg?HiO].'\lYpkתj66ErgGgY_"o܅itkb^*{ mn+:Δ*VY# K)9DdҶ %V.@Bў8{ŏql ۭaGX84KUuȶ=dq>eߖ!MXӝO?I/ )ee,k@21 l!.[K:pFx HM)VlCR0 !$BB5 gQQ]!}@i WHMMƲW'7Ƚq#&"b˖#mm ' 2X;SCM$d2zZ[ 7EEEKݿhmM&:)+ z|]VI9D&̉$|ki磳c,(QIwju#Q+e L&|w}^"L:ӶMW_Ϋ31rHqCٌt{> )ԓI##2/On2;Iv~iyzd9VcjY^A54w|A--c̒mp)H, .MPj&^v~t< i]j4o#POJ/!W.C\}o?Θ׶+Y@֩SCp|ȷ mi/,4 P]^Аxr ]"kҽuBd3EV= fLf7ڹ) ?o-gqM=gܿEKɒx\O sV} =xXhRYYLKKvw4}QzBdZylA%>n&CI鰔&weYV$MgFζPDaM ;e )GBXqcX~&M2ͽRcG}isM_lz'&ׯj“PoÌU\;2_JȤLwUUuIUOȌz}öW-o05MkE%xEP/7)p\BIy¢Ex]ѱ4/V|d'~K#ž<˲J'Uu0 f.ݶ"k9懀v D?0Wxڶ3R; ЁO1wCee_{{ey _i_mfuuv-Zg2Ww=[5"S/ IDATiD5̿K ā6PlQ].f'>])!@0o\d.f߸6&6[}I.~ Hfgxeŀ4b(-b]&In #ijٹhu,P(D\ee̯Sz˲L JR9YMP'\'f' OʦOvnl@0h1#݃8y1j,TmكGWc<}JgJ/KXx\Z3Fp W=V`䧣eeeef}5 ׿6qTMMEwA:IPQ0:j*)A}=_zmxժooh.D}qk6TTh> ۜ&=quQ 7yeer`@[[;sŁ:;#X2ʈɏ6ͼ}7!-Q#t+ ^ aG* Ε{϶mMmjuqS Z[z##3[{-=LqLˡÙ;7t=dʘW/.qpkm^8opOOf<xs<2V\w777dz<.s9xYD67ׯ^ n|Tp^}·0XȲ^˅%smO>i76*o矷URRyDaيmVB9+ r# .=ܼޜ.#n WI@03[T ԙ80f%)=sũ}mSPHDJ,^4 g-LD8\#o:aה Zöy(\PiZ~T_w٩HLjfI4CdaeʴLA6;(-e)@fxs֎#{E/ TwZv*k(?(ϟ DUUZ{8Srd8ZyPT@Ъ8b^VSk :"lmRB2$g{z/|N>{] ; |)^*˵ j]աɥgt2 #|>LZ/sryR`SX,Og/\DQR+V'\bq}qZ7G5mUtlLZ7|JTa!rsyE~]]xKb֒lٸ n p(EL0wh*4`a!V5t$C*d )ld:n1X;SLF#-p=~%J{'疂'mOA$4Jǹ.@/XJ5g cagmܘh / ЀDRn(uw6q|cl1(PDƑ6v=hἉl2#j/ ^@2 %JFG."!P0 'H Z7l.%M]j$B.!0h0!M'08t}p|p ~{PΎ1۶ZZ]]3K. 
ឱJ}Pi`hiQs->_st1 #ԧM Ơ= аL(.f!w|>jogkkkjtVzi~`WߥN|4|>qwf#Nb4ہ_4c|В>4dɒ7[w4K2=Aibe fde?!EShgTYTtA4URº]mUν 1ܳҔTiI 8H,Pq`okGlKm2$ .|_Y_s(R܋Yi<@|Y|<ꘫ>QV$x̜ qlFJxdG{fF =R8%eb@[3{⚵kckiKmk~4~'0*=47 }VC(]L;7ro5Jkg~'0斢)@…Hģiҽ9cD[[y_lXU5&ߟ?^sFRgDU ),̘&B ap.`O4fM3D HRHկ-O.CjVǢDRYM ahI8cKBx=<-|p)@WgE[FX(uU{inkI|Lw'[nypFk5}_- ueMQl6#5PVE?}K1nvcD tn@7h$/oC(.e{1ܗ|ɽ}stPF1wwK\0_VP(4zgaGhR ܡkH{CƌQ#G10N"(P(t?v-<+ǧRmRHe!5iҌG;|y% è-t ں JE'e;[j? R ҷN~~,_%2sKzzB ](]Knu矟0Iozt*+*WsAuW<t651I>uCCa\80!P]qΝHYy_j+ sIɏ~J(V1S[sgR&_t=MD~h Kjݰ1ؘùCfd]#f $^"@5p3D6`Dx`;P7GDmG0 $ b@ @ ]XJ`+cUmJy@  lu&sw*$wNtF˺V;0 #{8w҉Z|qp]]Ӵ>?tJhK: \0 7B#Z@M:p:mj"hy!`?NP\ R(.Y]]-[rOYl$},h<^VFͨGhhnFYWWCq饼Nc)%WOƾ&glnI,ֿjUNjvАuT] Q\Dİ`s3gQQvr*+cMM$]Gqp}ZoG[`rv&cNilܰbLP(t)g|u(pO3ᷬ$*JËQ)8fXG6+^(EnBsΉPU_}睭ݷqCEEC8jk'.Yr̙#R()AI ~A)9<`|lDu0~Q<5MxMz6k;餤i~F}z=9~_Әe٩ ̦&ϗw71lɉ'zmSEthkS. ?Iݯ~7Ϭ^mbůb8o|C޶q!Nۥn/֯5Y-ߖ_v_ S}>!\z^qu+!R }"ak6}{Py[r?DI$~ryA.}[3֪BJ ^+,Rx.]s03i >|re91d (gOc }}4ih]lG^$ ǦA*)afw'Lƙ"v˜kѩy锭lu5[(@+ @-[[YFdlHPC +p\QVtΙ{hbnD _ౄ1VTKGv_Y}'7Dw2YVW8)z/y2nw>R8` !R(XgrO|7uQ~>?$!G+ũTaƼ`0]^aLhpRtqfΛw13/2Ƭ"ѭi6cgY ^Κ1PJ0vrf]I,CsoyADGbʒÈ(KÖpJ&ۡpXri{]5SO'Ԯ_9dƸ+so-8B9.A CO>O 5Y7i 8ղR?oY,DJ0j SܣxE ]Cs3JKO7u="r=ܡDM}#*-3.0t/l\ڂ/QL1(+)iܾ{WZ։Dq2dNQc,Ν)))ݙ~Xi"+ݶf͛=>_< 6)UL 949uR~!p mDS6˺U `36HOaZaSjRnr@?n[ pE(E4JrOa7Ga!`jX8 l85DIt90`%d@D]D/0v-;-hwg -J]QmWBlRہ 0*dl6qWBm@ p7-fD$"ZO( :4,+)E&+D]. 
r䃬<@m3"ΡiH sփ8?'8 >9pSDWi/:cDd&?yyfm+AO3Ty9LSdTS^{/Zd9סg]I() QBPJ{PY:j.]*<^/^|lP_?Ax|u)?zJfj첟p3f|5s0[PYZZPWPV&qv} <`QHJ#?L!aJ{LinFYtҍ1f?Ax}WJcYq ,\(A.2;yjS`/M{{[)]C,mt]L=IdYNPG2ӄ@Ѹ6yp\6Bh,/HBO)`oK|t6ˑ %Kst PoFMR2Mほ@Y۵!n͞skxXI(OT>~x,0ٗK cJ: 4a3ofGeukvݑc3Q)#1LԼ xՎ8[6_!!Й,UgBO [YJ'AWB)E0 X@Qu¨{g^-a(4c`J:뵵eh:S|>4e F"_yg׹@#e8@IT_ 1@QK+>L*,t\"yUƉ&50Ib^Wf?Ll H$eVW)+|{NCЛ̝}*j/s:_ynw5)wL'pDe-ku&vkYSLfeM"rU6LkPJ2ʏD&h]T>OTo7rFݜhߍLex +W⬳ގ]͐ ^jds9Ն ҏdwqñmRA1 @; 2I "bLp +'-S'|I;iRE[|csc㆓N9'C5E.˓[zs$ 8'U)i0izn륗\sUtMhhi֬>*K-uNUߨw=J赳j&F>ϙtQIMM}}ˆMjb۶~1o~KQo^g8pDQ|e@3Kg4Cg IDATh4\^>jG} q}cՎ1elbŦ(d2b0,qD9_ڹ4A4-N~h?ʲn`MѴN64"eH˅ T2@?c(h&|U\dg_ߕ==.y]cPDɑ4 a&`ˎ%Հd@#]w+.@XDPH &0!J(.G/PF@M~ C I 2FT1N4c(e(Vѕ5oY0#4^]:΅Dmkyyy Xg2Pt hv-  x`caA_x!?7c̶ihJhLt}՝Κ5e/iw'U܌իq饮`MpCrJivx=D}}>koWg1Hq9na0M8Ѩ^V-,dI41DMM#Z]'B8*xMؑ).f/`@A=\uLG1}: Dc|k   ^cD_' (X04t_u\;*OuɤY\hP?Xwh>nTzRQkO5yLH/?00udh38*i~d"%FS8"am="h}$YѡR}W{wX4l?8-wIn84nfsCa?[fyҝj +w]O9amc eԂ1-mSk 8ωx K)ݕΦ t[ 'L;IƉ{k3 #Y1*(47[\), đlÆ-[ĝwj5U(*<lsQ`8+f >,PKo(Oڙ Im'~,u% !;ض %R~z qVQ Rߙ7<y̋Mi>C+([5`L)E~[Y`^:25vn yrL?fPD^G^s/=5"7p:1nvtCk㙼 =wάǪu,@nr&ĪH16xL'! uY)*(+<U fr/H4ΏN3JjWJ7?6s.pSf)|>̙#P #&eU4yG@<`,,{o# qAG1RK8;|kşt>ٛS>"d3gguy-{nl ,c]t鳣BY0Sw|:@,h#p>UׯSjm_E$416DW8x d(-&zA4#78/+VLq kafE㨫YH}@0 qan)jd'fYD 88ڀA 6,~X@;< X L$`+,P\~J VW/njHYh!)p- h)+J}f\,Nl#&`ƈsD?8|*Jt9RQJ )ľDw,%:#),+ۜ.su"iHSZf . 0G%g7pMf]LrPb]x%q&o2v5)ͷejfD]-|3kA "8QI^vz,5s&Fl i7p^%xs^=͞.Yⷬ!@׍lVчJycDP\ Jap;w(% .ȆrhxMut/hJLG~f@}= Kfֵ(*zaɒ1^+wqŹ{mZ7 v-xcXH,O8Nf|OQiiQoYy片O?]GY.l. 
e-a}ɐ> \yt=O'srcbz2ˑ7 |2}Ga~M^VfDDX8`ss27~zum3MmchEEp9cmRy: X<Ez}WowO/iL;oi\Ac"<W+ $$.}oF&dV5 Kڊ*[ZPZAh)Cc_0XL_ʜ>,(O{і5kn{Vս<vysZs#Ӥ">ȃif|6.8u^v<ViΙڌIZb!/"쟒6Q֩bީ,Q;R*{@Y^(bZXmHe.7rROX/ *B`3q)rvH-LMNeg)R D BvH㞷;hg /H@oo\>`Ǽs6TVx +Ԅ8[4$+`~&U9őU99/R0"p)AOU~сKNYZ qΪ̹sO=]+)ܯHvtjfv o1ɰ‰zYmyy2x<ǧ͞w_"ΆjkuJ;Mַߘwoߊم =9ߨ;}}lpbxS)=N 3r!.D)r,j# oNqDm'_a}}=eBLB\4:Ͼ1a]=U  Yҋ )G92Ôvg)՚JC`瑈H*t TM#xU}a_gy0[N`Hجk +k+:k}l:DT\ 0u$DR _a Dx9e1JJ竵7*?Hp76` plh$@8c)rMy1f!@gLiR D%!'LNalRo'9鄔˶6MTZ۾B\w *DD hہ-ҙ 1(ThH>>Ynç]H5Tv]G]-spE< puϽH_%AG]a)a0o)am a8 W^I4VQ}\N>]Ս>]xbұ>WHX\l,[\ӡx{ɨhTv@g'<eeX(*BM ?$RxU,ZDڅ;_SIr++ǣ+WϜ)LSb.ᦛ{XkJR7bu\g.:;bEܳ#mc` ٙT*[TDee꫐GI Z眃d9i97McoP(TP9JW,uV]M3g1g7/mLOS'N_/9ᄏ17%KB[o(,e<L8éJew7rN--7/ƋqFzm==VupkkK/% fbo>ﻢy2m_'_L&f0ޝP*!֟eqRW?i='v3=so2e5'rȍ]PCMMNCIbԊsJlnӐ &j9b"2Ww>s_C>I|tyaM-mg)b20")˔M 0Hd~˲mdǙy%H汽kSg p xI~pRر/9sPP̄1X߾GYR)sΩzNDmuMh^ v̛}x 7U<*؋+ٳs{8XGFf JːcЋ˵s@: M^=p|sTȇN<zE2_OrKA7lxqln^ xS1ITxc]߹˞8g8Acc ` `1Du0*cL/3- V&Aɷ@\z}+Cd~lߘw~>g!hTeJ{{=-lݱ[?)N {u_Bsg.ܨ8'?F}R¶u"}KXRSd]ho!ژDR.vJ=-M; Cۚ/wkcLB 孆F"x x@8Ρٌclۑ*H!EI32-8qځ\|0A?~ #q$La099ˌ壦l[RSPPDb@ z{q+~H <]JK TW飏}뭺558Bss"m„gd4GS)}۶Y~pl Q)(mmp0mYV~>t]x|;?ݾ}Sj>g C99M9Q̘P\GE!agEVaBX~p)QTYgjݺ#H:*.HE"{wW~y9 Ӵ࣏0Yغup̸qE8'"N;RZBa F0ʐN㣏>64 yyxm60_{m8ԯ~Ŋ_O|3c3sD2I ]GK r 4 ,b1Lb\r>nڹ3rX(na(yж?i`f_bX 9]Q!\QƦMFilqcS KG"rr4 Z!eR}B4 R*f}}m-}ݲwi@~)hhKJ 5-q=4rfq|--{):f͚M^/=rGLCmǜ3ni?(tA~աPȲT*iK1UW[ccJ0K}"9sM3Uζ7NAla߹aqu=\^u n=蠚gR(3Mq%<  G@)btV9.," ,!' /Ux1?DntA$s9>Q׮U|'\-)7Ъ硢`'cmW4> E`hQrO'tzz3]{-8qBUW  ROgչ-aBOA !LIYdD( ,"ɀ2FPMΡG c{={tzw]k{ E=ham]Ƅ޾WLkBR[⥅:B=H?I񜎎$ J`Ѳc.J%]/$^R&[fe'޳ѹu-SǘE IDATa~̡>~8TT-J\@KBQ)'@%JdB"8a][KXvĈC)s؜0x{;,ڶ$ Ǔm%pPd61V]='$iea!CfI#R]_?huHWNK2e%\e-- u9c?|;gjlR˂1B LJ,fV>!z:˅7nBf2;9%J>ι?n^FS}!RëO3cZyV.՟S0q _?@BpYfp)SB[M=B'oqu ˒5! 
~80STtE=8gP@e rWub7ЦMT˲>56o/si@LY۲zN$I#I; yyn]ohl<e 'b$啗Gsrh!g٬z#p`^xr!!'z<{7nqű@P x`ἧݢؿ};<t`^  _|x2&$/Ϙ #eRvruv80pn g @AHÁ[B>!qPϹ w@)p0ځIݔTUlXBą$ul#6.ﮬ,ִsѶk\kKe1Nv$D$)HA|ʣsn&9pa;ޗA>4,LsS&u=FرqZ>UsuB,J-=D ?VWNdF 6LqU~a8@C6+b1ߺuS,9'6<FHҥBTU E!V{@8 uuذ aBYK+lVd!\׳'˒* d0li&Nb1?>EO΅jEg'Ⱦ}d0~<*+A]W EL%Qŋ?0J⋘0wooef!V-ESJKɋ/ /YF2wH/] UG YhjŠ:.VȆÄ1ͼv&0DTU3򽞮a~n9O@J$٬RCnneHC]=ZUeEq|ӎǃci5!Cܹ>`z]vM4bBJ9cA8q!CH;5ZNIBb TTYV6%>*5$Ke%DP-hǖN/*|B)siOܟaR+ͦB$]͸=_?ى:c4P)z,IABmS\X BU㢲/ZX,T`ۂ1%H"ك˥κ]^Q+V+/?(ܲ+3V44|m+/,j0_|8mz%+"W]#ʊLiwwF8I(o8Z`Kx8h2K3;2NXB[B/׬!UM  4"BڜLޒLa{`1ƀ^@8ml^BJsr˚79=ss}@`@gF1 C`.V` 7wO]̜9SD2+` p M`Hd@.>j,*` @6hK+]IHIRLㄸǀ?POb,p3GRX`1nB0vOo[Z~5|xol4ݖjJM.YRT@XYSJ⒦gtR$󇀓78؂ rVǣ[;}>zJw U5M*.6!@dE mD!J RlvWm6oFY.L|)9b1vtq֯^qCl&~p׏qPS/ipi岲#O/_:'Pa@l{+'Mf}ݸvl@qJU%cLXX^:[\*'1Z$v5[U\eRb1N4IS˶l 0Ӆ;gΐ$ttж]+Wi",>~S(T x^g٪%e8-,g<2D_rsC<6v0f΄⡐y$I yڴK>%^_hk ĿCd>}j!ri%t?Հ lMCMu+>}mo;Uw3AZ'B0%Hd h~4g?01d0yNtũ̋u:҉4!4MuZk0_`\6s-k2%pPp.:K- J I_+JKQ(41S QvSA(`pe)}TSӱo>:CȮo0L-(;DDrQStBb]׺M6-zhѦMC@;͙sW]5F?/o-l6gLHQaJɶ .%#Gү::L޽n_~9[{i x( 5sz^'il,|}}ct۾{i&I $iݦyc+G )WbU״n) KJXzR]&DaA55kz{_O&wr`BH WBL,⣦ٳu|Uݖjwỳ~y P b`4e`bpq"x2PL$n~k[uM&s ;F@UB9A靀9L)8xЁ*;` P$:c@ {끽K>(q>0 (ct=?JP0MGHrU 4`X Ȁ (xp:p1,0(Vf¶3 c*K[$ʐmiGĥ_phE^I8 ۟"&BQ5'$4gR\LHrA/*{I!iIN9`hjBg'z.U}% 16jk_l76Jý^PIiL]]0` yyl 6ztΝ 'ȎaA91 E~l uL"7)N| O1Np$bRIpn &0`۰ 2)|D<9tuw{m]T$fYJŅp退`t.dd}to(  hqېz hAJ%>sCyDh"u g }5GRQ@6\YF" FPJ8޽b*yҤRJ8ܜswMoD??&*_H 7b _&7p*"\3$ZT e%e͑|SKL}EF`Bl/L(4b.'{$3ϲ*K=resdzTgGՔzk3bvu|5LsFh:j@8vlu99\ruTJ⌷uvf^.w^T=׾x?,9HFq-* b*bv,]8Hٌa 6c~K0[8;:K㱩 sa{$Iȭ@.D/Wn2EIvd 55-EA$FOY}[hogl u{&bYs?!.dL}40XCPDm 4i۷V6+x%ϥ, K($_wzy&krkd(/nڴ/YVm:;ْ%::^մFB|a@9#S/)c8mmk^y#}}ۇnR)Gĝ8/P]7q|Wx_5In=x߬! (#<0 @<-uL۶m x0 @/ӀB(c)=R$pOo&  TSUV6"Z[pF . 8\ </!4 5 f; hC0BcPˠn}-'&jjiNTW|._!nH(a;QS=#IB44k|Cwٷ-_e;**FbΜ.(ǚD}=upPK/lkΛwᏘPb44Zl7T~ niX"!! 
,YU9b #scFN74~Z\r/z}K|%\Ujl<d$?--%) ;ꨚs'MuӬ6ݸ)(_'.x<.Υ%U e=]%hxʆ?1w=L;4sVRTnKYB*k]biY1TY%"TVEe3R99;*++!eRտ-PDc.Jp$4.\""}EVBiK0}*6,]x["*I-a#ԐEaTbX6f5Lj'i{yYsUiwAӖxeLYTEch=Y0M4-*⪪Ԑk%VWT兿 nq(ъ 4 IDAT<=Ӗ,zpyF$qܘ\]mX,_備&$G"NeoRc_~BD*Ezzp%~p.,K@@H Aϳ)(Ѷr;PߦRrB㏿c1 vFQUmWpyZFFlS(۔+$PZ[o_ZEFXѸ3M%tyV4Ԥi,`Kj;v,$Qs\0uC;K8|(w۶0B@(vv"s-xL?`Est0pJ4zKw !Բnu>cF$P (Ԭ̙6-`xZ]$fͪz{fuuL8nBJ̩}+ w{*.`A^Ivs "&*NHi ˆW1x䇻 w^`ZYf|_SsB[?'֍Rzeo^zXàߏrljU홌3p!FΝ ke Bdw~φ;>miB:c4ʕ\q]HiVq΅ޝL>VWW3bo5)xRt1!,N<N&r~*(:nY 8|=ӏ8b8ȏG|t*ŲLt* 0@ πn+`$ >F  lBF 0xhzr ("$ kpv  X;/o`c 'Qz_oeB$rT l[vp@*2!:XQ:h$d-+ʹOh@ԁD 뀹@'Pl^b$ޞUp+Ŭ8NjGJ8%o,G?~4*%=梆mo:|yF(/'gDdc\|1~zlnB cyyhk=+ۛ*, UUՏ}gφ[o _Ӕ Ή˥{MH^hƏ'!8_׋tisXFlp&( -("(%,(}K^1b$Z=^waolC::5mlV;躜ɰVqlx0BCrI2v{ c=}qYK/Uꈦo$ظBF5sA'QY;X]M@G(!$uu=`:R*{+k[ %³ E Q)L.9lz _qIW2Ub:%x/_bEy'3풟5-W.-_2&Pth? fLT y (ZQ[+l',c% P4#[:>mX2PO,YE2 @]{@pu/תÔ|A̜W/^|oApiy'xcغv0(6o~g|{v4yAX*8䜇my۶.ާ;UXJ烐!ö@dYnﴫSH23S"1#6넘5Kbɖœ\/ ~y⪔r^b۩&OFMZQ}2CsP1qӄԈVD[ g)Y6m>Sh?814䋆+CJ)+M REAiό1;Kb;bu%+6wJYL͜<<"b۶+lԣ9MȲ!K~e܊عWo΋~?k~hB8^?$jGݲ,9?BT;PR"ɲD;T;ڑ#O͗\2o|E(Qzm/o{(=hll߷Oa1tр,Vt%hSMpJN3Fϡ{6,,.͜6o. , xXH)G 8 ?.g@2 PpHW (Ɵ;Q V L&$D{#8Ϧe+-@4yx28( !%b^dlC|:BzF`D"2 W>$dcf4?T܉yKt;4۾}P3g BZ{qJ26)T#!ws3o&NvWU8/YTjnjmD67W3MG9 }O#Ͼo]9exi)f_  a~4Ν{/^zruYZ*, [V82sx]0k׭de5oޮ<駟qHY>h)kju]B!ĕLfeYZ>p;W²75喍mSFΡ|ҊI^1MpY3HAK6 uusRLiFDHUU;>G ciq1jߗ[3g*렣0(n}w4:ߒgw)L@d*U,"z!N%c2 oߖN;vc{m]֘y eP]?,~>8gPURQ!&NDs, ?xeR;F*Wըi@<~tm7L,΢E% d.,7oiJu-iÁ ,fGdq55TCV+UYnH$B RKHWH$&0ێx Rj]\,LMOzc7ͦlikkm+MR7- k>:oYX깜HzUVXc%_0h%(*1>-˛ejqצMJBU %RZ\4ZJtu5Ӓ{L*_*Λ~QAzKaCQ)_`|()14 srN HTT3fV:$h6f]6Z+Wv]F9JKcG=<԰b u!!}ofMFد)Ad*U b;ER 8;@gӇ2~~? 
Rv{N9mg9m \-޺YΘS}HdZ .8 $pb\x$qw^!*9+mhv^"iD@PR4T"FiaWY5+}ҥ(+C6kG,/*P.)yMffnc^a]ohCSKk3?Mhl[o}$qiad>H {i<$kv9s~`˖-4 |sA5ʡaÇFyT@8\wcDB,犦!dq* ØۿDB0-pH/Ȳ-B)5m[!"Ƕ ۰]]?Q[+54vج JֿnLB2T 2@bչrs{o#mo5q06;y@"?Dg'ID9 G^$ Z]裥{~7y)5$2h$D4rXj0|y'$}]g9ݲa~H&cyfΝ-goM"tvbl2L==TdRx 9Gw7fφB`$raBQ:^NP98nj6k^\x={FpMRZ*ƙg_543c%^/4Gp]|!y"֯6,+3B^кi3܁ŋq啨B H;FTϾϏ;]'yyyf BǮҶryy$;׮qkL[hژC^1[,zua.5N;]\^ˮqn2o4o`8^Sۦ[Z݉*=-5UYYdVU >w!@{KGG{|UYVa$;:RQٞ9'A\p&ۅ0nqȊ1 ϔwg*l;# zq4Vq}H@ WBB]Mλ)9֮GGzM\37ܜۖFig9:Ӈ{+t=YD0 66$aL$l1& 0`d2Pƈ %@Bi4=3C{2ϷfNO{yn0p9;Ds38,UQ^p%[QLJ '.FMWVU\1)=-[>ٲ\ZfZ9tK)OO#Bs8$J`jYyB:[O;qGW[4qcO O>n[jl[v˗E2zx=ݯbnH7bx,d@zuL眷B q,\baȤ ɏ}V)U]jaoֹyO ATADL`<W&YuT/ޅӧcK$晚&J% 9L˔ezX=; U2!Cݜ>}aK9]U[Vე (.eeT1rW\yJioD;v#lVEqq7_Sͷ*LqӴ0t--UUncGYm[{xW $iU610P4e. k^(8e 9_[kҝ#8 pлh,kmuu_CCW;>ms3֮mmmu%\&0o/beWԀOKJg8w#e,M9bqU^U@z%=`PЁ c\Ri:BA`R:ǔlVG E@XuDz ~&B 8~|k|N 0pXxG0G|O@l3} e=FuKQ~luӲ,@YDD:ǏH/s~؛J} @-P9z]`w;;O6DQ/ݦ%gb(7JEE9wq^.woDހt ةf+GLVUm׮1_Ew\Nxd2m U8qYV-whhÏajPQEd\K X瞋B*Uţbhx.h0]W\lqVMy晞_zjx<#.c0.#͹fYapiɒ9sxsKN$7l~[ c )! <"E Úr!6X2IUULQHttX Tr.v(\}>>56r''>[]] 70W|Ljhc8bz*b1 E$|l{D`/|˜Ӷd` IDAT1uN%|ӦԩvKHcʔk[x<&)Y$y|B F"eI"pUUDXo/~(jeeM69y Ӱa{UrJي z*. 
⪫]gBЇ3;~iQF͊}}=Xn7Dkp lK Ry>F& @*+weZC/֫*] ˚C,X0- ВKE}'gtŬ+``uw츚c+bgۯb>wmq[Z/?Pܴivc#4-}a;LXH$?fWuyGqϘU.o=s/){ i4(%N&ECP?mʶ\$OE8Jv\?F?*.+rt>dJPрM M늞~Yn=;(M/6bLz˫'&/^k |t2/PLZUE?ο}~*ьk}<;Jng\QTdG_7 WuHzh4i1D"hoGK \.`b\)c8}7ɲɡv>WUP5MJbG-kHkWT{/`Ys~8cI,S3^R5LU~VDZti~U7xKGZXTz& 3ضJ= z{cQJnJh4sCCuS fR)\#IisU@,.kS~TlӦD-K#pޣ(1]ߑ> G~BGoRޡiHLFM&jUU{ԙȅd E`GJ` T;> hqt*E@pPاDG?16cD펱!}S<x8 kuޥ1c$E׌Mf, ².^Vp%v 3*{;# wWW}mI9x ph [;!#*08z4xG&B5r+RBǰ<Ciس͢c~mkTV_rC6qbյז[h-wNnvqCaP\PQ45C"%@`dvppaÊ;\y[Vf<(QUۆSUr`q+ =Ё+K--N UB.#mq[f :nmu 5O0almm#4~+L&gغI 0MTV2IVewЀT Rk@ULTggW ~s^VL3Ə#݃xqѨkxoWWIx CZe/a8˃N%dL<˫cctOMpB> tk@li4KQ#!LSv#tIx?c#߾k Mӥ={JO=Y a*rPU2MƕmK=|kW M&>',}}xa:\%̽^^/bL1*u+a֬_=tTexSS62رGm3]QSݓU[Y=be:Ile5H r闒hdž 1[Կ//XwUx o5qDpyfͷۓIg btc|7bl6j՚jXyy|E~OI-S"|c[ ݹ݊b+ۄΓOn5g~;ovK Y pD􏸻X P,_ׁ  !Ν29=uu>)mɄBJs]wO|U>?*P4>e ]vJdK)s?M PHUs/{KPHZH'c :SSe"3g2Jkj``J\q jpYȧq5.1M*Zb2y]z KM.]bJR?, -Z|0UӺ,+N1ݪie*_=5ѢȐqr9Ѳ|ϓU\11&HNبǙ޻G v?DU d2U~fd.q”ݶmW۶1k<%€ ǍSLӎjp-t ikd.AVtu~7]C">^~5@Dg'^{ m&I ,I%LRzaŘpV8d+߰9ssLc6X,QY/\GL%%O,ٺk [UrUƍعs+ y.?]>T5IK`}`̸]׉IȐd`pK۝38{GʞY1?~wh(YJpM[v8G*yaqXu|"oJcrZqC6^oeh|WlT="{ܞmFeA ۖr~pk̰k\Hnk~7BJd#Eu]20uSTJqx`YqMHق@c|mے[N1N t .Plp,2KWt3~LgsHrq K.9˄U&ӜTV TgV ηXtD Gw(k[z낁:Vjjvl^+tiGth/r ՝|vцuM3-k4X]0@ѿvj-]t7LcNxJy6!v {@ 06`hj^Gw[@>qB%UDBX|fH ߡCӴ_[J$vE@1SJXqx"X4z{goLs0 `Y7O]v3QGt8k?K$4+ʻOJklDに%Y^)'x].W$ HZأ(wjZn9%_jiRו|W>M=eAP4ۏo鿢6jU#MM#fEV?½ Abr3qe68ޑ@$3tpz 瞏Eχ98Hຩi\[G #fCC` K94KJN yEQT)Y*eZ<<9--#6B! 8pnņL"~-)e:;|W^Y9_pߜ8?z=uti$utnhؼ]]TWQ*fCC )Y6+^|wƌ|u5WUv}}XjxڴߏV>uqcT_pۿ?4}viDz!%>W#E8<KxJIR2's׮{c1iUi,`#PU43=ۭFh4Ϲ-2MS8;իH.yڭ@\ӆ;!Ĩ& OQXK ?=TgJ}޼{ 8Par@Ge '&c_ZT*kgn0]ؾJj.NyEyvm8aې Elڈs?YҲs޾O;wӲfrKzDyíXWxS9,ڱx?{,L˲-;@^Y׃@B\߬?w.4-_jefիFRaPuTӤIO͛ͳα|~Ǡ8Wgt?x< JT'(v3 8^^lD>v%L]0$G4x~)Yۼ  ߗ6޶ӗJ 3?boϞ0FR{?\C x'0f3\W$>!?i|HxrSĞ >cƂD⼔EA0A!~Cј1&!bSTS0ƒA>?E"yFUKEMk.BE%;F;w҄K+O7btlx~J y$~SeBmDp@2Ӣ!&PUpbD_c}08rÁv1NxGO҈~)E16<<<ݎЩ^Aei9#2ӧN3z{^_xDlbeaUU3&7';L$,`x]dL%N$\s)GVcBbQGC],]-X*LPjkEJjH>KҟzrqEut~SӬ*}fK.5dpU8ٖz33<͖'e!ur6!; Cl\zelڄ^x}?tڴ*47]SݥWQ4YJB〰D eZH . 
]OPu_Zʣ*auhVH[Uo7lp~sgBa2@X|)be<>߶'*J{2ifT&π{'rM__&R~"aY]V_*[JXD.Ǝ!K7 3Q"#'qf f_|KX/&LNL 1jX x p?+P Հأf +5es^<@'"·lŒ/' A8*)S %[mUm{P;8V(8> p0EFT42Uqk5榦xDӶH9Ӳ<pgJSnKd<^UW7QREU1Xu*I]%E(sqmX ["ۍx\|%pP~?ۍ_r|W~^M\ٶln/XP f>خN? 0JՅh$HQȲގPHb_jme,w҉'*DJs36o.F[Z֩ ȹ/V\z)֯CQ`YPxe{S]x|Z3x 1o((4hYUUΕXlm=]7ߎDUUiƚ5bx` xq̚%qiUUu睗: 8/9o¶YD.c ZcA(tDĹ&Lp^QG̈́@:mr"znk$GbQ4E*ʴ&#RU-<ŋS AuY?꛲lYY?Oqe~k[j'WU{>Ŷȼ1_2\\%ϭ҃Űm ?~`F*Z( +>W/ް IH*S35"%T_5q%u  UC iڽUP<_̽ޞ8wzR?h{>G$b E.ՑVi!KlNו)J:bgB4 yBVƙ*~pX׾c=vGxasP_8;c3/^݋]t\XeRooPR8 _yvl]W+*ALaNWTDZ]MJFuTWoVb|Z"+ *bQo|J5) fQ'yk2% /2۴~=kB D`Hg]Ò5i[O!gql*4jzlxx ]=?x3E~\V=3WX#f^'^B*# &Y*( }LUs3J~Mcmlv7\w u\6HI7tfL٪%{XG4K<]b,`֨۝q,Ƙ2v‰x|( s|9_'R+X.b %aR\.fYVM-{q@Hv ,&4P)sO|xĔfKE%.|j_Gep27({f`G%ڍ7N"CZ?:0\^&@b@ QE 0N,g^_3yB|Ɗowhںcof|e?74}V>RCTj}N3iQ}#Ǘo lصƎ8W\a_֯*: Rq-pQ^O B)A `8X('vvѱ(ɤ gӦ^{}>rUU,*پB|K ̕>dPU \J짞vo+EwWX,%mEH&+OZZ $/w@W I^W=?J(#%#9:xR|@a~-)5`2@ؔbq}'p\%[nەcUR:) IDAT~`tƪ` H2y, x-cW75`׮;m{<`P7PI \Lo2 7d3)V'@XUM\\(5eжo~[nK~4ΟUω!*&w)hLVƋ[#K)olooYW3:ki>؝Rb(F,=`7WHr銲X  mkluOgpPEmvr2LL?xGKiDyW?o۳7eI@u8hul8Sƺa Be% K )o_4SoH跿G20r}&`4gˉ^+JR=>((Ø>[ŋ/b<')qx!46D$]]ؼJ^/؈aL:mQ*mCqppG*eg'>[dǪвJr eevStO/L_H,(I<.-2S4sϋ#M.8شϜ)->,-شa="pzO7H]qd` eeDs'a6ZFXVbl\J?_SS6Mv$vF8̪4kWW9~g98BA_Ğ5 ثg B6A g`rڻ>4=:L.ȭq"2c$R(TUv\be{;JCCf0(oc\(JsoE4v#m3EߏtZou2B<\y)%M;X J:F:Oe#ko=gjLlPŇТˋDܰ!7OZ'ڞJ@s9]jZ:?裛ld2I|mZ5']^oyu73P+ d]si7 %[Y1dBX{{*rᲧ2ćwH?17 וo]6;#6IP[20piMY+s2h%qnͶ2QDH/MLtylxuA,΁ [` 1WyU"`$Ņ@d_,KBIĸI]ʓ"]EO:]w謒HE \vuO.g,}x<睋( D.E!a&""@ `==ՙED _Κ.N6n;~84⡇pň9th-́}nN*RAcnFD`{~q,D!s-񺪘kEP9ObQ"`@!tZ ~B@ЉT@@h>Wڤq$e$2ByG}6w_-T)IqObYyzk&+TRRMr≬<$^&g^ OՀ1>ӥ("=FYzQG9aBs'E )@I쎣UdY\!Rs9!d \m?0-5A(BOw19R||EsKmU|u56ڧC/ ?8B!ًrCQx$es0Oy2|{\gDa}Zm4D{3^2KDxPj+%O2,] R-j.'LStwc :ɥ"5H {g+@i/yj)d|R]-ruLͧ^%oL,I| BKs)S,B$raWE[c8_&wå}}{{ |Iߙ9sHR0FD}D+ػ7y]E0&lc7VMvsFddBAzU"FT d!ѱ1(@c@'я8WX!B%l`7cѯc ƎmajuR)YVfw6tv {!lj̞ͫrT*0W{Ŋ瞛31y'.Di:ϓSOE(Ą@_9I(~̝K/<~* ò 0Ts(_`׮E* lYf2U`6~gdYݼ[x#^Jc0+ɶw|:4 "\R=xxuW#_ :CtmaEDT58_1ݕ |stTa齇ܱuw,q2{PCmqS18-hen34Q0|KWSVK/h^!^֜:L;r_1q*zJg"U~{-w{=(e 
H=Nr}U.7LmS2a1H9]Z\oWc9x=t"ZZ`H&`i*Lq+8 nAQ6U8kG]WIY(ҥCGA.\ ,{_7jӲ0 !liYS7nj[@Q(pđ9DDD{򰛄v Kp$OK/e2~DDo~}47C(Ҋj|Gl].$B(W+Hʱ?۫ 1C{*B@p910l4!G*p81^TwЬ\'8×c( x(c;l1h=u3zi"9tv.e[9ѿda-?vDGX28V,hm~oBڅbR-0Ǝ-[sΑL}}k6yʢTL 1"  ZUEVqLFUUiӮYyHV~2d4<-طSF BfDۊ :v̒EWi<֏VA T1 E[b^k0k]VGSE# WT'Htt 09jdABQXʀD_+ ~8kV!ù_--0ɲkn1 GF)MR1j_Zau;1P$X=^ lc'p.-yܪcM YBߙ B%$x*`mY4bQ7&~Ͻjy}p-NϽ8:FUp5U;*jS\jz5M)4W Y#5 5R}}r'+mw嵤ML&[ ;zcN `N0eN2bmԶ"pw;_>裏R)CCC7pCYYW5Džk^3mϊşjg,_(|b|C7j$ɀ&TjM$\nm29S^<2VTU%!*ɹ*`  8Q5'xI8茹t|`Y0 N.'c xTVRzBVX7~A4pI)vX˖U;iRm9^ 8$'@p=p1P,C9Ds~,BmcA DN)Rc;w-GknNuu` $@5T (չ%0j„(p)ˀ.80sw=\U]=T7=2CILֈ:GQA1 "8BQA @]U]5מwTwsϹ{z]k{e/s0  Q8(x@e& L_}ռo3>NU e,ѝDwoԌ/)t(k/w8aק]Mà al]䢅evv ?H @a!l*<y'}Sf /i|iB x1\9q󎒐e3p0ru6cjƘ,cǎ`0{w"NlڄkvG?Ňb<2vbp.=Tn։_G$La(Lph>TCdZ(+,˖a$كyBIӌL.ۍHO6m3cҥv0cpX,RWWy~n&BĈ1U4mWҲݻz{==:tsfSTC'PUÝOfnϲYQP`}vK.L]ܐe*+S:; /(OSOLbv(-e8['BǃCQRV#ݻ\t)f͂,"ֆvc~#CT fer坝 8c4CTXE8v 6!  }:17G^Vj7 CN͌ + vd@koOK*# ILu>L7t.eƹ>2BC8F+7\{vy*0ME&HaͭURzbW/_dznJSv3/8/12eNeOϛn֖ Y)Z;WȤ{@U+Vmn(=fa܄=1dĈ3Hpxql| `pCA@6HG@4 .,2)jEe_Gȼ$c[VQ};2(<b]W_Ӧ`T aY;>Zg(bo=_@MM~'ڵ믳[5%{M련@y15FDbF"1򆫯ޭHZId yŻ@qI-A, "0!H WVoJ9Y/dqXLS]}^QYZj$ Q MѧýS L0륤} RwY}opRkg3ˆl{%6@n 5~ݻ;EeQnjZF dEn4sPq::O=EϾT19ó߻Y* <4F0yU&NW'?U8c'd,KH,i%Ie]:ېI:/iRƚ;KrPSSn*5F$LƽeYQ"fSW*+Bz-9i&DBQGP 394`nHY(p„/se9N+tJ6,LS{+.mR्F*҆W_eW0YJhzB0"P^IQUa9dY:ӂ'LSed2-(`C qw|޴3IaD@wؿ/(IYZX}?xq:dA /d\IA7cIκ훖=_{b΋T:ƘERB_2$ noNYY"ϙQYIsfw<~kOU+&ˣ䐸])6'Yǻ]^|:}\sƘi6lpN3RÇ\.rMͳ>97o7^K.z{^x|޼yׯ.F <|>_#42]ƍЖ+䍸aĝ?͎2MY菺xΥÇ0$U5N痩TcM3z `-@3ִ^~`|́(GԡiG;ZL(7p3XO@͚bzosOO] w4! I@Y*õk 9!cp4%'Rw7, ]`Yz.wh a \jY'xK::^ $*]-e&sY@[, MM8y0 r5ǵ 0/>xv@* fD(tz3O5X"7z4΅LGѢE޻\ B qZx<+9u;L4XsHl6&2Y ݵXT̈ҽ=;͖4S&s:ܚv ~rK TV7ΉVVp>9auڊK86>Zm{ IUWfs .SU8S/\V32j*G#g-IUaYDvtg 姛˅#\6{c_9ڏ=(Y!肨lG"=¹^7cg nkK*;$( ,pdu5Fһ6#t3~&HSBfҤ7scj@AuʀALqMX+W JX[g(-L%/#s.nf6ks"ij.8Fmmx1ÙC IDAT2LQ[Zl$+/Ts9Or d/苇%2UXެ`+ RjVސ%yVߢWS`]+Ob>MQr ^Li3`{ qudG#BlVb$mxi=R?l1[qVim%vV.t81InNć 1=l. 
4xbEL|բ"Ƹ""akjLn )Jδ²$A9fs'2ԬX_q]!O'Or3L d !dA(!pF^:0//::1YD}utlIh pCfkSdEd9,oz۩K]/H'q4PƢm^ٳ;:?+imX6b`K8ARV4seVJ#$.Ƀ ' qvOӎ+#+eY6X&GI?*%AŌL *z[h(rWdFNW>`%_yع*D^Zdtj(8$$)+>˲dP 0 h޷b_2cƼ#"!LnZa'B!3&3#3VH|m$ߎ<@ Pl. qs < n0'2v+ MM+~޽i~|SWdIrs (д{+zdu/GKxƘED\&cAԬ(Eڛ%5x5qbYuuw[ %Wlu1رX|>"A?F´is|y;?29Hlzh.7bZw$_TzpW^82gz*<, _<:_ [9sPSYZZ⋸zx0 Dx C{;YUxL$XE8̳YaXl@)@Qi@:{H DUUPU<4jk  8dj ]v**q#6M59}Բ~=~ke۶oID"Ǝl\xBUǃ}|9|9#LMaYVk+LA_߀bAn9C!zcĈ@Q,]LEKC~uusm۾,Ȼ?vvf.Ӗ-2t:Ô)ض ^V&.H#Y͚={pyeJq3?t_T>s}Esʔ%55a*>v,7Ps{n,$Çu`bV=qߠ3d  +]H._>fkS0PHJ&72 **Op8+׮^$y]kW*s_H%l12HR=_^l;HFax|^\=ō C& I5 l 3ȳE[b%KVbbL;aZj`P<S,=փZI^C՛W.V0DjLҘbYɗN>rJn,(PՁFs_砢Ȇ$R"~ 0>/qّġta6$F7,]l2j',3.ڍFD9Jb[5Ř%C)yI,Ƅb~b朧1 @MD]:w%m&醛PӺT JڲW)!r-}õQ $j=&^y6VCkV`ð,$IH dKt(c6uvvb=rnw0*6!d+l`!䬜 \j*7 94%` "lm;FA?|] KLSgLS42Mp &xT>Te]e9ۑ8jzK/}ͫQb o,˦5 UV5M5Ԍe 11q1ƸI%w Cmm%`0XH +9GG7?{_dYÑ71uݑH$N+V477tMt:J 5jthnn Yg#ećae Oh7-^5$)E}[wYY)}+r!{agL媼`r t'7WT[KJ޴wݵ&-081=dV[7'^=(KK@(BP;`\w IMd_ܙ!b Af o*C/na.x"}*o܌1E4x]6 9oCH~1"s]TP'@@-`&G|!INg:i ad dÀ:XPK47 X@01K\U;uL|N4β"xQqXhG%-ءObʀ%P5q%H Zj *++7Pdց3l~>/`{vQ5` U4 YB4_!T, c ;}-\z)4 AM 2xgqt²9 9xӦg~tt`V\p_ObӦY."2$Q]Sf QS,1@ hDaL~u%qVu1/oLS44` ݋K/EQ C)mJQK dpEpfP `> ׯOȲAt n'8@8v O>F2 @LDVynOφub_ѣ.ibH̟gWwy򐶬 E#oOc矧9d,,$Ue]]4}zFg d]|xux<tz{$JnكxeߖD"l \.wssͭ[%ML,V޽۬r.}[o@, *^/ǥ.`Fw= ?=ܼ?_#yD,۱cQ˚؁N&d2U]= aOz̽=yh`Jq| W)I`1H0zxV=z NS~)u(w,!]nENF Btj&K把 ƜB+V~y#P"p3XA?c *1:ǟ;\(N&RnhH&0౐-@ScxpdBO7^|67--t˺ɀlӘ D#8k64u)1Y$4u797E6a p{Pg~L3س(\\ |1C"+*eehiAm- 8No GI ,@ɀ&֭9x7;=}V2z({n+W\n8K͟[k^7l˅U/)~tj;JDngeCCCQX0(C,]:W6*$C,i͙/yDfOWr; u# 9ѡx(B jk+݌K¢+~b|t3F CXB7BnףEE)NINp[>$@a\I'L@ǜ ?dFsMݟwaۦ*GbHOcu4 ZwHNTY j>rX\.pDė=fWAb˘J,~j:qWT뻥phEEZr!K7lV63j~@pg2g6`0?3eZ:`eiR(99[?t:C09ْ/J> * vW۲_3'TUɲ"D#"iޓ&{JL8b Ƙ*)i+H}qSUUUU gqNw~KK|ƌ6lX~~(;%KZrjѢ|cxe8fTSY'a2+Dž2bl;P̻{{Y BvGDg0$ݪ(_rIjT:E45< FFMy&d<0nGr pW.qQ@x(:`&2OKg#CU"o0F@NN+@F0rv:hc?<s A  @/c9h;G*& 8o3 ;c%DQ()sЈ@9hn. 
F4Z٠F>.$eFPRYgx$\|T0$Yp3ٱ}ꑱdX6 3llQGN&9q# ZZDaQsan2C bZ"{+G % UCݛ5QEk6w^oƍyR/WKP8Pw47e &e;PTd3Ius|eXY*(Hp+'#GrsFo)R  nk=۷SE]HMyWhv PE* BhNS\i54jPIE/N\l^\ I\pяᔊ%z;"QE-+)fw<5|?d4<#VC >CEEE>pʔ)?U' .+$I*8 睝`…Ӟ{a!@$ƞs(FeF,L!dJ_ $iGV_sM-$y3@Z_\ qHKn79`; kB/xnz)lNg.w2 !P'->|rq=  sɃ{Ƕo륗-\mDdpb?FUyBX;t}qro7zykF F\8wUjkCߢHd/0aYWGx<"mhlD.grIuus2rmuw]QaIEWVwv6i&!XLaMdh?44/'E 'bo%Q LLErqܞq㪏ܨld.*W@6orH&ڎ5kx2ҺRJ>i5k& r2ݯRAl^of !H4@[P([o9[T & *I+0kVk.r܂0(stIeVU24 &/nSP {m]tA)a2mYؕPU(^#aE,דSJBQk (`csn"QJTL](i_ ؠ܀@[68X" aeӜB\ & ^U4Ls$1½{o X/IUd2X@4BEE clLw2 <FGU]`1脘g[m(BzR ,~ =P䄈Rι5\bD?0!N@!1Vy%55zr7iVA{;֯G @8畕52B!44hM \. o%Ə7 f,_1r4gN~zG3N99,^8SO @0Wbزwߍa<H$ɓ˖.N`VWbZuP[ Jߟ-(Ӊz8 /r ώW^;,^Y VJ$ h78^zV|T6C \E ^#vۯl3&ՄR3޶Oڵcܣ.WeLFe BXu Tg_V?oCkMwm¡룇~6n/sq%f@}=G<->0L&9d2,_ m E66rEB"%F@K ; |44J=6$K`EדkR` & |^[ TsCׇ y^-t`Xѹ!:;ZUٵxoe"9f_9*H h믃 =  0tht(o;o&gp;7$Mp$Qs,Yj|)`H 3vtD`vlSR@HΤ1zN?CxAd,M@ T;JEuNq2q",$m`u&k+`0276=-q@4z9eg~WKR^_{n`Lk/BӴh 9K7àt$ka 5T$ L|>aFp3=Z*0FYsG}s`V @Vn \__oj3]PpW;:jM* 0`/p7p0CBxBSCv,˲[/켛{a)΁)t<+D{9F!7/Lk<UX8۫e2DL,γWQvqE.\Tͺy xER@F_hDEKu5bw:}LP&!*+YBi$BtlN C0h`pm@98 1DZ?-@Y HBh--) {lvv:2kuX92P pſ[?(5 3ø:U4L{8'PL( ІCa!0_4R2?X`"Nx X" vPTń)wDvlv06qNY9g>&bL΅4^-X0ugb&Ei)?6w眣z%`$$4[ IB21>vBOPU1y㍁#{$cQѐLfBΙR90_}5L)n-;ztr_Vkcϸ*{]=,xN^/`P.-vIcV|lR1wؖH &KR$ߡ!H?~b44 +׻dp]V^QTtq.pa0!Kƹk>֧ DW^1=2NP HvGͨ,XڤïZU-MpY Bl'b&HujQ/6f HN&;JCirIu'ɀ)7 A$TTf]޻%TVNE/G1tzklt,)Ƞ \%π}ۑ.@v bZp`lԐT +Wσ`P$3py9$47 ā(ښ*W*M$n<b)g2ndVWm94 @da0M"D|ȓIL S&`Q,) *WyE()P B\;ڳVU o%4 ]f"W D}aGbq.II*?b̋2{:e2 pa2f%'L3&:;j* &im Cp_2ƨֶJa>I+)Բ,AK uy}}:9밄r~ )mjL(d4RM <o5N.UrȤr@I=k*Y3>(0KooD* c5P\k N_R]^*QQa(ĸD<3Y>+eN IDATֆ1bWiHoAi4f^OUhB8j&I燮Is,L__ȹ"+qn/)KYo׿hެu`v!Da6c'a]L&C5NHU% ?GJx&E@T'W|il(Dq`0p!R5cElłP(~ӑ("`#BBHDm?UcHR>Er<וe#g%%f'562a}F mm_^ tJK;r ÑHIAAn)"(JBт|Vqn0M&'tң>JRbϗL&{zz84:thpp~}gqƏ|O>O?'5 LQ~Y>k1n87?_|a׮ «kRƖF,0 Ci lnl󁁁۶ݗN*4j(QFW끋 "XР~90J|JelHx4ͮi(B|pS *m4AHcf-;v !%Cond `X}4XZ (- .(xp UUu=<2p BB\@%0 DI.g0 4p~CC<ˍ{J#"$)iB|!ĭ.DeGV-!`f!`.pp0&{gyV`{')u*Fx (jt$I:_}sg1qkB !˔))P#R 
r[͞yUlITQt5΅J~jBem~6nMM_?B k EY9D+ۈEecfg 7@sX ƣ v5@Fm-.?z}4#D/PTVn)DvS^y_pPXHʤ)S裏a*OrヒoG?{9߫^/Ha4 b"zZЪ*[xƢE76n'BDM>G"cu&55Y_wU˕ b<04 3g*Wg,K;N&M eTU4뮿\z;ww|@>:~{l܈m |>X!kݎc$㨯m#0B^tt`̙|eM7Ej *9JJoWU/ݻwc+)5+*@~AE1 CD]>Pꪚ{Lz"Pɑ guy[VZMͱ?I'~]K3вH9S\۴tBKOﻔJ~&RW`8t5B2Ⲳp}}O6lpurn܌Hmg8` !$1#3muj[;G=}s7's⾦}03PHHRXSɓ˖brxe״2K9@bW:.|tHOi5&[An*@GZmf́,!V659-R[]}Sᦝ"TW5w *t,70ݕ9LSEa2 Bx uǣu?^Xr\v)jA@^yU|2|>¹OIXm-4<orCmx])@bWTDlL"~5W94PuƯ#: )Et'\<ťED1ɤ9.88D"G 4҃ExjCՇ)M˕$D._ixTU6|UTRʜ0 *N8ڎ"v۶mh;"( *2#TԸ^y$rl|}ﻎWeWU®ZϺ{@-̶Nm_csY`}X;z [ܛFmJt*\ j?]QXI ⊺UWye D:[Zwdf₥K>"y;iXNHz{|ޛ:?;>0z-23-&BڱM 3Tя?83~c@'SZoVX L$U3܀8czwQQcLJjk{cٲpK{XAw 0O\a,vPUԒk ќL I(\i)=V]=GfȽ_閦*TJv/+py\$D!OP aMJ Fk3Xk+sQ7{yҥ뺙xCzS_~]d HvwwݻKܡb&<66j<>+nFO&DF08ϜrG_K/{ʹr:Cҡ Qd rX PYpa!SgHh0>|ƍo &?b~p%++GP(xbzk!ǿ#w_?03,?Cߛy7WU:;OgI9sMfkSDy~a?n㢋c&yt,؀ H@aM:مUsN$vDp>t1EE{KK-) 9Dp:JveD.gP]Ņ(d(w@V4v5NEl[v0ۆ{V, d*π8G@wy@5 xJA@z`x^Z123pl =up~چaD.E%%eirMt'wؘ1zx{Fvs֩<]`^k{:3~p:/碬HZ:.=+Vbҙ8e",O; ZIJu4ر K.egcd8x?7gTT @4*}.0zzaQ¢ij"J+ ۋUAJvYw3TLS?C=ZZrsi$\do/efJU@o닗~≆׿q-[cɔ)3ki#o`L_*lii3BtL"xY|1"_:H&?_ \c* ^&eaݰaeUrmۛ7b 2[&gXL:~AV l"Bx-Hi<5s2"%QZF[N } u ֚,KA,%ky&'g eͺO?=XVjk$z+wk~f4 4Џђ^,)?#Gm`c?L7wx 第;cW1y C.5/ݴf;/>}IOETEQ:6iRk}jH+& 8@?:jZ\iYY۶+W`(KIJcgeY.P@E8OxJh>f+ʣ;uUU}44D9F"ݓ*8\iLm*%W+] ˆi_Kmn5+'*B )\Ii\GG_S&Mܞ_Pz< ֎4y` $SQ#.$iS۹-Y~OMM'M%Hƕu?k(FU{~ON=WXh$Ahi/ ߢqOVQTΝMD}nY_~B*Nass TUeLI_x lX" ha3{/ͣdL3g\y\7ݳﶭ|ɺ:øoûcQ[:}zw ` Yq1mZ sbdAƈ1FjeG?mݪUk-f{҅6t鮖*#>"wlmnx{x.G*vXV^ksò絖7\F0V+.-I;SxX)D7I$;Gr%E4TUEpL[8ˌ,̩BIAh_j,bh/<`ݴUUʡR;;7y<_jmYj4:4/+EtZFΦ}hr#6T9g{'^Jo{;lѢ3g߽hI`=UTt_0ؼpon+'׽T:uꌛ,֖H$guwwseA +vmp]G;Xotz?}ݟǸd칟}v}ghE V8M];8wR}Wv]tɅ{p!xtH@j\wP[+> r.) 
/J<`L&P0XO`qSAQ&j(- z^ٖgO 18A )g_5uRQd0HR-;o+Gz&?I'TB9_3X~MݲE]\6g{Θr[q|G@̀R"L0Hyg۾9JOdy4Q{KjVԊmn5?^P"ٮXM.KK8lWKb}7{WP?I0]<'ST.lx̂gg{ZZ$2,7^w)4MBV6+1|@ۖ;9clf12hngAWQ߈7b/5@n}w"9 dh+p7LÊ\ߢ d)(-7w=Sӿu6hpre믝g'\HcY*` "ի I azTݽfUOa!Ə+IK砰m[72KV؋] #plrp0.T!z-}Bp-ĐWX(_I|_~AQG^z,_~CgIpr֯\m~޽[O9E[Bs,/rEQZZlM0F3ƈ >|E'O4) "/>U+JƶmK}Œԭ<L!jtޙ{f(ٖ;|hѬ`p@:gl=3\pute[7ypI'|۶m;vں'wYj|5ّEOȬ?wwm9Bu[wFDٶU"16햦yL"1FR{p7h:w\ J8Q!&,tH!S֚ etp~7#~jjvIC8bgK KEE3sogdAuG ! Z`&06|9/ n *|@ Ā_z͌-qpE١EcƸbMuXVf"J}bv4*Hӆ*ʃe'j1%_+/ UTیWi{e|J pj;Ususdq Kq=0|㜵uӋPPQMBS8I {~qY4&Ar(# V>CVv!_Qk*+K*R0M'^/22RU!, |BtwCg?y1i_|ǨFׅsA4J wu\jkQP<ٮr].L\򭷖q/l֜9wDI|()S 2E]2g~ٙxqfM\f8gM2 x}եtSc,p0M:F (Q_C6c|EJL).)fњ76G^4;bDA=-)+s̟ojkĉ_114܌QtdnosL 3&Q)NH,|n2 ƙe$%;"ɘT| g:l!uXEm3Q$'ڀ>/P;ej%s7+NkmwG-0e58#o2`B@50,\.MI#5۲T5>?ț)PQi4kll$ "3SR";GXϸ{4|muhq&b׏)n8d:m[,[MBֲl˗4UlG]n6xzILdȾx 4vQj`p-0IǑG'P@I-3(\TX4+0,3.e'ˡZΞ擆R O˔tq2%ezMC*۷p`Q2JQ.On&nT!WטC҂FedwV +Y8Ouu-avƙ~o'I)DLZ:ݷEtX?et /cH}kR~ fK)~ҭ,BYRl 1_kfM>;;P\e٢ŲۚXf%7r\$ zI[xor2s+WνsS=ۻW6p^Ik YHX_n'Z{Ub=Fz{q=\tn|;9e ]8Nᧆf,Ct9Õ%Kr:ې$>ӭ"7=2x*@4:u]q&hya麞Nꪫc)U/ϋ>UsqoY:-JOXrPC)p ?Z3d2ƛ*ciK2f$ɒ׭{!wyn !A$,+}]d.Zx{<>_g~׮眓駝EHiUWg7=f1+6%w E^[NjܾY/vGR cɶm?\Ϟw YYY*+t233@ÿuYwhB_S2 rܸKf@;;4ca_/:9';϶]Cʻ,j[,&IJ$nEgNj ;Fj W0 (BT2&,K% T25Q H&9:eRnBx4T4"@PltOj޽Æ@BU uh.ewi8l>Txsٝ.$TF 3Fgd54ry 6sWwǟq)Rm/ NDAQR e094 XjC{տ r~\༿ mW\ ~?TxU\p^~==2Bi)PX>1955!;g'8q(-oAUW-\XrY]]& hK͜"MH%%2+ z5ûb kn:EEn?y//޲eienjɓ@trjkf ̏>*7oC 3My? 
s(9૯'͏╕oK~e5s&1SNuX\.Mzwj"}MxNgtW;T4 U^yEL$<믳ǏǗ_}+<ZZ]D-ee)IJetᅬL:e.ԩs~LmvU0 #Gp0)dC#a1p=)CZrI4HfHsd~VW++E2I^/߿߮PLS{RE'D#Õsbߏ2`D**;N)1Vеn%TFLAtHyyA0H ˠ0 Jod,$&ZI 0IuTuOA3YTr MQ J0aYlg`%&t O\>ɲBolJ.]^W/OZ/+k^ו.,H'J4ʴB%蜍`G3/-)*/P H(vK}sc;K \z)>\4巶m\*P|) {5=mY |-M^EIom=Lc`^v}s̚5On۱dr'J˒DsXXxԔ)K,s " G 1Q~Sz ]')Sa WQYqɵܘ)t S; Z7NgQS駛^i6@u˗?}ZWu:E}5)S2O;SO"U0?uiyab&,4R|%G-gFOTcuŬ ϒrנM҉Ckw?_+_BvwwqE3;;Gt{}v5K|)$,+s/?3ŋGj hA S?p$;F)ݖEr*R.hb@#c2 B kE8P6K` ٩~'9KUWR\c,Tc)[?Y(E!)wˁy@~ 3}ҤnjhDAoPFMMsG4zF(?~(Î:趶"E]]iDJ& p(":`**p9DqOR+% ~ ԵyRN`Plq {hF" L2fX 8JP"B"*WŔԩŠrP#,&*R[ w]]"*0js=4ov+8gi9wLR 0Zrsq1=ޢ" |5Wò0f }]^~8bI\y9 0g}MHOqG0HlO=U~%: ヒ Ő@"=:O^z 7܀a@QY+ 7mz0!PVM{kHq#XgI眺ec#>4|uםv9ݷ" Q}}|͚}it x혔)˒1o0xN=J 9gBM7Æ}8**Xy98%ҤczdۉݻyA[CTB؜kHc˭SOX,F;v`V~5>z4Ifd;$iu4ùE:˲mde%koEaC74izESS܉pD*6'L *Rc#8Cy&B!p< JJ4-s)emmnymg̖R::S hiy}G}5Ӳ#_ER޴Me9*9c` Wtg(N[#8rdARӧP K8\yekY^lTq;O*uJ(iScw3I嘤 ެmwU9hQm'%UG RƕW_)K022B1!y''M9sgMN|x '[`~L4?Qb1&/g(ʘ>=t)麭idYvƎjh@C-L#Y+}Zn]Ivǎ^iTZ" "H*}Lomkg#:-w R.``C³S@U\V`/8ޢ1mدn!L A7KPs#Uc@SC^. 0-X8DM*t,'dqajM1{=˖4mZ0P+zU^)˒{u"wMܤxyve/(&Ͷ'"3ۥzdY&q0`{II'(C=8jרBPvIk=7孒R`70|IjϋJd^B*]eƘ ^uuoO&D~5@w8|onmSQHh$3e N@-biy.B!R)y]vZ"Q p@R@Ѐ[y",P@ |$0OӶ[>@>Pd5R.Ad\V29f`0c6mߞ8D*5/hove18IQ 2u֬?| !vX[f tsV 桿K0n#e\ 6pp?$x8ƱfmSR (#: h'Z7 طukm{L˚ycRR2q`eq)sI:L3hY0ͦ}~^#SOZ&tTyNk.BHipi(CU^}5 hkhmWJƌz؈+淖EP<4) /_`zbphG(.vzﰬ**:gn76o/0c.qR16T.aho,b6켘s@ Ιooidf⥗0y2KK/[ou3CcƨNcM*7o &]}LMMj 7>uI]BZ~#Fyc~8d |H{FCb㐖'B.[*J?(! p>'8eJOR.5sھ=cZz:Dgͳϖ(qèD08pl#IJ geWU!s-R笼7BVS Ԛ5^xdDBl&}dddwt4X\L ={S}i9uYY.V ~xgO9߶;r޼DtWqFSx_͛B 4\~^z4 /+8qiyQP? 
#33,[6a"҆ 9~D9q,e Q9a )۸qmk\8~|ykkcpfsӶ]U(N#{"$"=i #*4yЧ6~ ʧ~yhͽaYsQ"v>I|IhSMO 4uN3bv"e :2Ε=S3ufsy'L}~ 1ɓ-W)]P[Kײe7t`7YAuaZ 6GN]]&{.PEiī=|,ㆫ0j"333otq.0Ԋ z#d [CEeK/uh3̲R1}g\5BQ;C{ k 2EU4scM#^(w@O a0:o6E@b+$ =(W#[`.踶-˲4Nn MA?h"ĬV͚De2UY2cG@plԄB*45c$b#t].bE-mu ҽt$=Y\(z^˶gHYnnK-[m2:phy= -JigXkd  %T#4pb׿?ՄX2s|^uBL)]S&w[mۍ_r5d iw=xȳ#Aȸ.u)}g~ 28)*# knn슍m8✩*c`&TEKU]omY17u `H鱬^ǸTogh1l=i&$_{@EtJΟ+=p8p}|hn#>q?3s+s$ cew Cl% >);l֬˝ [SHp"a1FLKtۊmz cʕ^rIgI`ɗ^dg 0k?;i͡x EQ +*;]񸓸 Rr)Fzk;tG  [x~DY6v 5hgvl{ش40T*ThL?4e3[ouڵ7|3k^s5{wW;?> ]!z.ST42I97Rjv[Pො[RKNPrz{5􄏁H rm@ @.0  4ˀK&\f2gf054g&%8 R:1 H8ǀr/PTp)DcӶE8hʻ 4 xn x58uBT߀VUfy{.z{Wu k634*2.7p $b4؆Ѕ)' -&'!!cA.B754S 0Eۨf4mǖCrNΛszY?Cgs?w~֩Z,Kmp /   #mEa>?K%-ǁl߮udUq05VnjY(.a`\tuK9GYVTV;¶mPzuwC4},ulʔ# C+) bDc\WXDpH B,Fز%åaՖ^S͛-xOȖ{qN0e N; yy0MtwQR994vzg'{l˖ݚ ٖ%/Aw7萛7c<(Sh B8834fMM/I^sem-詧 %8òd>is~dcrE=MTjhS`Ѣիll|ζHvu:Aۇ Y[ZZp%PU!HUYU^y%ۛ9SN5Rҁ^K/F{ׯ0{v}oY6s5M3M0U:AD`9Qy7G=߿iӦv2<}x[n2aµX=DM; KrSd`M%iYDbV"@P ,;Jf HpXX c5wPiMC'8d^b9o+PUr3rq{%UrW EEU-s崿Y {#~P1c*PSJ mƬ 44lԲejYɥϟP!ȥKñ+DK xgbȃhM1|?}z)x +F:-JJT۶iOq1]GoLHz.Ar9q,R#[,@qT̙NYYRtw56hńPVTc,I!;B񨵕u;ȅR?Zib k@Xg* dEv ZXUEEp(wt%@ PcR h%Lp{ wYYaiuAhQQ<|C€g Fnq`rT ^oi&LJHZ'PUSԹ#Oȶz',^G $M{%V` N>ƕ|zz(;학~l99~x"؏Z9> #%٦5˚ V!EAgo"0T{rr>Q\|ebD8Y '˗wۍSAm@|ɒXIɄ>Ӯҩw9D2'Q˪͗=u@8b1!2)]~(‡e]41&D@41ήy=eS׵-Aj'X5#")4% VD]4)7Jyw6j=}~Cm .++}E qQ?ߗM(݉wmpw0b5玔j5AQkmE]ۘ4"ttDX ŞX,FFץR/Oap^8*/;IO?c/+nt? 6ܴrO?(n 81kV@Z݇/{ ьǣJý{~CÍ[ڑ:ZgR@vct"\RowhO-V8p}׮q38pu] _&ck|ZiRBD>XX'Ғzac ƔIz! |v}4zH"2*4)eK:=Zƾ`lcn}DYgȌPI41'gQq͖ '%TvAXX%cWx(Y= ]I,gBF+c%$ p0 /;46ο(q০9y@ 9a{6@oƔ&Y Xhp F?u#?wgwyn6cD/i%? 
XOϲ(X*@DRU{TmRy\Ot/cE%\@IO3 8ӈc/DۊiqΝq.+Wٿ<,r6 { !'VA2)Sܻ#̳ (3`9^yBk+|>|^WEE0yyXVhǼypS0@{;߲Ÿ:hmNx\jaYLҋ/qƍ)<i_~.(!7UUHL04 qdWW&1lL&ɶ]g"^vͣύ$+8?**rrӃgEP`f ^EUM㡐QӤlAܹ֮ũ?D8 Xbj\|1_2IȈ޾|<''7:cG9sf PP0 $9`&L8oرf)G['Ui>\}hiq`0x_kjs!WfKTYZR⭨Ȍ0'"x<\Qh8bk8)M_GA}hSMM=.DDUqgz4z'?da|<5|Ϻ5{hkPK]wȖ2lck},NӋa]힭Yhcl0!k/9 55F2_N9΁m}X9w);>|*T٣M¦yxGSJ sL bɌMӜܿĆ60g]d) t@z8,Y4(V6!⑧ 7 -oSi.P%mD G(E"wyXB2m#I.^bpCP%c2oooilT:;ww@;P `ݜCIofω9J6n[Ȯ.ےƣP?%%޸u8W$p4t DY~6eeM.ǘjuh/{-m? |ߚ(v.== !׶Ҷ X?F&l(~ eιGoڰaљԴć:J$+]QyhêQ͞ŭ$Vܗ?Ty1:cvy(Դ_4+ a9&wtbs~U*ŒI RUee'q y{*+RM>ТE/7?Ӧ>O|h/ |#$sB犪EtuO),2f?]t8p]D@qM۶SoYX 1ׁdݜ7R&3ܲY+{U({`(Աm[du\4ke]pR$sOa9l :nvKOg.7a"DM G( mß۲bڡt s_!!ߊl (Lf4֭Ü9=4<ك#?8< Ȣ!2]v26ofU](̓/9UE^_jt vezzb+W.2OuvZuu8pk^ ƨ@04DK LSH/o.Z4'r=vP(`8^afF_**i۶~Y/䓡TJb_p18ޏt2O8A>۾]Y,ZdttGF4CT]?ᄪ~0UšR"zD, B[-Xd c̵{ts P^ܽ"Ulpy!c `S$]8buL@4kyq5g͆ৰn-\3#ׯE@BAUeqk(g4i""%:Xfy󨢂39Ω\tI7_{35fdx/WUzÆw,諨r))K7Zl^/f6 35Aa-0]FAV!EyI9c" "t T= 8@2FmmT\gKŀ6974U4 F`2[P"XoM $6TS8spa-sڊd3ms@u@ ].cHT@0Ñ ‡¥H%%Eg1t ,=˿!iKD$^8bUĝ4`8x0x_ȶTh-9Lohe3bϗܨIp=-PK5eCta8  }; G Jk~ ,$6du$Tp͇3NibPyN6Zx;1$<OKHԶ1W`0@q5k5Ggp0McH%6mҟ{G{Ƒόlg2:+A$cb ,< DzUܥU=< g!R{4JJuDFI*6lkg:\ L3K/QQQa3wt>Db^edђywg;:il|/r$Bt/&m3w٧A& mqPK][CP"6DB@$r?w@?;{&];;:ce͇k MB'nIaêYRBs2{&Õl:M-E?2-2?:N0Ɓiu~6{cCÃׯ m[veeg]R"9Պsm:nn:$bJ3f矟ٸ!W ' |wc*c+tI㠐Bx˯ߥKش6'gttB C|;0P')1Nr~9sޤ(g2@wN`&p p+8Nd}oyOA]WbS XqRBܪk-  X0@ʴ?2^o1/`0xӆ VlztTJ]Otf LR@X \d x8g{wxxm_ CfqEjۧN`e;Q`:pEɕGt' al J=.`( -pBb@30_ӺT[lL`@TGbAR :Zost C@!P l` D륗>Қ5tDg'8)ۇv~8 R"m RXS[ uv?gy{;[׍^АFϤy3t#qΛo:EE8RJTW͛a),%`%%Z/K΅*%a<̝+g6٬G"Y]!e'HWgmf>앛onI67sP[ F4GwKf{y 'bJ RRնW/$6Ψܧ@Iw@I V0 !r(pJ$v&5Pb'#x*ZvcH{InԎL=\r'yj9額q Wp0.qq np#ߨ* 8b_}ax|wmD݊2 p©.,Z]]][;T0>h9-)s|8Jr@MI.q^>8Wl80t$ra<1 p._:o^Zs.R!;'-i9W+ذ KQ`R\VX]6-Òsi͟l; ]d] VTIL3 B~9L@ l_$,\G*"qVn`͜uX(p( @`o"1 E"g.]ڛO۶ fl_TGC-}3YKN0Jf胟g/8s Na.+YKFVH 2sS=7>qNMiDD--x >eBz@La1'چƲ=A>ӧKAT] IDAT LLT5x2s yoAcZl!X 6rY PLY~3nYqK1w& AHe[9 iW1̲$QYBX&AI'ymg8&RiN-*{m҆VRWsΒ/3TV{Oln\x`HXL NT t.^L+Vhl,Z8KλG<9!RRwg_f|QٻYq%_b[K.o-/T hEzh/L:]Qzp;p"bc6>:0>p0:44eᰡ@3nu:7#yI 
ȎT'I5SkjX4Kc[Q[[ w0Ιc*.]~晨 sH}/fiʡ_үiEEyB|e6PuS̃h᠐qsij)cӂE e31W2P@g*56(rt]4̋.iKH  \( C:B|~ƪ c1crEqg2t`Y"f*1k!*P ! _X\8Ƽ ॿ>;~&e4ځicB:,@{F`oHYH2v` ؐ{OwTjeBT80vUf5v޽)sNu @KRƦIi@1 ; T$`"r`pGCk؏9=/og nʎP_WuD:CmW-#5ʹeE] D3 )$FRT8,(N&PUv.۞>5)^NK෍ar hkZkk3O2;ۉ!FԀiA(:0}"4޶3]ۊ\:b#ax>ݘ<?jhr@sz1 J}hhwX4J|GEYMæMH$\qF0,˧j8\c}HaOOSh3'i PS3omşsES~EEE,@CCز_Vy{/@=TH8TeCQ  cT1 p)ņ9{6:O&eA͟7p45\' ^VƏ<V̝{I2'3|4ttƯAGQ|[MfYg9ދW7is֙N{;ee-\by(.Fu5^ }ի"9'9Ni[.,y'V]͉`")dkxZ8cD;){æx뇢6;nwP (|?x1WȮE >fM%X\Ȫoގ,Bg?^W.AcBl!)R$4. @㬸^=#uyѾJ'ǀ1ɞ@ ~MoD owVWz)H٘\c.0ٞ5-)Yh:M#̙;? QSÎ;NZ|?AF`ð=R%Dd!.3abt•9d5BshI s逸V|V"F_i%[V| *8H 0ں|q"ʑ_TU8S $9HzJ1]%5QSFN>77ne)D8t +U goW| is*"#'8_‡Hi.0-%F.Q3S҆[./zS!1H7fr "@$!.DP;:2+W<<,9NORtꞼ޸?wXp۲_ iUE3W7^M2-81=/kFb++VA!2-vT[ضiaiJKQSae&3E1|0eE[]8ρ3YoT//lnwNhP#?]z<2̌&\řIVޚ1c4^ꪦ۷^tS]1ZY]q[sDKh .x/9PSQspcL&Jb̶,|NoJ؍mNUeA4zEC=ǫZo~7,im[t͌C!EUD,ǡ3Hӊ+/)sR2L ͂jg^1>sbr/:nDgM햖5R E6.#*bl cqn QkJ9Dz^B0i ˗_⋃==y@0P"  p T7r[(8RN#cчDRR(ʭD&pЉrT\D+]'аh갔NyDWAƦP q$r,)9fKA4A1(Q]?9/yRz;GQ<4@/P|LJة2F)D 0X5 ~ `p \H >~B Lӭf裈a2)1~;qVP;lQt^hw3[O흦-5 G= L;p7FA֌Ad;1ulqN8s.]xfBښzu첇tfE*+1*sO>JK8iDXO/, r߱KWϟoF"f~>*?cڅIpX7qj8jiixԼ~;n| IgLblkր16b18z{a!(҂Pjj襗P\L7U5@ sy<.t);ũew|iP+Ds<Ғ2Mr5 q1.Qt͟/_xR?}6Ԁ1L*68s*z{ 0g3 aVSXNkn槞*3v1x%32)e_\73L?g}T +F'.JJ ]8EA*I 9ٔ{U7tIoy؝?vӆVdaZh6 FrX];+"G.~LdQk0 =t"'74Z[,ŴZ"gy?T0ak)`$HH$F ]! 
@Qҫw4kT&10" g_LisW"~^c;a9Lwukb_UfR!m;p`م)1g׉#+0#ʀ.3#s@ vv>b- ?0li3/+}FTUȺ P8$a~Ԁp=֞lÃKÊqU[dZ1;2`*8I"2RWOsΈȶ2]{r4m>%!`ɵw߻BWdrΉjʩ7Y*5.F_>bN#k1"BmcBQ[,+ "M8:?1yhQR:>dzurExI!, x7,Y2x9r-D*#OӺe*>.Fc.8C}}x4L,Q32ƛjfd4z-3LFe%_f^_&S)+S&?T?׿s_s׬ɳ~_:#7B^Ó88?^6OL;- Ʀ)JgBtO,"TT5ZPp9 r,s/R^؍~| @H&ݺB8`4zh"pSP\8YD^ 8 nxM&79/-▟[=wqթm>7@P D"IΧ gB3FM%*` p:@'29񂂻7li:0$Tw FHNCto60(y,h^X|8dLf+ޝ0g[qxꙌTeF4ܵܰsV|"P|磶) &  Vbl*z{!%JKqݘ:>8mC|) `ә0HU1NeehmEu| ̛|Xh,hm-stܲBQ#~}YBeap/.id[sYD8TZ/Ʈ]S7tTi@ ͘9vEXV̠WQx*%winU o ^s:cyg $a[r8FpGM-JV H;o1͑hؖZV~YCf_ #$bOj1C 7c %Y1Qu,mZI0m2fin0ဉsvUHtozd]MS&:x܊F7Hqsi lոA4[*;CUqdr7gXK**aw4BN[ڨ b>7݁D/7ݑq*[r_}(=h4?弖.{j"+o;j Nd[~;s VU.Wnx"¾=/wb2ޱcx`8犦974A$tm8O>A6W\,)|(=0v|('23VCbD>$}:8xsնHXWױu+s? TUY.זwJ tw 'TUeBMʀ$RpU_eHyšPXL$@8X0'Ei)V  TON`! ,jXDᝆaYRvWXVo8` (p/4M4-~/Di2@ 0Tt6p9p(Fɣo *cdL}(gmKK8{!::Vr m?8`Iߍ+wE;}4mB\M㍏2 ge={im_hlہ.],kZbu;:vx'|36hYrxplj 2TU6a4Bfg,4459shv^{dl޽2/)SPQ'Ǝ PH?\zSj*L+ʘp},i,] '> h; ^~ip#7m߉ƍ=v$kUUk3 IDATH$ ic7UVb݋vD%%#m[Q[3(+; Z'L Ǹqؼs"8Z 8PD"x% !Ux0]gTa\q4MחWx]wx9-KlnWQdߑNCQFUs<W&n`Uz%m[4 ?biGcHl].PI1bި׮) q!H@D*pHD)I**~Y9̝ ;{<1CW{lQSE<8lE 4ѵ3|Eeg T"\2gp1Q-E: i1F")MB"oY9M#'Wl$Jee-]j~/~DBgzEWɝϢ5R^xr6\OVUX WݱjK?~cvEz9o G?7RhOꩧ_F⫯hl\P.|F"|N $n}Хt9BXR9&Jx$!}UU7^~cwņ ~;l{*0 !z [Q0!܊Rp4a"q8RNaL:yM|e=߯s?VߴdYDeLш/k%@ 3 muo&Iԑ'i8p]HWnͶh~\bjJ`2P- f2"Tm-x2NAg[8]9~^oO2YcҶ!"bN&ݚVQWe7ΰMc&p3(1*`(0ȂA`P H"_Q0V{9--{sW2ݖUS8u>1t9659M~iؚݡ5M?9 DX*2)֎W_`+\.#|=VU2rm|GKƾ8#./)2۲ihnL&bfm@U1e ^Z[ l)ŇǬYmbO?HQ(X+W&kk=kX GnmՖҥm;v[؊…@Uq!XKQ(`8e:glkht3_|ᥗFeel宻K.#nmm>DC'˭[ݍ~Hc(٬m4>0@_,_yJ&iXL{G^/Lذ s矇eAJx:犪Ţ<Ev̓CU5۶F̻jkQR'( T,:Kđ0àgB,f#f \RM{i>jj!綷w ;ٖM>Sgj!H܏2#n6ou6 [1⫮zR`uRGn]vk*,&N⡕oj2)0}/@xEW4'6<?ՅBis)z<8l'̛t5)x׺:SAث (#93Hb,8g.?VD@i vO2T![Q̞-+A^P &yL&x mI NT V?o:vWZڔCC]smm(1I*~g@u cP] [!=5mԃdjRf3&Q`'׉/|g&D|L㕗ƍ>WiZȴ,)UZ:xGѵbW!aVqnNbOU^EDsYӺkn҂[qhha DIuոb ̐x$Bo9tQ55|( di=P2&*+? ZMLqs^ɘ [0Dt6L38awXG<de"1EsUxUJ'=6ܮk˖[o̺Np7W`ٖe_hB<7^|ꩧ^~o5Ui?O%HpB1_ cL"EJ&e0le}3juBaQW%@/PTuX)C[0E\D Ds8I&i? 
l^ X LYlUJ|Ԏ ":%hP1v{_ԭ^ۿm۰e->O>,A`)ycQ d*$ T(1O@Wow$ZfS|Ŷ=6,<0*?t)c8f >8+gaLǕC&O-;vXmÁ"*.O!vSJR``IvFk g ^aX~+Yi3s˲훃!m{(M?*I. @QPWW%ֳ6+.Onzٳ9Ǜ71ǝm 9+=QUNӔiԩhjxu5 B"H8xB!Au5\.TT QG!mL/cŲehi?RQ] iw[oef>%BggE!)ڊМpV˅|G44g0kb1 !fq!ǚ5PRcLz\.OMM3ӆ ^M)Y,oǝBI]W**4)-SUV]Tvmm@ X }F}= .tPtvBU1y2b[n7fs{:UUSBc0M&1 ׋̙hh@u5UE$t2M900!fBCJI~mְL&>;pbMX,'8JJ.6n,_U{TXښnlt0_: h1.33Lyc[[4\v!jڒ}bTHƾx644͏^p~~LcKI]Uk KQXk}F:]{O>{Ͽ~ǎA)]Ǘ~z1S<"{0~1ܪie2Ö)03Zlܗ*Ml ӜݹٌtSO/ 8d,¸U0asG V¼4]4 5&l `eLX$Ƹi~%B˶m񩤑YDd]/L5-x'zt8(Mqv69|EDv,f8}T0 x,V (Jt؎%Jdf CyTSpgOx]\8^vWl<~?H#`{eC‡_{M90'ǀ|I䆌麡3 wRʽ+ <`\4ve'-o~`4{Erᰣp44ओ 'jg>@ꐔI=dɤ&x9c:3婧>44`F88.adlzV ri~,{v}qh6D,6 g27rj=)V/#J,!rrw[ۚT2vn8ee -f{cg}Zc>}`7~FJ`s} FQgi m==uk!,"Qd30}sVnYf2 Β/5Ͷ,P}]Lwf24SjlWbwUVV[gf C][qbE2l;ep"y[ZNKt}2chpc/qǖzm[]$Qsw2(Rlk<d]gB~2h0YQJnƺK&":*>.mY\GDuhg,R`-p*P$pBW@~z7(!9I,XmCJKYU)'>X  ,t,6iQE1fh8=Ph)gPx>K D'Hy{.Eax1U}u@HeIMUV16ƺ (M/0x* ADKyWy6Oi@ pt z#o íoQl^b U4S3$'y#bBZQRBUʣNW-H>{G8|k(ty*O˞Ve׭-z >Ļ( @03܊{SWWe23g24aHIJeXB"PU#BƌA_Ddmf&Ns:#5c#͜MP, RW,tua>0B!*~=vƂ#%= Q$]]iB\04}ØP0R֒1"[gtUc )dnYW0=nqħ3ĢEX \"$#Dx1Hŋ KήF4rvVŐH eՅx}}8 6Mw#u ǎG %zz0snӶYii> k9gcƐYIivڻt ]tS-O.];dDhho2ν.X4ZH2[sLɷ~+:sQ]!!MrL)֊>tZ?d\JIv|8f@͘1u͚;=t& q*JW5L:ԩ<ݻ4 f!Qum]Fѧs1q8b[0r*7Qji , C:j#|6Y0zٝOi\ZTgoFy=[yhxl9&zQ_/ :: X Lic~dqd^ qj·XDBa2hrl<n+紴\qv*B3d?`8?@سyFSMMςJƊeEkk^/`x ֒ؠUTjm.G+hߕkԙ4v-|nqsP=Xkιh(PU@[ )U<ӆai~^u H*Hȃϼ]zuiYv#ܥ(`hcT@  (񚚙&,Kddf3ڴA=1w`YID9̛2˃5u[7eJggTiնfv>Է CRhU"C~7D\kodX{Lz{ Y&)'a63۹ySNi5)޿<5 x(#R0Ng%w5>:FDwy71Z=2ӶuP_#ʁnPCC$j<C4M++su`,>̯dg]`UV%tKz叺:?dX kb8m: !a8e.ߴ^t.WI4|nRfes3²Cc@\1J]w+J"%XoW=nScйgZI k̚ꑧSgjܴ)k9m=N4M矿}?cˉ\Se=ípU(厖9sfs4[;_>o[]qE0 TjpW^e޻nEX΃>N%6u:]#DI8KUsXQtzq6;11|~ @@\cݡicA_Iycn4M3*)ۀ*"C&+otvVZ @:q:(H<x`r1ԀFЁ.e@4N /DO)D8/x2M IDATD'57zϞA۶GSo"g J*eP,N=-@PfC$"4I9hB|H$+E9!9'Ds? 
րZ@\H"&Ѳ:7F#9Z1   p\Q1& ReTw>F] Yu>',N<P1nOx+n7OP^oUyYbn9`~:$"R aTzdgq2~Yǎ1e'=[/ zlgCTVAb9^) V}}#" H!s'֭fJRKyEش Bk#p}TSOSNoiLx#e A#F#dR娣Jn[G~*s^R) \qv;XĮ]H$0q`H& wqR*8ř3CCsy쎎7 R,R<ΟxB^=4 0M޽FG4ֆzBNòގx:R Wj;{1.+:\uH xW]GA2 M3G\-gV/~!`',qnݹHpƪU˦OojŢH")c͎[_۾99}RgɚfiMMo;ߎo]w=߿h_U)³ZwWڵsKX6¤ R5?qO=go&D8 aPHw~IIݛo^1S LAGhz^{rޅ ЊJ2~[3u;mŴP=ZeL0YN垧_Jo̟련`* RQVT!:nZ r'=mV @(I̝HAz%Qg\%s) ~fUJ]\v*a%]SOuuf*FfQHJ%Nr5 Ɖ䐹;_aQTQDCruF"EMMr˖vrOz;O]D= Et(ނ/` VLU3f*7tLx7["ɬԊk\;Pmi+ Q.zb_Kpkc~puwu)!BCg;l˒mmXβY 2I1Dr#~}ىm~dƘmik0xJnhE_@ie MwCv)8"pm%AJEJ6nN"]<) : :¯# IaJt (XL O3gO7ꤢ:;͎wY~VWcad]a5^敜62.-/r=#ڱMӳO?yspa^@F"7G]vz]Q!zȣa<0**]5u3⧊ uV.g}IW}>\( ::f0S8~켩Y-Ɂ[rc1XBhlN yIv[n$>~]9 aKa=Kz-D/S5pe P Q}1|A!s󚋭_ouv{T^)SjI !*BB|8.v> a 1'Ԯl;g&|Or?xkIc]mu2;NTחWUDd7"L8县Xݭ|! 9-pY^-[/Io͞=> {N Q_hsi}x6YX]ӼL&}RŹ?CECVogqP1mcKG] &3Hd"c+9!Z&2ƈ Bĥd-D `p9N%W JvccTWmFQM&0b!g 1iIʅ8~@7pfYqfv`)"e cxI|j"|ɨɂ>H !U@ JT&x'h`le-r2{p p6p 1J]͎zmO:bs|rm#LrIR2M:8/ܑtF7Pd" @f\^f:l<1S6ut\0X% cw蹨|u^]Q]5#GΖg0*jٵ{wX7λg?eѩOA$T ~?>؂P0ىZ\?ھݘ<={hjTUYJKa#%fs3|>6g BE"t[Ncz\x!8`Ġދ<:;mG9Bb @N0.~W{zrnw.0F uUFL|> X:͟}V|2`YXMMx:(pJڵ˝vKΑ@UQWY'?>,0wnueeavO<ºM7nʁ H1c-ìY0 X9?3?s[+L,򤓊 /XDUD!|͛;| !'Cr!3SGR:b\x!֭C,x訣xGߴH9U KJCYi'~k55 +#^4C^+Ӧqoo?H:Wnmw'EX[ܺ/Hb00fyB: =c^?/揿Hz_gǙ¬w^ps4Z[c65'XΟzLΖ Ł/ Sw\vv H8;/qfB4˛ޕ ih>`K WwIulӏ?T__1M&R!AS@sA娶!$ʱ H9c1jme=DA7zD?Eتگ(ne\#W`VOo6l<|pK55nRY!.呕՚&kkӊ$1)tr{<e gӗLuS#`lbzk j?Pb\/SAa`AX "T\ш2~<4^Y.@ l@Jڶ",UcLme>,o{{mW̌ @ -:2&%L ۦ'Do/n n7p0 hn,bU xbqW*bk[g L yRw-RJf3tPC7}Ogxw84^fy9Raw31 1r/N ;6fIaS?VPeڶ sbz%~"v4hhfYofΤ .c6cٹg;/ҝ預ߤ^+_vRYhnvR"8^y$-[V:pt1ޟn\3ϔc˗4qT0wAqVKOI'Yc2ƸaYO8a|%5NKYL2ٳg;nLhWZ5gp^`Yq98(TT@iŚ5D$U0NB+"/&jjmr{ ƨ Z`AH; )8S8 L64+:2WőNCCÉ--CS<#g'rc=瞵teF$B[Jue7whQMu*C&OLMo4\}կO>,}IP$ 9GvYVThZU[WGm-|LeF}8p7￟@,cp]Fv6rx2Mtuaq(-ۍ,_]g2޽jVK O3r3&Zɤ^ᜑ"4z8aPpXLBH6mNgti 3 q5{d8\*,,laAtvUx6b{/Rqq(#;C nٴ"Ƙ x4˴ F\߮Ys͔);LsjmEI1?w%f[&cDjj*G [K\^\djpeKTon~lij&=Bqe}F8,lAXip6j1A#QʏDx1K'I `LԡX"a26S8Ur<3c^֑/M8l:[ݤW>^R\|1" ºu D"ᇱx,(LG"efJg))Sۙ.!OAדqqTlRV)]D`094s3fIh7W3dPMn`jJ]QmY1UeQF('iXize%28p4P3Kd(+Þ=df8=GM 
"WyE Y!$BqFx\Ujc`*yS 2`NŽ F$rAlRlVZ23`UUl\kqM;Z j=[iCdr){}Ͱ|# JY/f{0_6,Ij1{\kn y `%o߀y80|9w/Y,sTgjh#6C i膴 ip\ S:^U( ,XZtꌼ;Kyj_m-fV f0E">>6Λ閖U8ÛJm˳%|:?Iյ|$a`k9hȬW^y{?,ÉN 㯺SVl;קf2"+z)u.dI7Hښ0W&@2A@VDi\=0p3 T& T1h1&+gN1n@|YyR݀|  4\iɶ{{7 qVt[j@ 8ݜa< P?0z`#P>"&wR_8^A84p `aa I8 |@l @9c IRnO*x.Je%ÌRf%+!R@9P p#0  % D-ҫ8HBPPH?M68bh;dJ1{oeuu姌M-ʔCy>^@B -2U:>|+\pAA$1R)lۆ D{]bdypؽ?nx(-$44oA<77`:w܁ DZ9Wh.\u^`dJK)CO PU( //WU3FX8qǡfLRkp(liI74ĉx0k~xptGsϱ}{y&ш뚔iXn.fbak녮 b((կF~fMWT44ž /e̟D{og['J"_,$J}L[r}6ֆgU].J1)57#DGzzp9ÙR_K/7۶ud{;e̓Jב֭rx(餅x,]AӐH x=3/!{7Ҝ!&LiuW@xNSYƍegc(ۖ%m?N\Q _|@! d144`]ݤco*hEZ&cv8d%zmEɩ!Al0p0`_0L12EN.]=DWAjDȋ$ܜ*a;0!uRt]7~B'/0_vlYU"C/cĻ}K,ϻD?xp4 Dq(N3A۷|%cJKliAU&UM7'a(vW=kn86H ^|1Dw6mFNV}h׮͗]_Ϛ" 3Z[c 鴙LY==\̃Kpy_y*I+.r4KAx~>uvԑ_ P$^C(P1M~Nq_M3)6 k9+\ Dy$/"ʂơ7`\BNhl-[xsKe"H@ Iɴ뛣}+“~d)&Ej'':TZI䑆띙2!n ch.5>oA);:I˜ HS;wbtxO-_nuuC?#e&\͉HJSʈm@M:f|c,̜ 3vƍW,5$oj*,ms+ m8hmyy 3C[,#/A3 CbXsV\*U1m1Jgk^=/8FTh# 3vgqpzz֥S@  7c*1߱y+^t80M8犂< jjVkcBIIߟ|晀S?s]SÉtӤ^3B@O# pVN`]ԉiӟ1"ƍ?`&&-Y%rժE{Zk4,%9cҢD/F򖗔U"T㙧x:5gS9qN6:P78iUzm#B($%08 !cr/{s\57W\c?kJ!B/pYEv۪},Hon^ GsqPޏ{">!q P E'tS{Edcq)D"jBXd?cx<;}pJ/:99^ u{E7%KTÀP'nE@>2UB&9R=^zyeO56;i´؄<99--ʂj@sALJjmߺU_^ӌBux4QXD")PBS 1.=F78=#uٶ"E&46(LfLUOѴN ƘXw_b#@ٳe&ZL\y]kk%7;Tǥ2Qqvf2E,aR%\Y.J0m->4.:pۢVݩ+"PP -J-(xX@`D#i+kw"ˑ:q>\tnϷɡXD3X݄zUwOrn7ra4cf"yMl56p9cCj}n+W^w]SHX&KJ=O8֮]2w[NƟyݏ|ZuuMDG]LP:]9iRfV|m5/ߕoxfEQd's~D"<4bLo GP|n< D@ϥR #s+D1%nw>]7͟F\!9"?\!}?LhnjU3$H @?Ā> h@`$(M3D wS40\< ?T@Oq~DD4YL| tK0֣iԴFE)7lMJpŀXݢ(@Bn]@h \ЀӁ3%E܂׍7B ^AM!"&!fn(ʔ@c6%IRE{)̞\J@rTcR0v_ggS2٨ߛ3'Cɶmk]plt ;UpH`&|cpȥpl^{njUE Tjx @[IpQ^p%VF"'YrsqpߏV@8@*5-RD"?h αo^~ 0 ]7~2mm|29z42Swߵo/Fu5m\.%'Ǚ6-O: ףt',DXS ~scxi\|1~  ɡGޛ;ӧKMCwLo~c/[n:l :};D}}dY&8CQnMC[VV ÛobBlۆ /D]hh[[nu99`բ --P8ARLQFkgδgb1!w:ovvUuiƪ IFg'3P[K }zGlv濆2H:yOo4kp8^2}zgEg_ozO|#h4rժ;>M|zrӴ[׬/t#]=95,:*vXU/tiMՒ<:;7oxd`P ¹ 9N0@/P4pFtR:#]hn2I=1`pk\uN/LT8@m#TcD_W^#<7+rfo! 
۲V46AہAfw.ƾH 8RoqG\$NfH<\ p7cV;>:, (~Їu46 Ɋ ~=gL0*@.{J7Ǔߟ!vݰfMGړ g+CU8gLcxe̛ c~3W\tQf:݋!de!KQPv-JJPPf,[JVXm[c&̞ ̘^@@R,Dw7v¬YHfcg}]pAߙg3DM lpG/;Ns''RJ nǾWwAM2b&=ט? &S dƋ;:}ݺ^eY%JEyg1dŃſYLw|ztqr1b04y8I?WT}uEIή{#c O.UHL})Yk+S$&UDp鱻˰-%< ι;vH9Ӻ|Y1MhvM}XȍRppm+x5Eon t6,@g$K2BQl:lF;R}[;~@ݼ9i}3>٭4'b;9ExO?:ۨMXYμmrCWQ']1hul(RV>kD b`R&?TY_r5cY6z¸_<}+oݟ.藿78-xY> ~C•9=^o“ A$7SDۭGk2xey/8Hw'{EOϩGCSS}XL@ÿ8ҴϪy%p"Ƙ2NR)"oCkRmGB|ƹ///_}wlݚ[~ꩇ^z_UZ]PϗF/z{m0xH;`0l € @P_0=)G~))S$wX)1 ̑/E pi@ky p`mX | X `uufLX<zN\ʉ8[ `PxB>BHG'rn GN>AisDFeU5`;P>^Tt(\, "ӬyW'̀h?^Ǐ?fv9Qʕӌ-< JKngn"Cȉ!Ёv(tWl]wBp>|y6iJHDe3N4p,K%  jiQqb]>gUEPjl!"X, eAގW@Um`W⥗0gmԩ=زc\X};LwLǃ τ8``E--84\u2z(ݻ_(^[޶UW{Fx!̙:bZݘ?CBp!ee"$mriAtvTTEUf1o: 1lu9y2~ӧDzXi<Tj8 +a 7QV`zaEEqxYpB ??U_*tvb ΅Ê@4qǴi(.e>D,VwI97TwowWɘ蠼<*&M"I)o7|/} CY[zqRJJX"!Q;/O*9#B:-I{hǁY-4m̤IV 5&s+*U޻L:14&b$?]Ŧ-N,dgCȎyzɒ`yB( Iw첥w_;P_׷`De%Q1 #_RT3WK-qTU).e{Wy%c䴾&MKs2L4 (+øfZ\8zQ{񢶵ޒe= /z306=P2 ޔtM=y2Y-SS ?F{6#c|^Uɐ*vj(W5Zb#^ 5 ^/r*Q+R0Vv\S9/,}&ApVTn,2 )vPБǏ0PBRTlhk]j`ܐRwڏ?̚t:m;vzCà)ףB(pНZʞOԗseϙU" qS\H*" h~9 X8d RՄ`JPt.H zMYm1נ#;;V\~ezii|)]][E"fz\oZyetvF-xd(*7M׋>yah=: UU#e[6tLE 8L08t իY~>{adHDF?Aѭ6U9kһblTUah Ȣ"p^yeDR$(.F"MC <ϱ\#KkD)b ;8=?aхve-dT}>|\mG}sK3D$*"X01yeoscEF_AƙO&q-ȫr17Z Z1|#ȹ`a L }= t0_@-Keeyozqh&'i~vhM2rz]"bA[%LMFb(c[ZU6D AQсoǴi|"%c$<.%/ URt ).^foO]fH],C2 EJ$iGECSYs^=|慂W3I.@( M㐪]]Sf%>+bo.:S^ȭ= ~byU`#"#wO;>nZk,?iV*ۿ;Q o!1>8&dWn~*0Ёkkk0n+ ˖kOWTh•@=Sr[{{? 
mXnL0m}Kpմi^~yoŻ jڟbȶ,K:^Ԁaҥ}}cNx䎥KW!_\7YSQQ1֛-pS+VSL6HDQU8}D&牦&Cp8KT,1u4-D1`XUB 1UՄ4h_mv9myLU ,.]4'($sԸ178L9u!j,H4:<T\!"ۄ,W70oR؀>~ Dmza&sЁq@- h0Hm> aFDÀ5bGq_6@8#L~'?xf.-t}tԄGI r!L>P;:vv.\q"IQ_~>ns%j4ZfM 7\w߽a83MQV( HD_.xHeG27OYWr9'tbw"q08h=~6:fמ9'/]4m?V%"$}<8N&)4iW{^oLN3Ymf]J~tw3rp^+'J)ȁnATHT +śJuzLL  C18}-]r/8سXQ;6NqmO߾hg9$ HD6z{!!s(0{Fd;X('/8ozUө|hpFB0r(~$U~6::IO6XK G߸qY"QS'TB$v@(d_sM_8Θd*نf8mmi}+hjL q5S5J(UT Ф 9Phȑb8B]MS8Y*=P}fs`pKA?MH  / NJ]7$ae\kAE' .Z2m2P09t@ LeщB&z%uukx29niV/{VP@+rmzzh48Ԁ1X,iLd0KS%)5j앙Rٔ4Rp8?eϞyxfφ" ¶m,=WHs9PG(%d.-Չ%#ESdzf{@"R>IzSS叼@_;Yy$`K69B[4C-Cmt#EF3&Ty]䐦: ,ll-8giE6mOnj1g-pFF˾:cX}se8–Ʉg,-AQQ_+l9|^7?V@&p-9}?5Šs,u{p8|3bUMD?L²mZUsB 3v] ď-Ji,`h(c^(:j"7Aӈ82]oYon`1p!~)? p@%DtUUm:ch(nD.SiM3 (t}-,^%R9*R{@+дR.6@6Kr&e|7p.h>J7 Rv \b`BUWL8+E"7gYV'<2)@(gLCL9"pp޽c| 6Ə2Ҥi_bRzFKf+5RkPF &$c|>G us٣[;M ]Iw(}FW]."j'6DSQY =fBm-  P ]?ƱNUVbfBa!>ףL.c!AJbAUՈ1!FTPQ^Bw7ZZ7: IDAT%K"9!t^y/O,iYBp yLb9Dj3ǡ&k+19W_wu{7#{7, >-[ގ#"uwAlL]P >[Td2Ǚ,͛1{6fφB| ֭n7*R)VYIDpT˖D8lfItut1+|2|^] C8R*aPZƍ~z$oY*ӓioWY\v3}) R{վ-[2-76n?aKI jh8is}]MMFC2RFf-Ͽ;>l!]`$rߛ䊡ۮP:8X`ۻBOeZ!bG"ͱ':QqfjM!HR*<ؗu6%!$.\7Jy ⣀p4Z,dI1~<ߴx[ArKګ-D ìX 9E0zz`ٮdVbZXgDooW]5[l4pFjprh. @ khiHXn7wi'r 1!DP"- 8'"BA>1aa_KKqdT/VhXbLR4d&dEqvܸ;>*)?qRh8 S> Gy.@_^8]6.&lG.a D8cu25JMQ!AB}!^U0u_!>9V]%$WuC.rQ?@PztFʥ΢JσG 4)ŷi%1gw ^wz{ŬYSdb:;tL_o )/sE\.}& #(p"Fit`"[vr·52[55x5bgBrǶ͑)$]~qé KI70Lbv|%KnߘfS ^ &t[QU}Nz#>P~xslN#|ƚ哹3( "0&]wxӰh;D 24&oeK(RI7 ITJ(Yrڪ*`kip@-nc̲g6叵 R.D](tʟD۷?9_qvoݰᙕ+;[Qa4|yW8Km{xx8 ݣZmrI)qyjp%RZr)$DNp̘t 3&4_!zy>Ek}yHNXk\.q&=U e/5ՌHyS@xP(fZmD7zu(t|*3Yc Y@ QX JƾOp8У(?<#K@elHr#ERuBgOѹQVܟs'7pqf:dY#A,s̓p\˗[.t[t6sB+cLJ45]cBmÆ|sD]kxV680Q^T(*"6ax XEWu56oFs3χjtw _Z_`$7сO?E$2wvd+zzߟ X`$wHr.ӬXL}%#2ۆhpQ|>vchHyMO&eP=֒s81PS3[Y\,/@s 8$V\̧O?F]tƆYeidoY; "g~otel̙lڄTFg'-C]6-)/h|6mDpֆqe#g>ڵtYvu%/*rc"|$p@,jiiY|w3uj02lvx8]pA0 /(RfB!̛7 qz ܺ34~1meM .O;odl瞿b+ڶ5<1mZeq݌1׫:cC&'9I?5\,_uW R, M5͛J%LTQ&{T_գ!JiӤމ>^f2%&YA2ýQvWs{&&r50DzMeZ dEK)` e^7#IWG;ee$ iX1kCrXJr(,,Ⱥ\ !9q ʣ paSN9!\A,tulFzrxA |ܹ'X𧟾f6qMV mfm˖e`&RA,r&57w rA"[0GU.' 
bO=!?蒍sUUsa<][:+>D/zW1ERGviHb 02ַ 5VVX ]ִx !t|:ӧਫŤI@m,2=_@2!i_($B:VF@dn҅huے>)]=mXhვͥIc8Zw~,\_Si79 s cw4iP26Dn -e K=;r!|SR9X}g)Gy5AHVC^g !B ADBJ1uBr8SA>84ˤAGɣٞpXb6/ Ή %9qa퇘b.U}\W/&K T7Wڛ30AQz ::O0m[{}sT#,y}l߷@C[3ǩVz{xcVy{։4S"dl!>RZ>(h4DU8ht}w*1DRZ6v6}} EU[LmRsձUT$Tuhh3=JUm)#[2H&؁>x]X#ڙGWh N{c?p145Jbp40` .C22a*-mmXI'Fop2*q+}UUs RuL< J#(,rOC$G unk|%ߐ0ki p98 <(f@8T RHB!hB!R(O>(RS8gwi˥ Љ'ɓv#'ɠ%pfDPU؁\Cw/&Lk:A<g;(.vGڢo|c`WVOkP6 pEz߰"Ve%.TTpƨka߸_Jؘ0]G(dv|2[YSOT̚!9ڰjOGi)+-E8L۷^kW_֮Moj"EE8lSq9n(,̅vj9z g !04P-3gg\ru'`R} Νw&}>a)c?ͨ2UE0(A$8OTc?q_@×o65D8g֮][#g.L;'}}Ɍ1!_ܵuͧӳ1'> .t[j`SKfecsٲo+zz{Q*!)PU@:Wb0ֶmk~cѨ¹RvvfׯdWrPYK+ Ayv`9P~+[~r(8̆Hgo4aɪ*߿,J$H@eLs{֮"&e[W&%c$Ke3Gð|`xx]]Eoae2iQU4k3恝#2 `Ys'6J'o9 7MƠ)16y=ޢqǕf[[C9uΡ_FQ$ka["HJF(-NmY6UaD8F^1:;3}+3Iick+k0& O٭o.fC0Ȇ@sun]vq.@(|iYU$W!V WJjj~:gdD6.^ $$@RTyGӺZӜFa,MUHIBnd'/s:,+BJ"`=}Ӭ*sg2!<1O-,,/+P*e(e22Km{ Ub`2Pe (r+ (]_<3vɾMTvI"9eCyOiۭvwR. GI%@!UMZ(t`߾_r,T@To=jUQ34]ߨW-P08oVޏ+(\e18G4U oSHOkV?o QƆI c a &) 3"C(L$/q,Tq0Vl,ƍ &X|r%!o#mCQ Qx";tm~&HS.'hf<@njOOiӴ^Am-ALoil{՜9}\z*Cydw_R'D@×7m?$˿X/{êJ7Opnp^v`A<xb${pJн`4CN@Q_ՌWsJ\`! 3%#6ͬǮ7cϐCzY~?:_ Bem4׃c)4jNa Jbę&XbdQ;x, Pq$PǎT{k¡bR)[:׭&3714܀<MpN[r DbA` Cp;.4Tm-/,;.rl3/^|1={6 kGq``ysQQŔ8Jy]EC޶KlՈQ(B$hlDA\w3Jf2>HYHFqYO8+_ޫ@1qփݗvV ^hzi4#^wISyqi>gĹmg`-DLZPhr EG;۸Q>|^9I& hD`q.aP5R8w>>n&3YF*TU_|og~MXe2G597Red"!3TR[~΋mw[:|>+^^Ot{FUU5mZ՟_l֛#N'TW?} 'mۙLFiZ1y)ڜN_5fty&k$U%h:}}kiN U\VtQ$cfi2ckiҴh 9K,' u8"!nk)AUs"\! .|p=0&)k6|Ԃ{#?`qUMNp=a9nP< IDATGg &e@p8QmGcDd` |l5C.2m;BrOdD?{pxri`SGF5ӁWۀ$`"/O %@#}"n7Yu eY_¬dK+/9tڙ9W+ Co/**`EEؾv?VH[ !GK}Y8DdvD"GE4 ?`P&RQO43:|{u됗DhmEa!;<5p8o7,啿ZwM??D҉Fÿmku& !ri?~&f}1[ߌK=L]Y˗KokN W+%ab1oˣYXU\y= 4}PRt?/뎮jKoQ¶8W5&x֫8n^Ҍ E"ho…rd( 4ڿ_Z-GZsMs {>?>몔,~z ); ut^{UZMHQu6`8CB '`Ӎ1N(!@bZ:nlc)S`*Y4*黬rSw{Xgoޓd)J 4H4I;JF#%9)CX{"%ؖUq#fM8_ ɥ݆+񘒵լF8\ Āј7[iմAlGƨS5UtmD;o|(DPiF{ձlF@w;j[Pr>MQ~r%~"bx$`Yf*MZC'(2 B6l6 bR?l4M. 
Z>̙#nēj es{ 2dZPRCk`AAWX3sHf`+V_ en_-kg?RUTY)E*!t5U!m+\nxU!P:b%eVToI4?p(+WS D0ؚSӷwy=FMYZ0!CEザwbnfPw8uPۂpr钔 2-DV K+N1Wz~KUW qkUOL7'=i|Bضt;@$碣xJ_ٶ߰PG3+-,/_2oނo,k,im3܄hso[hI GcsQHӴK~6]S)8#b ̀^ < N*0%Ϲq@P59/d]1A^ʸ+*21T˂y!!!E\Qqm='r>xW*eB>@0TXb,PvR)+/O>y!dQ,dHBlBpp7)UX`5K8;)+]VU}-Ѵ:(c^r.۶utttQRR vt\\<h @{kY:;\dT# [[ɛ(BvxW2PUʬCCŖ8B3Ϋ#=+<ځVصKaL؁09n&0t|Z(BBE=c6?x۶9^oΝ '35!_6׹\o/ TW |>RI*B: !$O<qzr!+W:KI'a2#AxA\uՈ粣?9se 2aFRQ&MrJJHEqMeM͈hhx1r Ơ뤫tVVFs8獍YgWiPU0横f"zQ]t Ѐa/c( 1a~55T-%%ꒄbRRTkm5 CVUFȏP5J)93W~qU-yǸ6VB~Y/8_z)ow1_[C)SM<5XdĤS]Q:u%̽MW^餓wŢXj%#a@P")!R(]ݕ, LFTJMs8D^ȥՔDRB%!M|K`d]=oYgT5p^4k9jT} y|Z$zbbrn>\Lq8'Lhщ;58sh‰o\tI$&N<{޼BvnV|W<$坎smO&[6vƘ |޲,Ju}$@\<Ғ逇FUU87i38mf sTQE.Y!#i('i&d{!&  JDmP u.ceqd.oukO=K. K\"uܮ/z\_ <n uu %J@(TuuBQ0&N? !%wbYɒw4Q^j݋w z•W8|R|xc#,RsPxF'A]8Gq1<~??Dq1mGQ$!,44pmmT"9R8xx1pFM jj0gFBlmQRK#=="k{!dUU bR9a P\`c6rUUMZ_ =< l8/X=cPXt!/^ƌ<iuP``%$!hiA2`xB~2g|u<<scP$2Ɔ tf0"ߎof8j` ((_M_Mc##KCx)~&L~?}hE#4} ;ÙFUUlˣ濭὏`&—pt;)!BD HtXd\a4@ ^A4q) J 6{ #ʯZ'>4@cS"̂dH."/"TYWUlAYVxyV-.H3VeW͘>>3. SUಫ^D8cR\FO'4YR 潈XhTkUC #CPf^P=hTӦWOUXee}|>)SV*+W_!*d]bKDe%{K/MY;+TąB-z1UC,~׿:HiL+WhLm9B~x.yy00T3!qEqYB 8ҵK/V4n4qqm7̀8sNIyCD*GlEI3Vu`q)/t_ \4N MbCs*Γcp3R&?}޵^[_|p3'tZ&:˖vU &s^:@E=餡^p\^Mc qiRUaĪgsJ-B8TU1~ OTiYH&pRU՞X ==.L .~瞋ZX FI e)J##O@!m詯0ժ8h6y[?,1siPNh~r㨜~@poP`ʕ&L?nR\yf|ŊkEGm KJю1u AlOP`IGzXP :t4iR2)'գ$ I (c遨R1 7HHY @BƆҥ]_Ә㨌2'~Y}ϐdM\;J].QZ&Y{,}' iIקm[6(55b&AW2(* Cp)(ĊGiHD!TrtO7W6s:JઊQ0m>袋B|/ܻ|N2 0,k(X=zjpLWT呺jNl[ (6Z< @TM =0k@$lSMa1R<+VKaD Wg4JeL\B_ˣ.cՇ∵ٟLzsryp=tMolVRa$?(@+atvu! ~#=Rı'>\n 9RmoUIJ[~ΦMnN5VvUVF꺢[K@\fl~i1K77̘qHW@TR+C$ IDAT,2ް[-#UԐv1ݤlg(©LM@ ]8 !EGsakio~[ <75Hga#"UU#Я.02^^ +R BgE_UV 18sVMS !0'M)Ӹ o?\czxΛn:xpE͍/B&Ŕޭiq6|~BKH6)%DJǧi=hZزv qUԋ8|K @ `k(}uΝ;ɒWWrٲ J ǝɰ| !49Cʫ[x!A?}-ش-!enY]񾾅6-XZJypƹ\ufKrwIRQQ[fYsskn Xv_v&2o%(R,ń \(Rbc[RB )J&\ ;/(h % "گ(mm#廝 Cu<à \LBB'@ۿ?AyxeK,wn4ؘJ-lKY2]%3>܍WW$~*EYmD%d*ʱvJ5ϥ>cm؝JٔEy(|sJLF8YTTjxm\t}ᅨi;:3u <>^Vn)G=Tl*hlEl? 
ɐlL@OGIgL"2f/@~ߪhը\Nn9m[6F,T=kB% #Y )֘m|`[Gs\f$ckS_Ѹ#Hzmg^e3ޮFAgb?8ξ!R~hh35ĭ&uo]! IDATH$&^v0R6#4^{q+HL̶-͎y?%"q07YU P^SmP6㌳JR,^R9sO= .\غbŅ~{_Źڨ ?7c9.%I t;7"JӤaL0|W{č3O/j65Pn^W:6r25 [Б7΃ ŵל_=cj|oFkC GR6H\fF{z]:.^rնo<_TO:b/ .m඀ \U{xk Vl7<7slc ׫8[/IH)I՞[7)]F7O֦GKz'b(렳',8D"ݍd_@Rhdx4ҿGT{f]OSU@(ST8 5iǘv椱UGaJXcLqFNcw;:Qs,~[khXK\5$Mr?T_h3 " M CZ,6>.@67`{~\p݋KdR!57::cɒhR\PD6cRjw/OMC(F2|Ӧ#@t A2 | `zٳ̛u}qb~C,l4"/6 Wz"fRR)g, mmh}u{?I!tH9fޭD ~[mw_y-\G,@ ՓN/_ՎX4f쏽dE}W[~/pDBw/^><<%"ż=[>Z A43QnSE-a ]tͲeJi"bV=_]HN U<_nO7wtYTo罩k|뭗C/fO|47|Ab3/;PF9B2}oVi[ו ~U%~"Jm-d*+i&Ue~[>?Ln@;( 'er.]') _{e-"B414jǹJ:uQF,閖{ (X0!`/p4@#FmȺu7-Z}[f-=B|^@.X^bCCCrԕ]]# `[;h9rz$[%1hu= g|5\ǁ_='Di>Eu9 aX\7#\{`n[B J  zx<Ç _9#t :x+eQE]1ceT/@ C-O+F!Wd00&Q81vSJ UKs9:{"gEt*l/p]Yb4 <TMhP1?  `;v2}><:ˇl0BHCh]rHABC(1uFϹ;{":Xric!>oX=Z yITroʗARI+6WdYN"3q2vi/N1 p'f=->[,b9Zo/8}/-gE.%E%dzKzkKNouÀ>X JiJR27=" ?Q<#t= Lj#]7q<Į7p#%6["owpz/),6p WGߣɃeoVSS'JB9L%O)&ho=LJ[GXxngC?mF{g!W>cfo?? .PT,twx ;2^WG̮R!Ooo&Vc呀~qܼ3>{ҧ6;_zwMdM‡F} vӇ^4k(MKdujWC;Gfgn}կgTj^+F JZOi'=·.R 0͊rͧqPsC9wm p)& Wӓ! f !'oݺr5>_6w1:q9<ߙ>D[|,/m9":duU .ij0! g%_uIꔂ䰯r=--_b^.d0lnpҞ榔l%{pF2ad¨T&P^:Xa߲.9m"W ]JN~OTT:z>!,*UqC=QۯV2t כaEmϪ=;ͥ'OَѨjO4e" ڨ)^sTC#0J' _fÆ@phMGBӄ6ZOO) ġ3<@1Rq%3-KCR\&#n.Gn]zy{]$$NɊ [Onjrr':lC+ 6"7<ܳ{L&:Ë/gw^-6V$ ,`%xFF~v**6^T5MyVs  447v\dx*PSCk./%[4^M{R[]|ޘ0ź56P&h!ׯꃥAǸ=@cvSf_fouI'>1JTg_ڛ+Wn/>08N>R?u?TJzU/{M+XqUoڕJ6F"KXC.)5|o(asutPCkes̎5BFF^Dz \#HM]]JJ hRN{uG@{/@@:p yv2^Цz,nE$ZRM`Oz;Gp@pLOA$t% !QYVB z=(RC#`UCqu@ 8<` A X1~*I%I3.`[x4 8q[y^2 *%p\7s!4$8ha 97B]IL^)#`$2JQU)1ABKuYך}3'Nj\!UMOEMEՒU)4X$I2<M|ެO:oR7ߔ'EzIf?C1"5FL5Y(#p( TiP! CE_e7\r%#'Bx:|,O<9Dg?Pq(vuw{;coXg]r._C4߈ERѹs?\WӴK} mʟve3W;;w~eÎA™ W]q}D0MQ(0 HGvvq6JS*zI5| 8V11PKC_V tS\x4XM:5ͦftڰ:3g4W87-*!V-j[Ҙ^3;eral F"C?[;3Lu7EJeZSNRi Iޭ9q[Uw N#qҦ!&OO< HVJjJkZJgKUgɒCQ@]&Z+'xv P^؝( KX"F:' uzsכ@uui)! 
IJ!•tM5BUwNաC]]vסCE+A5.riVzXi"'Jin*`ӔDQ d !(/~ѻc)ĥc7nwA Ζ`Ś%KrfM1=N=xN;X6I&R}}f9ݽI)D2u epd$*Rbdd^1 viutW@ھς`UۅޥiM: !RR@k_un]gDSb[dL+g?СqJ#i?ԷH[ .]z(s"'S~߮^f̓ʉ]Tɔ&Ě5nֽgˮ""\a/|.;uj8%jToܸoR-U\xTP,KӴ`0z o2J%K.~,8}M&we_2{4!h*d YܞN_I^w**sIvKDa9D4)h.3 D![h! :@@xH;@//֬y0rSX,[*ɺ^%JF`3`MpGGdzæ/k,Z]DWh4@XVe*H Mk<|p8zkWHYԁx !u߶mllN9=$`?` WҲ #+680\ | ʆͦ]nyF%bE=p-p#puH40̖ ?-WuWGF4- L+s{A`>f'54Y0j鞅5Uʎɽ1&ewUa^ꈓFqVn*~[B-Czn"p a1j@#1@%IQ[_ŵsu]˂@W~vABUGT50B"rJ|Hx=e`2| Xc(xy{ lا#GQ{5c-`m;nH)z!l/>x=@TR<arU֞j"2(0D<W="SAAL DD * !;zk+DaS{Z{;a)FY6oWvWnV}1Ƹ <  g~9D|[X3`b/ҿw޷ܿ?1GF}U)o}X%Y*^%mb*!`KU9k2>ٶw3g6ǣ1M9ǚFN|43Q(=ٶ].p7EB?7Q-D]]N?,y 7Ћ]ęmMgU

    ȠA2s!f Wq ^x>m<$*%bj.SPx l?0]{ǚ5_l9V:ub|}ddX45LoNk(IqK?q/_FS)b`SRC0 5f}`]m6} `P+To/|B;vTnu# ]c \uWрZQDE MzT8wF`o{'C #;KY %K9 d PL~`'Dzx'B\tbѾ>Wl^)#/# kbDI; R:jÆ'dulW]Lka,Ҋ%ϛF_]a,AA>{qpW|tι., %e'pmjPgIGT"1<hp]sRQGFxɚb9Rp\9޲{l[q2vRi3^gČsԑO VuPHe`pՕMd[4jo)(k|2 rRZk܉g1WdY;"h-%]&ntoQJ cGb4-^Ko}n;i^:o'^߳n/¯h_.87XWy/ j<# m6@lh-&]Qӄpݑ(JyxtG_bq Qg`ѽt8\^FczWWk6wt}^0kjjuݭ{C4,@#x}Z"D` ѥ& ء,]zY$ٴiH eO4 17^"ۓJЀ]U&9G `Em+Sز%⋗?u7BWT*J\g o C#z?bT:;#={wnmHJV<~ISSo@p:` @/P{wTL&w9)@ [z `fj)cXxARX:Ր7 vr༪Axxy>ZL1YU3 [` gC F2*=0g>r1ꑴіƨBvE}G0Ȃ 1̛i>[q IDAT  С45]ݷ?ӟ|]=D IegJ)r I}<0 $aBkuPXA1S P X@Ѐq_~S5'0bɬ\ (y`pp'3 `})4A 3x~zzX~yب/ÌpT)K\Or3sG"ڲ%}RW^'HMmukP뜖*(_\g Z*I:gVJ<ŶE+f3hlk׾;9vSߦm;4ʹ_nگodyo{ݬA+p% =|~?=rb/,^S~桇^tˋ>`/9z9h:qx}LZuw鮫 bx~>Eþht] PMP X3A)bO@#az,9mJlC9+0|sd̾RĩL]_}4nd? &8A{"7MӴ ٞLRq,SiWLwaCH=7]\u+a&p549=ߞXGV0.9y ,%Yĉl(̞trnHXdIs0VsϯI&/ӌ67{_;7/#- *űzJmbTS_ir]7{{թ[_qg,JYQIHMi 4E"ҋ/_$ՉyNG[Z8o"u+XB}=JE7A$F<`!=,˿?1wj*P*|~ebpR¡ZzٲEj-z}P{@-Xl\LIY#H P{9f?70;~xqײey 4~!pbl{bյ L&=R3彔 x@O Q P>*ˍh4d 8Duޏnۓ68x끯E L7s<=}]R$яq@ tp)B!X^7hǁۀ ( n,z (kYM@ :`0BhVVS6B@h((}$>AȲܼcc|KA7/vڭR &PM9tB)RjVSS)0?8ndšPô̘'rO Y U^pNrO|M=nRRºuDO~ĩ]]"Е]&ׯ3Cv>-4+u(15InRTRr uX{"9AOԡLBtl^Z?F;b"UhH1h:k-;'}ǝJU/%6 1pREO|W:#AciZ=;TS>RKrxx%HVS~,d_xKD^}K$Ua/ zk4DT-D lO.x 'wԖYofTDE0ϝEj#'rOԅLRPP#\sAo6 6@ :R5VE,10we\g2N8 ^JبjU(jFFtCf"eNi5a-2"~i79I2TS9clZy)c0 l݊?>΀bpcVJFD HҠ]b159.moio؄k|4@!.u|\~$/w<;qheVx"XnPS,^,M,3z\wI%C:YvYc!~4xinouMٻ>QU`wjfDf PNa%?n'V1q>&DxmەJswP1&H\LƃARJm%$:&"TyѢ~=]X*"xh3jڢR-. 
DI`{j&ٚUuu׭[MD{/UK9@J 2Ɖ #;16gw9gD J%X06D7z!`eonUepZNxvRdRO>T*xH| :<х4 \ hƁDJ9Mubѽ >T5\< L@D6wknnvUx ـz>D4i5EV/:lp}+{@K#ZsQcL&D֯xP(qE;Rn^shx9R\#P} N'fOo 7`I)@61¶(^4 *j}WR3^ e N@HR*5?-U}d7Za`zPe rˀtF*N씉q–%j@ӄ<ZEj !$IAxDY0Ɍ{B d*X\x^lM9yJ J>d漆 _}#|d9+WK^h`'N5X mnkHt_۶mf'Z'yl͂r.3b$i- X35\ZH1 V)wX--5yT)#" LNU$VLN&=+ H׃-@VfY īOo rJ'>t\i51; Zh* ]rfecbpJ^|G4z?w߽I&]R,NrWbg%KDw74ϟrJ&?mڥoչ!¼0IYI;\ MT)I{O(t@aR0?Շun ^Of,Tq\q`j姩.FDB7z6ݳӉ\UM+yj5s9kHCWGQ:dsK3e| `4~-j1uk,ILVOn]3ʟT<-mR8c57n: ;;(.垷`>_0vzJA+ 0DΤ'Ljqp%4D|ٞ^yeCTl 5: RS!ܕ_Xq{ѱY9MOET"%_c@vs˖c1慰u~3I~&j,}6y>}{\믿[>02n|5nu\aB^jRF4_ULb3UTJN 8oSIMkϸ KQiK2aO|ѿw̞}oN~ ">?^彶^acs_T LSmljf:M;PbRk=g)=op6V}W<ö5-zT Ъ=-`O*o޴׹ K zl{A+{h'cstEWX*JDՠ5 .@Dyګ-}t ^nw}Yl[ ;۔& ;Na\E ldLse6-ft-*@M+iKi%bH)dli)RQ@W?[(X 0"SR:ƶGj S)R٢FPΎ|KztSØŰ TdEX8Xxmr V 0 $Z:+W$nhBn/0Dm( PsC͞9s:zԤf3#XNڨ X @*AUx"p(@$`b\獑39]ݒK|Nai %9` $<0 S}!pmr1 x8xJI@," F̐s3|$9XϘGGrT<DOy1hK Xۭit)@0@DTdJb $< @ %@RI`953ƊE{].'l[ж5e1ZwG3lދ3 ؟Y.koNmd3H!pmG(m."JC@54M C/ub,cxK,boR']Wc6+ֹgw$g`#,pLqٟu'*KI'%.RSJwϞx%S)2Rw71C);Wg']@/XPw ql }W)q ikƅn67?68X$Z4.9E>lZQl8C X-zR}͜ Ixzɩ|5P7f0gJ߭q|jf~ݷlY;~m~ܓo6/d`SbmuOV(42Iwr aӴ.G4M?~.uyC)ϮokqXk|ѩ~ݖ{:+◽S!- TW:?2.p]T[uutXKJSq9ndoO)9ܹGYodo?iae oxїRzv P(zm˴ {_9ciƄxCJ4>\UHJ>DRH$#֭ζ{~m72}]#*V r@x!P]Jkv8Dzz}]]ug B(u])]Cx>T XQ.%Ni@R^ؙU ā6`5S!`Ep@MqcF [Ä/"5N@0ABy_lCz2 k0?Z[K[*EϿ] Y0xQ!/ 3U2J p&"X=C l >Q5dV r>31@ zP7FF촍P暇r<8{{K+eP-!<^|/M)Ԁx G]ۥW2A)B0@:1&j"fr5~׎ը@OL3b躮yh^GQ6mX'XK&3IͯɲL(*SVaccƨ݉DXҦ}ӾGS*c tCm>\u= 麞 n#c{ۻܸqlo$ØIiΦR~SY&ݷ?ykM-ʖhGDnNJΞR4#|m8 ˌ1cosΘRGif&`σ6[ߔzK]NOnх{z+z6A/?m{ʳ cH &ugc{-hH0ۇ\ѬK gn: \Phje:'K(bJK'Q1Q@#RFwȎv x%$P,h-x7~>)(N";$^d? 6_g).a Vp'/el) swHZ;y`@;;P``l%M ƩwOiQ=b0:H E@+.֌"ȥ89`|ך&Aoun㦧z=)mӢ7ih E"kϞRo˜\_p}+S\s%Q}+>wRM)Q[V͏eSER D ұ}xbٹ9pgo1"Hd0O 0ŋ/VlN~CQ ]-ҠYQo0(V{Tk,R%JzipfNQ h^< (TGQϛW& zw&u5B :~iʜw|֕+#[*6Qom42}XO/;8hZV4i*‘sD4N^W19;NnC*Ώ\pi넼5ۓ}Zs$֤V(.ַ+Z/~b p5(ٲeb]OMzy?DS-m=8;%k#n q'EJ] /b? 
pzh3ϼݒOhRf=w>~??Y9p7KWFo ODmp~2-!1ƉӼ\ކ*o:.akNp÷s݋)@TDxVn"P!Q^JFFViYDYBD\)z@[Srݺ<}3e_ˏw@ x5уNj@:mBUk`@k8IQ1RnblװD}0 JyD<MÂsޞjj+j=\˺,H9qȶ]"f 8.j$%{H7Ae(Ng:S))c)wTc Cer8xz+4 H-]0! 7PZ{Vvؠ_` k`3Th ?7 7Θ,€hfOgıP=˭@ `3cD@@# : xFP4`Th`a)M h1i a X/oǁl*.t:g`O!ZβM.a A\ӃUg-(LrD-R*E`RېJ3 ZPՄ8Df ׭ L|DoB{b*Q,)L3%`h0 ?e׼76؟Y1#z[K)jܯq=2K&}IhU*MۚV޲=`0hfP%jX$/.V}?Yɧ4}S2\]Jt{R]Wn8fw?X^&cuu§[Ƕaw H [)yC9_iy3θ쮻֭{s BbڊFml0F5/1]6(21؎CXꘓ\D=NW- Yvv#zi( _,E9> q3>is !01bO?ˡ2u1G#SHCUW[u)Yڵ9b ;eŊtsLQ㉍wgU5>2p< !=&8,л<~d2N& ͕zƞP댔bUJ/Eu\{YYgeTQ2Vt~9ߌ<7Ȣ+ha73>ЦM }>/; HB``)W~k<1k`\ݧv'^M.ܜYs'x&Q>,ޝ;W+W~0#_qM׾=tsׂ67?^[~Xm!9p]T*I)-˲,kF¾F2lg]v+oT ~ yRgGŖ`O4]NVi勵ץ`/qƭ{{ê|;?> 14c]cGZO;ήZkSL %D,H4H`H@KJL""" ( D:Jo T&L{ϮǞ=D<9yagvY]g")pƉ}Hqvc2e3{ c$dYI"1z|f! s:! x^E16n#Q0)hm@xY@  l}`{F#:C&dF]H=6Px%_bÜF<1#F"K!,bP:na^7A5̃&8j<>Xi%Jaxa QxFYJR.}_DCm  x QJv ;K/c!& [7? rgX@? FF~BPpp p )Q45J(ԀRPIP@0"pr Z tjkZWDJCpԗ)*yTF+w?on_ї;ٗT HY&Fc6Y'!:>)tBDŇ1&Dm8AfݢPjn&s;}[^eٟ~{m7r lwnVpGmY牷nKo\JyӨd=NH޶t;wܾ}`L ƲT V̾s؄m;վ%,x[zME09xbi$+$򊼦gm;ގW紺,R>g}mRWWǵcc 4\7Dݷ㟜Om˖Mn`r2`7nGZicBJ -euSu&eL)Dh+}(/|N G9|$A@"A$B(_֟>͞2ž+[EjGӴos@kku7Fȅ.[~G*{+vC:<|w`哏rf?3@sru8^nK]yWxų[֮;VR/BTT2 RQ^eAq#P挍<}i_|xw>W߿z/.~Jۦ/~ߍLǦ< C00@ellZwϺc3^5'РSf;7zx$gAL g(P1{c+-+2sQZFiܛοf[^<1RX2S k }|y*)ݺj*7v Gҋ=t!6ϲ\&1RêJ2c/'ңn2/$+ʖ~*ev|Du3 )ACu { ACw~gmcRR9N:v!'j.~-'b9D92r.+u'~vo姈gR䏎`GM5ʞ-,/w)7=v~CSfms~s2e=+uծ]?;n O]0u襡L!-,1D8!+sW### -Wvl"Z&Ըt 5&Q=%'R vqMb폧vm2ȸ7yް1Jrl_=VŘ/cYt4ze˗33={ߠ}oE]P(.ME-|x vwGWdrԶ/dS csR%gO?_E C cQ@)5J5:\ǙwlRi@[~%iD:]2<rVUx_2Y?WJro òVX 7@ >(6NX 9 >>A~| ʠUĂކ>jKa8 + B vQ( <ڴGGԌ m"| 845Ne,z+c At c7ہ)r &r:D2ݽnřǝy| ` J*ǚd;쀣Ha(ahhQEŕ'U+k%b:̸= `C3qa2J+"xzk^[uqUӼmTqP7ڻzْ'e"U# Pc2ڃ1xJ҉"(;n $$eTV j“a#yymxV*>uƧs#NܟIdY(b3/C '˱p=1w2ʒ7RLtWJw9>pP4 yVLHC5,JwS79:R Px/{8EfeM}}ÙL>hUŞ^2ŏǣP4xƺINRZO)R_Z]wG)C.6)5;Xv9wXwcrdSh$%N}JMMQQa|p8d`@QDhkGh{ NUK'Z[[.{nx` x\֤^.pA"ٌZ܂HCd! 
'59^}_u_~&b%==GG-PMbeke/m*Pm[ Dz_]VHVzl}{yo͛ gMuw`#  Ĉ(7\0AWO]~_ 4U2'/4}W)xV*o1=2K<x{ ,Gl2)"ho:d“us4YnSʹy޿Jg[ r,Qo˄ъ_T=Mͱ9!eoKc 3dVRJ4UUMQA{^8\o= ER"acBĈ~=\߃xl&؈Z>KTmN2#әEbi벌%V"|ŋ6l=χ mUxV$Y%v'MW뷮]{~ mo!39Wݶ-ũ_(sΒs2PZIunSڼڴd2  sJ0<_6 JEƬY_Ag`M(FcBlQ>N$Ο@l_ WL9>_Rh 6X 2x>{J&`; IDATK!u1^."JS`.'Q! ?%p<\_t/!8x VZV]G--nS?`.ԃQ D<ˊ)z{5}:{a͚˗30vww2yotn==?f8_HoPyAid3O&uA a,BZ !!RRC8z 6p-b oB'jL(`6TA0XBvŀ6wP( e= nHʟ7Fk0rt$9#BtP o"!U0 f8ɌfB!{$L^?[dmp8lJOT@Uɍ2`o{^_{m7q'; ;켫 6ARRHZ:ϘoGA|jÆ=Tt\ϬߠdZjF'Pb\D(65Y[JJ]I xJfSZ,4XX"TYX/jkw߽SJqwt;;Ўz3%#MleŲp!\.ڔh#WgMMEf#{V˗߳~oy! @LZLΊ Z9a/}BVǴ<1V'zR#N/?I΋9e?}&>!~ֿ}4i};eBrc΀?8(2剈&a,:dyAN?xʕ}wVs.pv޾ȿ~1K{R_ k< d2BX,_g]g2wf2<r J5,+!Ӧ8DL&B> ,+VQq?DPD֯+ FGstW{{f5.cʕ:~ _9]?Sq1R(7PD ? G4 RMp1 ӾLׯ̞]/&:XW%_ + WC{)NumPV624`AvwH #i0ͲN_"s&(([HҐZ`x@R?5fC:rAPY-FSJNUjA3aF״lIƪҒb\gR~nǂ֒#f;lc0+;SJN7 ֔0"`|Yz_J;+jR B[$][? r\Dm0w{s,ydeYqĊ=;qf ETwV|"tqQ_)H|+^M"@ ){,ls[/twj?}գ.p =m7qǚir˿=}Ч{ώ:Ft~w#-cb,m;6e ۿfxń=i3mɭSi\§ Aî]VXU60…Ka"0FcLSSe#_. t<0Or Ξ.ňAq&+wR,~_HMiaWk6t&gT-_p9vqZ^|y{dhoŋsO-\iqfܿ%@y.m:65..cI4Ӧuǟvz*ǧf͊+/ʟ1CiBٛ7˖霛XdKI_ofh]*IeS`ٔLdnnhxm׾}N/xh,ߟ@=M}ZkcLWs9v]׳12lj^&oě7kފF" nw/~!lx9Z(RA(~S+M}A_1lBRC=\oXQTL]ph3`x`YByy6zAk=aRJ5E"/:N_zdFGQJUUBaeX\d5H)zޤ;:6t5ސ"iW}KA FWvJt=(5b"~pARj 8a8 Jjj#XuS&\S*@doQtkƘqG*F%!yo$cp"ސg6wl2eQ hKhBB̄j8_[8̈Bږ}Dz C鼗WJ0o9h`n8 %5A:G}p뱮3Z`6 ѝחrVܱb`@9g<xӶ7|h<|f;vӿ/~|[*Ko[K퍍tI w}`.f"BFGm ?Q(J۶?q2z wR~eloFCZyj|M+R >8]Y1}ɫdjd$ŪTة mkQN55.qQY9p⢆3hjJ9kHE ቭ ө ʆL+WƄL]g[Z}P XWW>:+.h<+8|GUcӑګ,^8 ijp.<0J* |^ʆW6;k폌 9 /m\%hK]4_ uwm-:J¤Љ)DK ^!_f\xPKKD PlnrfH>O\ik69m)mL2()HzAЪ\r1oŧ`|.# hn WT9=+9^-h:[*" RG%^|q#d-Kfd*㙿葡5T=`'UD*pFU>gvg_~|jjtCXGw77;xǥ>ަβ\::S/ 8 կ}dIJ,:z=Pƫ@.z37z].NI~?yAq*.kn ] ΍{fK躣kʥSO)Zb)H*RiڦP#yjT9nxzۗy3fEn_A]ͪU|Ӝy&G:(+$"hG[{^)8'Z櫑;K}Tjoܳ| 4ם"kr[>R3EcO~=/U8!vjlr*;TyvZkIZ$wS]E\rS<)Sf޾"eF[6_tQW 78px2pނ~:v4fO }8K0Ƽ{D>~o1R,r#DĘB,P"fʔC7}?ysFGk{{*"_drRsKpyî_^~r2xr޼J]yƤJ)÷'u1Wv>JAu6trȉ#r1ܩAD&#pRB98?ctiZW%qMxaPyHA L,+yq6~y\ȖmƅTVYiA tsF9Yr9ƘZboZ¶%ifw+)2)mþf۞v]76¿)5[)}<3.Q(KED7na-^,!a2D&<v7}#7;0MۂrNPvy}gpc lqQ@$ɑ#_'zHT0X*!>괮 
^b3c]]htݷ|7_IITT+H,KoYsM`*E `p&9*6;:vT ‰PjH#2tSN/<ӿ}gVzr3c~u"RUiu<$hR]EO?`<q b=$h5~:xk?"]LsLs tsY¹Af ?rCj9OJc {:ObWT؂ #dt8 x`Aڊ*5>8u jw*={窡&QР9r?](M옶1YʮUC$lDu {JT?ܤ *Ms;*޸S[p.XM*ӐڀP(ìIm.%ή oߖ;_kk azzڸw5kKTߒJgYKg9,̡Tc¨P IDAT|~c]݊*Bһk}衍DX,Ͽbx])yUOg2 #@:>}ƄYd2*uRu鵬~ސ>u1 *Q"ohL8ZM"}`}8V)Gd;k)vPR0d6k2ۮnj|xy7 r,EM0R0HiԺEc w0aWsP%EPaMINs! X0P:1ZR@*"^ߕ0p\ S}\2H͛77oڵуDaĈc0yA$2lv+2ܬNbM &p0CevrסSVa]5:rb$ɪldDvFLg_31k bYVbJbwnkם ww>qtD{Ysq KSOA86cXK*G)[?V C~(/Q1F`[tx|CCl)z%KF`,:Q~2%G BK[*X@WPRhq8C'tY]JxRV}u wa胲dNa([mxÜb(;HZP  -Sᐠ0\ d,:GdĄRJU)XgmR?ˡ{yMlS'[B!{AMxo?Póm蛰q\nZi7OM~꺙 & }7μD~c)i[Ln/h'OՓW\zK(_v B'Cg>}o{:SP!)ت5b߂~4RJ9Wygոk[VǜQ)W/]rK1sc޴_7의Y]Bmm@-^~чсiou0ŀ把gBDU-RhjWug A9wv9[Ee2tui!cjѽ((Qm<aż}/[vjeEᎨms\;:ll JuWcBح+D!(4IPc[厳]\z(a0 dOpvꝑ~`)_ >iJ7>s}3x.B{ٲ[7:7vȼ*V[Uf?|RssIՌ(,^܋SbT 2~S^֦B1Eڼٌg)1!PJ9TNe)Z J㎎b+F1#J!`D +^F  G+ʆ -9efPهc~)('$x\ XXjc{Vݹa_Ϗ#~U*[O睜xb3W8hS9Ō ۣ6(T!>Ϧz 3jwY=xsDa,\vVMP}y玚t$boj: N*J:P҄tof…ݮ%"ɾd.O-b(yW KشYg w=U> %m*G+c)s/zz|XZ,~N?0sVrȇɵ⏊ U5yJ(3nޞkCkuW"{X{z2Qڻ~wTDϪ,$?s1ym Ag(rʫ͍*0qPʴ4g/ϲ[aĶɾ+6m@䇰,ˀa(nYkY{` ,xf64'EZzwhsTrZjL.*8 ,0RhTX`mH7BΪ /oZ pp0/{ :*GBjrKHu*J p+̇ +JՋU\X */44\kTPΰ)v !ֶ|Etc.2J2|RJ9 KRn˲m[f۷XHr!/z1 )6 uw60+6oyZe™ߚ;w)KPX!5&Oе 6XU۫I7qqw ]'Du<ox}+; bd8a&L,b>Ѷg*Y#/,=^Ķ2#PBqřrد&Wvl.~jGvZsvP/~ T|z6UH / &g'PS xB,GBë00vWFkv/"($- ۱[Ȇ`CfJQvYܺ} _& GUx "M F LӘE*Eʶw 50s$QdTk?Os '3*%J2k5ݳWGfƱ'Ff;Mz#nM:xSgSmǶ}z@%Qm(+71goo 9vsqԛ8iT(#rX8/}VVZ3CɽR׭}+cccPu攤brUj=ի\@)T߼SO٭t(UWP6J,,f9{2=ev=DvI1--^Y:tRz3^-.~Lh&~8Ň~!V_kGiU1 B)Ye^jR6&Y-C+W`ZgJ;T"!*N$&ֹ];.xW,[_=\K@WFo`{ʄUeH13=?xX.YeAֹ[Mž<<1C5Kp)t3ҀV=m'gw:s.tm„ jxc}x~sXNGDf#;$IOkˎL}^)yq3255m=,>ôu215d2N?bw|1>mrI^:R?wi *Dy/IFzV'wR* 6JY'U~Y{WaJmx mHvԼ/ 'a'5x;t)BKlTD[ E wi XXRΏ,2߾\jk/{kf᭸3>&{3a7i͚}rY+Vz͚ˏ= yuxOs7lX|uѢ5k2ڽP kMO_-D*fYz{D7o_Lf ڶ DKE ˪1Hc.L&ZZJUu#\+57(D" p 1h8Pqi9Om;v\g *Ł0<,aJn'Wqc$tzJ A3 X:|Yj!,6 9{ WA,M 9nwymt`t«)2s3|J^0)2cUv9TIZHD[ZʛBUpzzR?VyWlwdU?߽o|UJ)&-- \uٖKo2?_d;,NR80ͰD4RgY VH(PK"Q:t}u;zcYV%pXC.W5jD})]cdƞOj\ʢI3X50Q܆dsW?,Xpr*kڪ׫V`eJpO[q+`RNc&\F>XEŕvrqRJ);Au+"9%bV;2Kx~! 
E"\ҌaaФM>:+yBX1@>BA$0qxD*T "y6{o/3JB1 e:( 82f8^o#u{R Pق)ѡdf-^ GFwĕ7#=ITbk o +Q(<ϳ,JrEFVJ#E,M =e[_"dm^ؾ(Uv3_anX5r g"b_7H"QkUT;#&T1Mnv!ƚjbdp(ONAǙN$rnQSL-~Red'_27PsV)[G~]M<]tAu;lUbٰ_q<[ɽmt"_`lVj\B; ٶl?1}G.wr:Yʼn9GtIO$,~01. ڕzk"N4U9Ζ耹J;%U|Ô\L,_ўWE"##%\:;gU_?Su{ {xY~PEѵ qE649ygK{$Y~elO0Z[?.muR_^y`kDGv9 Cj>$o'&iTܡ ^la_-FBϜzJؤ/O|e1ۙTaF)%HGW$_WQ'b[6a T7ۋ?蠽ߨGGyG3ǃD"ӣR '8j55UU3 - CY}539bE F bl܂?T% %+<Փet"w%~Wfe„"Ĝ m'ʲycY cN^LٛխtQJbXUgp)1}dSS=54{~uȼsߊSAY&74ϙ!IGg7y12JEBZQJq|g0 -z .2ƴ5rt?F3e`&D p7~nrtzٲ{׮,o>{ɒᭌJ뺮eY555BnӞ~g{L[XKK'{ `Yx"p1匀4]ps b) vlv~l8 J J"0U51p睾_0fj/(eȍW%ep& {CRU"iX4f^hFF-K$.H@ F>ao +a% R } \_W]߅,ʁZg,Xs]~9G}+a^ŕn?"C.yc|(rwof}}}9yɒd9$Y[G(Ei;pbM13AYمi%v$eaZk!qhI Pۍ XG55!6C (UP(IV s!OCv6P~֎\5NڪA <{2F3˗ỦT'mK\V%m[B[(hhᡇ“>D,P?nڜY?=J)u'yX+B 3ƌ{~$|ڵoey"hKb7rWM}NSljNUԺ RgmVy۲jfycdt&32b,bNVWz{r ֬A_l?& IDAT\NWw\$T -#y w]' ,YW*wv&RE5f`P62jx}pGgmgas$ wӗ'sahf&Z[mw#rd2$ sbԊz"ʱ,ߟ-9&Ԥ] J8.)2:C9]ᔕ$ mo3@E <;8$UhUJ-E- `k(u#\L_9?l4x7n|aÆͅB1bYV#kS5VF^ O4EG:ᆻ׬( ƞEk#/]zM}}ㄎ ٘௃oܻmp__m[}}d6[yʼnmw6.)ضQ)i 0"HgW/& PtpgRNjEDvh0sR1pxldrQd':x^=T+ +}_ß P30>R?/^qk) \ +BX81$Le7l"p (yp{Eb(v_ upQU|Rp>6տeC/Yri{-k~N"XD- ::A;qe݌=˿Rl;,-J)PVZ˖[>O9oaI!&_tċ&קR--gv}}cccЎB]Q^>;b࠘YH\$]F*hBƹTGviJ{s*q%x7Nc(EVAU KH]TC 6 4DKPBNU!R^c?tݓBPV +'77?hݨKOeCe٩j7z1F6 GA+h47L>7if/EZҎ0p#Z_֊%[lXp @7E"%$uH%cDChMM{A`TTEŀrzL mkwE0:̡`ʻ(TPx`|1~ &rލ8BC%V4XY.؏XFKQ%1a~bvB%ӛ/So痿9VE+ 印Zk'۔N<Ӧ&jYݖ+Vy hc$ jk,GlgYnm6̥˗?vg>xc\^x}ZT*[JE B8jmZ;_XihnNlnn ;")zca/^p۔J#˗?6=dV2tR$9ה}3=j@@b%c uܷ(]!Llx{U*UT9 y$\E[<ʦ6odiɦz`'0=J)e z7(fSHVE頮^DRa&3tv>zA{{ViL$7khcI5.uO[pVz&;QI͵h0fSۄ,z|E~V4,KPLu{tB *FtcpZ>^jTLԔVJU7^WֽR#k]oFصƈ1h\M EFr"lH {L$`6b6J.tkS]| H}3_y5GGsZzx8*qS[ZN;M{I\PHLdF|2-Hv%{{峲ڍD@`J߾ӛ[ꋾj\ W֍F6ARR9)7<{5J~th}B$YJޛZ~8`**q?O:c >x 8ZdhVhQҗEE 9ءd==5[Q&ڤ 'Gw節^ۻzt6>p |ұgnۉ$33әYE=h$-O+UТu[d{{m[-A/Xűrt~`ڣDp9\`Y0<0D$!PIYփa%h3uaHRИ|%wI2k-Pn☄9 >fsvqK rJ8ȭ" a> ^(j뵾Y%ATb_nR UlR j 4fhHaXv KnwakjLP9ʲRHѺ:x. 
@AEF>J==e5˖]zɟ}U3aG&H:7RW>jy ՅxkZBLEZ`͠C*8EhHoRהQlm5THr=\])TLLr{< 0g7ÂjD5cDϲ H#:oy}v ֞x/oz9N)TUWMb˛qC!9QuJk2L$5r,u%/R zfIsbqg1\SYFE*%/"{+ORL Spj%ZR@rR&llkEڲ d[21m (! 둔.&zEi U8Y+ *$- o@=E)E-HJ]Q4NMȂ#qٓ[`f9 @&P5iٖ,B '}Wp%P^"K~俖-fqUUU- eePq5XoR##nDSEJ75Ǟm+g4 šq=I_'\#hQ};n}+NjO͋i͛{:&믿ox5J5mNjd8=m ۆ^9{ԬT[ sƝw?vDzRit$xq⋛r9z֭IJVnZG"47谶ū􋗪chbdC쏇=ʌ+v+V iϲ$ }RUMxu͚\NA˖}b{=:u46ضJ&Լ9g៚OE 1h{N ̓?3,)`XqêUk蠶7iZuv>}nL16BK8;> 亞Ĩ91¾hf"u1S3Le J3[JL C5JL+qǙ5k%Wa[|c ˆVRW炙 %H1`]XS9vtĠHEtcUGس;;UsSeSqT%Һs::8Д h ٔBmhIFKW؋J*G@$F5;/UolQWa-#R:sg7;: "'X,oBp#瞻k2^:J(;ZFv[+("4OfE1RH.y%S-bS)ms3ΰzfp`0ٔ_:?1=dJs7֖Bi;ӣ0oFF'$lHlmoltTPVSy+t<0Olvq+/M~v۶mmMꔹ*<_iliR47~2(VqʩEOkoU>vKiu*ULQ67$o[nx^ݯo0Ju IJem2K VXWO{h$6n\w/kN&ЀtkhsJ; kE?XN͛ߺuRR̝'Sw۾&[vOSu'?|.vW\Qc'xE `d2P ӱr_ OK?y=u|RryyR) {Uʍ<{Z$7ch*@km W"xIJjte27hu)\FH1op%\`Y16w%W"SҥsT,<;1 JVBYy0 o8H0>JJ"u1gWj ۴- (;Sp1M;e,{{1f@V*4ƁppnN) NWjH$EePU@I 9sxŊ_MSS9E"Ԑm8? K,VD G1 $4C$ rM}U)5==}#٬W(Op0l2ķ^B5c!u2AAEMb/稝NL?uh|8"Nd 2*%کY?p _ұcofP~`,gcYyB~zЊYC7iٙI zk IeY1~蛈8#fbWbBP͉I;ٞq)Ɗeӄl uF=!i8Lsf6Hó/l#)@x9LE"-ȅ>x9p4~uiof(peH8GCe=ơ V/-8jUAN;B^9V I>F C PW)TII n L'9"Ԍ,Gb؞B3f/%Dr#P'W$<7wd-A \Ϋ{uy k6)эhݾo{zb뀩W1uuRąZo}Xҥwil\㋏O6 ̟hODy[򏏎)Xz#;c+^:qiTXۯ{'7;>U(!e14Z+z61VJp,-<:Zxe!S,W^S\j;1AO v2#QUC? 
{cSO[[íA <5;rX?'Tqd5=5;ڙΌ^ݪ&S ey\1T{X0uWT۞'|UwmȪ{:K&ASk@qx}y5X!bU_[?qQ2Z?}oz{hFDM62޶jvD2= nKicٕLgVŋQ}) muc#+V'lHX#;»JNƍCӷG"Z?>bXITݚYjO1m??ĎɱƉ8(5J;5}^0,uHq΃gY[GggԲ^ C*Ww 0 ˏrd2iY//O=ԏ~QhN;^_!200ֺ̨6r>3šБHDZZc{N5k 7١!@Tm1?zR*vu7^jP1U{@[<9~8v ZX.>j!0& wcy^Ed*~߿>6< +)P+6L̚>f} ӑH琉iD`4 |ʘ (wp0&8N~j:9n*lvaxP#BĜA߀l@{-eS+s睿O&|Xl~k];hYvc]ttsxnڲrL<\_kcc[:Bv ΎMM_W6 eАx#_ǓTfZ{9ܝ~Jf7R <y/p?9%D$K4xn _clWL7_z' _M^< Bj8v)C[GZ:J6A xgwne2 FraE7<|_E@/or@DֿϚTGtԊƦcm/m/6Pc<) GբxL _2B+x G6 /+那a[`-8Pɦ_l BxdYazPSXL RD(VZ?#*mIVEQ]J+ J6 `@&< GT։Rޅy(PHO1+fK- ase*B:GG=1BJ7OL&K䊿+v/6m<}2Zi[/F"?33# 4Ls MMQ79pgV >TrYTl -?*y*{t'SwP14W[3"h:aٕSsz睯Yt#g5xa-_ͯA4q'<= ܂@8<_{Huel:𕤍fF"aqVIfKBao;xu Gk"kR^%e!f7ʜ&'۝r9*&a2hB˪7nJ#|7fD *BK.[ Nn+G o<&ٶyf"zQRaOnfHh[Xc #"fi`R3Jux-ZBP0m㎋D"uuuHc?w#l6-؇.x٦Mz]JT \ \ākˁ!@3)\k1P7S'¨ھN_81"]WXx.2/Kuv8:fbQ`7Fqr Ԧvںub#g˥tzUϻB*8b28έ3)REϓrjL5$hmm,Y{5VvknTkAU>L p( A \&Mj%&%TUE?!໠ E%|f?ۀtqP/*~!Ea4S'\um@ !|b&0?|ɇe4B_szC$GX =ѽ;cem۳s5 @YefFX:HsNM2 |hC^)EFe>qÌd@'Z>+X ):2L b*)c}E^ Ps| DW 2)V*Wf.! x>Rxdx'욉W3UgYNЏK6?= [ZfxVzWl۳̓C?p0vW|x.\ 0ȻD.&4[CBeXsךF<<\džCnxfҸ߾wlN~_L$[)z,tnKPx^{זNV#PBn;RPJu'M3~plX_φn<;2^fR6?Ӏ#OxA4"4&58f)e "(x{ _IUr>D̂]G<.#Ǚ6A׵Lfʶ߻yG4u\R9xyU4=-&e ~4DMb<\"נJ&݃\0>>'BE{z%DALJyVʺu>KEu#˗#kFRbd$Ϧhenҹ=- U!>97ɶvvP6 RPpρsJwE&A14L?[NՙTR[^Mf2EMRQQ.+ez͊Q/QWgDO}tM^8Y*SSN,Rn˧gi ?_8\a{aD><14w\ ݨ?_7E `C{+olcA6=$\A1TJ@h cܧx(•;\Ƣѧ:TjlC,Qp]yDo߾_}…\. 
|>fS444*qV===%Mk͚_n;5E=ebbMȽ{GH(Ƹa4Caqz䑬eEsC7Rhp֭;{kwT,`_T yEi ~!ضegJ!` d\,M5ۅ(+5Ā$}"1/04٭F X[K] @+0XyhR6~Zq1 P^b`h~ @=H~ Jp=\rCrndoOW@HXV{B|U"D--+RvGt=}橩"`> ~| nlkbqc1ǁum{vm_z_ MNfK/soEFb@`Apo5.Ap+v9=W5)ߧ=Fb` d_u@`,C"t V=kQ%@"؎qO  0UB> @ױURf2<{vylMn /yukIq1 8vN.@?&yi oIʻR 6N1< @@|@W2`'- 4 GB Tn@  (ԈU\`/H'V<+l5P+i'N ֺ[%F P5JQC3R ҈0PY5^K-C}x8mE~FX*+guyxb|Ƕֿւ`C] o@۩#h*` 4 4ÖipX,ӳ:*<$7M+Z'V~7_sP `nx=Md26ܶk+ރ#.-CezVG~]/O߿N\lL)\$d__H%$N|^y4 J蠽nͪ[VQws) ]UߡuT#id^z)Gu( p2iDxnĿ=Fs3 H,K3 Xrp]YhkFYad ^1G1bㆼ0xo[%wN Wtơhe2`k{K>;uz ڌBۺm"ڭQF#5ͫ)*5x| ]Im7e!xރoۖ_LuuU /^y92Zqa*Zo=2YEΚ}C^0}̿ VxDiF;R\( `RL)b\kPDyN%MPhჰ-tك a,Ո$=YLwJ:H#U: uE[l5_zR]G2yڵ k\|uY7lSN>̕>ۭ776A-Ɏ+ӲxÂzP0,rñajERrMOJkηX_]qaGhS<26wSZe+8.t)JAJ1P7֏hiqu:PٶO|Z+oFm?Y檆ߟ29Iu5 xn& J̝ TmZ4T0Tm\Scs/\\ z4[ɑfЁbqȹ\W^k]D-˩T< 3&LJRJ?^ tI٘N?tM371W*<\5m۩TV*ƀ|a!-hijwtYJ~w 4LdcaF H/MEO.WT ?鎺^˺` _}hG^oge;mïN;kՓco-b==b1ت?t׸X,Kiiҧ_Y.F]#MŒcM/ܵ:΂tu^Ճ+V6Z uuFJmjϏ{o0:/l9N ;W:{5B(R ̚Q$lTRr`IB(2AV{л" Mj®`O׹V] w_,s^P;S6|)PVQvL48Tr0uT{6py"AAY," p`pON{,$R"(3[B~:ZdTXӱ|柜9`vDĘv⦳|XuBMt=rOWs_FMٝr<bMM{نt\|1Qru:a| GG_\LvuRwy(Zc6ܽyxGX(RGܓn+;Zʇ%E?Ò}18]ĸ䃰њƒ"ݫh* (-;x9rw1`p.06YyVTe_byB Gs*þ}c茇Pq|D.mtG_4SWW?^VTbwpu-BF>/ı1{;Њ7 &q9(B<4{No^qw_u6O(83Ueb1 uj5L"ஸ0k:ǚw&nMuVE4 ؚ&rB$`~ϝa|[ !7vr+R!mEAS$4tXg%EםS-OFGZ;ft(+,DWa2>|Эh|a!^%Ϙ Hܾ>YqZw&51tty`nYH3:Ef'A 6_ٚurg1yԇeA*m~{?ScMYBhUյhRp &x=ܩ>1ڻw׿b;<}駷nnE3vzQuϻh#`a?Xnw'B%wy")[HT;y_"4͏i/{ؚsf<>:R7vuuOLcYɜ6 l5Ӵ!x".240LSٟ{kJz HI`Q)}tt.#_SkNjfFoq_I)f`3`qjMʊϹ5cUgfwU* D;re=H$\5Sh?ʀP:םHd

    k(G@ 4K@qyMEbHx@ Xs4 Hj%ɒX2$Wk1 = tFFLa xdZnA0Svdϝ1 NutY{L܁7ۍ:o/:aN-@TQ]M/c!.mkS{s7~׎oe#R} #VDS)[Q2"B}$eP# s!/E.[NסxٜY\2w&I5k=u\嬻' XƨO$͂$90,=4=g}N ]b2^xi(dȱ22Nm83fMe7mGM{DB6c//uSׄ;` 2qύ:nvt?9S1k]7Hgw*NǓxb#N:Ugw~.r|LVZo]C54D6XQ=G7?`쳽XM&'\GFo\vY! )fX>o=_(JyXՈiP=O#dUY@.`RҪMTx9dVSnҧjڈL'ⷬVZ)<'??r5kgox/:=֪nBGGY:4F9;r0K/:$R{znN$6u еpaO4ݺ[\+6HkɫwlQaN:[/~XTRʶK6A$b\&4 ~\|T*0Xk@WJa](Szh_B=0BA߂mReڋV} w%1zsܰ l0H4od:Y4D ہ Xʪi! "O> hBnȵ36x7DBhNYv=[1i4]_Sck6T{_^)`)_E4gvq8M A4:W=ʕQ91^`x(P$ bf@}Nd<0 nN&3UHT2_Ӏ-I 5f3}ΤJDҦon2O3J*>*et%P\Off)4ޝ?96׽ҡ_TՈYj#2TTF+qW{mn9v׬X:L_ ';fC(KPTnlֽ} Ŋihb:PlAvAؔ35[L-T^,[v\[_?{Y&{^T–-5k:Ծ~N-j9%إ䒑:bꤓ:wlSStHRj߻W\n;S }X,I'5p& WnӠ3Yu=Ѐy[ǐŒ ::+'֬]_5Nh~ ]l`Idե);Ncan!qG&EXu74=ykXg&fhT `y8}z1iCd/uxcDϨnp_=46 ;zCSOAקu4 ccyHבN;FMǀÌX"6>S'KdL,͏H :RnMPrAsI<bȱWVLb>B܆uWڛ:'p'+uV]Y?v ?_M}9hvGpqH+.J;=-{Ȋo'E/ CVpo8J8e ׬X9 S1ӵ( QQ\RR(o4F2K-aM:_kkjm!"V xO?_xQaцH)WwvJ8Zj$7`A>4}GCXJᅗ!4AD4^aB cs? FB^"b'r麊2{6YQL%<'tRW?/!6,zkkpi7׀2V,Fa<Ĺ$-XD1V-B:; ^z/wE҉CבGv=ZRe(TWV]8|D]ʎCg fH J03WUOT V3%t<@ٳO!"fϗByj7i` Pby@zݻ/40hJtu64 ;pOR {> &99+zc ̃R ,&Vg/6~Yz]_+kqݯ [w_wMRy 49 "!д!j4@!4@> 7o.~;?NccŢ82ۚfnB) ឞdrZi RMaoR`*)U(XU}P>|0@x 2 tE` 0jÁ~)B0)SY(Ue\ Xkރ T< ^+_7ϪDUwHFKڔٻn9L@3P-ףqRx6/*Q NiQ?{{ ]HWVsIU=`d W!v,"&UVx2 5BHC?B!E.!4L۶\piA x@%.5Tu}G0x@ &ԄB=CסjHk"H6U'*jNGH!lT_1!i8 SL#Mj8۳l0!¨)puC8٠& PV}Iv>\V`e筴|֖[<}Jזi'ƞZ(@%bAT)zԉfwyg*mhh]e4 ӊF7ϩ"g-Ƿ(xGyB̊+]-XFCY=\ȣhF).*/G&ӾӳC:lӗA@3`)K6wN**JlOƋ.'w_gM{yw[[]~12/dlX~}m :O,[q9g!36b߻S7]=.'|F 9a޽o5%0H<]fc5BAMk~+REƣho`y]rFWj;Z )F UʳDq8ܣdyq;F :,HfJd< 2 WKU/ݓbsD{=g"*Y)JO?#}Ω3LSAۯT&'{Q N?Ֆ/ 8@D(AqIbg?BbT*>Oz@IJ#ψZVDߞ7AVA9xELQ$&4֘EQRT (X0XLq^f}xvJK"90^z 7'ܥi7 3R)Ȇ_rtyCdinIyϺ;ol"gv23ɮo'<r;p]N:S__?'Q-\N>wy祗^ZEҥK߭K;ʆի?s_ٻ78|4 ^}K0sn4 # ɬ*--?]hС)Fdf @p5p pPw<:hǓW6qX{uwRD%p4.Djuґ%;:"EEH$ F0~PVVNe.`0A*rQ,o)~oӁ2 f@!lsA' 1+SVЁmo'P@%HfFAF&`$ 1+t?*q5 HSA-cԊ`){c3 X@2aTuid&M+\f$ɉ)H`?88(2 8(L0LeP ,Ĉ}[>UQ!{ a )2_-'LDugE"gsQ[#+%/i^+kfm!(D5dlt` @0 ^z#^#NkU2f0)Յr4_7I9 *=IaOE@XR9a'g(F DjPPc,#+x~cc=#Oma<j@PJ q:fE`0eFo۶7I:\ݵkuݶm!D.3 ?ns~%x/DXl 7}}gs177^a_`ׂTPw̧mTZ1c 
e;ϩҘ0֢})EQE~yG7}k#{zì܎7s Q^4$\22p"=J"ZbsjJ*V,qK:Q6}rT@9/ɝ˖F;dɁ*GOې~.Pdjz@~nzQY ZacñX&u iېDCCᎳ/^]MϤ|ȧ v9Oy+?•eBR=Ê2ȒTX;o(  #O iLitF& >8&CKcWKJr\M94o៑ NOϚ5222ڗRySDc  ꠪R: =txl(Gf@[u##PU*)a@c#V!㇗~tXQUuvbo7ԋFOVcAu9Wr(( QE)nݻ-,! VF@Q<j G+EֈVXmua$9SKUsRT)%X5v :%YXɺvڎn_>[*<>g]]>nDRH-¼ s5~0Y]oYӓJ$ӏs97yz9 [/h`*ӘebWTXP#KKHU]ZO*cn4N+FS4.U- _ ks9=\^XfmxqKTl\'M % E  RhVe ц yV`$,JƇ!f0rGxt=bh|`IAvu3K1\ʂeҦck>ѣdxS8>9}P(sw,]rrYVtHͩ__=ݻ7sJN7KUG7߼j*ﰲjժ.̻7߼KoWZox7no.ρcqRx+|W;00Jm)-]Qqݔ)=P IDATaq;ߴ@`I2/{3ƉRʒ\.広On~qڤD[ 8 t.`:0p@+sd @"I&/j>vB6؇ P{{2D^3{?Q4[KC|>?QO'ޤ(/)Dm_Sl3)V! =u.!`#8`?)^wՀD歟|笳n7y8 cz?ߪrojsxhG&BTdj`"ѕys< o)N@v`*'qxע \B`h+y^=oX U@@+ܵn(qLW:07AyD+li9RUS> *@Ѐ, 6&Lr\P 8 H2(eVncZqAq׹K}968x/pa]`3BZV?_I5m<?CO<@UU#?.%K-ছn>8㌯ D;ӽ>7]‰ϔ}Bae;*zzzKDTݿw83&iS0~ǎN14aLI2Sp"n". 1TmxzWI>ݯHɾdveŃ)ت(KCKP&3lt.5.ΰ0H@ncFij\ i^2}>)1*0N\}#] '/(RX3'uK\ IH"reu1oO<>zF$׮ݪi;N(dc? 睟d xs84GQ 9 GDy955IuIW~ +'@:L&/Ol;rlaJi{QSO6ǘ I{R'Y2.]03"RV__8r":^?u")|1gN_E82'"Prr,.vԗ]7Qm(c1l@c3.lл{i欚=4qJO>=:;BVq7gֵď0X-5L-K/zp _{viDs`qIk[jk$aDFz5Enpߗ ^a?rWE 1?PQ\mzxɒ=7++Tq<(X?`{~,$_R>i~@y܃ܰaWB,|,6p/pZh `'P建])1[<@#eeɎ1FԺM[. `Y>op+U}XJrc" 1x|Ӏ+?q.x:Z`0 <hD={Cc>`\;n 2pΈ e9*!`0@pPW/~gȇO9%p[A55H00O=vΜ?'4-ML f(NpYIЍ *p!  
A:D(ΗI (ǘ+a0q}&}48LjmJbǍQJ;e[ J!S}ǵYE\ <"q04G)S|'ğ>k"Ii'%fi% p $(<h\-X=v]!|TNp8@ \HBQ҈*'L_x¥/]+2M'83}=8>E0 l1F1jb6B8m`-8,gIlb,',)%X>t/W QXP|X@:ai@}UN)Iņ6p·]7op`CCa5r$a`Xy14nהF]-UJ++Op~i|>믿n۶i֭۳gϹkietAoH$%%%֭;wWiӦDooOml/Xp׹ .YU(y(3j6 #yk|xՇ+\=W=|kx{'&+{@}*z(}Ѣ}% /^xl&9:nrFτֆE=6UQ[O&'Wq6MtDVt*1]&'FT/'E55 PyXEjwE.~(VJU!`A S?i9)*6Q䮷&0B Si+ hz ۇdF_T:K%'fgB23bme/{Kٽi&$3!r0GRꚚ\$2sSy%rwn:5$t#%ΞSe#g05x0+z_"sÞ/~Zq(iv}`@P4(hnƚ5rrS'bC&".?AʾO"j06j.$"␀ q w'M~}*8G*!Mhr<>4TrK)#g'd<^Z) zܿ1KR&]PI|+dǶ4`A:RFm &V#EQ6۷{;@ioG}BA3l#v C ZͶi7=pUMuukֿv ㌄⺙2!ʐlV,o??QW){/s49<# f b8aG0OK$߸qY!$@q䷙t`B:hjĚ5 ,I_k$UB\F(uIMlOAB 8p{T0.Ř-ss7-BZju1٦(頪![XٕtNu$jh.Up1릮g&Gy|>4onnN}M1k_G:TjjRV} 10PUMfos.0k 1!7bxt9%5Nή]'% t&y@pp`'k|Wޙgy3M`}(`;pCFDD24K.Y />ɜ >rLQhf,qDQ_@RX(ր$$˿j`(` dt)BjqH|F+r L>s.XD+g϶zWrg#7H'Bl~ol>8/) 6I+&Λj'Kzۅx(ڀc,+aTF<+l`$TN+wgfG~$aNב$)Bji lޱyɦLwF#uAƙc;g(IJ&ܐ˼ï7z 042( TPQa:o!zhKTBK ,_TnrBGIdphK -8nCQ100Pd뻺=l[6۱)BԜj`) J9iŚǥ$ax1_]c5'l{cp=r!>ɓ'wy񍠣+vvv~Ưޣ7Ŀ?oNy{w߽+= |p}M9y|]y_Hp(v߲I ###]u탾u_$'ҲdN'?:iSgoAl.-w| Jqlo%~F1*j`J@U,^tͶ, L#o~{$’IT]]n8f\r-i.8ȉT\#TbSZ5 7B,cFx+W78vhC]ݼ b55g^w3"c o޼9e8dustrbq^xGs4Ӫ/YY[B\eLሗI]S5/RdpLSO4q.SТ"b(+9|S3[}lܤZcuFG 9.S(/GM5N? os|u8 WWI%'e=( '2ƍD[L1O; !;;ycy .HUћ\Gq0[RRQ9X@@t74C/QV, ϮNv%f2D])3N \9{;/4"P]DRX$R2:ymh(&߿nm%EaBx! ʃl $M# ̞e W~:kΌQ+bŠ+` ELF魭ummq D<ȶ'C@0Hm5Z`p\"cӀ\6$vi~s]ATx0w \F@87sF]^"\ \JHY~ U~q%@8x զycA@(ίcTb kP?tӁ@:t 00@~{%f.T Uj EUCMMKvD_x`5ۼ|- .ɤ[^@]L1zodjܼ(Dڷox*b/AvZ1 ڤ֬+y±zxk 62 r_C؍(#<7ۡ 0"mǖ[1 sؔmxI3I\0<;Zg&!1[Q_ݹ.#*oI؀X@G]Xg;mxacƓNNhr `pմ8 P L6U02iI` ]5nnp(+ G5xl-+++#@IfcYbd&L)XڇC܌+Iz'@XX&(ْ'ycKLLc|[9 oQb )p]WyW5[Ugc~O(b8PUմL$Y @ʬY'"p±bH'f3pҎ4%cL&$ctE)ޯbVExv}VMSʧOޅ_~w!f!ocxG>o ?hXNlm9g㏏-_~߅.9D3.p^'/88jcƷ_|ܕ:1qιV(h6ʦ CÃ9+GFhKO4 RJ&[OKtz2y;ﯭ8Z+)MJ Gdy6ו3)N] e?0@@­ a;T5UAa #**| #$;>NT68g3}6XPw Q k>|W,pmG˳_qcSeKN=KRS= $c\15M$LQ `LK8: ID~s_ W6V =!DА"MlMMp]qhi"? sYWo8Չ@trrU87n+ IJĹ'Z@GEsOF[\- *. 
fT ;$KkW:yw>axCu-\|JW׷gKKdA%v2sA0ͺ:41ցdDdzبG>kը )?B>+w r55ʜ#E$J+G%zfxξq$g|ECu檱(۪S-rk&yPsa$*/4#e%SFRE,4A+&}3}"$dR`B;|KҺEJ ?euŮ *.2}ᜳnϣ-0{,ehG"g7nlwcpLU'*ɣ š9L/>y)\  ✳d1eD H'2%r`pl`{08@8(lo(>]w !@mr];Y&2!yw M@E0f.~gǁ#} D@;$0-TJ3gpMn| 0bjf3񖐃|N,Xʁ,3xf0:Pɓ@h$3ЇU9GK+a(<3I8Z&'Y#fT/}2jf3&j ewVPښNi[w.߉á1Z!3ԞxW#8H˫yټQN'd>FGQT@`@P*_j(9?(LaRJpE>`&+gXY$ @Z@Z:%srOkvFԓ~z8⿖:֭۴i77hkt_E ZZ|kߝ=}\p"94׾{o}orOu[o<`g9{=~a9p<~F &xu-1H-|R\QʚB#zArI5vIgcq+.68`B,VP0}#p-v#q騮cWp≹k9k5/\x^UW 6 kn5y^ P:_LuUeKƷlŒ&""J>J5,ȆN]a{E %[62ǔ%[Yla"Ug~1jkd8_9q\_`_[``|L։\Z帮Jd0m_rTiIyiDXHׯjl of.ځEuF0P<ꏗj˺rv!@'>p&P y @B@'p*!a ,A)я9*D=@S>@k@Xs4T>z.t26i ~P xX X.rR1m,]D;|@@0dJ'CvOɯ0G8. ^ˀ PJZ̍)/#k \mPbI u^!5[Β4I Ic8JAp bWy  "KYy洕vV;Q"Fvò x.0 EUxզ7L=, OAQP@5Q>Ļ:P U ` LMg`h>;e:CN蔂) ]Y]ݗO_~o׀AFc[XDb۶-2(8cSc)d3H^v=U.%3cK)vEDuݺM1OsNq$Pv  ɯf⥫W]Cgُۣ#ƵL-[ [S2擲xppo?/(RRɁ^=8MK97.ŸĹPM5DOؽ 0#FZ1S%-EUW{i[|TsڢK_߰^ Y[ {gyMX?>7gM΍r7PxV&O48\a۱![]ǠazHy$VN&D^ @JdC!wN7*׽sv_ͮjW%U ؠHF$`H$"*#H4 A@@BRJRMvjwǮ"9o{5jZk|> fT!X6vDk<l菻ʘ!cR"2~?}T{ZWPjj+J (~PMMi?y9۝ MgYũO&Pqmvll4URX#\+ HY>l*VDկcsELηk _+>=6JAr>!@Nk|M7n6$J*BZx6|>Q51J8h/cY7Tj5'uÎo*QbkMϭW[ LD5 r؁XGnkS&طO^yX> e;* bvsGhlhsxw5;b̸uǬ;YuX<|dO&?X3;ܾpѿd29>>~͹9<\e9V9ls.…ȡ}d>ĎxA]ꤽOۃ:zz̚ϛO~RUWi9̚U\-op$|/+cJ_f"CTFzW\OLR4 ov=>>aÝS1vȻb kY> ?F95N2˾ؗg3LȱK/!YAc>]1jW^_' wMi\*4hz's):Z,ӭ s M ²eVx_y[ U 篿۟~kw\.u1+eK7ZZΩq"Y{nK&nlmK4JWcafxEozz۶Vۘ>ǙW,seݥƬWpȶki구 bVJf pgW怂У u(."1#b! 
S0lSJס6{Pe\D ijACYX|8l eK\( Ra~ á_pR[]TPR,nlgp Yp<|n}.;2<rz'}嫂`H8\7kLRbcJMAŕT}wA M)K(m5$d,ß`]itsὅO[,]:{OG$=^%^7u:IP ܋ FkpG)d&T9]TF4Bl$ Qw S43 sQ 62BJL cZU1 eH8PgPȖ0"7M`(£aֲ,zs~ WFE|jGo8Uxjr*UDA 87d72veLX LOÉ0{ 7s L {ۧBPǗ YD^+⛂MFَ&kx^&߄` ^&ځ ?//E}@1alD$ xUP#߭Ҩ"/13ެqtTd4jX:dTcɫq8pSW/7w|s {VF0mxO]@SkL +n\桄`"KE_'2Kp2 .>,b=CQᆠys',("𽐣,=A1JIy+iR 8ZM(7"cکs]֭˞sNJaXow}橵͉3^b$ƚkEL`a=Ԙ++8[<.3uRfčwu0Zbm-egb-};R9L,{5}cܥݛJ2q.׿7}*2#&{ AD#j*\Q?MLW[UX :iUu /y:'zvT\>KT*RY53] {aه16d&+ h5c3f:-[ynA"e p`hhg(/Ŵr1+.ȷ?`Ԏ͛G7n 3ϘBAyUU-yճd!Wǰ1ƌEcg?Qh D&o9%?D fn T>} .k"7}TLf0jP{R8㘹sX7jxxe"Os1KM~8HX:Ov.n~Awy['J6"ښui]y8ON!8hz Wm 5̉ho~Su+1{]0ƍ۞|/:,}Qꋯz 7}\ˊ+DZcZT4M/57[GS6993ecA E@|cYgLX8oD](紶ͺJE^6ǹ XAeRyZn;4tD.7mYsv*rrq% [a D%(44-h+ذ ߁0v$RԁN,2 Jmq|&iFS_2֠ʞu|eX9\Q(n Bv— z1<(EqaU@^d߾V[\7 ;e7J9Y^p>\_Ĥ~exjE_*+ E"JaZ)6aWr'3# B#s(ַ1O!BtS[#/pD,Q_[iӫ7_vn@d2߄cl&x>wbi@Ġ3C<-4CRJICC^rvTɝ3mJnbͷg > p4]VuqN繝={)x'4pM`TO[Nlh?lh,2,r,;fg2?=X'YQ 8L)oړQa% <^ X 'ڂV>h@Cr8{X@T۔N: ܤ+>iT^5T4$NKѸ|*`3& Np+pR`xk?mRUJq6(:1خ][YO8`N I9$+%:C!e)+cEB*X-M-^Uo\Wt ;^m peEEś!ξG$?|m]|%gbbbҥo oXFYyuwo;7R ݣ `_J tTG-{R(+ncpK\@a{C)bm[z(zi1y Q(!SKVj G,Ygκo9ђBww᷿mV6sUL]NV4>,| IDATT(dy9Tʚ:{k-=LLP`GEcxH;Rxeeu&;u^m YB;vεk腧 ulS;=x.Q~IUX4OJـMMޞ= ={ @X#(g5awػGJ "--gLZv<'->;n !wNHW}L制w5 DLv:!1~`-bU`PCg5k.ti;9u:m;f'ƿO"J "([9sfϵm{xիׯ\˟:Ik+go@nUjnQ`kXB(u )Vv4˗mfw_s?qw@qR4 Шu TSm(S[$_ZdFȐ_܂m J)b߷.ZZ("(ݤR3>۶Rz/֯^ML#T.X1ђjUP ٓtjKG35O>wOְ/0VT*TYujjzf^ p@IlRc߳rTX)/pTsu9gxwnN[6 /V:k% 맟^8{~MqbUtetT~Rg: EХC<:}-Z͛O<0$SLLLӀ[D|X(p,*>5bPm޿{1_'=ϋMve eG{/hku* Z,6*3Q* 禶vEnw)J=UUG44|>7{vtR} E'}R ϱ||?5ȫŸKnݴiItK64f\=BΦny;y}cs^wd2Y?5-FqNHdI:[n+.:ps~Pof~8(|9D`9ζV]S"1p]/XZE"Ĵ'1_p]R_σ 1>Uc=O9"jo@u艄J)uH$~>~,jkU0i}G8,٬[Av%awCfB\Ϸ@,18vA=8eǿRÉp^ ]xύyBV,ᇧjkS##7(n U% X98|x )@+R%Ca7`N(Ic>8-&DBrH"TA-`8;fWsrxLϋtXYct60eY|Xt)gePm=1X(%*]bڠ_挝gke|./7vȮ])8y֬#jkN`)$ EE2c$V73]1zP'*A(aA 1zE)G/:^-|*a|)YX`,C QxAM7plCf;fQ13j.B+ h㶌H*#>^j1Ќ +2`C,B ^yH^TT_,#3QS1 j@W vHZMg}L&}꣪O[Eh|t}SɎL[fFkR0s}GʺSfO2,hM@d})b]ݻZv"+*%KI1;lèT;;{WJ¹ÇG;i>}3t/{KP{E, w~GB{<_qo-H'cg;.rŊU_ʓ3'2swm|KDA0bY@Vᚰq!OU3(ͫ.ϸqM2{ѬȼT 

PySPH is hosted on `github <https://github.com/pypr/pysph>`_. Please see the
github site for development details.

.. _Python: http://www.python.org

**********
Overview
**********

.. toctree::
   :maxdepth: 2

   overview.rst

*********************************
Installation and getting started
*********************************

.. toctree::
   :maxdepth: 2

   installation.rst
   tutorial/circular_patch_simple.rst
   tutorial/circular_patch.rst

***************************
The framework and library
***************************

.. toctree::
   :maxdepth: 2

   design/overview.rst
   design/equations.rst
   starcluster/overview
   using_pysph.rst
   contribution/how_to_write_docs.rst

**************************
Gallery of PySPH examples
**************************

.. toctree::
   :maxdepth: 2

   examples/index.rst

************************
Reference documentation
************************

Autogenerated from doc strings using sphinx's autodoc feature.

.. toctree::
   :maxdepth: 2

   reference/index
   design/solver_interfaces

==================
Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

.. _installation:

=================================
Installation and getting started
=================================

To install PySPH, you need a working Python environment with the required
dependencies installed. You may use any of the available Python
distributions. PySPH is currently tested with Python 2.7.x and 3.x. If you
are new to Python we recommend `Enthought Canopy`_ or EDM_. PySPH will work
fine with Miniconda_, Anaconda_ or other environments like WinPython_. The
following instructions should help you get started. Since there is a lot of
information here, we suggest that you skim the section on
:ref:`dependencies` and then jump directly to one of the "Installing the
dependencies on xxx" sections below, depending on your operating system.
Depending on your chosen Python distribution, simply follow the instructions
and links referred therein.

.. contents::
    :local:
    :depth: 1

.. _Enthought Canopy: https://www.enthought.com/products/canopy/
.. _EDM: https://www.enthought.com/products/edm/
.. _Anaconda: http://continuum.io/downloads
.. _Miniconda: https://conda.io/miniconda.html

.. _quick-install:

-------------------
Quick installation
-------------------

If you are reasonably experienced with installing Python packages, already
have a C++ compiler set up on your machine, and are not immediately
interested in running PySPH on multiple CPUs (using MPI), then installing
PySPH is simple. Simply running pip_ like so::

    $ pip install PySPH

should do the trick. You may do this in a virtualenv_ if you choose to. The
important examples are packaged with the sources, so you should be able to
run those immediately. If you wish to download the sources and explore them,
you can get them either as a tarball/ZIP or from git, see
:ref:`downloading-pysph`.

The above will install the latest released version of PySPH; you can install
the development version using::

    $ pip install https://github.com/pypr/pysph/zipball/master

If you wish to track the development of the package, clone the repository
(as described in :ref:`downloading-pysph`) and do the following::

    $ pip install -r requirements.txt
    $ python setup.py develop

The following instructions are more detailed and also show how optional
dependencies can be installed. Instructions on how to set things up on
Windows are also available below.

If you are running into strange issues when you are setting up an
installation with ZOLTAN, see :ref:`pip-cache-issues`.

.. _dependencies:

------------------
Dependencies
------------------

^^^^^^^^^^^^^^^^^^
Core dependencies
^^^^^^^^^^^^^^^^^^

The core dependencies are:

- NumPy_
- Cython_ (version 0.20 and above)
- Mako_
- cyarray_
- compyle_
- pytest_ for running the unit tests.
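Before going further, it can help to see which of the core dependencies are
already present in your environment. The following is a small, hedged check
that assumes each package's import name is simply its lowercase project name
(true for NumPy, Cython, Mako, cyarray, compyle and pytest):

```python
# Hedged sanity check for the core dependencies listed above.
# Assumption: each package imports under its lowercase project name.
import importlib

status = {}
for mod in ("numpy", "cython", "mako", "cyarray", "compyle", "pytest"):
    try:
        importlib.import_module(mod)
        status[mod] = "OK"
    except ImportError:
        status[mod] = "MISSING"

for mod, state in sorted(status.items()):
    # Anything reported MISSING can be installed with pip or your
    # distribution's package manager, as described below.
    print("%-8s %s" % (mod, state))
```

Anything reported ``MISSING`` can be installed as described in the sections
below.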
The project's `requirements.txt `_ lists all the required core dependencies.

These packages can be installed from your Python distribution's package
manager, or using pip_. For more detailed instructions on how to do this for
different distributions, see below.

Running PySPH requires a working C/C++ compiler on your machine. On Linux/OS
X the gcc toolchain will work well. On Windows, you will need to have a
suitable MSVC compiler installed, see
https://wiki.python.org/moin/WindowsCompilers for specific details. On
Python 2.7 for example, you will need `Microsoft Visual C++ Compiler for
Python 2.7 `_ or an equivalent compiler. More details are available below.

.. note::

   PySPH generates high-performance code and compiles it on the fly. This
   requires a working C/C++ compiler even after installing PySPH.

.. _NumPy: http://numpy.scipy.org
.. _Cython: http://www.cython.org
.. _pytest: https://www.pytest.org
.. _Mako: https://pypi.python.org/pypi/Mako
.. _pip: http://www.pip-installer.org
.. _cyarray: https://pypi.python.org/pypi/cyarray
.. _compyle: https://pypi.python.org/pypi/compyle

^^^^^^^^^^^^^^^^^^^^^^
Optional dependencies
^^^^^^^^^^^^^^^^^^^^^^

The optional dependencies are:

- OpenMP_: PySPH can use OpenMP if it is available. Installation
  instructions are available below.

- PyOpenCL_: PySPH can use OpenCL if it is available. This requires
  installing PyOpenCL_.

- Mayavi_: PySPH provides a convenient viewer to visualize the output of
  simulations. This viewer can be launched using the command ``pysph view``
  and requires Mayavi_ to be installed. Since this is only a viewer its use
  is optional; however, it is highly recommended that you have it installed
  as the viewer is very convenient.

- mpi4py_ and Zoltan_: If you want to use PySPH in parallel, you will need
  mpi4py_ and the Zoltan_ data management library along with the PyZoltan_
  package. PySPH will work in serial without mpi4py_ or Zoltan_. Simple
  build instructions for Zoltan are included below.
Mayavi_ is packaged with all the major distributions and is easy to install.
Zoltan_ is very unlikely to be already packaged and will need to be
compiled.

.. _Mayavi: http://code.enthought.com/projects/mayavi
.. _mpi4py: http://mpi4py.scipy.org/
.. _Zoltan: http://www.cs.sandia.gov/zoltan/
.. _OpenMP: http://openmp.org/
.. _PyOpenCL: https://documen.tician.de/pyopencl/
.. _OpenCL: https://www.khronos.org/opencl/
.. _PyZoltan: https://github.com/pypr/pyzoltan

Building and linking PyZoltan on OSX/Linux
-------------------------------------------

If you want to use PySPH in parallel you will need to install PyZoltan_.
PyZoltan requires the Zoltan library to be available. We've provided a
simple `Zoltan build script `_ in the PyZoltan_ repository. This works on
Linux and OS X but not on Windows. It can be used as::

    $ ./build_zoltan.sh $INSTALL_PREFIX

where ``$INSTALL_PREFIX`` is where the library and includes will be
installed (remember, this script is in the PyZoltan repository and not in
PySPH). You may edit and tweak the build to suit your installation. However,
this script is what we use to build Zoltan on our continuous integration
servers on Travis-CI_ and Shippable_.

After Zoltan is built, set the environment variable ``ZOLTAN`` to point to
the ``$INSTALL_PREFIX`` that you used above::

    $ export ZOLTAN=$INSTALL_PREFIX

Note that you should replace ``$INSTALL_PREFIX`` with the directory you
specified above. After this, follow the instructions to build PyZoltan. The
PyZoltan wrappers will be compiled and available.

Now, when you build PySPH, it too needs to know where to link to Zoltan, so
you should keep the ``ZOLTAN`` environment variable set. This is only needed
until PySPH is compiled; thereafter the environment variable is not needed.

If you are running into strange issues when you are setting up PySPH with
ZOLTAN, see :ref:`pip-cache-issues`.

.. note::

   The installation will use ``$ZOLTAN/include`` and ``$ZOLTAN/lib`` to find
   the actual directories. If these do not work for your particular
   installation for whatever reason, set the environment variables
   ``ZOLTAN_INCLUDE`` and ``ZOLTAN_LIBRARY`` explicitly without setting up
   ``ZOLTAN``. If you used the above script, this would be::

       $ export ZOLTAN_INCLUDE=$INSTALL_PREFIX/include
       $ export ZOLTAN_LIBRARY=$INSTALL_PREFIX/lib

-----------------------------------------
Installing the dependencies on GNU/Linux
-----------------------------------------

If you are using `Enthought Canopy`_, EDM_, or Anaconda_, the instructions
in the section :ref:`installing-deps-osx` will be useful as the instructions
are the same. The following are for the case where you wish to use the
native Python packages distributed with the Linux distribution you are
using. If you are running into trouble, note that it is very easy to install
using EDM_ (see :ref:`using_edm_osx`) or conda (see :ref:`using_conda_osx`)
and you may make your life easier going that route.

GNU/Linux is probably the easiest platform on which to install PySPH. On
Ubuntu one may install the dependencies using::

    $ sudo apt-get install build-essential python-dev python-numpy \
        python-mako cython python-pytest mayavi2 python-qt4 python-virtualenv

OpenMP_ is typically available, but if it is not, it can be installed with::

    $ sudo apt-get install libomp-dev

If you need parallel support::

    $ sudo apt-get install libopenmpi-dev python-mpi4py
    $ ./build_zoltan.sh ~/zoltan  # Replace ~/zoltan with what you want
    $ export ZOLTAN=~/zoltan

On Linux it is probably best to install PySPH into its own virtual
environment. This will allow you to install PySPH as a user without any
superuser privileges. See the section below on :ref:`using-virtualenv`. In
short, do the following::

    $ virtualenv --system-site-packages pysph_env
    $ source pysph_env/bin/activate
    $ pip install cython --upgrade  # if you have an old version.
If you wish to use a compiler which is not currently your default compiler,
simply update the ``CC`` and ``CXX`` environment variables. For example, to
use icc run the following commands `before` building PySPH::

    $ export CC=icc
    $ export CXX=icpc

.. note::

   In this case, you will additionally have to ensure that the relevant
   Intel shared libraries can be found when `running` PySPH code. Most Intel
   installations come along with shell scripts that load the relevant
   environment variables with the right values automatically. This shell
   script is generally named ``compilervars.sh`` and can be found in
   ``/path/to/icc/bin``. If you didn't get this file along with your
   installation, you can try running
   ``export LD_LIBRARY_PATH=/path/to/icc/lib``.

You should be set now and should skip to :ref:`downloading-pysph` and
:ref:`building-pysph`.

On recent versions of Ubuntu (16.10 and 18.04) there may be problems with
the Mayavi viewer, and ``pysph view`` may not work correctly. To see how to
resolve these, please look at :ref:`viewer-issues`.

.. note::

   If you wish to see a working build/test script please see our
   `shippable.yml `_.

.. _Shippable: http://shippable.com
.. _Travis-CI: http://travis-ci.org

.. _installing-deps-ubuntu-1804:

--------------------------------------------
Installing the dependencies on Ubuntu 18.04
--------------------------------------------

On Ubuntu 18.04 it should be relatively simple to install PySPH with ZOLTAN
as follows::

    # For OpenMP
    $ sudo apt-get install libomp-dev

    # For Zoltan
    $ sudo apt-get install openmpi-bin libopenmpi-dev libtrilinos-zoltan-dev
    $ export ZOLTAN_INCLUDE=/usr/include/trilinos
    $ export ZOLTAN_LIBRARY=/usr/lib/x86_64-linux-gnu
    $ export USE_TRILINOS=1

Now, depending on your setup, you can install the Python-related
dependencies.
For example, with conda_ you can do::

    $ conda install -c conda-forge cython mako matplotlib jupyter pyside pytest \
        mock numpy-stl pytools
    $ conda install -c conda-forge mpi4py

Then you should be able to install pyzoltan and its dependency cyarray
using::

    $ pip install pyzoltan

Finally, install PySPH with::

    $ pip install pysph

or, if you are having trouble due to pip's cache (as discussed in
:ref:`pip-cache-issues`), with::

    $ pip install --no-cache-dir pysph

You should be all set now and should next consider :ref:`running-the-tests`.

.. _installing-deps-osx:

------------------------------------------
Installing the dependencies on Mac OS X
------------------------------------------

On OS X, your best bet is to install `Enthought Canopy`_, EDM_, Anaconda_,
or some other Python distribution. Ensure that you have gcc or clang
installed by installing XCode. See `this `_ if you installed XCode but can't
find clang or gcc.

^^^^^^^^^^^^^
OpenMP on OSX
^^^^^^^^^^^^^

The default "gcc" available on OSX uses an LLVM backend and does not support
OpenMP_. To use OpenMP_ on OSX, you can install the GCC available on brew_
using::

    $ brew install gcc

Once this is done, you need to use this as your default compiler. The
``gcc`` formula on brew currently ships with gcc version 8. Therefore, you
can tell Python to use the GCC installed by brew by setting::

    $ export CC=gcc-8
    $ export CXX=g++-8

.. _brew: http://brew.sh/

.. _using_edm_osx:

^^^^^^^^^^^
Using EDM
^^^^^^^^^^^

It is very easy to install all the dependencies with the Enthought
Deployment Manager (EDM_).

- `Download the EDM installer `_ if you do not already have it installed.
  Install the appropriate installer package for your system.

- Once you have installed EDM, run the following::

      $ edm install mayavi pyside cython matplotlib jupyter pytest mock pip
      $ edm shell
      $ pip install mako

- With this done, you should be able to install PySPH relatively easily,
  see :ref:`building-pysph`.
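Since PySPH compiles generated code on the fly, it can be useful to confirm
which C/C++ compiler Python's build machinery will pick up after setting
``CC``/``CXX`` as above. The following is a minimal, hedged check using only
the standard library (the ``CC``/``CXX`` environment variables, when set,
take precedence over the interpreter's build-time default):

```python
# Hedged check of the compiler Python's build machinery will likely use.
# On Windows builds of CPython, sysconfig may report None here; that is
# expected, since MSVC is configured differently.
import os
import sysconfig

cc = os.environ.get("CC") or sysconfig.get_config_var("CC")
cxx = os.environ.get("CXX") or sysconfig.get_config_var("CXX")
print("C compiler:  ", cc)
print("C++ compiler:", cxx)
```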
^^^^^^^^^^^^^
Using Canopy
^^^^^^^^^^^^^

Download the Canopy express installer for your platform (the full installer
is also fine). Launch Canopy after you install it so it initializes your
user environment. If you have made Canopy your default Python, all should be
well; otherwise launch the Canopy terminal from the Tools menu of the Canopy
editor before typing your commands below.

NumPy_ ships by default but Cython_ does not. Mako_ and Cython can be
installed with ``pip`` easily (``pip`` will be available in your Canopy
environment)::

    $ pip install cython mako

Mayavi_ is best installed with the Canopy package manager::

    $ enpkg mayavi

.. note::

   If you are a subscriber you can also ``enpkg cython`` to install
   Enthought's build.

If you need parallel support, please see :ref:`installing-mpi-osx`;
otherwise, skip to :ref:`downloading-pysph` and :ref:`building-pysph`.

.. _using_conda_osx:

^^^^^^^^^^^^^^^
Using Anaconda
^^^^^^^^^^^^^^^

After installing Anaconda or miniconda_, you will need to make sure the
dependencies are installed. You can create a separate environment as
follows::

    $ conda create -n pysph_env
    $ source activate pysph_env

Now you can install the necessary packages::

    $ conda install -c conda-forge cython mako matplotlib jupyter pyside pytest mock
    $ conda install -c menpo mayavi

If you need parallel support, please see :ref:`installing-mpi-osx`;
otherwise, skip to :ref:`downloading-pysph` and :ref:`building-pysph`.

.. _miniconda: http://conda.pydata.org/miniconda.html

.. _installing-mpi-osx:

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Installing mpi4py and Zoltan on OS X
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to build/install mpi4py_ one first has to install the MPI library.
This is easily done with Homebrew_ as follows (you need to have ``brew``
installed for this, but that is relatively easy to do)::

    $ brew install open-mpi

After this is done, one can install mpi4py by hand. First download mpi4py
from `here `_.
Then run the following (modify these to suit your XCode installation and
version of mpi4py)::

    $ cd /tmp
    $ tar xvzf ~/Downloads/mpi4py-1.3.1.tar.gz
    $ cd mpi4py-1.3.1
    $ export MACOSX_DEPLOYMENT_TARGET=10.7
    $ export SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/
    $ python setup.py install

Change the above environment variables to suit your SDK version. If this
installs correctly, mpi4py should be available. You can then follow the
instructions on how to build/install Zoltan and PyZoltan given above.

You should be set now and should move to :ref:`building-pysph`. Just make
sure you have set the ``ZOLTAN`` environment variable so PySPH knows where
to find it.

.. _Homebrew: http://brew.sh/

---------------------------------------
Installing the dependencies on Windows
---------------------------------------

While it should be possible to use mpi4py and Zoltan on Windows, we do not
at this point have much experience with this. Feel free to experiment and
let us know if you'd like to share your instructions. The following
instructions are all without parallel support.

^^^^^^^^^^^
Using EDM
^^^^^^^^^^^

It is very easy to install all the dependencies with the Enthought
Deployment Manager (EDM_).

- `Download the EDM installer `_ if you do not already have it installed.
  Install the appropriate installer package for your system.

- Once you have installed EDM, run the following::

      > edm install mayavi pyside cython matplotlib jupyter pytest mock pip
      > edm shell
      > pip install mako

Once you are done with this, please skip ahead to
:ref:`installing-visual-c++`.

^^^^^^^^^^^^^^
Using Canopy
^^^^^^^^^^^^^^

Download and install Canopy Express for your Windows machine (32 or 64 bit).
Launch the Canopy editor at least once so it sets up your user environment.
Make the Canopy Python the default Python when it prompts you. If you have
already skipped that option, you may enable it in the ``Edit->Preferences``
menu.
With that done you may install the required dependencies. You can either use
the Canopy package manager or use the command line. We will use the command
line for the rest of the instructions. To start a command line, click on
"Start" and navigate to the ``All Programs/Enthought Canopy`` menu. Select
the "Canopy command prompt"; if you made Canopy your default Python, just
starting a command prompt (via ``cmd.exe``) will also work.

On the command prompt, Mako_ and Cython can be installed with ``pip`` easily
(``pip`` should be available in your Canopy environment)::

    > pip install cython mako

Mayavi_ is best installed with the Canopy package manager::

    > enpkg mayavi

Once you are done with this, please skip ahead to
:ref:`installing-visual-c++`.

.. note::

   If you are a subscriber you can also ``enpkg cython`` to install
   Enthought's build.

^^^^^^^^^^^^^^^^^
Using WinPython
^^^^^^^^^^^^^^^^^

Instead of Canopy or Anaconda you could try WinPython_ 2.7.x.x. To obtain
the core dependencies, download the corresponding binaries from Christoph
Gohlke's `Unofficial Windows Binaries for Python Extension Packages `_.
Mayavi is available through the binary ETS. You can now add these binaries
to your WinPython installation by going to the WinPython Control Panel. The
option to add packages is available under the section "Install/upgrade
packages".

.. _WinPython: http://winpython.sourceforge.net/

Make sure to set your system PATH variable pointing to the location of the
scripts as required. If you have installed WinPython 2.7.6 64-bit, make sure
to set your system PATH variables to ``/python-2.7.6.amd64`` and
``/python-2.7.6.amd64/Scripts/``.

Once you are done with this, please skip ahead to
:ref:`installing-visual-c++`.
^^^^^^^^^^^^^^^
Using Anaconda
^^^^^^^^^^^^^^^

Install Anaconda_ for your platform, make it the default, and then install
the required dependencies::

    $ conda install cython mayavi
    $ pip install mako

Once you are done with this, please skip ahead to
:ref:`installing-visual-c++`.

.. _installing-visual-c++:

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Installing Visual C++ Compiler for Python
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For all of the above Python distributions, it is highly recommended that you
build PySPH with Microsoft's Visual C++ for Python. See
https://wiki.python.org/moin/WindowsCompilers for specific details for each
version of Python. Note that different Python versions may have different
compiler requirements. On Python 3.6 and above you should use `Microsoft's
Build Tools for Visual Studio 2017 `_.

On Python 2.7 for example use `Microsoft's Visual C++ for Python 2.7 `_. We
recommend that you download and install the ``VCForPython27.msi`` available
from the `link `_. **Make sure you install the system requirements specified
on that page**. For example, you will need to install the Microsoft Visual
C++ 2008 SP1 Redistributable Package for your platform (x86 for 32 bit or
x64 for 64 bit) and on Windows 8 and above you will need to install the .NET
framework 3.5. Please look at the link given above, it should be fairly
straightforward. Note that doing this will also get OpenMP_ working for you.

After you do this, you will find a "Microsoft Visual C++ Compiler Package
for Python" in your Start menu. Choose a suitable command prompt from this
menu for your architecture and start it (we will call this the MSVC command
prompt). You may make a shortcut to it as you will need to use this command
prompt to build PySPH and also run any of the examples.

After this is done, see section :ref:`downloading-pysph` and get a copy of
PySPH. Thereafter, you may follow section :ref:`building-pysph`.

.. warning::

   On 64 bit Windows, do not build PySPH with mingw64 as it does not work
   reliably at all and frequently crashes. YMMV with mingw32 but it is safer
   and just as easy to use the MS VC++ compiler.

.. _using-virtualenv:

-------------------------------
Using a virtualenv for PySPH
-------------------------------

A virtualenv_ allows you to create an isolated environment for PySPH and its
related packages. This is useful in a variety of situations.

- Your OS does not provide a recent enough Cython_ version (say you are
  running Debian stable).
- You do not have root access to install any packages PySPH requires.
- You do not want to mess up your system files and wish to localize any
  installations inside directories you control.
- You wish to use other packages with conflicting requirements.
- You want PySPH and its related packages to be in an "isolated"
  environment.

You can either install virtualenv_ (or ask your system administrator to) or
just download the `virtualenv.py `_ script and use it (run ``python
virtualenv.py`` after you download the script).

.. _virtualenv: http://www.virtualenv.org

Create a virtualenv like so::

    $ virtualenv --system-site-packages pysph_env

This creates a directory called ``pysph_env`` which contains all the
relevant files for your virtualenv; this includes any new packages you wish
to install into it. You can delete this directory if you don't want it
anymore for some reason. This virtualenv will also "inherit" packages from
your system. Hence if your system administrator already installed NumPy_ it
may be imported from your virtual environment and you do not need to install
it. This is very useful for large packages like Mayavi_, Qt etc.

.. note::

   If your version of ``virtualenv`` does not support the
   ``--system-site-packages`` option, please use the ``virtualenv.py``
   script mentioned above.
Once you create a virtualenv you can activate it as follows (on a bash
shell)::

    $ source pysph_env/bin/activate

On Windows you run the activate bat file as follows::

    > pysph_env\Scripts\activate

This sets up the PATH to point to your virtualenv's Python. You may now run
any normal Python commands and it will use your virtualenv's Python. For
example you can do the following::

    $ virtualenv myenv
    $ source myenv/bin/activate
    (myenv) $ pip install Cython mako pytest
    (myenv) $ cd pysph
    (myenv) $ python setup.py install

Now PySPH will be installed into ``myenv``. You may deactivate your
virtualenv using the ``deactivate`` command::

    (myenv) $ deactivate
    $

On Windows, use ``myenv\Scripts\activate.bat`` and
``myenv\Scripts\deactivate.bat``. If for whatever reason you wish to delete
``myenv`` just remove the entire directory::

    $ rm -rf myenv

.. note::

   With a virtualenv, one should be careful while running things like
   ``ipython`` or ``pytest`` as these are sometimes also installed on the
   system in ``/usr/bin``. If you suspect that you are not running the
   correct Python, you could simply run (on Linux/OS X)::

       $ python `which ipython`

   to be absolutely sure.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using Virtualenv on Canopy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you are using `Enthought Canopy`_, it already bundles virtualenv for you
but you should use the ``venv`` script. For example::

    $ venv --help
    $ venv --system-site-packages myenv
    $ source myenv/bin/activate

The rest of the steps are the same as above.

.. _downloading-pysph:

------------------
Downloading PySPH
------------------

One way to install PySPH is to use pip_::

    $ pip install PySPH

This will install PySPH, and you should be able to import it and use the
modules with your Python scripts that use PySPH. This will also provide the
standard set of PySPH examples. If you want to take a look at the PySPH
sources you can get them from git or download a tarball or ZIP as described
below.
To get PySPH using git_ type the following::

    $ git clone https://github.com/pypr/pysph.git

If you do not have git_ or do not wish to bother with it, you can get a ZIP
or tarball from the `pysph site `_. You can unzip/untar this and use the
sources.

.. _git: http://git-scm.com/

In the instructions, we assume that you have the pysph sources in the
directory ``pysph`` and are inside the root of this directory. For example::

    $ unzip pysph-pysph-*.zip
    $ cd pysph-pysph-1ce*

or if you cloned the repository::

    $ git clone https://github.com/pypr/pysph.git
    $ cd pysph

Once you have downloaded PySPH you should be ready to build and install it,
see :ref:`building-pysph`.

.. _building-pysph:

-------------------------------
Building and Installing PySPH
-------------------------------

Once you have the dependencies installed you can install PySPH with::

    $ pip install PySPH

You can install the development version using::

    $ pip install https://github.com/pypr/pysph/zipball/master

If you downloaded PySPH using git_ or used a tarball you can do::

    $ python setup.py install

You could also do::

    $ python setup.py develop

This is useful if you are tracking the latest version of PySPH via git. With
git you can update the sources and rebuild using::

    $ git pull
    $ python setup.py develop

You should be all set now and should next consider :ref:`running-the-tests`.

.. _pip-cache-issues:

Issues with the pip cache
--------------------------

Note that pip_ caches any packages it has built and installed earlier. So if
you installed PySPH without Zoltan support, say, and then uninstalled PySPH
using::

    $ pip uninstall pysph

then if you try a ``pip install pysph`` again (and the PySPH version has not
changed), pip_ will simply re-use the old build it made. If you do not want
this and want pip to re-build PySPH (to use ZOLTAN, say), you can do the
following::

    $ pip install --no-cache-dir pysph

In this case, pip_ will disregard its default cache and freshly download and
build PySPH. This is often handy.
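After installing by any of the routes above, a quick, hedged way to confirm
the install succeeded is to import the package and report its version (this
only verifies the install; it does not compile any of the generated code):

```python
# Hedged post-install check: confirm PySPH is importable and report its
# version. The __version__ attribute is read defensively in case the
# installed build does not expose it.
try:
    import pysph
    message = "PySPH found, version: %s" % getattr(
        pysph, "__version__", "unknown")
except ImportError:
    message = "PySPH not importable; check that your environment is active."
print(message)
```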
.. _running-the-tests: ------------------ Running the tests ------------------ Once you install PySPH you can run the tests using the ``pysph`` script that is installed:: $ pysph test If you see errors while running the tests, you might want more verbose reporting which you can get with:: $ pysph test -v This should run all the tests that do not take a long while to complete. If this fails, please contact the `pysph-users mailing list `_ or send us `email `_. There are a few additional test dependencies that need to be installed when running the tests. These can be installed using:: $ pip install -r requirements-test.txt Once you run the tests, you should see the section on :ref:`running-the-examples`. .. note:: Internally, we use the ``pytest`` package to run the tests. For more information on what you can do with the ``pysph`` script try this:: $ pysph -h .. _running-the-examples: --------------------- Running the examples --------------------- You can verify the installation by exploring some examples. The examples are actually installed along with the PySPH library in the ``pysph.examples`` package. You can list and choose the examples to run by doing:: $ pysph run This will list all the available examples and allow you to run any of them. If you wish to run a particular one, like say ``elliptical_drop``, you may do:: $ pysph run elliptical_drop This can also be run as:: $ pysph run pysph.examples.elliptical_drop To see the options available, try this:: $ pysph run elliptical_drop -h .. note:: Technically you can run the examples using ``python -m pysph.examples.elliptical_drop``. The ``pysph run`` command is a lot more convenient as it allows a much shorter command You can view the data generated by the simulation (after the simulation is complete or during the simulation) by running ``pysph view`` command. To view the simulated data you may do:: $ pysph view elliptical_drop_output If you have Mayavi_ installed this should show a UI that looks like: .. 
image:: ../Images/pysph_viewer.png :width: 800px :alt: PySPH viewer If the viewer does not start, you may want to see :ref:`viewer-issues`. There are other examples that use the transport velocity formulation:: $ pysph run cavity This runs the driven cavity problem using the transport velocity formulation of Adami et al. The example also performs post-processing of the results, and ``cavity_output`` will contain a few PNG images of these. You may view these results using ``pysph view cavity_output``. For example, the file ``streamlines.png`` may look like what is shown below: .. image:: ../Images/ldc-streamlines.png If you want to use PySPH for elastic dynamics, you can try some of the examples from Gray et al., Comput. Methods Appl. Mech. Engrg. 190 (2001), 6641-6662:: $ pysph run solid_mech.rings This runs the problem of the collision of two elastic rings. View the results like so:: $ pysph view rings_output This should produce something that may look like the image below. .. image:: ../Images/rings-collision.png The auto-generated high-performance code for the example resides in the directory ``~/.pysph/source``. A note of caution, however: it is not for the faint-hearted. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Running the examples with OpenMP ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you have OpenMP available, run any of the examples as follows:: $ pysph run elliptical_drop --openmp This should run faster if you have multiple cores on your machine. If you wish to change the number of threads to run simultaneously, you can try the following:: $ OMP_NUM_THREADS=8 pysph run elliptical_drop --openmp You may need to set the number of threads to about 4 times the number of physical cores on your machine to obtain the most scale-up.
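As a rough starting point for choosing ``OMP_NUM_THREADS``, you can query the machine from Python. The helper below is a small stdlib-only sketch (not part of PySPH) that applies the "about 4 times the physical cores" rule of thumb mentioned above; halving the logical CPU count as a proxy for physical cores is an assumption that only holds on machines with 2-way SMT:

```python
import os

# Hypothetical helper (not part of PySPH) applying the rule of thumb above:
# use roughly 4x the number of physical cores.
# os.cpu_count() reports *logical* CPUs; we halve it as a rough proxy for
# physical cores, assuming 2-way SMT (hyper-threading).
def suggested_omp_threads(multiplier=4):
    logical = os.cpu_count() or 1
    physical_guess = max(1, logical // 2)
    return multiplier * physical_guess

print(suggested_omp_threads())
```

The printed value can then be used as, for example, ``OMP_NUM_THREADS=16 pysph run elliptical_drop --openmp``.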
If you wish to time the actual scale-up of the code with and without OpenMP, you may want to disable any output (which is written serially); you can do this like so:: $ pysph run elliptical_drop --disable-output --openmp Note that one may run example scripts directly with Python, but this requires access to the location of the script. For example, if a script ``pysph_script.py`` exists, one can run it as:: $ python pysph_script.py The ``pysph run`` command is just a convenient way to run the pre-installed examples that ship with PySPH. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Running the examples with OpenCL ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you have PyOpenCL_ installed and working with an appropriate device setup, then you can transparently use OpenCL as well with PySPH. This feature is very new and still fairly experimental. You may run into issues, but using it is simple. You may run any of the supported examples as follows:: $ pysph run elliptical_drop --opencl Yes, that's it: just use the ``--opencl`` option and the code will be auto-generated and run for you. By default it uses single precision, but you can also run the code with double precision using:: $ pysph run elliptical_drop --opencl --use-double Currently inlets and outlets are not supported, periodicity is slow, and many optimizations still need to be made, but this is rapidly improving. If you want to see an example that runs pretty fast, try the cube example:: $ pysph run cube --disable-output --np 1e6 --opencl You may compare the execution time with that of OpenMP. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Running the examples with MPI ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you compiled PySPH with Zoltan_ and have mpi4py_ installed, you may run any of the examples with MPI as follows (here we choose 4 processors with ``-np 4``; change this to suit your needs):: $ mpirun -np 4 pysph run dam_break_3d This may not give you significant speedup if the problem is too small. You can also combine OpenMP and MPI if you wish.
You should take care to set up the MPI host information suitably to utilize the processors effectively. .. note:: Note that again we are using ``pysph run`` here, but for any other scripts one could do ``mpirun -np 4 python some_script.py`` .. _viewer-issues: ------------------------------- Possible issues with the viewer ------------------------------- Often users are able to install PySPH and run the examples but are unable to run ``pysph view`` for a variety of reasons. This section discusses how these could be resolved. The PySPH viewer uses Mayavi_. Mayavi can be installed via pip. Mayavi depends on VTK_, which can also be installed via pip_ if your package manager does not have a suitable version. If you are using Ubuntu 16.04 or 16.10 or a VTK version built with Qt5, it is possible that you will see a strange segmentation fault when starting the viewer. This is because Mayavi uses Qt4 and the VTK build has linked to Qt5. In these cases it may be best to use the latest `VTK wheels `_ that are now available on pypi. If you have VTK installed but you want a more recent version of Mayavi, you can always use pip_ to install Mayavi. For the very specific case of Mayavi on Ubuntu 16.04 and its derivatives, you can use Ubuntu's older VTK package like so:: $ sudo apt remove mayavi2 python-vtk6 $ sudo apt install python-vtk $ pip install mayavi What this does is to remove the system Mayavi and the VTK-6.x package which is linked to Qt5, install the older python-vtk package instead, and then use pip to install Mayavi against this version of VTK. If the problem persists, remember that by default pip caches any previous installations of Mayavi and you may need to install Mayavi like this:: $ pip --no-cache-dir install mayavi If you are using EDM_ or Anaconda_, things should work most of the time. However, there may be problems, and in this case please report the issues to the `pysph-users mailing list `_ or send us `email `_. ..
_VTK: http://www.vtk.org

pysph-master/docs/source/overview.rst

=========== Overview =========== PySPH is an open source framework for Smoothed Particle Hydrodynamics (SPH) simulations. It is implemented in Python_ and the performance critical parts are implemented in Cython_ and PyOpenCL_. PySPH is implemented in a way that allows a user to specify the entire SPH simulation in pure Python. High-performance code is generated from this high-level Python code, compiled on the fly and executed. PySPH can use OpenMP to utilize multi-core CPUs effectively. PySPH can work with OpenCL and use your GPGPUs. PySPH also features optional automatic parallelization (multi-CPU) using mpi4py_ and Zoltan_. If you wish to use the parallel capabilities you will need to have these installed. Here are videos of simulations made with PySPH. .. raw:: html
PySPH is hosted on `github `_. Please see the site for development details. .. _Python: http://www.python.org .. _Cython: http://www.cython.org .. _PyOpenCL: https://documen.tician.de/pyopencl/ .. _mpi4py: http://mpi4py.scipy.org .. _Zoltan: http://www.cs.sandia.gov/zoltan/ --------- Features --------- - User scripts and equations are written in pure Python. - Flexibility to define arbitrary SPH equations operating on particles. - Ability to define your own multi-step integrators in pure Python. - High-performance: our performance is comparable to hand-written solvers implemented in FORTRAN. - Seamless multi-core support with OpenMP. - Seamless GPU support with PyOpenCL_. - Seamless parallel integration using Zoltan_. - `BSD license `_. ----------------- SPH formulations ----------------- Currently, PySPH has numerous examples to solve the viscous, incompressible Navier-Stokes equations using the weakly compressible (WCSPH) approach. The following formulations are currently implemented: - `Weakly Compressible SPH (WCSPH)`_ for free-surface flows (Gesteira et al. 2010, Journal of Hydraulic Research, 48, pp. 6--27) .. figure:: ../Images/db3d.png :width: 500 px :align: center 3D dam-break past an obstacle SPHERIC benchmark `Test 2`_ - `Transport Velocity Formulation`_ for incompressible fluids (Adami et al. 2013, JCP, 241, pp. 292--307). .. figure:: ../Images/ldc-streamlines.png :width: 500 px :align: center Streamlines for a driven cavity - `SPH for elastic dynamics`_ (Gray et al. 2001, CMAME, Vol. 190, pp 6641--6662) .. figure:: ../Images/rings-collision.png :width: 500 px :align: center Collision of two elastic rings. - `Compressible SPH`_ (Puri et al. 2014, JCP, Vol. 256, pp 308--333) .. _`Weakly Compressible SPH (WCSPH)`: http://www.tandfonline.com/doi/abs/10.1080/00221686.2010.9641250 .. _`Transport Velocity Formulation`: http://dx.doi.org/10.1016/j.jcp.2013.01.043 .. _`SPH for elastic dynamics`: http://dx.doi.org/10.1016/S0045-7825(01)00254-7 ..
_`Compressible SPH`: http://dx.doi.org/10.1016/j.jcp.2013.08.060 .. _`Test 2`: https://wiki.manchester.ac.uk/spheric/index.php/Test2 - `Generalized Transport Velocity Formulation (GTVF) `_ (Zhang et al. 2017, JCP, 337, pp. 216--232) - `Entropically Damped Artificial Compressibility (EDAC) `_ (Ramachandran et al. 2019, Computers and Fluids, 179, pp. 579--594) - `delta-SPH `_ (Marrone et al. CMAME, 2011, 200, pp. 1526--1542) - `Dual Time SPH (DTSPH) `_ (Ramachandran et al. arXiv preprint) - `Incompressible (ISPH) `_ (Cummins et al. JCP, 1999, 152, pp. 584--607) - `Simple Iterative SPH (SISPH) `_ (Muta et al. arXiv preprint) - `Implicit Incompressible SPH (IISPH) `_ (Ihmsen et al. 2014, IEEE Trans. Vis. Comput. Graph., 20, pp 426--435) - `Godunov SPH (GSPH) `_ (Inutsuka et al. JCP, 2002, 179, pp. 238--267) - `Conservative Reproducing Kernel SPH (CRKSPH) `_ (Frontiere et al. JCP, 2017, 332, pp. 160--209) - `Approximate Godunov SPH (AGSPH) `_ (Puri et al. JCP, 2014, pp. 432--458) - `Adaptive Density Kernel Estimate (ADKE) `_ (Sigalotti et al. JCP, 2006, pp. 124--149) - `Akinci `_ (Akinci et al. ACM Trans. Graph., 2012, pp. 62:1--62:8) Boundary conditions from the following papers are implemented: - `Generalized Wall BCs `_ (Adami et al. JCP, 2012, pp. 7057--7075) - `Do nothing type outlet BC `_ (Federico et al. European Journal of Mechanics - B/Fluids, 2012, pp. 35--46) - `Outlet Mirror BC `_ (Tafuni et al. CMAME, 2018, pp. 604--624) - `Method of Characteristics BC `_ (Lastiwka et al. International Journal for Numerical Methods in Fluids, 2012, pp. 35--46) - `Hybrid BC `_ (Negi et al. arXiv preprint) Corrections proposed in the following papers are also part of PySPH: - `Corrected SPH `_ (Bonet et al. CMAME, 1999, pp. 97--115) - `hg-correction `_ (Hughes et al. Journal of Hydraulic Research, pp. 105--117) - `Tensile instability correction `_ (Monaghan J. J. JCP, 2000, pp. 290--311) - Particle shift algorithms (`Xu et al `_. JCP, 2009, pp.
6703--6725), (`Skillen et al `_. CMAME, 2013, pp. 163--173) Surface tension models are implemented from: - `Morris surface tension`_ (Morris et al. International Journal for Numerical Methods in Fluids, 2000, pp. 333--353) - `Adami Surface tension formulation `_ (Adami et al. JCP, 2010, pp. 5011--5021) .. _Morris surface tension: https://dx.doi.org/10.1002/1097-0363(20000615)33:3<333::AID-FLD11>3.0.CO;2-7 -------- Credits -------- PySPH is primarily developed at the `Department of Aerospace Engineering, IIT Bombay `__. We are grateful to IIT Bombay for the support. Our primary goal is to build a powerful SPH-based tool for both application and research. We hope that this makes it easy to perform reproducible computational research. To see the list of contributors, see the `github contributors page `_. Some earlier developers not listed above are: - Pankaj Pandey (stress solver and improved load balancing, 2011) - Chandrashekhar Kaushik (original parallel and serial implementation in 2009) ------------- Citing PySPH ------------- You may use the following article to formally refer to PySPH: - Prabhu Ramachandran, Kunal Puri, Aditya Bhosale, A Dinesh, Abhinav Muta, Pawan Negi, Rahul Govind, Suraj Sanka, Pankaj Pandey, Chandrashekhar Kaushik, Anshuman Kumar, Ananyo Sen, Rohan Kaushik, Mrinalgouda Patil, Deep Tavker, Dileep Menon, Vikas Kurapati, Amal S Sebastian, Arkopal Dutt, Arpit Agarwal, "PySPH: a Python-based framework for smoothed particle hydrodynamics", under review (https://arxiv.org/abs/1909.04504). The following are older presentations: - Prabhu Ramachandran, *PySPH: a reproducible and high-performance framework for smoothed particle hydrodynamics*, In Proceedings of the 15th Python in Science Conference, pages 127--135, July 11th to 17th, 2016. `Link to paper `_.
- Prabhu Ramachandran and Kunal Puri, *PySPH: A framework for parallel particle simulations*, In proceedings of the 3rd International Conference on Particle-Based Methods (Particles 2013), Stuttgart, Germany, 18th September 2013. -------- History -------- - 2009: PySPH started with a simple Cython-based 1D implementation written by Prabhu. - 2009-2010: Chandrashekhar Kaushik worked on a full 3D SPH implementation with a more general-purpose design. The implementation was in a mix of Cython and Python. - 2010-2012: The previous implementation was a little too complex and was largely overhauled by Kunal and Pankaj. This became the PySPH 0.9beta release. The difficulty with this version was that it was almost entirely written in Cython, making it hard to extend or add new formulations without writing more Cython code. Doing this was difficult and not too pleasant. In addition, it was not as fast as we would have liked. It ended up feeling like we might as well have implemented it all in C++ and exposed a Python interface to that. - 2011-2012: Kunal also implemented SPH2D_ and another internal version called ZSPH in Cython, which included Zoltan_ based parallelization using PyZoltan_. This was specific to his PhD research and again required writing Cython, making it difficult for the average user to extend. - 2013-present: In early 2013, Prabhu reimplemented the core of PySPH to be almost entirely auto-generated from pure Python. The resulting code was faster than previous implementations and very easy to extend entirely from pure Python. Kunal and Prabhu integrated PyZoltan into PySPH and the current version of PySPH was born. Subsequently, OpenMP support was also added in 2015. .. _SPH2D: https://bitbucket.org/kunalp/sph2d .. _PyZoltan: https://github.com/pypr/pyzoltan ..
_Zoltan: http://www.cs.sandia.gov/zoltan/ ------- Support ------- If you have any questions or are running into any difficulties with PySPH, please email or post your questions on the pysph-users mailing list here: https://groups.google.com/d/forum/pysph-users Please also take a look at the `PySPH issue tracker `_. ---------- Changelog ---------- .. include:: ../../CHANGES.rst

pysph-master/docs/source/reference/application.rst

Module application ================== .. automodule:: pysph.solver.application :members:

pysph-master/docs/source/reference/controller.rst

Module controller ================= .. automodule:: pysph.solver.controller :members:

pysph-master/docs/source/reference/equations.rst

SPH equations =============== .. autoclass:: pysph.sph.equation.Equation :members: .. automodule:: pysph.sph.basic_equations :members: :undoc-members: .. automodule:: pysph.sph.wc.basic :members: :undoc-members: .. automodule:: pysph.sph.wc.viscosity :members: :undoc-members: .. automodule:: pysph.sph.wc.transport_velocity :members: :undoc-members: .. automodule:: pysph.sph.wc.gtvf :members: :undoc-members: .. automodule:: pysph.sph.wc.density_correction :members: :undoc-members: .. automodule:: pysph.sph.wc.kernel_correction :members: :undoc-members: .. automodule:: pysph.sph.wc.crksph :members: :undoc-members: .. automodule:: pysph.sph.wc.pcisph :members: :undoc-members: .. automodule:: pysph.sph.boundary_equations :members: :undoc-members: .. automodule:: pysph.sph.solid_mech.basic :members: :undoc-members: .. automodule:: pysph.sph.solid_mech.hvi :members: :undoc-members: Gas Dynamics ------------- ..
automodule:: pysph.sph.gas_dynamics.basic :members: :undoc-members: Surface tension ---------------- .. automodule:: pysph.sph.surface_tension :members: :undoc-members: Implicit Incompressible SPH ---------------------------- .. automodule:: pysph.sph.iisph :members: :undoc-members: Rigid body motion ----------------- .. automodule:: pysph.sph.rigid_body :members: :undoc-members: Miscellaneous -------------- .. automodule:: pysph.sph.misc.advection :members: :undoc-members: .. automodule:: pysph.base.reduce_array :members: :undoc-members: Group of equations ------------------- .. autoclass:: pysph.sph.equation.Group :special-members: .. autoclass:: pysph.sph.equation.MultiStageEquations :special-members:

pysph-master/docs/source/reference/index.rst

PySPH Reference Documentation ============================= Autogenerated from docstrings using sphinx’s autodoc feature. .. toctree:: :maxdepth: 3 application controller equations integrator kernels nnps parallel_manager particle_array scheme solver solver_interfaces tools

pysph-master/docs/source/reference/integrator.rst

Integrator related modules =========================== .. automodule:: pysph.sph.integrator :members: :undoc-members: .. automodule:: pysph.sph.integrator_step :members: :undoc-members:

pysph-master/docs/source/reference/kernels.rst

SPH Kernels ============ .. automodule:: pysph.base.kernels :members: :undoc-members:

pysph-master/docs/source/reference/nnps.rst

Module nnps: Nearest Neighbor Particle Search ============================================== .. automodule:: pysph.base.nnps :members: .. automodule:: pysph.base.nnps_base :members: ..
automodule:: pysph.base.linked_list_nnps :members: .. automodule:: pysph.base.box_sort_nnps :members: .. automodule:: pysph.base.spatial_hash_nnps :members:

pysph-master/docs/source/reference/parallel_manager.rst

======================== Module parallel_manager ======================== .. automodule:: pysph.parallel.parallel_manager :members:

pysph-master/docs/source/reference/particle_array.rst

Module particle_array ===================== The ``ParticleArray`` class itself is documented as below. .. automodule:: pysph.base.particle_array :members: Convenience functions to create particle arrays ----------------------------------------------- There are several convenience functions that provide a particle array with a requisite set of particle properties that are documented below. .. automodule:: pysph.base.utils :members:

pysph-master/docs/source/reference/scheme.rst

Module scheme ============== .. automodule:: pysph.sph.scheme :members: :undoc-members:

pysph-master/docs/source/reference/solver.rst

Module solver ============= .. automodule:: pysph.solver.solver :members: Module solver tools ==================== .. automodule:: pysph.solver.tools :members: Module boundary conditions =========================== .. automodule:: pysph.sph.bc.inlet_outlet_manager :members: :undoc-members:

pysph-master/docs/source/reference/solver_interfaces.rst

Module solver_interfaces ======================== .. automodule:: pysph.solver.solver_interfaces :members:

pysph-master/docs/source/reference/tools.rst

..
py:currentmodule:: pysph.tools Miscellaneous Tools for PySPH ============================== .. contents:: :local: :depth: 1 Input/Output of data files --------------------------- The following are handy functions for processing output generated by PySPH or generating new files. .. autofunction:: pysph.solver.utils.dump .. autofunction:: pysph.solver.utils.get_files .. autofunction:: pysph.solver.utils.load .. autofunction:: pysph.solver.utils.load_and_concatenate Interpolator ------------ This module provides a convenient class called :py:class:`interpolator.Interpolator` which can be used to interpolate any scalar values from the points onto either a mesh or a collection of other points. SPH interpolation is performed with a simple Shepard filtering. .. automodule:: pysph.tools.interpolator :members: :undoc-members: SPH Evaluator ------------- This module provides a class that allows one to evaluate a set of equations on a collection of particle arrays. This is very handy for non-trivial post-processing that needs to be quick. .. automodule:: pysph.tools.sph_evaluator :members: :undoc-members: Gmsh input/output ------------------ .. automodule:: pysph.tools.gmsh :members: :undoc-members: Mayavi Viewer ------------- .. automodule:: pysph.tools.mayavi_viewer :members: :undoc-members: STL Converter ------------- The following function can be used to convert an STL file to a set of grid points. .. autofunction:: pysph.tools.geometry_stl.get_stl_surface

pysph-master/docs/source/starcluster/overview.rst

.. _starcluster-docs: ============================== Using StarCluster with PySPH ============================== StarCluster is an open source cluster-computing toolkit for Amazon’s Elastic Compute Cloud (EC2).
StarCluster has been designed to simplify the process of building, configuring, and managing clusters of virtual machines on Amazon’s EC2 cloud. Using StarCluster along with PySPH's MPI support, you can run PySPH code on multiple instances in parallel and complete simulations faster. .. contents:: :local: :depth: 1 Installing StarCluster ++++++++++++++++++++++ StarCluster can be installed via pip as:: $ pip install starcluster Configuring StarCluster +++++++++++++++++++++++ Creating Configuration File ``````````````````````````` After StarCluster has been installed, the next step is to update your StarCluster configuration::

    $ starcluster help
    StarCluster - (http://star.mit.edu/cluster)
    Software Tools for Academics and Researchers (STAR)
    Please submit bug reports to starcluster@mit.edu

    cli.py:87 - ERROR - config file /home/user/.starcluster/config does not exist

    Options:
    --------
    [1] Show the StarCluster config template
    [2] Write config template to /home/user/.starcluster/config
    [q] Quit

    Please enter your selection:

Select the second option by typing 2 and pressing enter. This will give you a template to use to create a configuration file containing your AWS credentials, cluster settings, etc. The next step is to customize this file using your favorite text-editor:: $ emacs ~/.starcluster/config Updating AWS Credentials ```````````````````````` This file is commented with example “cluster templates”. A cluster template defines a set of configuration settings used to start a new cluster. The config template provides a smallcluster template that is ready to go out-of-the-box. However, first, you must fill in your AWS credentials and keypair info::
However, first, you must fill in your AWS credentials and keypair info :: [aws info] aws_access_key_id = # your aws access key id here aws_secret_access_key = # your secret aws access key here aws_user_id = # your 12-digit aws user id here To find your AWS User ID, see `Finding your Account Canonical User ID `_ You can get your root user credentials from the `Security Credentials `_ page on AWS Management Console. However, root credentials allow for full access to all resources on your account and it is recommended that you create separate IAM (Identity and Access Management) user credentials for managing access to your EC2 resources. To create IAM user credentials, see `Creating IAM Users (Console) `_ For StarCluster, create an IAM user with the ``EC2 Full Access`` permission. If you don't already have a keypair, you can generate one using StarCluster by running:: $ starcluster createkey mykey -o ~/.ssh/mykey.rsa This will create a keypair called mykey on Amazon EC2 and save the private key to ~/.ssh/mykey.rsa. Once you have a key the next step is to fill in your keypair info in the StarCluster config file :: [key mykey] key_location = ~/.ssh/mykey.rsa Also, update the following information for the smallcluster configuration:: [cluster smallcluster] .. KEYNAME = mykey .. Now that the basic configuration for StarCluster is complete, you can directly launch instances using StarCluster. However, note that EC2 charges are not pro rata and you will be charged for an entire hour even if you run an instance for a few minutes. Before attempting to deploy an instance/cluster you can modify the following information in your cluster configuration:: [cluster smallcluster] .. NODE_INSTANCE_TYPE=t2.micro NODE_IMAGE_ID=ami-6b211202 .. 
Now you can launch an EC2 instance using:: $ starcluster start smallcluster You can SSH into the master node by running:: $ starcluster sshmaster smallcluster You can transfer files to the nodes using the ``get`` and ``put`` commands as:: $ starcluster put /path/to/local/file/or/dir /remote/path/ $ starcluster get /path/to/remote/file/or/dir /local/path/ Finally, you can terminate the instance by running:: $ starcluster terminate smallcluster Setting up PySPH for StarCluster ++++++++++++++++++++++++++++++++ Most of the public AMIs currently distributed for StarCluster are outdated and have reached their end of life. To ensure a hassle-free experience while further extending the AMI and installing packages, you can use the 64-bit Ubuntu 16.04 AMI with AMI ID ``ami-01fdc27a``, which has most StarCluster dependencies and PySPH dependencies installed. Base AMI for PySPH [Optional] ````````````````````````````` The ``ami.sh`` file, which can be found in the ``starcluster`` directory in the PySPH repository, automatically launches a vanilla 64-bit Ubuntu 16.04 instance, installs any necessary StarCluster and PySPH dependencies and saves an AMI with this configuration on your AWS account:: $ ./ami.sh The AMI ID of the generated image is stored in ``AMI_ID``. You can also see a list of the AMIs currently in your AWS account by running:: $ starcluster listimages Cluster configuration for PySPH ``````````````````````````````` Modify your StarCluster configuration file with the following information.
Launching a cluster with the following configuration will start 2 t2.micro instances, install the latest version of PySPH in each and keep track of the nodes loaded in ``/home/pysph/PYSPH_HOSTS``::

    [cluster pysphcluster]
    KEYNAME = mykey
    CLUSTER_SIZE = 2               # Number of nodes in cluster
    CLUSTER_USER = pysph
    CLUSTER_SHELL = bash
    NODE_IMAGE_ID = ami-01fdc27a   # Or AMI ID for base AMI generated previously
    NODE_INSTANCE_TYPE = t2.micro  # EC2 Instance type
    PLUGINS = pysph_install

    [plugin pysph_install]
    setup_class = sc_pysph.PySPHInstaller

Also, copy ``sc_pysph.py`` from the ``starcluster`` directory to ``~/.starcluster/plugins/`` Running PySPH scripts on a cluster ++++++++++++++++++++++++++++++++++ You can start the cluster configured previously by running:: $ starcluster start -c pysphcluster cluster Assuming your PySPH file ``cube.py`` is in the local home directory, you can first transfer this file to the cluster:: $ starcluster put -u pysph cluster ~/cube.py /home/pysph/cube.py Then run the PySPH code as:: $ starcluster sshmaster -u pysph cluster "mpirun -n 2 --hostfile ~/PYSPH_HOSTS python ~/cube.py" Finally, you can get the output generated by PySPH back by running:: $ starcluster get -u pysph cluster /home/pysph/cube_output .

pysph-master/docs/source/tutorial/circular_patch.rst

.. _tutorial: ======================== A more detailed tutorial ======================== In the previous tutorial (:doc:`circular_patch_simple`) we provided a high-level overview of the PySPH framework. No details were provided on equations, integrators and solvers. This tutorial assumes that you have read the previous one. Recall that in the previous tutorial, a circular patch of fluid with a given initial velocity field was simulated using a weakly-compressible SPH scheme.
In that example, a ``WCSPHScheme`` object was created in the ``create_scheme`` method. The details of what exactly the scheme does were not discussed. This tutorial explains some of those details by solving the same problem using a lower-level approach where the actual SPH equations, the integrator, and the solver are created manually. This should help a user write their own schemes or modify an existing scheme. The full code for this example can be seen in `elliptical_drop_no_scheme.py `_. Imports ~~~~~~~~~~~~~ This example requires a few more imports than the previous case. The first several lines are imports of various modules: .. code-block:: python

    import os
    from numpy import array, ones_like, mgrid, sqrt

    # PySPH base and carray imports
    from pysph.base.utils import get_particle_array_wcsph
    from pysph.base.kernels import Gaussian

    # PySPH solver and integrator
    from pysph.solver.application import Application
    from pysph.solver.solver import Solver
    from pysph.sph.integrator import EPECIntegrator
    from pysph.sph.integrator_step import WCSPHStep

    # PySPH sph imports
    from pysph.sph.equation import Group
    from pysph.sph.basic_equations import XSPHCorrection, ContinuityEquation
    from pysph.sph.wc.basic import TaitEOS, MomentumEquation

.. note:: This is common for all examples that do not use a scheme and it is worth noting the pattern of the PySPH imports. Fundamental SPH constructs like the kernel and particle containers are imported from the ``base`` subpackage. The framework-related objects like the solver and integrator are imported from the ``solver`` subpackage. Finally, we import from the ``sph`` subpackage the physics-related part for this problem. The methods defined for creating the particles are the same as in the previous tutorial with the exception of the call to ``self.scheme.setup_properties([pa])``. In this example, we do not create a scheme; instead we create all the required PySPH objects from the application. We do not override the ``create_scheme`` method but instead have two other methods called ``create_solver`` and ``create_equations`` which handle this.
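The particle generation itself is plain Python. As a rough, stdlib-only sketch of the idea behind ``create_particles`` (sample a uniform Cartesian grid and keep the points inside the circle) — note that the function name, the spacing ``dx`` and the radius below are illustrative, not the exact values used in the example, which uses ``numpy.mgrid``:

```python
# Illustrative sketch (not the PySPH example code): lay particles on a
# uniform Cartesian grid and keep only those inside the circular patch.
# dx and radius are made-up values for illustration.
def circular_patch(dx=0.1, radius=1.0):
    n = int(round(2 * radius / dx)) + 1
    points = []
    for i in range(n):
        for j in range(n):
            x = -radius + i * dx
            y = -radius + j * dx
            if x * x + y * y <= radius * radius:
                points.append((x, y))
    return points

pts = circular_patch()
print(len(pts))  # roughly pi * radius**2 / dx**2 particles
```

The retained points would then be handed to ``get_particle_array_wcsph`` (along with mass, density and velocity arrays) to build the fluid **ParticleArray**.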
We do not override the ``create_scheme`` method but instead have two other methods called ``create_solver`` and ``create_equations`` which handle this. Setting up the PySPH framework ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ As we move on, we encounter instantiations of the PySPH framework objects. These are the :py:class:`pysph.solver.application.Application`, :py:class:`pysph.sph.integrator.TVDRK3Integrator` and :py:class:`pysph.solver.solver.Solver` objects. The ``create_solver`` method constructs a ``Solver`` instance and returns it as seen below: .. code-block:: python def create_solver(self): kernel = Gaussian(dim=2) integrator = EPECIntegrator( fluid=WCSPHStep() ) dt = 5e-6; tf = 0.0076 solver = Solver(kernel=kernel, dim=2, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=True, cfl=0.05, n_damp=50, output_at_times=[0.0008, 0.0038]) return solver As can be seen, various options are configured for the solver, including initial damping etc. .. py:currentmodule:: pysph.sph.integrator Intuitively, in an SPH simulation, the role of the :py:class:`EPECIntegrator` should be obvious. In the code, we see that we ask for the "fluid" to be stepped using a :py:class:`WCSPHStep` object. Taking a look at the ``create_particles`` method once more, we notice that the **ParticleArray** representing the circular patch was named as `fluid`. So we're essentially asking the PySPH framework to step or *integrate* the properties of the **ParticleArray** fluid using :py:class:`WCSPHStep`. It is safe to assume that the framework takes the responsibility to call this integrator at the appropriate time during a time-step. .. py:currentmodule:: pysph.solver.solver The :py:class:`Solver` is the main driver for the problem. It marshals a simulation and takes the responsibility (through appropriate calls to the integrator) to update the solution to the next time step. It also handles input/output and computing global quantities (such as minimum time step) in parallel. 
Specifying the interactions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

At this stage, we have the particles (represented by the fluid
**ParticleArray**) and the framework to integrate the solution and marshal
the simulation. What remains is to define how to actually go about updating
properties *within* a time step. That is, for each particle we must "do
something". This is where the *physics* for the particular problem comes
in.

For SPH, this would be the pairwise interactions between particles. In
PySPH, we provide a specific way to define the sequence of interactions
which is a *list* of **Equation** objects (see
:doc:`../reference/equations`). For the circular patch test, the sequence
of interactions is relatively straightforward:

- Compute pressure from the Equation of State (EOS): :math:`p = f(\rho)`
- Compute the rate of change of density: :math:`\frac{d\rho}{dt}`
- Compute the rate of change of velocity (accelerations):
  :math:`\frac{d\boldsymbol{v}}{dt}`
- Compute corrections for the velocity (XSPH):
  :math:`\frac{d\boldsymbol{x}}{dt}`

Care must be taken that the EOS equation is evaluated for all the particles
before the other equations are evaluated.

.. py:currentmodule:: pysph.sph.equation

We request this in PySPH by creating a list of :py:class:`Equation`
instances in the ``create_equations`` method:

.. code-block:: python

    def create_equations(self):
        equations = [
            Group(equations=[
                TaitEOS(dest='fluid', sources=None, rho0=self.ro,
                        c0=self.co, gamma=7.0),
            ], real=False),
            Group(equations=[
                ContinuityEquation(dest='fluid', sources=['fluid']),
                MomentumEquation(dest='fluid', sources=['fluid'],
                                 alpha=self.alpha, beta=0.0, c0=self.co),
                XSPHCorrection(dest='fluid', sources=['fluid']),
            ]),
        ]
        return equations

Each ``Group`` instance is completed before the next is taken up. Each
group contains a list of ``Equation`` objects. Each *interaction* is
specified through an :py:class:`Equation` object, which is instantiated
with the general syntax:

.. code-block:: python

    Equation(dest='array_name', sources=['array_name', ...], **kwargs)

The ``dest`` argument specifies the *target* or *destination*
**ParticleArray** on which this interaction is going to operate.
Similarly, the ``sources`` argument specifies a *list* of
**ParticleArrays** from which the contributions are sought. For some
equations like the EOS, it doesn't make sense to define a list of sources
and a ``None`` suffices. The specification basically tells PySPH that for
one time step of the calculation:

- Use the Tait EOS to update the properties of the fluid array
- Compute :math:`\frac{d\rho}{dt}` for the fluid from the fluid
- Compute accelerations for the fluid from the fluid
- Compute the XSPH corrections for the fluid, using fluid as the source

.. note::

    Notice the use of the **ParticleArray** name "fluid". It is the
    responsibility of the user to ensure that the equation specification is
    done in a manner consistent with the creation of the particles.

With the list of equations, our problem is completely defined. PySPH now
knows what to do with the particles within a time step. More importantly,
this information is enough to generate code to carry out a complete SPH
simulation. For more details on how new equations can be written please
read :ref:`design_overview`.

The example may be run the same way as the previous example::

    $ pysph run elliptical_drop_no_scheme

The resulting output can be analyzed or viewed the same way as in the
previous example.

In the previous example (:doc:`circular_patch_simple`), the equations and
solver are created automatically by the ``WCSPHScheme``. If the
``create_scheme`` method is overridden and returns a scheme, the
``create_equations`` and ``create_solver`` methods need not be implemented.
For more details on the various application methods, please see
:py:class:`pysph.solver.application.Application`.
Implementing other schemes can be done by either implementing the equations
directly as done in this example or one could implement a new
:py:class:`pysph.sph.scheme.Scheme`.

.. _simple_tutorial:

==================
Learning the ropes
==================

In the tutorials, we will introduce the PySPH framework in the context of
the examples provided. Read this if you are a casual user and want to use
the framework *as is*. If you want to add new functions and capabilities to
PySPH, you should read :ref:`design_overview`. If you are new to PySPH
however, we highly recommend that you go through this document and the next
tutorial (:doc:`circular_patch`).

Recall that PySPH is a framework for parallel SPH-like simulations in
Python. The idea therefore, is to provide a user friendly mechanism to
set-up problems while leaving the internal details to the framework. *All*
examples follow the following steps:

.. figure:: ../../Images/pysph-examples-common-steps.png
    :align: center

The tutorials address each of the steps in this flowchart for problems with
increasing complexity.

The first example we consider is a "patch" test for SPH formulations for
incompressible fluids in `elliptical_drop_simple.py `_. This problem
simulates the evolution of a 2D circular patch of fluid under the influence
of an initial velocity field given by:

.. math::

   u &= -100 x \\
   v &= 100 y

The kinematical constraint of incompressibility causes the initially
circular patch of fluid to deform into an ellipse such that the volume
(area) is conserved. An expression can be derived for this deformation
which makes it an ideal test to verify codes.

Imports
~~~~~~~~~~~~~

Taking a look at the example (see `elliptical_drop_simple.py `_), the first
several lines are imports of various modules:

.. code-block:: python

    from numpy import ones_like, mgrid, sqrt

    from pysph.base.utils import get_particle_array
    from pysph.solver.application import Application
    from pysph.sph.scheme import WCSPHScheme

.. note::

    This is common for most examples and it is worth noting the pattern of
    the PySPH imports. Fundamental SPH constructs like the kernel and
    particle containers are imported from the ``base`` subpackage. The
    framework related objects like the solver and integrator are imported
    from the ``solver`` subpackage. Finally, we import from the ``sph``
    subpackage, the physics related part for this problem.

The organization of the ``pysph`` package is given below.

Organization of the ``pysph`` package
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PySPH is organized into several sub-packages. These are:

- ``pysph.base``: This subpackage defines the
  :py:class:`pysph.base.particle_array.ParticleArray`, the various
  :doc:`../reference/kernels`, the nearest neighbor particle search (NNPS)
  code, and the Cython code generation utilities.

- ``pysph.sph``: Contains the various :doc:`../reference/equations`, the
  :doc:`../reference/integrator` and associated integration steppers, and
  the code generation for the SPH looping. ``pysph.sph.wc`` contains the
  equations for the weakly compressible formulation.
  ``pysph.sph.solid_mech`` contains the equations for solid mechanics and
  ``pysph.sph.misc`` has miscellaneous equations.

- ``pysph.solver``: Provides the :py:class:`pysph.solver.solver.Solver`,
  the :py:class:`pysph.solver.application.Application` and a convenient way
  to interact with the solver as it is running.

- ``pysph.parallel``: Provides the parallel functionality.

- ``pysph.tools``: Provides some useful tools including the ``pysph``
  script CLI and also the data viewer which is based on Mayavi_.

- ``pysph.examples``: Provides many standard SPH examples. These examples
  are meant to be extended by users where needed. This is extremely handy
  to reproduce and compare SPH schemes.
Functions for loading/generating the particles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The code begins with a few functions related to obtaining the exact
solution for the given problem which is used for comparing the computed
solution.

A single new class called ``EllipticalDrop`` which derives from
:py:class:`pysph.solver.application.Application` is defined. There are
several methods implemented on this class:

- ``initialize``: lets users specify any parameters of interest relevant to
  the simulation.

- ``create_scheme``: lets the user specify the
  :py:class:`pysph.sph.scheme.Scheme` to use to solve the problem. Several
  standard schemes are already available and can be readily used.

- ``create_particles``: this method is where one creates the particles to
  be simulated.

Of these, ``create_particles`` and ``create_scheme`` are mandatory, for
without them SPH would be impossible. The rest (and other methods) are
optional. To see a complete listing of possible methods that one can
subclass see :py:class:`pysph.solver.application.Application`.

The ``create_particles`` method looks like:

.. code-block:: python

    class EllipticalDrop(Application):
        # ...
        def create_particles(self):
            """Create the circular patch of fluid."""
            dx = self.dx
            hdx = self.hdx
            ro = self.ro
            name = 'fluid'
            x, y = mgrid[-1.05:1.05+1e-4:dx, -1.05:1.05+1e-4:dx]
            x = x.ravel()
            y = y.ravel()

            m = ones_like(x)*dx*dx
            h = ones_like(x)*hdx*dx
            rho = ones_like(x)*ro
            u = -100*x
            v = 100*y

            # remove particles outside the circle
            indices = []
            for i in range(len(x)):
                if sqrt(x[i]*x[i] + y[i]*y[i]) - 1 > 1e-10:
                    indices.append(i)

            pa = get_particle_array(x=x, y=y, m=m, rho=rho, h=h, u=u, v=v,
                                    name=name)
            pa.remove_particles(indices)

            print("Elliptical drop :: %d particles"
                  % (pa.get_number_of_particles()))
            self.scheme.setup_properties([pa])
            return [pa]

.. py:currentmodule:: pysph.base.particle_array

The method is used to initialize the particles in Python.
In PySPH, we use a :py:class:`ParticleArray` object as a container for
particles of a given *species*. You can think of a particle species as any
homogeneous entity in a simulation. For example, in a two-phase air water
flow, a species could be used to represent each phase. A
:py:class:`ParticleArray` can be conveniently created from the command line
using NumPy arrays. For example

.. code-block:: python

    >>> import numpy
    >>> from pysph.base.utils import get_particle_array
    >>> x, y = numpy.mgrid[0:1:0.01, 0:1:0.01]
    >>> x = x.ravel(); y = y.ravel()
    >>> pa = get_particle_array(x=x, y=y)

would create a :py:class:`ParticleArray`, representing a uniform
distribution of particles on a Cartesian lattice in 2D using the helper
function :py:func:`get_particle_array` in the **base** subpackage. The
:py:func:`get_particle_array_wcsph` is a special version of this suited to
weakly-compressible formulations.

.. note::

    **ParticleArrays** in PySPH use *flattened* or one-dimensional arrays.

The :py:class:`ParticleArray` is highly convenient, supporting methods for
insertions, deletions and concatenations. In the ``create_particles``
function, we use this convenience to remove a list of particles that fall
outside a circular region:

.. code-block:: python

    pa.remove_particles(indices)

where a list of indices is provided. One could also provide the indices in
the form of a :py:class:`cyarray.carray.LongArray` which, as the name
suggests, is an array of 64 bit integers.

The particle array also supports what we call strided properties where you
may associate multiple values per particle. Normally the stride length is
1. This feature is convenient if you wish to associate a matrix or vector
of values per particle. You must still access the individual values as a
"flattened" array but one can resize, remove, and add particles and the
strided properties will be honored.
For example::

    >>> pa.add_property(name='A', data=2.0, default=-1.0, stride=2)

will create a new property called ``'A'`` with a stride length of 2.

.. note::

    Any one-dimensional (NumPy) array is valid input for PySPH. You can
    generate this from an external program for solid modelling and load it.

.. note::

    PySPH works with multiple **ParticleArrays**. This is why we actually
    return a *list* in the last line of the ``create_particles`` method
    above. The ``create_particles`` method always returns a list of
    particle arrays even if there is only one.

The method ``self.scheme.setup_properties`` automatically adds any
properties needed for the particular scheme being used.

Setting up the PySPH framework
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As we move on, we encounter instantiations of the PySPH framework objects.
In this example, the :py:class:`pysph.sph.scheme.WCSPHScheme` is created in
the ``create_scheme`` method. The ``WCSPHScheme`` internally creates other
basic objects needed for the SPH simulation. In this case, the scheme
instance is passed a list of fluid particle array names and an empty list
of solid particle array names. In this case there are no solid boundaries.
The class is also passed a variety of values relevant to the scheme and
simulation. The kernel to be used is created and passed to the
``configure_solver`` method of the scheme. The
:py:class:`pysph.sph.integrator.EPECIntegrator` is used to integrate the
particle properties. Various solver related parameters are also set up.

.. code-block:: python

    def create_scheme(self):
        s = WCSPHScheme(
            ['fluid'], [], dim=2, rho0=self.ro, c0=self.co,
            h0=self.dx*self.hdx, hdx=self.hdx, gamma=7.0, alpha=0.1,
            beta=0.0
        )
        dt = 5e-6
        tf = 0.0076
        s.configure_solver(dt=dt, tf=tf)
        return s

As can be seen, various options are configured for the solver, including
initial damping etc.
The scheme is responsible for:

- setting up the actual equations that describe the interactions between
  particles (see :doc:`../reference/equations`),

- setting up the kernel (:doc:`../reference/kernels`) and integrator
  (:doc:`../reference/integrator`) to use for the simulation. In this case
  a default cubic spline kernel is used.

- setting up the Solver (:doc:`../reference/solver`), which marshals the
  entire simulation.

For a more detailed introduction to these aspects of PySPH please read the
:doc:`circular_patch` tutorial which provides greater detail on these.
However, by simply creating the ``WCSPHScheme`` and creating the particles,
one can simulate the problem.

.. py:currentmodule:: pysph.solver.application

The astute reader may notice that the ``EllipticalDrop`` example is
subclassing the :py:class:`Application`. This makes it easy to pass command
line arguments to the solver. It is also important for the seamless
parallel execution of the same example. To appreciate the role of the
:py:class:`Application`, consider for a moment how we might write a
parallel version of the same example. At some point, we would need some MPI
imports and the particles should be created in a distributed fashion. All
this (and more) is handled through the abstraction of the
:py:class:`Application` which hides all this detail from the user.

Running the example
~~~~~~~~~~~~~~~~~~~

.. py:currentmodule:: pysph.solver.application

In the last two lines of the example, we instantiate the ``EllipticalDrop``
class and run it:

.. code-block:: python

    if __name__ == '__main__':
        app = EllipticalDrop()
        app.run()

The :py:class:`Application` takes care of creating the particles, creating
the solver, handling command line arguments etc. Many parameters can be
configured via the command line, and these will override any parameters set
up in the respective ``create_*`` methods.
For example one may do the following to find out the various options::

    $ pysph run elliptical_drop_simple -h

If we run the example without any arguments it will run until a final time
of 0.0076 seconds. We can change this to 0.005 by the following::

    $ pysph run elliptical_drop_simple --tf=0.005

When this is run, PySPH will generate Cython code from the equations and
integrators that have been provided, compile that code and run the
simulation. This provides a great deal of convenience for the user without
sacrificing performance. The generated code is available in
``~/.pysph/source``. If the code/equations have not changed, then the code
will not be recompiled. This is all handled automatically without user
intervention. By default, output files will be generated in the directory
``elliptical_drop_simple_output``.

If we wish to utilize multiple cores we could do::

    $ pysph run elliptical_drop_simple --openmp

If we wish to run the code in parallel (and have compiled PySPH with
Zoltan_ and mpi4py_) we can do::

    $ mpirun -np 4 pysph run elliptical_drop_simple

This will automatically parallelize the run using 4 processors. In this
example doing this will only slow it down as the number of particles is
extremely small.

Visualizing and post-processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can view the data generated by the simulation (after the simulation is
complete or during the simulation) by running the ``pysph view`` command.
To view the simulated data you may do::

    $ pysph view elliptical_drop_simple_output

If you have Mayavi_ installed this should show a UI that looks like:

.. image:: ../../Images/pysph_viewer.png
    :width: 800px
    :alt: PySPH viewer

For more help on the viewer, please run::

    $ pysph view -h

.. _Mayavi: http://code.enthought.com/projects/mayavi
.. _mpi4py: http://mpi4py.scipy.org/
.. _Zoltan: http://www.cs.sandia.gov/zoltan/

On the user interface, the right side shows the visualized data. On top of
it there are several toolbar icons.
The left-most is the Mayavi logo and clicking on it will present the full
Mayavi user interface that can be used to configure any additional details
of the visualization.

On the bottom left of the main visualization UI there is a button which has
the text "Launch Python Shell". If one clicks on this, one obtains a full
Python interpreter with a few useful objects available. These are::

    >>> dir()
    ['__builtins__', '__doc__', '__name__', 'interpolator', 'mlab',
     'particle_arrays', 'scene', 'self', 'viewer']
    >>> particle_arrays['fluid'].name
    'fluid'

The ``particle_arrays`` object is a dictionary of **ParticleArrayHelpers**
which is available in
:py:class:`pysph.tools.mayavi_viewer.ParticleArrayHelper`. The
``interpolator`` is an instance of
:py:class:`pysph.tools.mayavi_viewer.InterpolatorView` that is used by the
viewer. The other objects can be used to script the user interface if
desired. Note that the ``particle_arrays`` can be indexed by array name or
index.

Here is an example of scripting the viewer. Let us say we have two particle
arrays, ``'boundary'`` and ``'fluid'`` in that order. Let us say we wish to
make the boundary translucent, then we can write the following::

    b = particle_arrays['boundary']
    b.plot.actor.property.opacity = 0.2

This does require some knowledge of Mayavi_ and scripting with it. The
``plot`` attribute of the
:py:class:`pysph.tools.mayavi_viewer.ParticleArrayHelper` is a ``Glyph``
instance from Mayavi_. It is useful to use the `record feature `_ of Mayavi
to learn more about how best to script the view.

The viewer will always look for a ``mayavi_config.py`` script inside the
output directory to set up the visualization parameters. This file can be
created by overriding the :py:class:`pysph.solver.application.Application`
object's ``customize_output`` method. See the `dam break 3d `_ example to
see this being used. Of course, this file can also be created manually.
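For instance, building on the scripting snippet above, a hand-written
``mayavi_config.py`` placed in the output directory might contain something
like the following. This is an illustrative sketch, not a file shipped with
PySPH; the script is executed by the viewer with the same objects, such as
``particle_arrays``, that are available in its Python shell:

.. code-block:: python

    # mayavi_config.py -- picked up by `pysph view` from the output
    # directory.  `particle_arrays` is provided by the viewer, exactly as
    # in its "Launch Python Shell" interpreter.
    b = particle_arrays['boundary']
    b.plot.actor.property.opacity = 0.2   # make the boundary translucent

The array names used here (``'boundary'``) must of course match the arrays
your simulation actually saves.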
Loading output data files
^^^^^^^^^^^^^^^^^^^^^^^^^^^

The simulation data is dumped out either in ``*.hdf5`` files (if one has
h5py_ installed) or ``*.npz`` files otherwise. You may use the
:py:func:`pysph.solver.utils.load` function to access the raw data::

    from pysph.solver.utils import load
    data = load('elliptical_drop_100.hdf5')
    # if one has only npz files the syntax is the same.
    data = load('elliptical_drop_100.npz')

When opening the saved file with ``load``, a dictionary object is returned.
The particle arrays and other information can be obtained from this
dictionary::

    particle_arrays = data['arrays']
    solver_data = data['solver_data']

``particle_arrays`` is a dictionary of all the PySPH particle arrays. You
may obtain the PySPH particle array, ``fluid``, like so::

    fluid = particle_arrays['fluid']
    p = fluid.p

``p`` is a numpy array containing the pressure values. All the saved
particle array properties can thus be obtained and used for any
post-processing task. The ``solver_data`` provides information about the
iteration count, timestep and the current time. A good example that
demonstrates the use of these is available in the ``post_process`` method
of the ``elliptical_drop.py`` example.

.. _h5py: http://www.h5py.org

Interpolating properties
^^^^^^^^^^^^^^^^^^^^^^^^^

Data from the solver can also be interpolated using the
:py:class:`pysph.tools.interpolator.Interpolator` class. Here is the
simplest example of interpolating data from the results of a simulation
onto a fixed grid that is automatically computed from the known particle
arrays::

    from pysph.solver.utils import load
    data = load('elliptical_drop_output/elliptical_drop_100.npz')

    from pysph.tools.interpolator import Interpolator
    parrays = data['arrays']
    interp = Interpolator(list(parrays.values()), num_points=10000)
    p = interp.interpolate('p')

``p`` is now a numpy array of size 10000 elements shaped such that it
interpolates all the data in the particle arrays loaded.
``interp.x`` and ``interp.y`` are numpy arrays of the chosen ``x`` and
``y`` coordinates corresponding to ``p``. To visualize this we may simply
do::

    from matplotlib import pyplot as plt
    plt.contourf(interp.x, interp.y, p)

It is easy to interpolate any other property too. If one wishes to
explicitly set the domain on which the interpolation is required one may
do::

    xmin, xmax, ymin, ymax, zmin, zmax = 0., 1., -1., 1., 0, 1
    interp.set_domain((xmin, xmax, ymin, ymax, zmin, zmax), (40, 50, 1))
    p = interp.interpolate('p')

This will create a meshgrid in the specified region with the specified
number of points. One could also explicitly set the points on which one
wishes to interpolate the data as::

    interp.set_interpolation_points(x, y, z)

where ``x, y, z`` are numpy arrays of the coordinates of the points on
which the interpolation is desired. This can also be done with the
constructor as::

    interp = Interpolator(list(parrays.values()), x=x, y=y, z=z)

There are some cases where one may require a higher order interpolation or
gradient approximation of the property. This can be done by passing a
``method`` for interpolation to the interpolator as::

    interp = Interpolator(list(parrays.values()), num_points=10000,
                          method='order1')

Currently, PySPH has three methods of interpolation, namely ``shepard``,
``sph`` and ``order1``. When ``order1`` is set as the method, one can get
the higher order interpolation or its derivative by just passing an extra
argument to the interpolate method indicating the component. To get the
derivative in ``x`` we can do::

    px = interp.interpolate('p', comp=1)

Here for ``comp=0``, the interpolated property is returned and ``1``,
``2``, ``3`` will return the gradient in the ``x``, ``y`` and ``z``
directions respectively. For more details on the class and the available
methods, see :py:class:`pysph.tools.interpolator.Interpolator`.

In addition to this there are other useful pre- and post-processing
utilities described in :doc:`../reference/tools`.
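To build some intuition for what the ``shepard`` method does, here is a
deliberately simplified, PySPH-independent sketch of inverse-distance
(Shepard-style) interpolation in 1D. PySPH's actual implementation uses SPH
kernel weights and a proper neighbor search; this only illustrates the
underlying weighted-average idea, and the names are illustrative:

.. code-block:: python

    import numpy as np

    def shepard_1d(xp, fp, xi, eps=1e-12):
        """Inverse-distance weighted estimate of f at points `xi`, given
        particle positions `xp` carrying values `fp`."""
        # Pairwise distances between interpolation points and particles.
        d = np.abs(xi[:, None] - xp[None, :]) + eps
        w = 1.0 / d
        # Weighted average of the particle values.
        return (w * fp[None, :]).sum(axis=1) / w.sum(axis=1)

    xp = np.linspace(0.0, 1.0, 11)   # particle positions
    fp = xp ** 2                     # a property carried by the particles
    fi = shepard_1d(xp, fp, np.array([0.5, 0.55]))

At a particle location the estimate reproduces the particle value (the
weight there dominates), while between particles it returns a smooth
weighted average.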
Viewing the data in an IPython notebook
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

PySPH makes it relatively easy to view the data inside an IPython notebook
with minimal additional dependencies. A simple UI is provided to view the
saved data using this interface. It requires jupyter_, ipywidgets_ and
ipympl_. Currently, a 2D and 3D viewer are provided for the data.

Here is a simple example of how one may use this in a notebook. Inside a
notebook, one needs the following::

    %matplotlib ipympl
    from pysph.tools.ipy_viewer import Viewer2D
    viewer = Viewer2D('dam_break_2d_output')

The ``viewer`` has many useful methods::

    viewer.show_info()      # prints useful information about the run.
    viewer.show_results()   # plots any images in the output directory
    viewer.show_log()       # Prints the log file.

The most handy one is the one to perform interactive plots::

    viewer.interactive_plot()

This shows a simple ipywidgets_ based UI that uses matplotlib to plot the
data on the browser. The different saved snapshots can be viewed using a
convenient slider. The viewer shows both the particles as well as simple
vector plots. This is convenient when one wishes to share and show the data
without requiring Mayavi. It does require pysph to be installed in order to
be able to load the files. It is mandatory to have the first line that sets
the matplotlib backend to ``ipympl``.

There is also a 3D viewer which may be used using ``Viewer3D`` instead of
the ``Viewer2D`` above. This viewer requires ipyvolume_ to be installed.

.. _jupyter: https://jupyter.org
.. _ipywidgets: https://github.com/jupyter-widgets/ipywidgets
.. _ipyvolume: https://pypi.python.org/pypi/ipyvolume
.. _ipympl: https://pypi.python.org/pypi/ipympl

A slightly more complex example
-------------------------------

The first example was very simple. In particular there was no
post-processing of the results. Many PySPH examples also include
post-processing code in the example.
This makes it easy to reproduce results and also easily compare different
schemes. A complete version of the elliptical drop example is available at
`elliptical_drop.py `_. There are a few things that this example does a bit
differently:

- It has some useful code to generate the exact solution for comparison.

- It uses a ``Gaussian`` kernel and also uses a variety of different
  options for the solver (see how the ``configure_solver`` is called); for
  various other options see :py:class:`pysph.solver.solver.Solver`.

- The ``EllipticalDrop`` class has a ``post_process`` method which
  optionally post-processes the results generated. This in turn uses a
  couple of private methods ``_compute_results`` and ``_make_final_plot``.

- The last line of the code has a call to ``app.post_process(...)``, which
  actually post-processes the data.

This example is therefore a complete example and shows how one could write
a useful and re-usable PySPH example.

Doing more
~~~~~~~~~~~

.. py:currentmodule:: pysph.solver.application

The :py:class:`Application` has several more methods that can be used in
additional contexts; for example, one may override the following additional
methods:

- ``add_user_options``: this is used to create additional user-defined
  command line arguments. The command line options are available in
  ``self.options`` and can be used in the other methods.

- ``consume_user_options``: this is called after the command line arguments
  are parsed, and can be optionally used to set up any variables that have
  been added by the user in ``add_user_options``. Note that the method is
  called before the particles and solver etc. are created.

- ``create_domain``: this is used when a periodic domain is needed.

- ``create_inlet_outlet``: override this to return any inlet and outlet
  objects. See the :py:class:`pysph.sph.simple_inlet_outlet` module.

There are many others; please see the :py:class:`Application` class
documentation to see these.
The order of invocation of the various methods is also documented there.

There are several `examples `_ that ship with PySPH; explore these to get a
better idea of what is possible.

Debugging when things go wrong
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you attempt to run your own simulations you may run into a variety of
errors. Some errors in setting up equations and the like are easy to detect
and PySPH will provide an error message that should usually be helpful. If
this is a Python related error you should get a traceback and debug it as
you would debug any Python program.

PySPH writes out a log file in the output directory; looking at that is
sometimes useful. The log file will usually tell you the kernel,
integrator, NNPS, and the exact equations and groups used for a simulation.
This can often be very useful when sorting out subtle issues with the
equations and groups.

Things get harder to debug when you get a segmentation fault or your code
just crashes. Even though PySPH is implemented in Python you can get one of
these if your timestep is too large or your equations are doing strange
things (divide by zero, taking a square root of a negative number). This
happens because PySPH translates your code into a lower-level language for
performance. The following are the most common causes of a crash/segfault:

- The particles have "blown up"; this can happen when the accelerations are
  very large. This can also happen when your timestep is very large.

- There are mistakes in your equations or integrator step. Divide by zero,
  or some quantity was not properly initialized -- for example if the
  particle masses were not correctly initialized and were set to zero you
  might get these errors. It is also possible that you have made some
  indexing errors in your arrays, so check all your array accesses in your
  equations.

Let us see how we can debug these.
Let us say your code is in ``example.py``; you can do the following::

    $ python example.py --pfreq 1 --detailed-output

In this case, the ``--pfreq 1`` asks pysph to dump output at every
timestep. By default only specific properties that the user has requested
are saved. Using ``--detailed-output`` dumps every property of every array.
This includes all accelerations as well. Viewing this data with the ``pysph
view`` command makes it easy to see which acceleration is causing a
problem.

Sometimes even this is not enough as the particles diverge or the code
blows up at the very first step of a multi-stage integrator. In this case,
no output would be generated. To debug the accelerations in this situation
one may define a method called ``pre_step`` in your
:py:class:`pysph.solver.application.Application` subclass as follows::

    class EllipticalDrop(Application):
        # ...
        def pre_step(self, solver):
            solver.dump_output()

What this does is to ask the solver to dump the output right before each
timestep is taken. At the start of the simulation the first accelerations
have been calculated and since this output is now saved, one should be able
to debug the accelerations. Again, use ``--detailed-output`` with this to
look at the accelerations right at the start.

.. _introduction:

==========================
Using the PySPH library
==========================

In this document, we describe the fundamental data structures for working
with particles in PySPH. Take a look at :ref:`tutorial` for a tutorial
introduction to some of the examples. For the experienced user, take a look
at :ref:`design_overview` for some of the internal code-generation details
and if you want to extend PySPH for your application.
----------------------- Working With Particles ----------------------- As an object oriented framework for particle methods, PySPH provides convenient data structures to store and manipulate collections of particles. These can be constructed from within Python and are fully compatible with NumPy arrays. We begin with a brief description for the basic data structures for arrays. .. py:currentmodule:: cyarray.carray ^^^^^^^^^^ C-arrays ^^^^^^^^^^ The :py:class:`cyarray.carray.BaseArray` class provides a typed array data structure called **CArray**. These are used throughout PySPH and are fundamentally very similar to NumPy arrays. The following named types are supported: - :py:class:`cyarray.carray.UIntArray` (32 bit unsigned integers) - :py:class:`cyarray.carray.IntArray` (32 bit signed integers) - :py:class:`cyarray.carray.LongArray` (64 bit signed integers) - :py:class:`cyarray.carray.DoubleArray` (64 bit floating point numbers Some simple commands to work with **BaseArrays** from the interactive shell are given below .. code-block:: python >>> import numpy >>> from cyarray.carray import DoubleArray >>> array = DoubleArray(10) # array of doubles of length 10 >>> array.set_data( numpy.arange(10) ) # set the data from a NumPy array >>> array.get(3) # get the value at a given index >>> array.set(5, -1.0) # set the value at an index to a value >>> array[3] # standard indexing >>> array[5] = -1.0 # standard indexing .. py:currentmodule:: pysph.base.particle_array ^^^^^^^^^^^^^^ ParticleArray ^^^^^^^^^^^^^^ In PySPH, a collection of **BaseArrays** make up what is called a :py:class:`ParticleArray`. This is the main data structure that is used to represent particles and can be created from NumPy arrays like so: .. 
code-block:: python >>> import numpy >>> from pysph.base.utils import get_particle_array >>> x, y = numpy.mgrid[0:1:0.1, 0:1:0.1] # create some data >>> x = x.ravel(); y = y.ravel() # flatten the arrays >>> pa = get_particle_array(name='array', x=x, y=y) # create the particle array In the above, the helper function :py:func:`pysph.base.utils.get_particle_array` will instantiate and return a :py:class:`ParticleArray` with properties `x` and `y` set from the given NumPy arrays. In general, a :py:class:`ParticleArray` can be instantiated with an arbitrary number of properties. Each property is stored internally as a :py:class:`cyarray.carray.BaseArray` of the appropriate type. By default, every :py:class:`ParticleArray` returned using the helper function will have the following properties: - `x, y, z` : Position coordinates (doubles) - `u, v, w` : Velocity (doubles) - `h, m, rho` : Smoothing length, mass and density (doubles) - `au, av, aw`: Accelerations (doubles) - `p` : Pressure (doubles) - `gid` : Unique global index (unsigned int) - `pid` : Processor id (int) - `tag` : Tag (int) The role of the particle properties like positions, velocities and other variables should be clear. These define either the kinematic or dynamic properties associated with SPH particles in a simulation. In addition to scalar properties, particle arrays also support "strided" properties, i.e. associating multiple elements per particle. For example:: >>> pa.add_property('A', data=2.0, stride=2) >>> pa.A This will add a new property with name ``'A'`` but which has 2 elements associated with each particle. When one adds/removes particles this is taken into account automatically. When accessing such a property, one has to be careful though, as the underlying array is still stored as a one-dimensional array. PySPH introduces a global identifier for a particle which is required to be *unique* for that particle. This is represented with the property **gid** which is of type **unsigned int**.
This property is used in the parallel load balancing algorithm with Zoltan. The property **pid** for a particle is an **integer** that is used to identify the processor to which the particle is currently assigned. The property **tag** is an **integer** that is used for any other identification. For example, we might want to mark all boundary particles with the tag 100. Using this property, we can delete all such particles as .. code-block:: python >>> pa.remove_tagged_particles(tag=100) This gives us a very flexible way to work with particles. Another way of deleting/extracting particles is by providing the indices (as a `list`, `NumPy array` or a :py:class:`LongArray`) of the particles to be removed: .. code-block:: python >>> indices = [1,3,5,7] >>> pa.remove_particles( indices ) >>> extracted = pa.extract_particles(indices, props=['rho', 'x', 'y']) A :py:class:`ParticleArray` can be concatenated with another array to result in a larger array: .. code-block:: python >>> pa.append_parray(another_array) To set a given list of properties to zero: .. code-block:: python >>> props = ['au', 'av', 'aw'] >>> pa.set_to_zero(props) Properties in a particle array are automatically sized depending on the number of particles. There are times when fixed size properties are required. For example if the total mass or total force on a particle array needs to be calculated, a fixed size constant can be added. This can be done by adding a ``constant`` to the array as illustrated below: .. code-block:: python >>> pa.add_constant('total_mass', 0.0) >>> pa.add_constant('total_force', [0.0, 0.0, 0.0]) >>> print(pa.total_mass, pa.total_force) In the above, the ``total_mass`` is a fixed ``DoubleArray`` of length 1 and the ``total_force`` is a fixed ``DoubleArray`` of length 3. These constants will never be resized as one adds or removes particles to/from the particle array. The constants may be used inside of SPH equations just like any other property. 
The constants can also be set in the constructor of the :py:class:`ParticleArray` by passing a dictionary of constants as a ``constants`` keyword argument. For example: .. code-block:: python >>> pa = ParticleArray( ... name='test', x=x, ... constants=dict(total_mass=0.0, total_force=[0.0, 0.0, 0.0]) ... ) Take a look at the :py:class:`ParticleArray` reference documentation for some of the other methods and their uses. .. py:currentmodule:: pysph.base.nnps ------------------------------------------- Nearest Neighbour Particle Searching (NNPS) ------------------------------------------- To carry out pairwise interactions for SPH, we need to find the nearest neighbours for a given particle within a specified interaction radius. The :py:class:`NNPS` object is responsible for handling these nearest neighbour queries for a *list* of particle arrays: .. code-block:: python >>> from pysph.base import nnps >>> pa1 = get_particle_array(...) # create one particle array >>> pa2 = get_particle_array(...) # create another particle array >>> particles = [pa1, pa2] >>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, radius_scale=3) The above will create an :py:class:`NNPS` object that uses the classical *linked-list* algorithm for nearest neighbour searches. The radius of interaction is determined by the argument `radius_scale`. The book-keeping cells have a length of :math:`\text{radius_scale} \times h_{\text{max}}`, where :math:`h_{\text{max}}` is the maximum smoothing length of *all* particles assigned to the local processor. Note that the ``NNPS`` classes also support caching the computed neighbors. This is useful if one needs to reuse the same set of neighbors. To enable this, simply pass ``cache=True`` to the constructor:: >>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, cache=True) Since we allow a list of particle arrays, we need to distinguish between *source* and *destination* particle arrays in the neighbor queries. .. 
note:: A **destination** particle is a particle belonging to that species for which the neighbors are sought. A **source** particle is a particle belonging to that species which contributes to a given destination particle. With these definitions, we can query for nearest neighbors like so: .. code-block:: python >>> nbrs = UIntArray() >>> nps.get_nearest_particles(src_index, dst_index, d_idx, nbrs) where `src_index`, `dst_index` and `d_idx` are integers. This will return, for the *d_idx* particle of the *dst_index* particle array (species), nearest neighbors from the *src_index* particle array (species). Passing the `src_index` and `dst_index` every time is repetitive so an alternative API is to call ``set_context`` as done below:: >>> nps.set_context(src_index=0, dst_index=0) If the ``NNPS`` instance is configured to use caching, then it will also pre-compute the neighbors very efficiently. Once the context is set one can get the neighbors as:: >>> nps.get_nearest_neighbors(d_idx, nbrs) Where `d_idx` and `nbrs` are as discussed above. If we want to re-compute the data structure for a new distribution of particles, we can call the :py:meth:`NNPS.update` method: .. code-block:: python >>> nps.update() .. py:currentmodule:: pysph.base.nnps ^^^^^^^^^^^^^^^^^^^^^^ Periodic domains ^^^^^^^^^^^^^^^^^^^^^^ The constructor for the :py:class:`NNPS` accepts an optional argument (:py:class:`DomainManager`) that is used to delimit the maximum spatial extent of the simulation domain. Additionally, this argument is also used to indicate the extents for a periodic domain. We construct a :py:class:`DomainManager` object like so .. code-block:: python >>> from pysph.base.nnps import DomainManager >>> domain = DomainManager(xmin, xmax, ymin, ymax, zmin, zmax, periodic_in_x=True, periodic_in_y=True, periodic_in_z=False) where `xmin ... zmax` are floating point arguments delimiting the simulation domain and `periodic_in_x,y,z` are bools defining the periodic axes. 
When the :py:class:`NNPS` object is constructed with this :py:class:`DomainManager`, care is taken to create periodic ghosts for particles in the vicinity of the periodic boundaries. These *ghost* particles are given a special **tag** defined by :py:class:`ParticleTAGS`: .. code-block:: python class ParticleTAGS: Local = 0 Remote = 1 Ghost = 2 .. note:: The *Local* tag is used for ordinary particles assigned to and owned by a given processor. This is the default tag for all particles. .. note:: The *Remote* tag is used for ordinary particles assigned to but not owned by a given processor. Particles with this tag are typically used to satisfy neighbor queries *across* processor boundaries in a parallel simulation. .. note:: The *Ghost* tag is used for particles that are created to satisfy boundary conditions locally. .. py:currentmodule:: pysph.base.particle_array ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Particle aligning ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In PySPH, the :py:class:`ParticleArray` aligns all particles upon a call to the :py:meth:`ParticleArray.align_particles` method. The aligning is done so that all particles with the *Local* tag are placed first, followed by particles with other tags. There is no preference given to the tags other than the fact that a particle with a non-zero tag is placed after *all* particles with a zero (*Local*) tag. Intuitively, the local particles represent *real* particles or particles that we want to do active computation on (destination particles). The data attribute `ParticleArray.num_real_particles` returns the number of real or *Local* particles. The total number of particles in a given :py:class:`ParticleArray` can be obtained by a call to the :py:meth:`ParticleArray.get_number_of_particles` method. The following is a simple example demonstrating this default behaviour of PySPH: .. 
code-block:: python >>> x = numpy.array( [0, 1, 2, 3], dtype=numpy.float64 ) >>> tag = numpy.array( [0, 2, 0, 1], dtype=numpy.int32 ) >>> pa = utils.get_particle_array(x=x, tag=tag) >>> print(pa.get_number_of_particles()) # total number of particles >>> 4 >>> print(pa.num_real_particles) # no. of particles with tag 0 >>> 2 >>> x, tag = pa.get('x', 'tag', only_real_particles=True) # get only real particles (tag == 0) >>> print(x) >>> [0. 2.] >>> print(tag) >>> [0 0] >>> x, tag = pa.get('x', 'tag', only_real_particles=False) # get all particles >>> print(x) >>> [0. 2. 1. 3.] >>> print(tag) >>> [0 0 2 1] We are now in a position to put all these ideas together and write our first SPH application. .. py:currentmodule:: pyzoltan.core.zoltan .. py:currentmodule:: pysph.parallel.parallel_manager ------------------------------- Parallel NNPS with PyZoltan ------------------------------- PySPH uses the Zoltan_ data management library for dynamic load balancing through a Python wrapper :py:class:`PyZoltan`, which provides functionality for parallel neighbor queries in a manner completely analogous to :py:class:`NNPS`. Particle data is managed and exchanged in parallel via a derivative of the abstract base class :py:class:`ParallelManager` object. Continuing with our example, we can instantiate a :py:class:`ZoltanParallelManagerGeometric` object as: .. code-block:: python >>> ... # create particles >>> from pysph.parallel import ZoltanParallelManagerGeometric >>> pm = ZoltanParallelManagerGeometric(dim, particles, comm, radius_scale, lb_method) The constructor for the parallel manager is quite similar to the :py:class:`NNPS` constructor, with two additional parameters, `comm` and `lb_method`. The first is the `MPI communicator` object and the latter is the partitioning algorithm requested. 
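For example, with ``mpi4py`` supplying the communicator, the manager could be set up as below. This is only a sketch: it needs a PySPH build with Zoltan support and an MPI runtime, so the imports are guarded and nothing happens when they are unavailable (the import path, keyword names and the ``lb_method`` value are illustrative assumptions):

```python
try:
    import numpy
    from mpi4py import MPI
    from pysph.base.utils import get_particle_array
    # Assumed import path; it may differ across PySPH versions.
    from pysph.parallel.parallel_manager import (
        ZoltanParallelManagerGeometric,
    )

    pa = get_particle_array(name='fluid',
                            x=numpy.random.random(1000),
                            y=numpy.random.random(1000),
                            h=0.02, m=0.001)
    pm = ZoltanParallelManagerGeometric(
        dim=2, particles=[pa], comm=MPI.COMM_WORLD,
        radius_scale=2.0, lb_method='RCB',  # or 'RIB', 'HSFC'
    )
    pm.update()  # partition particles; imported copies get the Remote tag
    have_parallel = True
except ImportError:
    have_parallel = False  # no MPI/Zoltan support in this environment
```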
The following geometric load balancing algorithms are supported: - Recursive Coordinate Bisection (RCB_) - Recursive Inertial Bisection (RIB_) - Hilbert Space Filling Curves (HSFC_) The particle distribution can be updated in parallel by a call to the :py:meth:`ParallelManager.update` method. Particles across processor boundaries that are needed for neighbor queries are assigned the tag *Remote* as shown in the figure: .. figure:: ../Images/local-remote-particles.png :align: center Local and remote particles in the vicinity of a processor boundary (dashed line) .. py:currentmodule:: pysph.base.kernels .. py:currentmodule:: pysph.base.nnps --------------------------------------- Putting it together: A simple example --------------------------------------- Now that we know how to work with particles, we will use the data structures to carry out the simplest SPH operation, namely, the estimation of particle density from a given distribution of particles. We consider particles distributed on a uniform Cartesian lattice ( :math:`\Delta x = \Delta y = \Delta`) in a doubly periodic domain :math:`[0,1]\times[0,1]`. The particle mass is set equal to the "volume" :math:`\Delta^2` associated with each particle and the smoothing length is taken as :math:`1.3\times \Delta`. With this initialization, the SPH estimate for the particle density is .. math:: <\rho>_a = \sum_{b\in\mathcal{N}(a)} m_b W_{ab} \approx 1 We will use the :py:class:`CubicSpline` kernel, defined in the `pysph.base.kernels` module. The code to set up the particle distribution is given below .. 
code-block:: python # PySPH imports from cyarray.carray import UIntArray from pysph.base import utils from pysph.base.kernels import CubicSpline from pysph.base.nnps import DomainManager from pysph.base.nnps import LinkedListNNPS # NumPy import numpy # Create a particle distribution dx = 0.01; dxb2 = 0.5 * dx x, y = numpy.mgrid[dxb2:1:dx, dxb2:1:dx] x = x.ravel(); y = y.ravel() h = numpy.ones_like(x) * 1.3*dx m = numpy.ones_like(x) * dx*dx # Create the particle array pa = utils.get_particle_array(x=x, y=y, h=h, m=m) # Create the periodic DomainManager object and NNPS domain = DomainManager(xmin=0., xmax=1., ymin=0., ymax=1., periodic_in_x=True, periodic_in_y=True) nps = LinkedListNNPS(dim=2, particles=[pa,], radius_scale=2.0, domain=domain) # The SPH kernel. The dimension argument is needed for the correct normalization constant k = CubicSpline(dim=2) .. note:: Notice that the particles were created with an offset of :math:`\frac{\Delta}{2}`. This is required since the :py:class:`NNPS` object will *box-wrap* particles near periodic boundaries. The :py:class:`NNPS` object will create periodic ghosts for the particles along each periodic axis. .. figure:: ../Images/periodic-domain-ghost-particle-tags.png :align: center :width: 805 :height: 500 The ghost particles are assigned the `tag` value 2. For this example, periodic ghosts are created along each coordinate direction as shown in the figure. ^^^^^^^^^^^^^^^^^^^ SPH Kernels ^^^^^^^^^^^^^^^^^^^ Pairwise interactions in SPH are weighted by the kernel :math:`W_{ab}`. In PySPH, the `pysph.base.kernels` module provides a Python interface for these terms. The general definition for an SPH kernel is of the form: .. code-block:: python class Kernel(object): def __init__(self, dim=1): self.radius_scale = 2.0 self.dim = dim def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): ... return wij def gradient(self, xij=[0., 0, 0], rij=1.0, h=1.0, grad=[0, 0, 0]): ... 
grad[0] = dwij_x grad[1] = dwij_y grad[2] = dwij_z The kernel is an object with two methods `kernel` and `gradient`. :math:`\text{xij}` is the difference vector between the destination and source particle :math:`\boldsymbol{x}_{\text{i}} - \boldsymbol{x}_{\text{j}}` with :math:`\text{rij} = \sqrt{ \boldsymbol{x}_{ij}^2}`. The `gradient` method accepts an additional argument that upon exit is populated with the kernel gradient values. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Density summation ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In the final part of the code, we iterate over all target or destination particles and compute the density contributions from neighboring particles: .. code-block:: python nbrs = UIntArray() # array for neighbors x, y, h, m = pa.get('x', 'y', 'h', 'm', only_real_particles=False) # source particles will include ghosts for i in range( pa.num_real_particles ): # iterate over all local particles xi = x[i]; yi = y[i]; hi = h[i] nps.get_nearest_particles(0, 0, i, nbrs) # get neighbors neighbors = nbrs.get_npy_array() # numpy array of neighbors rho = 0.0 for j in neighbors: # iterate over each neighbor xij = xi - x[j] # interaction terms yij = yi - y[j] rij = numpy.sqrt( xij**2 + yij**2 ) hij = 0.5 * (h[i] + h[j]) wij = k.kernel( [xij, yij, 0.0], rij, hij) # kernel interaction rho += m[j] * wij pa.rho[i] = rho # contribution for this destination The average density computed in this manner can be verified as :math:`\rho_{\text{avg}} = 0.99994676895585222`. -------- Summary -------- In this document, we introduced the most fundamental data structures in PySPH for working with particles. With these data structures, PySPH can be used as a library for managing particles for your application. If you are interested in the PySPH framework and want to try out some examples, check out :ref:`tutorial`. .. _Zoltan: http://www.cs.sandia.gov/Zoltan/ .. _RCB: http://www.cs.sandia.gov/Zoltan/ug_html/ug_alg_rcb.html .. _RIB: http://www.cs.sandia.gov/Zoltan/ug_html/ug_alg_rib.html .. 
_HSFC: http://www.cs.sandia.gov/Zoltan/ug_html/ug_alg_hsfc.html .. LocalWords: DomainManager maximum pysph-master/docs/tutorial/000077500000000000000000000000001356347341600163215ustar00rootroot00000000000000pysph-master/docs/tutorial/1_getting_started.ipynb000066400000000000000000000425221356347341600230000ustar00rootroot00000000000000{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# PySPH tutorial: Getting started\n", "\n", "*Prabhu Ramachandran*\n", "\n", "Department of Aerospace Engineering, IIT Bombay\n", "\n", "----\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Installation and getting started\n", "\n", "This is a simple introduction to PySPH. The PySPH documentation is here: http://pysph.readthedocs.io\n", "\n", "1. First install PySPH. See here: http://pysph.readthedocs.io/en/latest/installation.html\n", "2. Go over the installation and getting started page\n", "\n", "\n", "Importantly, once you have PySPH installed run a simple example, as so:\n", "\n", " $ pysph run elliptical_drop\n", " \n", "or\n", "\n", " $ pysph run rigid_body.bouncing_cubes\n", " \n", " \n", "Then view the generated output:\n", "\n", " $ pysph view elliptical_drop_output\n", " \n", " \n", "If this produces a nice looking view, you should be mostly set. It may be handy to be able to run pysph on openmp:\n", "\n", " $ pysph run elliptical_drop --openmp\n", "\n", "If you get this far and everything works, you should be in good shape.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## More on the examples\n", "\n", "\n", "The examples are all written in pure Python. To see the sources for the examples you could either visit the github sources here: https://github.com/pypr/pysph/tree/master/pysph/examples\n", "\n", "Alternatively try this:\n", "\n", " $ pysph run\n", " \n", "Now you can pick among 40 odd examples. 
To see the source of a simple one you can do the following:\n", "\n", "\n", " $ pysph run --cat elliptical_drop\n", " \n", " \n", "This will simply show you the source code without executing it. You could have also run the example by changing directory into the `/pysph/examples` directory and running the example, for example let us do this easily as follows:\n", "\n", "\n", " $ pysph run --cat elliptical_drop > ed.py # This puts the source into ed.py in the current dir.\n", " \n", " $ python ed.py\n", " \n", "**NOTE: ** there is also a `/old_examples` directory which you should not use.\n", " \n", "You can also import the examples from Python and thus could just as well have run this example as:\n", "\n", " $ python -m pysph.examples.elliptical_drop\n", "\n", "\n", "The point I am making is that `pysph run` is not doing anything special at all, it just makes it a tad easier to run the examples. These examples are usually quite useful and can also be subclassed if you wish to reuse them.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Using the PySPH library\n", "\n", "\n", "In order to simulate your own problems you need to understand the anatomy of a PySPH simulation. All the examples typically will do the following:\n", "\n", "1. Create some particles\n", "2. Specify the equations for the inter-particle interactions.\n", "3. Specify the integrators to use.\n", "4. Put these together in an `Application` and run this application.\n", "\n", "\n", "### Creating particles\n", "\n", "\n", "In this tutorial we will mostly explore the creation of particles in PySPH. In PySPH, particles are created in a data structure called a `ParticleArray`. Let us consider an example. Let us say we have a glass of water. Clearly we have two \"things\", a glass vessel and the water inside it. Since we wish to capture the interaction of the water with the vessel, we would create two `ParticleArray`s. 
One for the vessel which we call `\"solid\"` and the other for the water which we call `\"fluid\"`. \n", "\n", "Some important points to note. Each particle array \n", "\n", "- has a name (a string) which should be a valid Python variable name, `\"fluid\"` and `\"solid\"` are good as would be `\"fluid1\"` and `\"fluid2\"`.\n", "\n", "- has a collection of various particle properties, like the position `x, y, z`, velocity components `u, v, w`, other scalar properties `m, h, rho` etc. All of these properties are scalars.\n", "\n", "- has a collection of \"constants\", which can have an arbitrary size but are internally stored as 1D arrays.\n", "\n", "The properties are used for things that typically vary, from particle to particle.\n", "\n", "The constants are used for things that are constant for all the particles.\n", "\n", "Let us now try to create a particle array in order to understand it better.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from __future__ import print_function" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from pysph.base.particle_array import ParticleArray" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa = ParticleArray(name='fluid', x=[0.0, 1.0], y=0, m=0.1)\n", "print(pa.name, pa.x, pa.y, pa.m)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we set the name by a kwarg. \n", "\n", "#### Exercise\n", "\n", "- Try creating a particle array without the name.\n", "- While x was passed as a list, y and m were not, what is going on?\n", "- Does this work with numpy arrays? 
Try it!\n", "- Does it work with numpy arrays of arbitrary shape?\n", "- What if you have arrays passed of different sizes?!\n", "- Can you add a new \"crazy\" property?\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Solution\n", "\n", "- You can create a particle array without a name but DON'T.\n", "- NumPy arrays work and are ravelled, lists and constants work.\n", "- Passing incompatible sizes is a problem and you will get an error.\n", "- You can add any kind of property by passing a suitable kwarg." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa = ParticleArray(name='fluid', x=[0.0, 1.0], y=0, m=0.1, crazy=25)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.crazy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Doing more\n", "\n", "- How do we discover the properties?\n", "- Use `pa.properties`\n", "- What about constants?\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.properties" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So what are the other strange properties? We didn't add `gid, pid and tag`\n", "\n", "So it looks like PySPH automatically adds some special props, what are these? \n", "\n", "- gid: is a global ID for each particle, it is useful in parallel.\n", "- pid: represents the process ID for each particle, also relevant in parallel.\n", "- tag: represents the kind of particle we have.\n", "\n", "The `tag` property is probably the most useful. 
It is representative of the kind of particles, the ones that are important are:\n", "\n", "- Local: see `get_local_tag` below\n", "- Remote: see `get_remote_tag` below\n", "- Ghost: see `get_ghost_tag` below\n", "\n", "\n", "**Questions**\n", "\n", "- What is a DoubleArray, IntArray? \n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.gid" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.pid" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.tag" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from pysph.base.particle_array import get_local_tag, get_ghost_tag, get_remote_tag" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Local:\", get_local_tag())\n", "print(\"Remote:\", get_remote_tag())\n", "print(\"Ghost:\", get_ghost_tag())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Digression CArrays\n", "\n", "Let us answer the question \"What is this DoubleArray stuff?\"\n", "\n", "These are internal arrays that allow us to efficiently store and compute with these properties and have some useful features." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from cyarray.carray import DoubleArray" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a = DoubleArray(5)\n", "x = a.get_npy_array()\n", "x[:] = 100" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a[1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "a.append(203)\n", "a.length" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise\n", "\n", "- Find the default properties.\n", "- Can you create a particle array with no properties in the constructor?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "empty = ParticleArray(name='dummy')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "empty.properties" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding constants\n", "\n", "Add them by passing a dictionary." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa = ParticleArray(name='fluid', x=[0.0, 1.0], constants={'rho0': 1000})\n", "pa.rho0" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.constants" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercises\n", "\n", "- Create different kinds of constants and experiment\n", "- Create a vector or a 2d array. What happens to a 2d array?" 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "pa = ParticleArray(name='f', x=[0.0, 1.0], \n", " constants=\n", " {'a': 1, 'b': [1,2,3], 'c': np.identity(10)})" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.constants" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.c" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Particle array methods\n", "\n", "There are many methods but some more useful than others. Let us explore this\n", "\n", "- `pa.get_number_of_particles()`\n", "\n", "\n", "- `pa.add_constant()`\n", "- `pa.add_property(...)`\n", "- `pa.add_particles()`\n", "\n", "- `pa.extend(n)`\n", "- `pa.extract_particles(...)`\n", "- `pa.remove_particles()`\n", "- `pa.remove_property(prop)`\n", "- `pa.remove_tagged_particles(tag)`\n", "\n", "\n", "The output property arrays is an important attribute. It is what determines what is dumped to disk when you save particle arrays or run simulations.\n", "\n", "- `pa.set_output_arrays(list)`\n", "- `pa.output_property_arrays`\n", "- `pa.add_output_arrays(list)`\n", "\n", "#### Exercise\n", "\n", "- Explore all of the above methods.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.output_property_arrays" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.add_property('x')\n", "pa.x = np.arange(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Can also add properties with strides. 
See the documentation for\n", "`pa.add_property(...)`\n", "\n", "here is an example\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.add_property('A', data=2.0, stride=2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now lets see what this does.\n", "pa.A" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The strided property behaves just like any other so when you add/remove particles the right thing is done.\n", "\n", "The `pa.add_output_arrays` is also important." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa.add_output_arrays(['x'])\n", "pa.output_property_arrays" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "- `pa.get_carray(prop)`: will get you the c array.\n", "- `pa.get(props)`: returns properties.\n", "- `pa.set(**props)`: sets the properties in one go" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The convenient `pysph.base.utils`\n", "\n", "- For many standard problems, one requires a bunch of additional properties.\n", "- Use the `pysph.base.utils` for these." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from pysph.base.utils import get_particle_array" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from pysph.base.utils import get_particle_array_wcsph, get_particle_array_tvf_fluid, get_particle_array_gasd" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pa = get_particle_array_wcsph(name='fluid', x=[1, 2], m=3)\n", "pa.properties" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercises\n", "\n", "- Create particles inside a disk of radius 1.\n", "- Visualize the particle positions.\n", "- Create a WCSPH compatible particle array with these points." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%matplotlib inline\n", "from matplotlib import pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Solution" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%load solutions/particles_in_disk.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.0" } }, "nbformat": 4, "nbformat_minor": 2 } 
pysph-master/docs/tutorial/2_solving_a_problem.ipynb000066400000000000000000000235611356347341600233150ustar00rootroot00000000000000{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "# PySPH Tutorial: Solving a simple problem\n", "\n", "*Prabhu Ramachandran*\n", "\n", "Department of Aerospace Engineering, IIT Bombay\n", "\n", "----------\n", "\n", "\n", "Let us try to solve a little problem: the motion of an elliptical drop of water.\n", "\n", "\n", "Initial conditions:\n", "\n", "- $\\rho = 1.0$\n", "- Particles are inside a circle of $x^2 + y^2 < 1$\n", "- $dx = 0.025, h = 1.3 dx$\n", "- $u = -100 x, v = 100 y $\n", "- Choose $c_s = 1400, dt=5e-6, tf=0.0076$\n", "- Fluid is incompressible.\n", "\n", "The velocity field of the initial condition looks as shown in the picture below:\n", "\n", "\n", "\n", "Let us solve this with the WCSPH scheme.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Solving this with PySPH\n", "\n", "\n", "1. Subclass the `Application` class (`pysph.solver.application.Application`).\n", "2. Add a `create_particles(self)` method which returns a tuple/list of particle arrays.\n", "3. Add a `create_scheme(self)` method which returns a suitably configured \"WCSPH scheme\".\n", "\n", "\n", "### Exercise\n", "\n", "- Starting with the example below, create the particles correctly below. \n", "- Put this in a separate file called `ed.py`. 
\n", "- Remember to set the properties: `u, v, rho, h, m`.\n", "- Don't run this from IPython.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "import numpy as np\n", "\n", "from pysph.solver.application import Application\n", "from pysph.base.utils import get_particle_array\n", "\n", "\n", "class EllipticalDrop(Application):\n", " def create_particles(self):\n", " return []\n", "\n", " \n", "if __name__ == '__main__':\n", " app = EllipticalDrop()\n", " app.run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Solution\n", "\n", "Try solving this yourselves and then compare with the solution below.\n", "\n", "DON'T run this yet!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%load solutions/ed0.py" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Creating the scheme\n", "\n", "- Now that the particles are created, let us create the scheme.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "import numpy as np\n", "\n", "from pysph.solver.application import Application\n", "from pysph.base.utils import get_particle_array\n", "from pysph.sph.scheme import WCSPHScheme # <--- ADDED\n", "\n", "class EllipticalDrop(Application):\n", " def create_particles(self):\n", " # ...\n", " self.scheme.setup_properties([pa]) # <--- ADDED\n", " return pa\n", " \n", " def create_scheme(self):\n", " s = WCSPHScheme(\n", " ['fluid'], [], dim=2, rho0=1.0, c0=1400,\n", " h0=1.3*0.025, hdx=1.3, gamma=7.0, alpha=0.1, beta=0.0\n", " )\n", " dt = 5e-6\n", " tf = 0.0076\n", " s.configure_solver(\n", 
" dt=dt, tf=tf,\n", " )\n", " return s \n", "\n", "\n", " \n", "if __name__ == '__main__':\n", " app = EllipticalDrop()\n", " app.run()" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Exercise\n", "\n", "- Modify your example to add the changes made above.\n", "- Run the example as `python ed.py`\n", "- Visualize the output." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Solution\n", "\n", "Try to get the example running, see the solution below if you are stuck." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%load solutions/ed.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Solution discussion\n", "\n", "Recall that you can run the example as follows (on a terminal):\n", "\n", " > python ed.py\n", " \n", "Try this:\n", "\n", " > python ed.py -h\n", " \n", "You can also run the example to use OpenMP like so:\n", "\n", " > python ed.py --openmp\n", " \n", "With multiple cores this should produce some speed-up.\n", "\n", "The runs will produce output inside the `ed_output` directory.\n", "\n", "- You can use `-d output_dir` to have the output generated in the `output_dir` instead.\n", " \n", " \n", "You can visualize the output using:\n", "\n", " > pysph view ed_output\n", " \n", " \n", "Note that the output directory also contains a very useful log file that is handy when debugging, in this case the file would be `ed_output/ed.log`. \n", "\n", "The output directory contains the data saved at different times in either `*.npz` or `*.hdf5` files depending on your installation. The next chapter explores post-processing a little." 
] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Running the example from inside IPython\n", "\n", "**NOTE:** If you want to run the example on an IPython notebook you may do so, but you will need to create the app instance as follows::\n", "\n", "```\n", "if __name__ == '__main__':\n", " app = EllipticalDrop(fname='ed')\n", "```\n", "\n", "Note the addition of `fname='ed'`. The filename and output directory to pick is determined by the filename and on IPython this is not meaningful, so if you want the output generated in the right directory, you must explicitly pass fname. When run externally from Python, this is automatically determined for you." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Exploring a bit more\n", "\n", "- Learn more about the `WCSPHScheme`\n", "- Learn about the `configure_solver` method and its options." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "from pysph.sph.scheme import WCSPHScheme\n", "WCSPHScheme.configure_solver" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Homework1\n", "\n", "- Go over this tutorial: http://pysph.readthedocs.io/en/latest/tutorial/circular_patch_simple.html\n", "- Familiarize yourself with it" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Advanced exercise\n", "\n", "- Take the above example `ed.py` as a starting point and copy it to `dam_break.py`.\n", "- Modify the `dam_break.py` to solve the 2D dam break problem, as below\n", "- Fluid height = 2\n", "- Fluid width = 1.0\n", "- Container height = 4\n", "- Container width = 4\n", "- Assume g=9.81 m/s\n", "\n", "- Use 2-3 layers of particles for the boundary\n", "\n", "\n", "```\n", " | |\n", " | | \n", " | 1 |\n", " |******* | 4 \n", " |******* |\n", " |******* 2 |\n", " |******* |\n", " ---------------------------\n", " \n", " 4\n", "```\n", " \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 2 } 
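As a starting point for the dam-break geometry described in the advanced exercise above (fluid block 1 wide and 2 high inside a 4 by 4 container, with 2-3 layers of boundary particles), here is a minimal numpy sketch. The spacing `dx` and the exact placement of the three boundary layers are assumed choices, not prescribed by the tutorial:

```python
import numpy as np

dx = 0.05  # particle spacing (an assumed value)

# Fluid block: width 1.0, height 2.0, sitting in the left corner.
xf, yf = np.mgrid[dx / 2:1.0:dx, dx / 2:2.0:dx]
xf, yf = xf.ravel(), yf.ravel()

# Container of width 4 and height 4: a bottom wall plus two side
# walls, built from three concentric layers of boundary particles
# placed just outside the fluid domain.
xs, ys = [], []
for i in range(1, 4):
    off = i * dx
    xrow = np.arange(-off, 4.0 + off, dx)   # bottom layer
    xs.append(xrow)
    ys.append(np.full_like(xrow, -off))
    ycol = np.arange(0.0, 4.0, dx)          # side walls
    xs.append(np.full_like(ycol, -off))
    ys.append(ycol)
    xs.append(np.full_like(ycol, 4.0 + off))
    ys.append(ycol)
xb, yb = np.concatenate(xs), np.concatenate(ys)
```

These `xf, yf` and `xb, yb` arrays would then go into two particle arrays (e.g. `fluid` and `boundary`) in `create_particles`, with gravity handled by the scheme or the equations.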
pysph-master/docs/tutorial/3_simple_post_processing.ipynb000066400000000000000000000123611356347341600244030ustar00rootroot00000000000000{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "# PySPH tutorial: Simple post-processing\n", "\n", "*Prabhu Ramachandran*\n", "\n", "IIT Bombay\n", "\n", "----" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "In this section we look at a few simple ways to manually post-process the generated output and also other ways of visualizing the data.\n", "\n", "## Loading data\n", "\n", "We look at the following:\n", "\n", "- Reading the data files\n", "- Simple plots\n", "\n", "To load a data file try the following" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "from __future__ import print_function\n", "from pysph.solver.utils import load" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "data = load('ed_output/ed_1000.hdf5')\n", "print(list(data.keys()))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "data['solver_data']" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "list(data['arrays'].keys())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "f = data['arrays']['fluid']\n", "type(f) # This is the particle array." 
] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Exercise\n", "\n", "- Plot the particle data using matplotlib" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "%matplotlib notebook\n", "from matplotlib import pyplot as plt" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Solution\n", "\n", "Note that the arrays are particle arrays and you can do the usual processing with them." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "%load solutions/plot_pa.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Viewing the data in IPython notebooks\n", "\n", "A convenient viewer is available that makes it very easy to view data in a notebook\n", "\n", "- Viewer2D requires the `ipywidgets` package and matplotlib.\n", "- Viewer3D requires `ipywidgets` and `ipyvolume`\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "%matplotlib notebook\n", "from pysph.tools.ipy_viewer import Viewer2D\n", "viewer = Viewer2D('ed_output')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], 
"source": [ "viewer.interactive_plot()" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Homework\n", "\n", "- Go over this tutorial: http://pysph.readthedocs.io/en/latest/tutorial/circular_patch_simple.html\n", "- Familiarize yourself with it" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 2 } pysph-master/docs/tutorial/4_without_schemes.ipynb000066400000000000000000000242101356347341600230200ustar00rootroot00000000000000{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "# PySPH Tutorial: without built-in schemes\n", "\n", "*Prabhu Ramachandran*\n", "\n", "-----\n", "\n", "## Recap\n", "\n", "- Looked at creating a simple problem\n", "- Saw how to use schemes and solve the elliptical drop example.\n", "- Looked at simple post-processing of the data.\n" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Schemes provide convenient pre-built particle interactions.\n", "\n", "So how do you do your own thing without schemes.\n", "\n", "Let us now do it in a more low-level way. 
To do this one needs the following methods:\n", "\n", "- `create_particles(self)`\n", "- `create_equations(self)`\n", "- `create_solver(self)`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "import numpy as np\n", "\n", "from pysph.solver.application import Application\n", "from pysph.base.utils import get_particle_array_wcsph\n", "\n", "class EllipticalDrop(Application):\n", " def create_particles(self):\n", " # ...\n", " pass\n", " \n", " def create_equations(self):\n", " pass\n", " \n", " def create_solver(self):\n", " pass\n", "\n", " \n", "if __name__ == '__main__':\n", " app = EllipticalDrop()\n", " app.run()" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true, "deletable": true, "editable": true }, "source": [ "### Exercise\n", "\n", "- Create an `ed_no_scheme.py` which uses this skeleton, copy over your completed `create_particles` method\n", "- Don't execute the example yet.\n", "\n", "Now, let us flesh out the other methods. 
Add this code:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "deletable": true, "editable": true }, "outputs": [], "source": [ "from pysph.sph.equation import Group\n", "from pysph.sph.basic_equations import XSPHCorrection, ContinuityEquation\n", "from pysph.sph.wc.basic import TaitEOS, MomentumEquation\n", "\n", "class EllipticalDrop(Application):\n", " def create_equations(self):\n", " equations = [\n", " Group(\n", " equations=[\n", " TaitEOS(dest='fluid', sources=None, rho0=1.0,\n", " c0=1400, gamma=7.0),\n", " ],\n", " real=False\n", " ),\n", "\n", " Group(\n", " equations=[\n", " ContinuityEquation(dest='fluid', sources=['fluid']),\n", "\n", " MomentumEquation(dest='fluid', sources=['fluid'],\n", " alpha=0.1, beta=0.0, c0=1400),\n", "\n", " XSPHCorrection(dest='fluid', sources=['fluid']),\n", " ]\n", " ),\n", " ]\n", " return equations" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "- Note that the `Group` allows one to group a set of equations together.\n", "- All equations in a group are completed first before going on to the next group.\n", "- Explore the different equations.\n", "- Note that we return a list of equations.\n", "\n", "\n", "Next we create the solver:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "from pysph.base.kernels import CubicSpline\n", "\n", "from pysph.solver.application import Application\n", "from pysph.solver.solver import Solver\n", "from pysph.sph.integrator import PECIntegrator\n", "from pysph.sph.integrator_step import WCSPHStep\n", "\n", "class EllipticalDrop(Application):\n", " def create_solver(self):\n", " kernel = CubicSpline(dim=2)\n", "\n", " # Note that fluid is the name 
of the fluid particle array.\n", " integrator = PECIntegrator(fluid=WCSPHStep())\n", "\n", " dt = 5e-6\n", " tf = 0.0076\n", " solver = Solver(\n", " kernel=kernel, dim=2, integrator=integrator,\n", " dt=dt, tf=tf\n", " )\n", " return solver" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "Let us understand the above a bit." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "from pysph.sph.integrator import EPECIntegrator" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "EPECIntegrator??" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "#### Exercise\n", "\n", "- Now put everything together and get it working." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Solution\n", "\n", "If you are stuck, look at the solution below." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [ "%load solutions/ed_no_scheme.py" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Understanding equations\n", "\n", "Let us understand equations a little more.\n", "\n", "- Look at the `TaitEOS` class source\n", "- Look at the `ContinuityEquation` class source.\n", "\n", "- Understand the terms \"source\" and \"destination\"\n", "\n", "Note the following:\n", "\n", "- The methods `initialize`, `loop`, `post_loop` are called per-particle.\n", " - `initialize`: iterates over the destination particles\n", " - `loop`: iterates over the neighbors for each destination.\n", " - `post_loop`: iterates over the destination.\n", " \n", "- `d_*` refers to destination props\n", "- `s_*` refers to source props\n", "- `d_idx`, `s_idx`, refer to particle indices.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Understanding integrators and steppers\n", "\n", "\n", "- Look at `PECIntegrator`\n", "- Look at `WCSPHStep`\n", "- Look at the `Solver` options.\n", "\n", "\n", "- Some useful solver options:\n", "\n", " - kernel\n", " - integrator\n", " - dt, tf\n", " - adaptive_timestep: bool\n", " - cfl: float\n", " - n_damp: int\n", " - output_at_times: list\n", "\n", "- API docs: http://pysph.readthedocs.io/en/latest/reference/" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, 
"metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "### Doing more\n", "\n", "Many other options in the `Application` class.\n", "\n", "- `add_user_options`: this is used to create additional user-defined command line arguments. The command line options are available in `self.options` and can be used in the other methods.\n", "- `consume_user_options`: this is called after the command line arguments are parsed, and can optionally be used to set up any variables that have been added by the user in `add_user_options`. Note that this method is called before the particles, solver, etc. are created.\n", "- `create_domain`: this is used when a periodic domain is needed.\n", "- `create_inlet_outlet`: override this to return any inlet and outlet objects. See the `pysph.sph.simple_inlet_outlet` module.\n", "\n", "See here:\n", "\n", "http://pysph.readthedocs.io/en/latest/reference/application.html#pysph.solver.application.Application\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "deletable": true, "editable": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 2 }
pysph-master/docs/tutorial/images/elliptical_drop_ic.png (binary PNG: velocity field of the elliptical drop initial condition; image data omitted)
nJ=ݔz`&l{=Q`xI) h r.ʝͪ:]#RL9E;%8/:O`0.`0@|3>S6}pMp)Pye0|a4z#譖H8\`00^own=NR㤾:. 1._5]i+ "H!DԸ c q.([lIֈ6ݸXaՒCD}s-*`\  ZZM v |MM"5 7F =;2Pj""\ =(an%k&FFl, CGLݠpĬ\RX&e,arpí> 6)Yt!v%aǽO"yAKEA;ޗND!|,S c G j 4vPtLzMH>_8a?Fg;`c3 Lҋ}f0pduByŅ_|>x7RcE#wt%4 C c GZ[ϊ v`l3a﵎ߗѿ=dApb$:PI}'zIb|C65c/CÐE\Kc ˿o/$=uU 0+Rwy1ru*?&0 Y;`8, e5KFw#38!^1_Ec5mT{C.m=x+5& B npau]2q~tnYD1`2bgk|o^rHgT܈/]cEfX3\\d;[6hs)w9njl `vU[ KvX`,ȳLpV-.9O`8~M`88cueNK0+N{N4;>\~:IL } u3 /]4;z|fK-֥l<]B϶; W>`=]QN\R*e7ubbC{pz1kbR:fυI{XwE5Yˆ8:O1g|Lδ[ǪtPy]? )7vx 6lX mB*vuUuG>j@4)3PUmK]MI-4;`b?c=L 8 6w\FW钁Q-QewӈnI?~Owt+fﬕV= #Řcc YUwF$gtn. gz;q+GuH.,u2BԇI'/qN/ ڝ 40R4 `jm |kI~\Z^_/uhˑOQ#"% Eu'*rIA8L_Rc5|{dNX\ﭴb56VV6q pj*yi?xm|;;:;;4=V{d~YZI]"cSW۪lğ*t DQ E ߒE U`d[ ,U #fܧyoi00vBHT~<`!{X`%#ۂv^4RuaDŽko'3̸>'vcD\W w.,NNQeeb`)$xe<7P.Uj(we 9Q' XhqEɤ ^;p]{{yFf 5_f(+yw&F~S`QRRu(γ9WLݟ5)ve;A'W:d NR%~$_99S0^'< -]#j01vjwiHvmnxWE{5i}2Di?6 ',wxKyȭ QQs*I举W83`^7 -0bR$x[V2H 4T$["0׃:E7=c<]An!5ysM?AwV)vS~k.dC`'_iʼj| WIzq mήV_URT7xsMS\ ~b::Ga3b@hx|'GB9%xs]ueV h,Saw\uA\I>kU~e07`8 {,B 3Vo8پ]ULXp!fv_a4$AY`ͅ%vfhP-9fWGQ!li<qѯtbjG> 8ՁH£.\0jՒKm5c ^;p~?Ǚ|2| L Yw_rPXxA7Û׹ԖOK#&!VKz y P|E+YKwR/}䮑<3uVyXvLi2 (SB'&BJ9 0-2S(|֒`^!81ܯ%^kW؎jb:dNaP Թ7>dh+(?s:UIXs\Yz[S A*ok<^jU\E+X  W.+5r .Ւu,٭XLb=m~;ڶ)U^"Đ&<W"r[ 'fлRj} Kδvq]gY@PhpG+kOd xCG.U-kJ%c_&u(Ëc^m -sM :?[(x0 G&A81Tg&KO(L`nIVLm`HbLrz6/h9U&Zlcs+vڒYu5N{ =-]Qv ~5_^+DE;TaŇ"~NF Bق;(Va(h7r/p0v᥏XhЛ̱(ƭ"[_% D5Eɛ\} %tBj?;ʩ<S09z *.yx[&֙`Nq'9BAI0™6':\*?9nzjḵ.? !j}g;skac"%\Uc1_YwYUu.L`L  /qBg/";oU/@m;`|܃HPDmʧ3aХQ!;b[r${%Ӓ~a@&*O㍑4 OLDEδ~9pF@Ir%/Uh |X'0;' ̷Tx$̵IcP]rދ.( Mul@LU^jø+wCzv%`սA\?sPr;CmZ>vzg*dn56 ,1?8'%l:y,p 30ttWuZ=TF_ aPT1vH[^t n8\o]z|x#f4z? 
&(g.UpL43:2r;DB OӺ}T3?Ag2+z{(aKp!}>CPthr@GWDBZW`h %%.ȃTheI $ݑ|L5_$Hp:`Ý.$s,>Wo\3$/Op48gc>E/h3&@- t=9`fΪ2uh@l}0 -3ݑ?S)3>IHapq `rE?xivۅ>V,Mꬫse ]ҧ B@K k3vV /5>vK ͗r,w[ 8 /2Dž{ kr.'B} wRR pK.cbY_ONQ)#i2]S,v.b}‡*-zeۘQ͂-r XƂKt;̲!HP 4+#bE!]drvAՓ2Wa?̒(G$Â4cR9wLb?k} J=2bp`<)WF?2Sk O*<%G)6,jވ3̚hUiײ)PဿYcw|Cn$Crq`8;!a~3n',>sMB'Jzք#"Tp0W.pB/0%G,zaTr,<}З!N2E |.>љ; G,c¥BeK<Yvb<Z(ip\No:h3u ,Aбjv`U\Z\Fy|A)>A*3[6cvI,|;gqҹZO<6pO "?,/) b{D5V8v(ipQgT*zs[<0`{~VC&뺑C&]!w ,ќkqϠI0<;g[̰$N΀}jK'X< @݂eF^`$Ls $8ΛLBOVaAߎD|ݭvOj/ $tVmʐSߟ4:ғ=]TG'3:bKזu%pl\Q)#y{j tlz^~Ր-> zd}:<Apc͔vDMEZ:[ˌg qV@UB/,S+5VM&\%3vQ/ZEb];s|H  B`~KNQ;Qӯ3|KJgd353u,m ϳ:2pjfr.˂BrrKL*?Ah}//“k\KQ0i]z81"ə3#^P&2/&̮00T$[2Z~ljN!|6ACu@5P[}22Sif l&T#yES5m>Wt ;/Qm\cZ')4F C@=V[\+@_g-"(q,\':9G@,+pM;JKDJab:R: ԰%D)$eYr_O 8'٭z̲s\n f Ce˦~L%|/4Rr<1%9qO8Cu^5b{aQvk7֘m'xwgAq'vfĹW@ye\q3w1$EDNƥP볓3V+}@P-`A%ont |xZ!`EYQvf0 G9 m ?K:ZԥM7llj2bNbbjM<I`ŭ%p'{K]QEuWWc 8Kj8<3\]^!ZEq.fm8Sr ϡ9lh]aæ3;lqHSWy8$ؑcG3cvr.]q<ӓklui>u961\,]1,LO*,pXI e?{ol`FQ0 6p00w FM̮0MjFmsySj vtCP/wbfK eKz, U`J>v<L'L$6tR;2_$y ;]"]2@ ̤ {D5ϸ3%tŠ Cs-EX! 0.\PY?b7s`9m0=PKo3'*@p?]^^B:i:+# DH2'}9 # \k7~HHIJB1͌5xZ.}HVhZ;)]t`}^&GM  GӤ~?n+qg KV@{xCsb0 #g&)>G aR Y*]BgSցN+ЙDP Lcmn`FUO Nym5?@t9<>(x h!9`KELDKt@'x<՟RE\=p'e ^iIt"pqHcr>c00va\P`ms50^73 CLOyZ`l+:G7JUzi|W0 [jxg0J*0N pv@}1f:gU~?!~WT@POѧVe(7' y1i&aoiKt0ri3DK'TU+QsW?S-7D`>x'f͗'$K;.)xu)W%*]PpTW.Ie_䀍\V5ݎ lŰV9 .f>|*ٯFzxds$9m2V^,rd9fT;uU42aq{'y)dZV.9^~GΝ G>ikCcFlv^)EF̶k/fLg^y,o%O5|it'Cl<{ ĊO3sOddFVR <Ӥe'(Iʓ|&C&IFǍ5 .'L0 l8r 3}EfͦyUhA 4̨0#n[ޙmX0f/|/2zU>(|'LI^LUp2`38`e> 9c AHǧuZA0y ;yewxZ]pk>+;70dG&Yli\%!!ӗWOĂ+r,v4UW &9Oi怖.v,Ev% 9 ; ta j\`V9`l)^KMN TGݒkp90pLgU&:;{|֖Pdu >+ !  3z_՘fgUyQ 5=w|kCwzorC;8kgL, $kaMmGw':?&Ep|cX#(t?䰴A8C0ߊu_E&tD 4[ Ц@ ʏ wYdg # ǖ|PЩ~rH09Ȉ+#mnhT:TD.t|vHh.c۫0֥<@7 $Z20\שg2F@ȣ9ά .e7<E=К2<] XM+kB8+5moEc˃)pcC7T >U,./p (h(-Ҏ0"Й<N&`V%YeTW RAArMT.H% \kIu>Gג%* ?hcqAP6̱c龁窱Qj P{ zt<S3eJE_hSەtҙnpGyUX% }Q5@k^hWߘ`0xΎ ԓv 5:3$[J]y'l'Mڤ \굎Z<]Έ4͂H DðڕΈ*в %0t!_rpu>49$Ѹee>ЪTܐC. 
HAONn!G6l3E ^#b([9I0JxPi9wYn(w=!P|=^H;!vǟ!Pu d)~+Y'HLXX56ysD =|72~mM(>Zue>WՖz+yvK4 +0tC?p,pvTD>l ILs] u&Y{b}z-?s:gRUfA09`l96쒳IpGN,G@u:4$VL蒐NPLuҌ)fd Y([#(4Ҥ]aqj:t [.%"C~OsoY~4[δX]R;קrrq 9f10 Zl|Ad1 fLJ<x~Q;LGᾴ8fʖGFu \}aEg%kL1eR68Y2kӪ#[P Ń2o?綮s &G6!ұ`(6ކT A 4Ӫl}n as3{m$C2]P5X+= Tצ@X2q/ ($igpZd8nkqr \ ^ZlwdB̈CBv QW6]΀.XouQ NpYo$m~C <EUϳD&{mrhᩂSM{`t%Ҝ,UŠD> 7<$y@r} dU1, ~v`YB_pg/J@%2dž9E#y.0Ȏ:+LYYb̳tﹰ`S `mZ}g_ +:L"|WH8jcrdRU߅$U[M 2CU [D{sGq3 \cmyA |ҽz MQAE㗘]6B &n~K}: |6`AA}2 $.nvy `Hq%%0r;]݄$l<"~\cD` mWGV@#6) '?j Hoן !T*0&"I@&V*>aI4U̅~NzYlƕ["KPL6 V , "l<c(6Cp.Hֹ'7!:@)t7"j@A渣Y{*mrIڬ cc"gqb'985N65!X\mG6wz>=$d`Лgzc<[ud;feVfF n2 PDTvRVQ[a#A8vh%!`W|Xsˮ[{mz kVax`2mNÉ] q~͆с'yNoӦ@("MkC:)p`CJo/#dX&Z\gLfYTdĆ;zO@t>ac(j7  :kW-u {d)gNH'S~DdIЎDZ)Gb~o.M7p 6V 5 H|/2Y_#n["ĩE]Gض50 nYY)t MA=3G ,83`\"]S+` pJ+خ=,PCFWT!_nITޘd_|ߦprX]BGdRu4.mq!H0y)56H0Pvnf0\}Ӌ/,: `{|!bfIFGXUIm.*ǚ )9EJ,,'QL!IhŦ&*9\c̵&T+ xg'zs00 p~N@tETi+l^gX$،>4Wd_0bB+f4%>,/fa(T3AmfYg@ Rk탌{F]k;@t!PU]kIv4历%wt,&7\I\|ܒV#g+RJ7kE ֊ߐ<|)WJ)7#7#Pf7rwC_VȪ܎܎<QFbl/fHVdEJ)Z{dPrEJ9\Õ1{kr Yp-6dwUʽ5,򹢔Rn/&>nUľ`OGF >Knϑ?GދjȇߐJ)ELUJ)=yoZ򛖔R~:u';#?SMd< ٲ|&B5fxib5 ,btp=j_:@jSS]MSʼK!_`QK}zJ)˸+/mHb_``p5ꢆvL_<{ ,&ĕ}95vhbp/>HDlT9`ٴnpY?U70㭞%@]Ʈ'̦1|a)elb9`y,1]31>aab",S̬u K cڸ2m}M;Xf;V:.>Q$%؏Y/j*#ϭ% EF}>Z,sIAK^:/Cp a @#`̱ZF4킟9-#@;m@0$xF8.̫X@X[bF.`R9:T{nF]mC 8ƪUhP!$TH@= J({w`X襔#$s*6Gӌݔ"{Tm2Vg:IRgEŌB lVc]jD/ }4ݽ5vY'Cp`i`XZ`ŭbvF~צYܞ$3& IDATNIU*JG@nÂ"ϐnDTj 6PܱY^K2L(CCp3qE9`g798ތ6Z2;1GQ(}麋5$U uUKEM x4c5읱0Dal LͪX` LseXGzߋ =7D4D+ 11{ /ӸAV>#?j&ķJ [E ;+Bl?ɋ]L+wq(4K*Z7ئp8qw=j`8N4- :h; Bv["^] "6رI} = 2{;y=#}u:33VLjT`A}-`}S -I4(!"8vK=X赜;<)p!P5Z$Yxy&YA=vfZz=,'; 4Uta^+z $y_tW.X=gީ(ʜ6rE%nâJޯ=Mέ1w4]FHp2$ZIJpVpQ.%[mˣAJ5.x P,\.N-> Ls t(N2Z`A, AaE3@V*3}F?g,1PQ֡V@^k rX=O5 C.X;G8 zԲF0/@3`͔ɹa^v9v$g馛9^7&TWydWw10 NvIEn&[_~ /kMmW%7dMSƮf+QS-~t^f[ߍOy]'M]&0  .]HU^~Ynik7Xr[C~uKJ)nې;Hj!FލR’ZTIroj"*r]%kl'#j2R!Enb°lr_O@MF)Žj=)6ɶHްd8~9sqFn~deWj`&~"k%e5zid]2,kòEeAY+n`$ȡJUSXKI\WVn$%[A5 ѢRҒ5k,FC>[RIJ_Yŏٚ2=?1QrnTmG*tWC~=OC [-# ȵU*c1*lr!B>grr r rWU =7@>=`txbա< {p%rp8?I심w~x!ޅ\ 
&t!_:{94㩪~KdKSSP-$ҠԂ e1G!(b*‚R2)R_H9OC8JwX-' 8RBLT7!33L )p/{34,^1/$]nɘcC\#m}&VAL/sNi7af :Mfm$}/d~VFxis!0LƇG1Z#i-$,,`;TЁ&j4KEc`"1sT<_3l&Q γD;p @;EU5mD%I h8#64WpBax&9&]PLV KLdAd&60{Ÿ9l49;i_>&{=!Rܓت,c6G6ɶ$BImO"\/e&d@/̸kO?G n9\^cLuD˯ d 5{LZs\jz-Mn@,dzj.(˿ *} e0K` 7n^fc) d TY#Udlä` Fƒ5l.<㎟vH_ 5{R>~]KȰ).Hm4u/_b?J4Bxy}:(cmm\ӱe[^캃<̎GQ}<^yAPu+Ř ʗZه'-- gܫE{= l!dVOֆ% Tf߼c!-dn }z*7Ms $%ָ݉WD !$B*&ͨa&gs 6kw!+ ` bZ$2F&GS h_&CA?՛BV RE PݥϽӎ_2Wa%y)#&36;НtW=1@ l*com*Ի_R`)RpLߌVx@|iT1sQ@;Ed 0 j#hTt+ї8WiN5 ʆR A 8l[O(el*ڎ^\n]:0pk Ugl ʯ0DDZ :"^ Hx}|(ݦSYhįVftpeks h *rU.㖓@+(p<(23#4LyM)ʢ^ a_zAPYȋ!tuw#~v2e{?MHFNÈtY\K`(|Y"r'b(\`!|J, R๴x2ȊeJz,J)1^O9ك^3)锯)qqS 4k}#Y>5_E";LQf-6QV۵Msn+9$LGܐ i2L;q~kQ1?Dk6)ɷ;& ,z4^58n7ck#% y*{<u#N0J١Yym*8>2c֑%pD; `.?OQ'\4HOFZ?xtp5j VsC(ū@(+ LlM_+I0<xPYߜ(|#R ЀHQj G $Ŗ$?)Ǜ4u+ؠsk(W /R{-zunrZc9+&_`0ic%{8mv$й,4gw`{( yZ&rX1 *]PR&> `@ENl˹|<.j ʜIk$2Ց:R+گ d>%Ews'ODl ^\kvdªa.Wf2S}ڛIzú?2#brĪꁆ]&ɾK̚mQ$V2KW F)6,!/tWPq)n8 pekxVvw]Cb0p 6pa;COOW܏!h@w?̈́13z"pScCrM ;s eP"'Ib9 |W|fo՜ORc I49ӨPlcOY$ ķz( }OGCܥ !] , š(iqbo:mpX+Q0vq{2+2٪\hΜ( &NnIYK.3[d$\BQQ2{P>.ӂEeDFC)F%@  FBrp+B!i{S!#B/L؞!vŎؓ#7UYlx!6莁Ae$E7> -pv Ɇ/ܫkȱ ^SbAG]CU636; '@WGH@ O8 c|6. 6NOeyliV;Սd{@zs$qXظ!, 1*4K8C0CeEzĚ w9fp^vֱІ':BxW2f[NՊand ^98|! 
:TuDre&mҫa]%AR.tlQ Z?(Uriܵg2:H=1 :9~elf\pa: GLC5kXU%ybeRF-p+Hz|42]˃=]N3XfY+$߇9Պ⪘qvb(m9yf)* 7%Rz_/'q3b[^EWC6/d.uב>}I]Z HэMJ54370WaCIV Q15E= ۤ[<DživbC<>jsY'~5Pl`3r ^BA3%P.;)A Ex>:Eby8#y,~O Ɗ1)QY`+cz]%mAEjN1qT|^E0+ʍ'ű[d(x\x^ Q>F z"Ss8O1_?L:`\EjB;z#BIS|!`y@fEY1#9 3B!qE,s -'GG7/~=x x>/Lx2-JW=-_]<Pϛg;g!{EK|]q^rAK&Y,OvV-zeڌ3Z69cf,V8q\6\so@g,P@iD;9 z`EwWЊވ:)lY -,v͡ f8'J05h@h/$<_X"U3DZC IDATH^|sIoVlE^ٴrghmyPpj]<^Hكlv4_ZE\:OiR˔7EkQ \ lm=vj΃i֙?{ݭ \W(oK3A0fX2BPRW09tD,FpGiq O%l'.2Y ,v;4O` 4s]~՝oQ<.ODUdpT80\Ӎbj !XKsyYh=4(& p2V`d)@&M!/UG'lAm&oSX >g$\b8 _M&j&Q4uنlX|!c1,E?&9`ѣWǏdX:E=9F-g8#uE>1kմ Qe =9ڼrɎ TE8i;+ І6+ !zD$ c¬g BTt&!}f9)5g8`6P* ia4(&0_e jxؙ0o1_53&7-™]?: 9b9U@gH$+9RK&ĕ'&/+MpRD@$: 6}=:[W#pͶJpdkҜʺC;R^}y5+j~9;lǜQn kTqJ@w(l{h~h=hddjx\ԯIgߔ:D#݃aHԛΡh].RV#RC8_6$BjD:fCS3G` ű,gp% ]< ՎKAo0S ~Zcq5Ob:lc sWײK+Rb Q| O8jF38'RLSgF6^]jfX2A4mFJvis#ZAC+!Њ@a =|#܈}1'o&kɭ!U#H0۰ͱC;![({H-tuWRLS H?Ð֑^GqD/'"\WH/;x]I4,+"_ }Ӻ\]xaLUWs {G,'g[vu~Rx6ץy7M^n:a'\?@+3Mn1[Vh &ԹJB[ע 49#i 8l mnS~fGlj~`2 I B"@) AVhR ++ӖXM"@@^# rAFCIl#V\ 0Bn> yt@~Z @teQ;j"I.CU8FT`I&q[)Cb:׹Aß8}Ic`xnEoc*Kh!G)*RV Jvb>:Ro I ѷ L8p!:I eSZnDYF v#NF<NP(bO !3 X[`$= &*i6Ҝhsiї {9d\ ixEp+ޫѤ#ZDDGsld}jq"`k(ܩ;"`v 9+Җ`f`4 nEY2@r{=ɠƀ9Zѥ)1 T[ݛ? 6x!a,1ǩR$"֋X$ sCq|k>}Xm;c*Q z83dBB/$Hw /qF,A c={3912,; `ƈǤE%>#QWVFOS).|]tI'9 `9gJiM0:>{lp,("e?$xlRītcA|<uN G7 "]Rh-azLхzmY4,&}C }M>Ԑ=$(5 .|NH"l%ͳ;NT79hꀳ6u|׹]8$ Q,)%Ki#.XY*@S9*tez5a ޡAz1 nSn%#^)1DV|#`翄eR7E5~wQ &2b_F;c|,! ğ|(nCq99z*m9Gt;8" `V}<)0i;#R HMn)r.Q2$'M䃦c]q<:GȒ+9u\ +.@Jk皯H@]&mlS#%'A#%sS*`e:8cj4]9 ڌv{=j(+[N rJ-?:vCra?Z*: /S*,౳>X,prv#Oc @"]ED-$pR4AAvvN2 h:6yMMZ :,XA)0a܈V#,ip'nC/q-El5ٞcf`M4_ɰpf`hV8hꀳpk/$mrem0_Jv[6V+ɓL<v͜c Р50cr\c**!1KP ,`W z氧ZѻB*k1 (11$Cn6C1\Wo1 
    VA<8p DCc pO49m4uF 0meeAF*m;_$ 4$^$ i^58dAfs\cb:mtI^YͿ`SlJvSn\7g ^C:4"}ʮd!Fe 4%O} ˁA(Wߒ <ʿ$E'g05Sʆװ8EY_Gx^p'jIg7R9u, szt*^ٴ[Q-5*\HPZzu[Nn^g"T`jklrZhꀳ ^ܑxgxRCl\R:\^;u64>j 6 \NB9Ak(ܲu3=%yk{ݫ|7ؿ֓P؏\NuETJt=߲T>fGT-Q}xK8xu‘kQ5HT)S2H%g8h;@ge`a{-i>]bG j/`Nq?>Lrgr ݈s}&Ve:[2OKg]=0[X:7ph,>*p܋$w׾~S D=hjs!B#-jR"=>q0xh4>Q 1VUD*pZ^; ǗQ&lUZ 4b֠5xR`#Ӫ;Sm6.N6dd8bS#8-).raXqot#hrg6In`0 e} P ͤ.;sN@@vFl+򜟋0h}#R.; 20mf#HNr p&0C]]${pJ.Y3^~'3ǰ~aT]5RTE]Kg|л"._৘Xwsx-(e1˛qTdYCXKI G) ;JDx>Xq4hY79m4ljtVt v2( Bl ؐR/ Y.ؠ\ :}$q7&GYo@LboWߌ~Hb9¥| %TþdN}9ǰ2ʩ{bchd;ǝ;pS=f0JdatZR757X?<-mCqCly m IDAT6#vMNMpq&pSdgY`s8 V6R|Ԯs+Ρ`Trac9ƇX]݂DW*0l*ϓ6ӋPA^7@_2*7Ap%b-Er9Y cYAp|p5ؗPbXq5jk(ayR XKqKJnr}W:RCVdTG tiՙ3_62cfcqǣI   9\a@lո``;J494ut8f27e(N,,/XTX<4jpQ0i3YaSg ؙ\duG ~cvD#: p E,4Kƴ}8,18D}Ø5|i nې%`>_cGcu<lnL<@IC:S>R{r9qWHD{c;/aTX1ILc/XEvYm;0:0d,*F,BGn1h1Um3U7AXc}`UME5flߜs n\[Y'(4) [UdPv<1 64 Ng ^ :;JKDOFH3nPzJ/KFvfTS\YihГ7W\iIm!( >Bf A~µ$_&ǘj.rj .֘hQ* wGcI`;B6$$s~jd Hގ p̓m݇ЀOPtL5g;ܷ-}M,HoqokH1 iS D+H.f)J@2Ci;h'O sTfKO-u@NdђX)YvWx: ^5ū1hXs7<:So^"0subdrFJ~t=)ɻbQԯ(5_2 (ΐXCaRxLBgTITkZ1de:o<,LQI794/72C# N1 Lҧsq xit|#8İ`" I~[Eo#=2м PIUAu)+VC ,A2q:v"UsWq' +Ab췆 n#6p#\V9Vc^Xoɭ (e\/fik> /Z'AuK X=";AĹ"+Hb $ZIήt#>¡C-: @횱Hfl9GPJ [9g9^A.PSW@6 O M2H TkmcHْ$+뼿D٢_c^cҥs]K' :3O*sc >Ba5w=%=jݘ,lXM  Z dp#=?%nKlGKc2̰n倃сPߋ1$,O$ b1!ד4pbzeCyzMwHPQ&8pk%\@;)`A`9%@Pg__v|hXU@|BIx&ɰ;my#(& G ,>q2F.K4 O5t)X; ƋfaH*p[6>fg2Z9ƆNOK2<rn #C4 {I.C?$* [ةh(Aѿ;3XFn8=2J .T0.D) M j}C,j (309 XŜ&Nf{BfThӚO82@GGuz -M`F!%OĻrQR& ӴvviqHpc #)~%V&]Ċ1/L9CIt(:MX0cs}iliV*X="V%HH܇م܏Ѹ8ش PElz`wנg2{Ö& K#sE[d-p8BnL_z]Z[bhن v8N26S N({[aMKx)ЙipG KѓQ|;2'.}9~'O&'=ളĺDlZU3B(?ܘ͋%3ٯ\^0O=3pc).`pi%,|6'2̹︢HU d }y̲:d5H ֓~◍"AVo$iaFRM$ &hy+u3"o<5o,H`s(wAwF >x2\u<Po&r Bb(E걝N o$\Z dPH_S>>gF&Qh%%]Uj5$qJ`Y5i?qwީko-Ve {<2/TO5hx %8|}7ex{RKMOS 0%5\W5]}Lpkx &,TZaA?Ңo TPYhӴ&B z.0=!a^cd %qʏY=I+ 8@"~cy5u@} 䒏xj?SGl&yR@c|qru&I&䚐ts(`౫".ݣ"(YQЯv  "!r+@W'N&3f2ǧ9|G՟~WuLU}^W^bսAOf̽d@Mi²wULKO)8th)0JRBu.ɕ+],+}ĦodIci)Mpɇ`Ou=25u%^<( ,U2/tH&7q"#STzD%mT*{(PSw^|. 
օz*kzm:YPv)P.0#]QٵfcS9q Y|-^ո)gQ]tk|: R`_Edžyf gFY!)6^N/:)1^7pU܋8!.\eMod1_9'.'G+*g;81 0]qh"]zi)op6w0Ey)F\O { bWh5R_"`'zfc. L:6U[&ІjCEy}i~'$).OwJTHog?z#F~] \`q,P}M^'p*E փ>pik? +'N1}(V՝I g~iȅ 6&ؗaTG[y"RKb' :I,dzglO g,+$! ^'| m%"=+_Dt)Y^v]V'O[H:&(Y],GP>% OdoVA+9lN33HOJ6'%[".Yd َP""b+VPna('}(k>-$I+<'L~C:ml'³)) ӄ^'gX LH^ss$̄"UANCN+N9ƿTʿ~GN p%K!+HNR9IFMcIag%C)sX԰QI ¨d+}u Jy\ d +[RY(Z1n!|#|lecċD]Tb#dFZbWK,6H4ߐؙ kJ_c=:@9-&,}\QW'":M ٲ iS]$֪Ɋ/ESKR Q/Q&j_@d (zYdw*V_Zҟ'C$d{ZDi&&Qzb~ƼɔY;&Ez2-1o+/j6~/xN)1Q8.62NL'l sܢ"^Jr<&=TȳfC'LU 8&&SوK_5Ak.8 & $|eKZ(ø{ZJvދȶ#f%8?;qo&|go/~lq;p"e'چ&~o%W~\6pY$A:cp|*Yq`%GsDX@5P20nn0C[շГgm* NQaBTP`sJeY1Ày} 0d[&tnչzlb% pſFؒ8NALxw: /m5)( ,/Q)I tq-r!&_ďwU LVKJ,aX#4y[I!]ͨ'.l .rމ O/tsQ?BkD&bP'Ÿ,>4x[A')]~Vr|:@&y_U-]iq:J1>k :WدHqiM\E[M8S1ZɄE'8+ś͞vASg]ެi ,V4Û`%S`/"।'FJD(C~}t S˲q1j"0BhQ`= DGf͋`""_f4R ðKn.2x#S8dt( ]n{ jVID{fCW  0urjLEϩ!3:'mt`o3^K d}_ϡk6W)T1'`S~"pzQ5Cs,N6Y[mVH lfUSM=`~f O.|jЈxu8iFxN@qZ|%"X{OL.+NN UqUz s`z TN@A콾@M'Z@>!ڌZ݋=mKyS :$p$q~dY$:HCx ~p'@ \4@_tNYuCLn@Ǜ1ScGdW!f ~@:{Ěj0$&-t(gy!ΖnސaK70Th lZY\8:8c"okUFx 5؟hTpDZ/N$;\zצߋV9$8w@a_Da9`(rLQ(40H#X+y!7{;Zew8َyjw3Q|7F?KLE@\l@ &H-"~@ (pPIXm`rf;W"y+,酃Y53J+1*F'8jN``4Xfk]6k# dО*&hKrJ҂D1+.(M-gew ]/eJ+l0q6fʘX|X괈֬]|:rl"PvfegV^Jhܓg<>)S+ٝYWH@[LYެflv䫦|?&+Ӳ) )dEZeeode`:h O6I`+6orQ3xWmhS|&j=.m6eY~n_JN T m<(PZ'z~`?n/*:Qp&=BQN"q5cA <SC(dq0ﱹt@KKBmM!( IDATl ET˶=YU*aH'Y=berS TuiH]Bɘ7ºWV}8p)ù1b{"{!} tӎh<uR}KMĦ4 'eummAY#/44|咿$yx(!&,FsmNE#NpGo7_x"`"%|!p):\Vj;= g0(]4XIdтl4P3"` `$/@\bUD"mpȋV?ȠxQ YaA:+@^D +$/%w_.}+9vQV\zsv}h0>H%ue%G -'(PH 5YGbpEqXWnUG巓b9(5Ute~X+,nK9rR)P{JKS:Ks5CjSc`iqm:\NIng=L;sx p &õUy&Í2G9=C &@CLFHēEy\0hS\.|ʩ~\n4xЪ8,V8<; p@b >Q9[ 蟊^ hzwk&"OpEYd%kEP*( P6,`nLSXW\m@JAy|pyѸa)Γ:Dh@1Z\v/Ս'bk|削H+Luc~@ɿ,2? 
tf-.Š bt7Cz浘&bBf_IA9ʲie̓a0=ījD=  s;s k2|4GS91"MbV`,G:nuR+AXH*QOZDeʗ*<.%(|<;;Wد@=_¯Jzuu2"Q0 +_qTbu,N%–܎8:BP, Vd{\z\7LJ@a f\AGBtD#0U'q.fQzб󜀙v |"e{i[VIm$}D GUPdP#~q<.uhC9&Sxwprmpə6s#$1'vYaWOp-glq{r3wCGpv:9td&tٛ8<[M2Д)Y 9D-uוGGY'kF;NāeX:A7CM̧[q/$=bAq/..lp ʏlqRM,R#,"~@|$\1Zpf%J?`qouS.q䲥B:q߫b/^`qG#|r}^b_XP̏`ϯ|<}/SWɴb.Ʈ_0QwJjJGSM-ZLxMLЖXgofEs83RgWA:A3{a&S0XXa0W8,pEYC GCT_? aD[ج$H_HV71HbK8jnG>DC^ljos* "Z/E:`+X QW?# m񇅙\gԘxS; Eiδt@T].ϱ##7.\%ouLk,|ld+ Pu f-J]_AVbX+ RoOPj=*>[o|9B%~ b' ^iaYGx#Gbŕr;7F Yx# GR%]ƣ'55"{JDt{ 6?$PPp!mZ;VyX̣"R"@=SUrhl^nAG+#8~#CZyĞ$ YO~-RN^"كa^ϵZ ȎSq DWS4aD2E;2/O&"ޑusV*!gg2\, Q6 Yg8=yV@%:uË6B 6xX™r"`-9"M[,4(A{˰h'(JMs8qjs3|dPLr?ܭ9Hk VJ]C?>OEMY]$?f&9&a*M+u+:~S>pRU !KXV_CSJ V*#L.5GGGSup7dsN} 0XzS_ev$ C/,qR#][O-f*NIA"<c /J*`0b5A!)xT@S^\5?W yus^\`.Jo=}.$:kuwYŵeɀ3| 09(yz-"r}LGb)gLr ޿(y,]fVxozt &` > e_+ҟcПo6TIJ UxLb`%qpcՖ:$6kC?eeƏ)4r-'-֕ P!1)iL{ZlGvż^B^DzS""ӓr U)r G4ekVKTA\pe؏h{榬d*)$TREgem6doVW+1@V$`VR)HVF_&`}%cB pWVx8¸xXR%DžA VUYT2eMҲRߘNJVDYٓ=&܃\g%\fJՍGK@=b ??(fQb֪'X'VW5e4=2R`L{R@ 4M1Fv'㇦[b:#YlXн] YYP)٥A9y Ĺf8ur 8[23,(K>9P`F B `~L-9H2rfa8j(fWtEnxL 0O%u]䊺 \T0*PD, @옮P=K0u\3=bs'M*K]PuV@7@ 34¹.{ HhaZ? P,@R $ /f̚ t2յbt>&x !owS+ 0߄=ve(: &pG ̒ltL5 lL39vx/0T~\ì*ly//Ns InFN0fIh *̄|0**$"H~bN)JF`&,}%"r"` `_g"`jX4t0?A3HU=9_"|l~plpVw 3%-Ɗ@SH^(fTO/]vL/-& 3l7q`~iBh`@u '|V .ukpNC|[±~KtoIf OKXHr+[0W%&dJ:gФcbj]/@8̴8ZHd#!,j6W\.N@|N\~x<9QΉS4t~qK rV0׻Y `(Ɖ.%92b#`NЎ-Thd"RFz= b" 4+RM ZTc=ҿVW 66r O ezP9 2Toh"@'˿ywEe]tuFf ƁP[N~[l5`/GC|"AZ7݉J< ocZf3'c9 `B&:\gisB[(^0ҞhaPK(YR'a&#H״n: Izj`bc;x G3wv@[ψDk+N\m6.FV tMO7-SLϱ,/l\A (<.i";j鍬",wSY';'0gH#Ǿӈ.?#y:Z *푒@w\O6ppX5 xHb. 
S ,gTu+Cuzj!?![@~ԇj=u>#f~8l8pj@",v+w@IŤrA$") F͈N&S:AgiVͰA) 5.rE[q8O^Ȱ;]r  4]qA'+Ю8#Gm#ɐ63#զ'+ # I%Yb7MCL6]nKro]c-FŸ+wh]PC.-sxwZF]-DŘnSQ,ZĦI&{q% H,@]~D8ӕ+Oio3lLK+Wua]29j_C>rc !cI8]C$Fh+.#,e5il3HU~ph'vx9{`O$e:@=3"խPz3t@< P&f{mn^+jJvlb,)Ros0viM&_!|0`huhPQ~C <\['VTN܈}$WRKf}/ty0$< hsg'dYŭ >+IwXkqp8-v@0WH.Kpb_㯟qV V<t+Ը Wfm'O8fވ8롊K7ʏ%9n,>\j;FaGn-%|wQ`2Mxh*f$kqi 8 %RD /[Z#)lV%F6n'-Y٘@CL.?"+9)*B;LY*vs:o0՗haf_'hI/gZ0LMc`_xyEt(CK&cm D0B}k\a0۝G+mrO OKIQQD;ngc&sGcp PEIÉTeo?wJ.2i>j׉G*'-d1F-F{ 1H25M0M\5ْdk\ua~'Box8,n@#_`\,9QЮ8/]R_)ܘfiNVav8!F m5<0]q6IŒǸ,I2vvhRBӬ8Zmx"%L'* %E@s@LG`cx!Aw V6s@uo?K IDATa/ Y+ͯ,.{U^?ZzEyIoX<72=˒W;ʒ_ApO%"|eɸ,FG[N.y-d[Q,?, sI gjaa(biO0m.'NI^Dž|DM5|+l_r mS ꠡ:W-wz >KG| ` ,xX^)/|H li (!0%Ьx``jYL(eNJvH"` hE-?/LGY}0&~vfjL+ys&ENzblϧ)6/x[3 QgbpKS .F$#@Ê֢HH)XSZ{ _oܼ>"cC SI˗N̝a} |{>vK@50ȂR22;$rl*\`@XTJ- ,_ W\?z xv T.0.b.u0/&FGB?%wY{'ƙu/sq7&YpYdk7th19aQxlVDVG= &Oy== US5+t5 .2tWVQpvfbtEKt,S|zQ@1u: VvefKVq9 öջ4P8y`\ D n?XA|*2Kmh<(gnɼON`,"t# 5VT a&3J 2 Y? 7ykجln$j$ &@G>x8|\nh7c˧9TqnykbX8I~j6ŧ (/0& 4mRhx1≀ ~xSI&} _R'K$_ y^ľNdgzqL8c ,T'ZLֽ^RlGUW` ;:xz0YPg{]7O83qlk3Xc:f0k=rtIl&n?fڑ; 0h2Vx*zO08bZwZ9!ƇlnLszK(L!bˏX|1ɾ]С~Nzpz4AI,z?h"6 k뼀IDǽ(njp,yY^I8iI/]P:rb'qbHI:\(UcarOqY:Ď˅9şIO׾3ƚ^X=NPPP m;<웵*Y8H~=/"qNL`r,U\nsE 0JK߱Yas *xE@v?`b&0 )Nwؚ,,2)b6F\8hcoLNĵzqqL+O [oS87{-oyw6˃z^l6w&F ex'͘l0*IZmK)=U6n=>!ۊ{'sHVN$M|eɫ)8S}DF`l߻VqDF{qXg(a?eK:LaF'Y\hS]6?N6 Qe7DI@,T|fSP#3`ða ݲY'b'K?鈾G&iXŖa<07nRu@P-q5 u&l#"}" 0f9))?ľMɝes[u/z(+&2`rJ~D;2D5L/fؙ|/K5u@٘lC[^F ZҤt)ߡKDN'~$O(Thf܎`3>248^Z Y逥w`I}GP @ĥ>?g -yxAʮ\azgB?&yG)\xgjI٧ω;\hw Sӟ{0օAd]$802xF5uu&WP5x A8_&Z` WK)q n:>L WNOϡxNtq8<[ ` O(H5/ONp_ Ĩd-&?sʘmK7)Gxz[k\Ѧxg+1<]B& ^K\E("*<Fa~W඄ $vtg/LG.4'8AЃkD@(B߬mrxr18&MHxej$8(QR2Q|aO~bbZ>PŴ|-*82őHvEk|?`)%i>Wy\z#@pE`2 \yi%hR~*L% 4ǿj!3HCe,~XP'-z-!>.= ׅeG+J1?b-Tٺ-$?.Om.Ń9ery,ɋY/w3ZEnf(a\jlnQ̶ %{m|))2DHB-4:`7]0Bnm'`*t@!c|jΟIPE3C[)rVc]د$uKl:ƍv H}Ͱ'A{r5'Yݲ,X~p?B%N'~;'kaW4pnY%كC8ECh}Yb2׮+\ `I{ 0S<,<-191Y4BOMN- SI6[L>on<\68!W,6s$8*[l}.ޔ(6s,Ƈ]v'ͦ(P|]8#NMd^"J|F: Ԋy {-@ 
0LZ\MLK2<]>u}V8kٚ~k@8'Mzd)q`6rG`~e?H:/-Hh`|j+pVVzfvlO\'lw9ʌFp(nr XxwNҠ ʼn1㽜82μN&$G%*,2VY ?5F͈CU_N` ,Ct}ZPĹ?ܫR9@$>Yu%g{oϢьF3dKԀ 5, fIpIkok: p_ܰtp؀3.Xx%ڥF3陑e|Tfγ=9N"@2ãI:%/9X%)n%f<֏C=aLs4QZ,zϥ4mVaE LL'^Fϲl1OI>F )!ApubsIub*n{jϿt/w} Af2i5 XP r6i _ x>XB6c LJT᪯7z2)_Rb3f]g Qc[GТ.WϻP{3Ua4Tfuk3Ǒу#p?1B/e P'P)j^DU-@+Qrf%C=?w[إAO:\b7H+/0C}mb1u,6Yg:V/87H~^ik`D]Ÿ' .1QϚ, v^qpa_0ИU,r؋nVK rS^d0gQ&!X  lE{;i3¹ފ ?Y`kE(MmruF!sE?]1 0X FQPF>sEX@7h-aI$x9v[xb1fLHӁ&@sE5A 9mf40L@9IoF:1<`T)^f(SoApQ]&{$zOT:`0 wozo(w w2 3 ?!pF` {6XČ*J}pـ ^C$XK&_5[6!~?%ǯUgY {AM)<Dž tLpE| $Y/k*$ VpG|={z pxziVW )I]E*ox%VY"/ɉM3(DS S1P3>+.Gd:ƄRᮀnQ_[»ݩ724C" ~OÏ)C6HUINpIr"`N {"C]/W#݉j`s@̬6!'U^!o$UQm6 l^pp)2\l> W"`Ue)*IRlr(J_.yOE- &0 #dyJb\eEskNp`p}b)ciؒ.nܔۊ9>o[R&]`-6 y_jݚ̰]Ǒ(B^olm!h'+ m`4@S`!4Zc_ ]4Gj=iT?4i$ה@XguuʓtZ ӆY)fؗ ٬~&!_L)=-5JtvDaͰfDߚ}%::?`>Z菠ߓ_ԱCRU/ؒsgV,aJb@?~Rj!}Y=S[J?+wا/g4|U'ݖc:sW#NݥKbRgK:?oä́N:-fB_xVY?sB7M3ЊѤ³Ze7CqRϏz2%u{@ BJzTrE e]fBgfUaNqḧ}Vr 6r{1|^Jnˌ I*E:n A \Mc+8r2ES6Կz1 9|^MvL_n'&# ~[(1p$_= 1z"K3ϵD>UT/a/@:Qaq\Bp,4v n1fBfnvL81 lBMy6F`K$%XB5`^L7B6*ryQ=ؿ>CA8, o NGc"6yKq`Ũ췦"abS[}g`]HQV*Ot@- 7\ ;&_ve+͝[-^(ڤmzeƇ~Xbn˜tW6b#& s4QUT83dNK Ȍd`$[Mf [ QnGh'aG3]I: Rt{/!0MwzƦ-kUL{ytbKntLGNDjt@灀~#p Ÿx=2-Dww1xv2w\?>zmޓw@ تÃ|#ؤbAIvIΆ"@_Lڞ\lqt&(@b0c)KQۥ*clWP^8Lr:I`PI 6Pg4B/'݃,SJ $$ ͭVPXKH9q n-nW㉆ڨ\eI` J Rjo# <iqeR #}}h7 t ޗ˚Ϫ3'\6$4e*4-Z4!2$%!"؄GgO[HAp0"@=NjD@WPgxKw8" 7)2 dvY@Ⴈpͱ|"2Ȱ%SOCLk f8 .M:,͍A8\VӼ7[d>c{d9/1çwbQ eXGo _B܂y#H{͸ED !L)iRlf{&M.S\yqWDmd>>Vq@BbVqfhFK3V D[xIk6ě;y*O.Ÿay?I;qn ;G)'`sg?wz-l^|! Pk/D 2E cSQHL3#hPeݵXfHj4,; *=s,$wHCnK(v`?ر4 "6uٌ%j8m!Li4^c;.up[l/KhXa@{w\9Mw<| !JT!c|^ q Љz7,wrG7+Cf6I 柀>^id7kOl `\n 7\/)r%7CUˊ1Wp@<,^cp3 ء#v`W 8Mx9`˲pRǼ>vֳu|79$yb9xL$yz8(4[sp8[U 9*/I^V pr D3v3ZYET\ 01( B}n . 3؟m8o>}M67&_$x"p-zWCT1&8 h+m%$q UOx^p@pX\@KgEg. 
=h ;zk+e/ IS| IDAT`B8 30p& +)ne#A]eTV`SPf0 \@CMRRU%>A>w 2(| po+O#, $/i-M:w=#>DC[A <%I.Ӡ*asHP+'QRaoAAWŰp /WMq\rΑp_#bCK_]ÜDMJ~&B.p-A3K#$78zs 8 ?YQ(&mG"fL3h1y93iռX/E~HP1Tph)pMW(dպaY).rLmHVrܮ8hvۏ/Tu$VM0z&Ijgkz׃O\)g#E LZA1͐?t g?X8۹mX+a1cJXBt '&T8|ofc igDhV Tm*IQ@x8luǿE8E:cE>Qw| li킷剀yŰ@q{P1WXOr5xpĊ  =YH'4γXE@%t`" Cr @Nyk:Eqнh@Poͱ$G{{/K4o"Kaq뀗!})p ?©MܼȎ1̅g]lc:<*ـs;,XKvxd0U"A+U&<Xy)L*NM`!-M;n%̡GfD3^D$[2'6 6`L" *Eŭ;56f b-5<@;bϳ=̝j^c6R7 Ò\_t[H%cnd*h䧫V6kdvyeZީlm k p5 Y/牺17? \!<\m6Wҿ]pe',D$ Pd$ Z/ĜA f s8{D 7 1~ʰqH" v5"^neml.\|o|7JФVijPno py@2$k('-5^ -FEH=n3X7&\ ]_H_1P#X l ӄ93@ad+AAT.^u@cs@ Z#^-m6woWK\bQ ~KQӿqGЅs?"ual9oIS/VTtͬ }m&Ld"Å<@08HQP e\L횩\7J͕f8޺2RoBI9̡`MygK%Tg'XBMa cH`5T!뜝u@?Bs?T*WZT'bp?򍝁!剝;%QN؇} ba`؀x1%+Q )nr]KRPX`sc}nXZu ɡA4ҏ#2NJAVZMsM]8yX.zS5"#N\@NacZ[C ـsQye&[UV]E(iwyRqp8U5<E$ fIQod3fOH-ˑ#8s(E `g: S)0jø.r sy ~31tw1~N|s1S)!$L/092}Af-Z-%#<׀/;:ɒ^54ɚgMT 0M@ LQvdU 7"? ׽(WF"O  =EbPVx+}m}~yBfIr$4zNS̯iLD<g&53r1;Ș{g ֆfy@W&kGb @/s5A:}LR_;aRKb۳knt{ Ejb0ň͏ޚ1/8c{>5j >@k08qRtDet@1Ơ<J P'+]d\&XUtbDbTm+^tV1XP,(&Pc"p hEi=Q38E@ EH,A0]pKg6vPw@+_kT/Rka;(V] "3z2-kj*=B.A3&ͣED`@7*NQwbX W .*]`D="7ּmfU"hmc0/CM͑;:dxdD2ޫk #B:%+PĂbF2bNapu%^SXD#'9T{ hy0sL",Px`"o$ۆwE=@bJRo[2O28_ tBl.`sNg9 Xܢ}=;k ^fyUĚf`o5@]nLTgnAX o$% `o:>1 \Iz"y}"3<8 юur&,?)I:F#dhXIDBT:\ޠV2s?j,F pe8bf2L.,$ mL~63f;(6b1`&q@&iE\GQwa̳g g ԗ3G Χ>XqqcVD@2biABY@\GYV H{IP>P v7. diA3HbW"yb1d}]al{[T Rb>rW{:I^ޓ㮃=3ukBli*K{:,)b6O(5a f`) Z&®k5V@5s?E,J-#&aM|;-9@fQFS-c'%{+b+uq(dUX"T]ask5}v^RskjguF7 P?vuT{,Ie` ]Oz=?]XA)5 B.+/!8U MrعQG1tX }!4#^M?$/GwI]Rzc\@>.F8ꝡ}W%%i p3\l©DnEYƊj@QE6PLT@*&z V?1\-v^W F/N{/A~7I P0AV=0\b4r:5+-I8K2/ ^ `J KNIs95͌b6G=='~>ɐbH1|6jdSX_jQ߭|6;v(t$R$ &n>92 }3N^`eW ?>1q+\[2/;f3_*,! 
@`2ddbzwXe>M+M&4$Lw3`o-Lՠ|W޲IQ$OEPC .z{Dz/pGl@vy+JVmE*Bb877wqLR II t#ԧRTW'WPPlEt2L)Z1e (YMm1B &UP"1o'.c @O,%Dͱ̖^N5Yvc;R_ z'0cS5$Y"6)={1|?Sd$7Y2_i`+&wV1l{|Z/.#QeOu"bD:,-N.Sa!al0[EfUe8z_.GQoK{j8}ԥ=d/ E1ɡ ߂!o  ϔ=bp@/3?&n3 d;ׄWz8ׅHWM>\I]]MX,[In/#ВH!I0"aĽ[X{ 6\dfwO%=.Oepl/uP!t\rb|cyԔ!i>,56-zR\pfE͢cu̻5ܴI?=ݯry y`'XAYJr3v[)˜6<A 6 xT5L)dw4v99E6F9YT};B@˓sXu@vt~stjG.AmX0s$ 'ڼ,в̢Ɛ;wx1BE}>@ܱO)Ro*18c1!n,nX@F' ̸[ujrQ*d?vXt ]YZ,!=Bn X`sCKIm8 WlQn?8` Y$[a$8aM PrUfݴ&eٌ`BxuymsJu@w$[\ߒiaZ1}َiƿ Zןw$[PQIge[C`(>XU|čX5} oߡx}t _l__rSlp4Z?D5<81o*- {5,0#nDcIlr{:KGK)vIX1ː1Qor5*WeQFO]s#$*H»\)v^Z9l3ڍr)Ϭ,d6yN{HCt`, XI:@< %,;bO1E1Ԩ3.9 `V݋Ǟp}0ֵvDg4`ȯaЏ {I}X'>`Մ̣U| 6 2wPܐbX2}2AXn O[WO{Ov vʹAmq^iF{lr |!+œ fD["0^9(߁/gaJƥ0X|P(Rt:jTL:d)ba.WO>[TE9FZ®)wY2 2Md3V Lwc`V~\Plywھ)|/oVwKHoJG1n]˝ˣeAȒUrBR@]%;Z%ʾeYpOL?n HFN~yG8§tHFX=VHT|40fKcV]]%ސfIVSc*oɧ t^ 8AT+SL&bD@3 h"fH(̡P(_LP?W"Y$.! O.DPsP8<}I| h^(%@PFXUqw% +B4-`, $ {(ĐJa`62K3֐=o \&?PLځ#,n}=ݯSMJZpCľp=dý8D!̅\ gT~/|:cUC25U½w8, `&k y,(ii'Tshߨ[z "Iɢ1C aSmPOaO>}MkE!=ցpMCaG c?IR c 8BY>hwgtm݊IWP9#L}tq L3ST> դpf$'PwMr)zlEMꥵXlKY$3r,ߪa;`30r5d}dcA+t9=/#[V9{wk@6|_fUA؂܅ KFD fvӊJ9$Si7 6 : n0ւu@}6e*,?/Y/ 7 V̐Ţbf.t[4 ֙ZJtw: IDATFnIDD$0X nr} j 9, 4#.MXnNS؇*ق <ՆWb^q嘿݅101#M KP(FLn1 %Kc hRgA*[@X-E4z'ȾjSyS@2aaW d Lғ^i.-9aq`JV'{E|jROnwWEG\^aAtœf`9U:pVhY<1pS]&x&=6c*>`gˬ! 
'1 sWM.*tP`4RjEI,U@b@v"P tG4v XG%}>V@$Ux@ O2N~\a7b =[|+ПBn P67C!> Yچ.2W1o|^Y,nIouZh6_I}oJg3^3S'z&ju\,ht[9OiqHf+Z;:UZ|@}:SS/+iէ+uBO+>҇*Z%OzR :3 ƾQ7Zbꑝՙ 5f&uI,1fH-V꡵:'" h(I'DžFQ>{zܧ3ӢRqRg}b2p׏c:tIa^g't+\S▉7ftt`F7B}$,휤nX&^y/86odЧS d9l2=$h6h2h2h6Bf A ah#'i" 4`jh`͡ua,rՉ[6ŝg/r[};RqUÖ 6[)fG1cSȘ>8Pd\۟/j ب:  g $\@.l'\\W`QI%AM֙FӊiɌbd0xe0XVAɸ8?ň"h LjG<>e v3t]o@qcPe} :y[3ḡ $/)9bo3-5鸍U >>lȸjI5R^KLA I  ΋0=!혁[w]($2M{K<*cW|9PX;Ui/H{T僒?>-VL9#kQ*~wi%y)VA:E|{5+a72 6K2O"QK9p^@s 0N&cj/ް߱qS4}o#ƚqgv@1ی]mQ-6bQ3$%@qt>8_*Ыu cm1xɅz14E;^a0;eƣ "/ ,8x DN P.AepV.1.r(sXUPBxCYfBS{_~n20L~oowGiɸomdfv ƚl s,D 7k5J(6@3s3V;Yzӷ#Yԏ ڄwoYE1E<(`6y͇ v\3kv -.[xHA`HY@b#,E)Z/_Oa}M~r$[߮1\Y e3{R4Uo8`gݗUQmQ`,zl4%r ZHnG GP qhm?׀h\ 4b0'ɿM3'YTL@YͱW)$zi9)A+G!}pV.SI0i Dɵϻ\G7u7/3?[ MU'R.c'Bb<Ѣ y?BEX`? H`@ d{E #y݉xUX"n{0\ >N0 8GVw9%or ZdhA%2&# TYk U*t X $˴ H&\5 hVe 8<ˏA^XP$BC(=l 6B%T؀D_j`[dYTyYeAA^f_?ffQdmM2EVQ70 ;Ps!xAQl F/S7)Mˑ~o[i@\χo3J3R.׿62:+T9# i-GbB*etj*eViTqW'?SB_ߧ?QbD-EBߓֹHOzZDJft8I'FƒYɴL빸U㢺d(}D/WOFM?I?bZg@rc:YrX)z6zHyZz8?z_Jk=3IMQy(O%}$ߗwv偌~ ,.-r=_4yvL8 6-ԩSzN,D?%%I,߷vL rMu3Ѹ8"y ;9ɣ8rc axLuxPxa23pM7ĹWR=|U^6'yoyl^(,]bb0PpyF. jj X^20@^ZZb2_D_@:`9/Es{쨛1y;Pq*`l#xskafpw/kܿh0i qۋ3euV` ("S2:=5M$qW1+98NQeAf 7 ]r#\ c)p":b9c$j\Ha\j΂%ɂ)jZ a`233[f)ړ3l`f(bKpV$5&ov?{&Uxj7i4,Kuږclc!5 ,%!L’/_n$aaaJǗaixxrXyxɫ%Pv(k\"3v]wo%?/+^2^^ /mU_P85ζxqo7:jHs(Ep^EDou˨ys9e'~EG"gȢϿ).,$ή!bS3$p?sND2Q|y=oGP!R.w;yJ"[Ps3LJVyco?W!ȡU_6K^F 'QN@w~1!P_)g$WI>IP,&T_p]Xp*"О$C{?蔋*و@` <;93ܺp 9-}';,|/&| v7ƒB9\|Bc|^i)?%S@ |O3I듼z#[wPxW2inI܉::h M@ż.jA$xge\J?reO>Q ,wwp >dhv- vw?ri1TOz݀#˽RZu6?2ҡsj+֠Z es| ш!}ɻXߎَQ DpsޫVf/NĵXP s7o7 `pW\OG\.Ll/WkOhiܰUQm ]n'ah( G0؋Hf}5s cdWb6 J&`|ކ2pl*R~\U_{f͏3o109 ӤVs&Y9(22H|hB/KiU &&ʑ^8ۤ*np }  &f|lzTϸa>|d\ō`^qO;5Ը,93 xZƨEp Rۏ#r |rb6Sh:DqZq/9:wRL&P\*?e~3"*xǂU/@Υ* q I4 $}70AՍ_v쓼;RG! Id %pmi0h/FcٍPcDIDs`,UADWP2E5 `O)0CE9ZA^Ze9]YD67 9/hp]fIo2}7]pP_xCD%O<l#Us'`ԖV{ڭDH~\Kd\1\ôg2s9żbN1Xu3 ۠ _H/$*80MeLTD[]GN Qe m?|{u.*sUX+iŌsJ>S[P,، 3yq(V%+6Ki/a?jka/JYKHZ CdwO0z;W2]KEq'H[b勘؎DM+ I!@jHva'H/wa $:09߈Tm0IIAO~;*V{cGG|#<ڼL. 
y|?rO-譈V6@" @h@4ajfqъˡ$ `L-i. "; IDAT | =:x.?T Z*6*'P4z0{G ,m 󷡿8e؁~3֍kqKV<i]:]6I}llAtL!Y Њ;Gxۻ#;oi_CBS xQ"Owc8C潥~揓]Ce' ]XH?NosڈEŰ.$*Pb 41˴5GmQuۂ~f%)FBr).fɫ0. ݉KЛkwWn(LkGGt/vۃs|k-w +>(my2 :91߫;V6epp`ۙ eX3 $j 6oE؉юm1 Z p d:s'­s22Fm-oy l3`[  L\*<@hǐ/`6I8G֝H~i@d\{} y sl~% JWpK0TXW@Dy&(o fQ,{0O١q.8sRw S/49[]:_$/l`I9Td8(jXXsNG3Uڝ^(Oj:J8%L.BX%)'e;zOy+w 'uIdI%sQ2z%gh)6EqYu2d5o ϧú[8_JƩ8qO)]$,MJ 31:uԩSi 9Cvr wiD>,j9D͢ܢngUX|$?z9%I<Ud-(soC1HJ*Ӿ~B)?n̰[Hv2<kE4ѣ\ ?$e`VR[7b3>Ks;Fdž k&э $84 Z :mIl ^$]&;R1/ ONAO|^^@kA/ `=`"3 /E9ߕ |]괊þN=(}Rjh X\k+BPXe5ѩ죳>fO_Qـ{;sV봫Ld2*M@5Cs )|+[d砦]K~8X*P a,Z˽<-vs u\,qI?=(;E|VoBplS*I]_ *pm8W} өK%B`+`[;ubW)_A h4|n{-(@Ԉފ݈FIj۰:r:}։! l3^"߂UHC;2{: [ u/|z N;```W*kV3f fNoU.[0a%jwؿZ BVZ|qY nݧlhT\%5yQP,Kfm!eb  6E2⍀䬻M-yrx3y$!>#yRr_Q @ueo?|4%2c(` L*73d d(Z $(or5-׷'VXRLJ^Rd|WG= a (N˼/_]7m|{pD2zY?ͧ8 kt\b;Nu-f9D#?wu`c 9j+ 'HobV7#Z+F0jst)hӒZp9-uԩSVfReY`yRERqz[zy; NȢ&Y <,xdAŨ_P 5rԩSbx)VU^pd@S,Ɋb6()Go@Te#{lK_%G4Nk]&H11"elGܯqĸp GI<+ 0 vDV(YT5VG1cs(m2O2*9b887C--PhcX jyhތ1q4H-)M4f, (,Nj$80IO rk. U0J%Sh,;-!@X@EJTX;"E" p+8< zQHFy$+y|\E0c@W]V+SYVA[Diw @+،Ts5qy%`|]ÈF֕@Uvc֨6n'rDԊ4\t_kV`b0tڼS+`VAv1Ţ4=&AAŔAbՉu=1M41HHt_1̍OalGNhJ_&]C5TY `J^P\x"s%oNx u=̙Feüԓޓ_0GQ˾V}U`nI]<@6vm:Ott-bwM?mod)ňb*<0$P156;\k& `XbR'%OijG:|$:[iŌ ,A1n5ɯwX|/:ה9IscXd0W"Ux!i:ЊdʗAkbE1g3}vSJqDu4H=oHNHm;`2uԩ9,Z%TSHX#ьdE*iEɜ]c;k|LUqGu;@X;2J5 (u,)ZU)|N:u^DNRHeTI=opI|ǬJjnҠ ,y3'5 Vnqthzu2„=pg4BgqZ~6'\Ś7lҢ|}G}߿(b+Z`"JsX|dlx(#odJr]TrzysIfRD2gqdkK`v}χ˳^c-CyM+ؾpOgɊŒEKIA]Œ0a2nb]}X c8ځ?av'* 2EOrC۫i M ,FTm fQ4e kJmcY?IJʬ{ާi'ʐD#"Ҽ*Ԝ*GfaR=HЭGSY_B˿gCɠY!痻Et'|":w. 
z)=IcU=iv֬we=~U,:|hGo96\"Jgk#5>>5ESgMGu{Cqcqt~g:vۏt\r=;3)g9H E{ܩU# ' N f r>IuBjw 7^S$eԹdޛH@| 0SLAblzwn7 e6dM 0QՠV4Z !&0 FmXQ¾@+eX5&oN1ɑ>>&ow2y'vW\s>HPHWYPTg7eG޾QYovcY^4 ӊ,c|m9'Xw` {GQ8(;Xd)XS}p+)p%ހ7dkpK&PJN,Wj\kR5?V:Px Sj5CXk0$'76Fm9ѨR-lZLZaU>AQk5ԩsA"9bo;ˎ"_@s/" kjޅP`VrfV\g%ZgiyV sTUhl !{$΄SN^ZG%w[ۺk01b/ .>\^XoX#N:M=WV@V#`FqQ܈"Pz&sg v@`zASiIo l&b#=3>.D/F}aLm7bwyvg`=7} C _/"!ߦ 6KNrk /Bw[F0|,IV,m"'Ljc!W0݈gڅ(vLm $剗%j`ƴ Yޏ߿_'JrMSYImW`fҢI(h4  &&@K@4+dיL4% 6|#@FL`gpG7bx09h QA%ͤ͜AIP,uk=.}b܃l|uHIq?lUCL+-I5NJJqŸb*&NDy,voPIv>̳ ׂI:, Ofvs.lkg8ނ&N;5SB&P $ۢ`"s1뼠6`ɕ6ϧyD9_. ̠:0p hRO ȡ1s^olw/#if6v_MN{ե6mg6[LJ5-l(FT͈k5FhZ'К0Hr,WpܪE&} w7$өǐ@:ΕODNhH֋qəlcvO{j3%$Fh;2 bv#:V蝨S}1cO30 LP<5f΂ڠNpGXeϽi m@;RLzXt9SEE0lhhȗUWBWn)$`fܭ!#ߎGo[-lTI0/ӮO$bA27 A9صf71&Psojk {e^WFmk8 3y\\k`v G3Ut~3c"Nѷj9WX&iD+#H.p#'8$'6?lT5q϶"eow}Bb&KU󋺻 ރ%52O${uv֠dAJ[kwJ4JAh1i5)`q~*)X;$ g܈Q!@O' 6{_&b]W 4f܅!4 #y|u]6ɸ{ m=3 IN__ M9G-f뒞*Ьs<Qֈky;|ǵҦ~3jݢ(JL Z̢Q |:ulmƑkD\EfL+^y#v\m,o-[Zpe@7!iŶf$]FpE$º5aURbp[v@:u,2}Psv|j@$E èY(# vv"Vi7QC)l0^w ˑn-IIvM@hlAN؎َG70F k"d` ^MG\Mr7X#5Z4}yU?'C74K?, &l 0gh^T2vFr!"o{Zo*? oN~@/%j=wpOn;@q.2`S!2$SQY|%v)S#L#ے/)|% ,7>`q,5 GUmХsX# UB7\"WZvPbyJpE4pV. ̂znVdkAYaff|MH;4..uvI>&߀dXs%A%6S$m1Y^߀Hv *c]VaY߆ ༷(O3,:`+`O_H袪j$U1G|Ё1c`  IshY NQ9X ,dl0*967Uh',׋2nonp`[nn=v9ex#&GBk,)&lJ6>hԹL"U(CSH`%.,`9z { f} 1l~&r BA5vzO=#|b_]:bEu-f6<`N6_fB-0[+ 鄷r4K%z h6B.}96%&m"sȻK/A(v@-iD*zt|@Af$f ,D|P%:sȓ,&혻Bǰ24ԆCǺTDzr`CF~B(@ YN\%':l>9J.ddݠT\\5,U⚏b̀M>Q~0jv @E $1_.YMguhђN.8;5%.3&#å !.cD8utuPji=:-~@]A#|#AZ}F3&@IxQt8 ~!cvT8ݎEe%e8:uTI$P f ^U2Zn@lVU#qTYXUHmro8'QV>; 2nY6YwtRN+.qJQVtXş`V ZD@?ڢ)A\WPLP"'X?r"D BjQ}ME"BRg8{O>Mdˢ-o 6 $sxksr? prdjLج 6x-ɶ~ے,iFfѣ׿ge6[gXb}`T)**O VW4^Q-sF-Ҭ)l-Z@~=`}ع›@hYŲ+^>`Ief?p?R|{ܮҡѡz:k  1[ݭ^Gw8}9NhGoXٌ)MjARx6&+Mw:["A3:)u]L6¢N3 DڍN8`< J1-)!&^@[uq9׊QvnY<*A)9Sl3nl. 
3n aJbmf:WxIFmv0pc!|63qBO tW_X saeVpbI90UYalA"CvoPtlLL\k3`g<" ~csT| ~t8$x3^eWp<ͩdr9@fɤC`B 8tDy!)< 6v]!)47-E 9ʹV {o FM63,R**p@^p`fiqP0m3 i6Yog6o忤Q,?A%ndٌ c@coec בmEOAZP_[\ T!"L1y ,,RTrqSMpH/M:GGKNsDT5$ﳘsXOY}k6h?eaLdkp+-dBOOpqa rςiA Ku872GY~,/:dmҼ}IO&9FSv,5 /30 'nR {5Zg%@}ۨo󫌸000 =X ?(&M& PIE{TS%uVF TsDjUZm2(H!h [gK&T:L.j'pd3*Ѓۘ'N ҅z 5b7|e81WcD}jÜx@,y%Q`i%{K39CJXn𽾼\qGa66\kYZP'PNlRS0&xfL_>E"aU. NR8?LH?ۋ&wG9@FUrIS3Js H)9`9kZ1Kj/EB"@9V}z4[ EaFNhbƒ;4;\nPiȌE@``ō&S=ow~#Ⱥ б u?‹oQ@NR7D|դM˧_%GOQT` Faϭp}@YZ m{,<=,X?1'eN%V:0u- :ڈג~1zPZv+,pL ʍ/d2)<9iEb, ?#F6P@\.%'krӅ ?h. R j: n9وERX¨)!R<yM2lc'eG H? xq"s/%kQ/x֒3=s5qkF&.h `5`/YᴳNǝ2EZ:U"x/ M8? N߁C[U距oZ d *t{nכj#X)t@zv,5G CK*%ROl8]EZ3qeQ*`-;^lY|p^rGv`b1BjFf:Vtq%9v-H>TO)D؁sk3P Tj92G0`Ģ%"b:H \&ڟ:.>4(\yM%g}ul')<CZjb3elV!k -M-UQuu`:ʉV\4}%06cT܂U35g%OhuX,UzҮҮѮnQE%M3q@ښgD@@#F+bqWwCِ{iw{,lL;uDzJk}6Nf#;=eCSiw!lY/̸٘1=3&;Vg>WZDQwg=Ze3;SΥ tWQYwW2Y`Bĸ~Or=?ά;8-snN^sYw?tͺY`:6u^w dݑ7X0˺Dz m@L~zIJBMh'4t8\J =>8E_=X'64LJNQ1faQ=;`#* XMhzN -EV$wxH_^oN}XWstټ]YP IDATMy6ݥ~dyVe ięzΆQyؠOstv`&yb @;\3im(-*M j !zI?`6߽R2Ğ^,?Gx^[L ^]3Ȍ`f*dRY 5\#tj9.8.?l8CNURX<`s$h.h(BIyF`"LۨӦ%Ȃ(Wr8[,o$EgXN+zib&p^ƌT tl}/Gq&`OoB#/F{oN}6SkUY L~6+Osͻ^'RZ <8ڳE (pBm#30 u1Lf)L&-&Js}:9!/fxZ>Uy~o @Ӕ lh8Z"Q;J(yIMV08Gl`V,‚Ŵ0?jI0y@ywqx_?ba,CHVoތ3t8" ʠZ,uj1+xf"$,Z|Y0wnT`aaizh,Q4,&Sy˪>Hb13bkNNf4^`?`n _T Pހz{wC gwQM9vw˟ćk[6?[Ҹgx?:]ay!Rf383zz6UF2i3vzqY't›Ga afGȉzf aG:T0*e@=Y~)YϸZe&9̊arB*%jڠ .+-٨s]J H2TP*;:E^23s; !.w8k ?&* XGd2yOUlg2+RNL<¬f{S Wa{'*8StDo)zCHwZyQzaM 1V&s/PU.1?nrkJAKSuвNmKNcI'գ6ctb%Ő+ypJzN#deJ '@,A./4 Ie{ 1Z/a 3$ɂdD%`JYOC;ñ"!Q|01w"Kagc)$HX7gg\QH I:@fP ]|ˊ,Y?3|?Nl>}Y%>[ٮc]X6+SXCOnRib&d26YB#a 1jџtrG`^פiB3_L PQc^79* CTD(e3o|n$%؀9[b ht`J "HB 4 ==YEZ2FfBm24̙>;XM^d$s1p`@rE@"hU`IMd6Gʎ+ 'rtjB+% oyk>)^Tj}[-5~Mhʱ(HdHh2רpm4L:m^Lawaw= j,y?: HF&!foH3_?xg7k:рC8,t|XUp1AN͟rԦ/Uͨ^68yAr# rbC k&C8`n8x؆x9;Y؀)ߙq<S6s)I7"tY쳙9.Y >|# >eB&-~~]6w:2VtRxU ]ZdDХ1lwةbQrOJgM{e1=$7c5aԊ ( t^yCy+K{}psm[ shLjrx(2M k1z;j0V,-1Vo%ƒZ Nq;(>̗k/ssm^ gۊ_XS |y3ssd`k${p\pE3M&MhQcCo  `q~w=5iG|6l2=/i<@lW[)`%3ڥ Cn:!nj̥@f&cҪ6& 
X*NQģX3Y:P߆!jC@ۊ7Pۂ3.xޚ^Nw:` !Ne2 / VF#a ro4 ?x,#b$&~pΰN1Y m +ʽUe55q:HofI|-MG}Ķc& p/^~0V0Q7)0av@"@vCSw6 Y%īJL܁>t:zEr>++N| SfOҘ.<H@7~K+i- EK9>YâkTwc"U?$…e J3- UCUU.M/4(Z׊؉o EClEmwx_[lBހ+JPsûyޣ*?t:\0,kr /u%?ML+n8hLp2 .i_ e)G~uQ}t\}q_JxZdxpl 8/!%Oi,`q;d/O#8Oxuo,.t BuXtXEX}?ft`^pb&S0I? g1=,84ٱaNh|?,~pr g+: af\sɕF&ɐoֹɦ8-qY?Хs0?mp5FXC[!9F y)pfq0iKO1G7|#o,,Ƴmǰ }䨝и#CZfvEs&/ 0 Od]XK)`.TQe"t"OHi9Pi%Bw;v@Z45_؊~OcR \'ӼkL Ր&hV2P&AKdF-΢üux,Fe;1bnH_ n [BDԀ_@czNW ?N jX.WU 9-@ L:knƎt ᕥ\T)Pې@mui E`&N x}eEɃVD ;ֽd'A XCx)In#VUPP1KW>^ęjsN&/wZ׫,3f`Z0)m1)4B,Wi\fޠR̻my&7eBPVt@p' xdlt3$hWY6y3!HЬcb]z ͹O^a`@2sk"0 @؅.A5"WӴ} "],Ac!r%h89TZ5ZLri7Y)tZ.H|qpqupj!"iT_XE@0X"@00J|N<DJ,Kג~ c/缾 %19u@0p*D)e<P*RoQo2/8dOC]ωɨ*&-<[Za(T,^eRm&ѹ p[s$v8y]OH|y'&,ZāϢЗRSH@2>ʤ'dwL c`R PR 9_tzlyJ)@-ww\~T@#*7ޚo pY_! 6aT}I@: v\uzj%UHd[RD1?d܀uDrD VWbL=/(tyU6-H 0^8Ujs٨LBVt&w&ͲkL/<ۃ8Q#gRgcK,`.vT/$OO4lD=B17| ģd؟'e)xbb@0''{?GIRt P  q$"u@ǖjdYﳗ/_{&BP.LtSTU  klJz@ Z޵Է3&9o0HQx(u6;!/omO3f-ꔠ5f(x!Q ˽PTe\$ C&{F/00m(-QŘ]^1}03^ ?D36. ϸ*Uhx I~k !L- 򟃧)/m&F nONT;Lvs R(0͐NSEQ@*J@E>fziGfYC.򷱒嫁c y+o$!yYZP >P K[$}H>J ˦J8E]}M-$%nlv1t^yReYWۈsC 8SB ,(uqaM?f2%<9i `f Jef,r uhj¬耳`~ڴR`hqXӈ-}b'.bynGboLx3pP]poQ~^czA ,_Wi\ gd?s Զ .IaPup~y Ёz#U^A XUő/uz1yIFrrUP[0n~PQhlT,;t3 &CB"L,2q2\E8 p"/ c DT߳ V(Ί8 r8l&\*xI4 &3jwX%\@o2} y~DL8߁,AsR lt@n[l6ϲh<P$`ӯӤqb4/vo?ec-V|%pM±x*KKZbXg3qAsQ~̨z"I}ᦣLd) K]O<:}2MIGJ@ֲFKT0r%O ҬQ'-U ̙04,\f" %,;@}ᒔ M>*XuA!rD#%Gmr`hB"f m0tRYaX+K%ʱrSqEEc:tQ㬠I&0p;jT9F#~1pPjcoDE`'x!1 #87,X\pA&0Anqo 8 IDAT&#RfN6Z~< ~||w~ˠf:y-'Y3 :*?]0ND"""1(q0G1^" JfB )< @ hH:f,5\hX*HMݢM?=!a ]*ŵf. 
.0_]c+l> e2,yI(g0.J:<`)إ3i5 )\#tE}F0# ([?AjmxAϦp菉Vʁȍ֨dq|`xy_K?l7UhP0o0lz-Q]y` #`3 mN_9MG xdi5ɸ -6]7W7 f83Gq)1c8b1'ג@ɯ{ p+ey^#L0l#l^9`su/87C@ôA^ mpÅ À´D|FBKa(l4Ul=*\a<+,4I}#-Dl]&Js5m ЏvNyVg'G<߆4t4<P4`48I Y_dn1g3i E@J9^]*MK-PPÂ]w[ Luc V8Y^:,:MY!AЕäà,f@ȫ`q)d-M1 ;뛰,X{p;+%oA;@S&8ĥDӬYp;t@U(./&ON.'VaJy!~)j`|&7ZEfETN!A9{P u U"^qAШe*R^:(8# [ZQÞDh@W؝՗.W#wlCh#8S80=F.a,M]8Ct@ ONDru"EJ*BN #`s7$ەNV;tΛ&f+yx &RY`-v>Zل$B A%-{ͨ Xg0f*mm٢W#pPQx csɂIz9im20,0#d藖w4r* 3 9Vː* ,G"Рy}zE⧘ͥ%N:i-{ /)*mNLƨ` 1Q-ĕ٬񀝐5X7<0Q ySN\(Djd&@u(A4<ࡄ> 8"l)LH.VL 0 Wɷ'l0 i,ÀNS!x:"5r(3`Cv;C(`GŃ@.LYBwIgO7[4Ň-]6P/^ +:$f *_ >qAp> u\h/!APkHIE >&t}/U ܇A5 &b&xOw:Yq!QdLF!+:( p` o a`JYtd-؁FD@RQbKx8Ʌ)9B~CeZy\Z+jsLp\(X; 3&6B] aA%i&w Ԃe \{x( kszϲkaS`'N`2O@dY@H҄%dqq2oFheϋd;(<( vJ 2318L(4n3'A,^rj08`pWd4B rSiDl]"@QE"UD$2[aqĿؚ1aDEJjJԸq wcSc4z!cQ?O?{F!0MTJgE"EYhUVk0+hpI<5P_5qd@ц{nK7[3й擀& dv]&Orj4^([Hs)^Ri1|}DMpt@  #9^iܥ1#B߮&YkԼH\ש,ʳ U08Yg-Pb-- @[I V#~@r3+[ɿ$R9?Ncl@ҬPZdH vCPJKXnCt9Y{v  Vv0y ^"9 rчp.$w6k(Ԧ&& FOJ~MòVPm&W! >dN e689e܀)f1FzXlmPNFD ĥ=a뇁M- admq) A)HKӁh7?* My2* w,E&B$AT+r ӀV*EK0q/ U&~eoiEݯei>h)^A`q!y Bz, fLZr~I" HpBy8 3<!EU1 , Mc0UE7oW KQ/yK ?19 pB`XSø}хI{%l8:zL? ~2ߥ@bYxʛ{y|9U/j|0 w, )H<^ߚɋtp2-69 k->@=pˢH=V  O\FL+,qV\P`0a;x36^Ci,] ) u'] A B{1GmT:kV\``>=r"@rc"v+)xG'^p0rn'tK]ZႣg#wHQ0,XI@NYެKMޘ"v^2b8Nhh耹bGG 81'9nh|-;aS^#ˠc"xQ^ShaUq@ tD P#?,P Ǩc'r؅`4X=ϭgo27Ij◡*-R6@Z d9A]LVy!á4WW-.<akpy#$oZ=2[usyp ++uhG[}Ѕ!,.U3)ɤ92G@?:` 3#`)91[c<r,\ 1~q.s˸NI `\ H>pQ cIkL]tstp;!#.0 y$!ǀ8J ق9{CZ&oJPJqRc+WWE <(Ne6GPX:f }P(+=bQ(KE>#NYQaDQ ?H&Lʢ( ߧ?84k(x>X#^#z[SH=*#deNrsÌ8 fZ Y@X Pԑ0GA Cd;S]`s%(9XAp81MS U,̟Yl\(^n6ϗ5؂ hn_,مfK7@x# ĭvs;(!~Eݫkf##PL$. ȉMf)u8ȝ; ].p`ZdҢ'qJ2t%1ɓ6o#=wqyYM#_n ~f֟EX|QexvLC?섃^G m".~2ve6o4ʰfF-e0%)z16، d^SG^1݋s$$PA(Pgr#5>9 0J?TI+fMAklG'ʵq,0bz\2B0. ƋY>-$c|DdY W'͵L؅VHL8?jXKr)^"< sDPF<0ݡ%LL,LI OH<-S1|;v; л KqŃe~9 CC}1nH^ !X~Êdٙ 6"Ԭ G7=ŵ62RER,}V3&8*3d4\Cx\ ctGq"GJUnȇX%!B;WU9@PiHY 4zMcK_ =4 q?W\OzEKl$6<7,~m8\\ f t`Mu/^v܄};=P̉D:o^$ۂ9FKHOH2!Q0%@`yL"X h1"y}D7T+4m&*3mp1 CA f!$]Q? E/ plWm+(6ž H0$ْ? 
׼IC̉<צ9.x6&@(:Z j`M+&GLEkœ"1kZ9^滃Zb`)XId(gfY N(EfUY~:TquV#/~}\Eܟe%`,5K>gHS tDpbEV'Là*Y%|t@ "D?,G\K}|6m]EO 馩8 $Pycu;t!A+"Ձ!BZh.cw`# |9 kӬWu>g٬7TX+L ]< 0rF?ÃmF%GEDQsA :_iJ]YMD3TTZc43fa % ԋ"5(X$Ax^C`$Th=+IU,6 GE `L w^|>_xGlf@ӽX~@Y0CƳ"Q9ve}0RY In0 K&ˋ|EQ!ꤡi6!Mqj jL?OHfV{.giGX t qzc;%ҌZ`4!2cLvDۂtB XT30GH s^TЖ`9/3<Lw _bV"h\S}U@ΑPƇsYĘ[ hr- n| d.X&X $ș}TlZד%] $n.PJσĉD?W|@K ]_3`åiEem{A!b@VU0JNʗxD46Vlv@w_x!QwdY 4m.KSS= %rPR/*;ty:XN4X"ЁR!gp[*O:S)N/ t9aeXo$فyTR@݆ũE܁S+ Ϊp%b&Ʃ$ ̹}ԙhP'TPjwV_`653N9v[Wݜp`9*0L?bݷ*dd[hьhS ėbiMmXdMbVN`Àr66v.'8&ab_s]1:![JE[G >p]ƲAU:`A)RF b ?ansT|U[gy5pQmwm~"4)Inޅ9J<P 0(9'$,,mm;UA 2Or D;n&JB ,T9^QnW.Sf0@͉QGΎ/OJG|h9KrYp p +Aq|MM)6tWL&8``)NA7xB@3%mYU+0,1R+#w稛0#,bm*c@(Tx5G@җeݛ@!Z"@v $U;궿:2(Ne{7/Cߛ_{`V)1~$=!QK_R  <$VPӄNV|7$ÎW3ʐ`m]1w^%JC-!qx,-4gDyCHn83p{s >m?Y)؟p48^>#jXn!8xfD&釁ѰCWLwPtFvXK>kZNbsMK_G^zPXY8n5- \{Ic \a|YKkT8g@4>#nYdn~ E@Ytʥ).$Kq`B{qnDP%xG>Q(nU:*2[ nE*X`L[ vDx&0C-eY0OD >JNpዀ3\"`4vkvhx6186h<8/ZW5 -V@A+EX疳S{N7 jj\s$ MU 06>KEv㳹%唲]qСiΟ. .H,SD!@ Ar#./wǺ|8dUTEÀeł_Zjuz/XO|}I"*0p W'\[pI0#gM :Wäl2TJpa-) B̛cag 1h'aH%h+m%oIh &@'(]c$kN;"lP0VFI+pj)tDf=ğ%Eu"#\~"rsLs2w_fE㼮U `X%=<9%)"@2PTρ;\Q:, [kan"rįD&gV+,TMr;Q$PS/:#JpI v-MAs7I;f`YVh:y̋RK`K.AwX0TdSZPozT)M׷"S<ڔ,wDK/_?8_/͞IW+T$7xH%w(v(d3T鍊1U1+*SCozHQk#Z IZeQߎkvP)TdFԍ#U Կdۨo|u5{<!>S˰ 7T<7Wy)>Ds+QLz`ܟC^ว@=ڌWJQO=gG?scQ#fͨۄڝ/X4%ԏ"lQjy,(geBnvH*! l/Y|&rŒ寓.,e{oIpC6kb.-6IM6[<Ԅ[&h?A֠?Q.39tЄ\BG;tMsb`ֳ8rGsq!G&b< Gpd9ЄD-AĸHZua9@q-IzH$܅X@uOFL]I( `pcSD̅4\&$ϙ7=N2|ɢM[[R\mp%T} ^f`G:;ՇC^x*cR}Rg{ Pf腿&kf(n^xb26A2"xW30%8>C6S n(%W\gџKpE8fOwG δ}゘AS QAf㯳!؛S?M0s${,]\ߘFo7CSb4cNz /G,"p_%byI Vq w&Y ֭c`9c"yI0w?T7JPcAӿ0:ZP|Y7Jj2CeDXNep!`s{] V*c ;L9C >$R c2MU0*'aτ^=ĵ3@\D+AO".z.P37.; Nǻ@Z wF1vFKl^O}"g;_swt"8AQIIp9 r{IxEx].$ "Z^J=K,a4˴(j`<6L6(Z#Lda02I[$\Ɂf4twu T+)2|0 KR^si IlݏP!eosq=h \Iv 8 ~\! 
3ˉW-Gudc-@T@;;qn㣴hjp%1[$Ã(y7Up E)>z="`D[ 4"`Y8=Ʉ"/VGY-j#@C)>ݝ/=s 5m7r#l]  ?VJ?9rRY0" SJRN%r~.YPKk ,.CYѵ3@K[1XrpM謓/ {+> ^fPsw @/?nV4l~TQN%(S8+A"8QDPL+!U*;x* @Bj":vUOT̓-bADP]轀7{ Ϗ8T8r ŞOkX[{h;= {Nt7"NATFCzƏ+懡äC!>Vh))*0ah٠J,*1=n8ZWZѧɬG$^3 l!QKࣨw&(O>H$%(4=BfG>w/2D2.fy)i.KQ?e*ԉŕ >,{$Gg&|9p 3fyԺa8`\ܙG" ǘ˔` 8&6flcc ksaX"xscFWZ :2Z #DŽRŨz1Z`s j;j;joII$j*Rq5.)0#$TaS%#uH)Rj./<)LZ% V).SY;LPo/T2$*zjPT2Bi‚&꛵=SjlWf_xN7LY\h6[ xR\}C*(oFE+<]B$+2⩻PwjgRza4A?~6~BmRJ=TOU2X]3Y˨A~udrPֲ7lެXe"Vgl^?F#o3au^>~p0P=1`Ę/ {M1zGI'rIpYv2.0DiG0rأ.G5M]pǨ>#8JϐɄ! X.6o2l얕d7xdl,f- >kY >s/.,.OXuly؅Rmio=P\}Q ؃{p3܅1G8q,pSdWs}@Ҝ&M$; {hH*J6xdT^<Ʉdc.bX2$|P-x,w'U-6pؠ}W2GQ,CF8ph87=˴o6Ҍ2h"Q%1GS0 zEzn"?g.0٨1Ȟ[=[(%N\`t'rI\~PUtvF[n^W 0(礸 P .zrf 8Nj|E5!RlLg5Л?<>U{Z</8ހ:fpn[[8w$- .5 0pRZ5'kQ4xgEzAp4P/(ڣ,sJ.).o1|d,Pm%l(5%g?;[/%NT%[w6{k?h^ (G@fA67H$P36@E;Bxv:7Li{bOM';؀`IY1P!(.^o.,%hK&Q)Juro IDATpO/Y8͡@` \ce 1ndv$U}3fRC}"B E@ ~p^  N"`U&o$X3ҜEq_,|+/%鋀|V_SHxłB%2Ş,kiu+L(Z L ?^~VI-Sfe@hiQO˜ȴH3={i' x@X ,+L:YV^[yd9 bs*`+*! *QSqh-2<Dx}-zV8|; KK1x$tk I[1XMը?Th/ ^ ]ϱMUfJu˲*D"/r gEɇqP),KUW$h.|QDJϼ b\|5 ovj2ZEPA7x5p 4AKY]tFP6`s `*tSA`f8h<<+~ЉzG \ZKdBzׁP?#F%Gd>ڀhDd+ɾrg9^Df/>Z&k'[p "]x:MAmDoTW֠VRߠ&܆Jj"6# U3E ӄXY} @(#0Ż˒aijl S,4K 碧RϤX*zd%*ԻE `YƲAj68RlJ;z:r7xZAABj;j,]L3VS5)t+D5OtQ*T<Ê ðS⏊(SPY͠U%J~WKz{rճiR*.¯.C]T ThJuuZe,A}ZVu +I{]橇**O?#e?̶ܫU9%UrL)obbJؿb<"jjY#:"=gf;~FυDtu]6PR 1C j(n JsB#hCPx?1v!$񞒀H\:Fꎗ >CX/Kx; 簊Bpv=ÙQ: 'UH$ P%5}%8`Z`&Eg԰1ɽ-Ʋ^I\~L`,uB>zډ,",&>BoW%нҤ_Jf`1F&o&x+6:p_`C3V07ŋ# d$ VK ?0sm%&{dS4E 7vy{V8xƕh6Yh2a:(>n1כH4tsb"3Ah z<)` g5 d@7SZ15;ālj_9꘬2< "m2<u@%!}`yPZ΍ŵ-p' u"(|5=K4iȻ*Ԋ X@#@h0#EAh%ʊ$ѳOvT + 4Mf/X!,T8a,S*ʗJgCHs KC6ũfWZ'tΈo@ffSfrbu 9NX?P030Ä &<`CN7S@9APG*Nzjg@85pS\Y&uhA-Cz .)y_g{[O7\A73?+j$Zs.1 ޼3XCÅ#?`u ,xݻ@VD@?K24c6T G4DEP~>txln际w8% s  0/f3 MF[6P`f!,xAlCv #0*Nd?֓Yi Y Tڙ6ҥ1H, ͗WO*v#8W pyO9$L-I?$YosĀ?`9\iG\QTEt_Wx{7!J)$ŕDS\(]j ɓXJ7%"ʲ+z`LÖTq".Ew4XP:Ȳ?$צ!!Y +Ҝ-nN21P'k+=&ń5 0fkFPrV9A0Snހ\.p6*WW@g]<aPqc`-_ ^Xm8*LPi&,4ғS*0=z@_,9w+6HdU\)婤Zmk)R#yJLpz^e?*JR7+o{},ډ@e8;(.#dM7X8*nQ"\iUtbG(oLy{UrW63/+@զNȔbzŋ*:ȶl Zfo"RUK ټ ~/:}<)P 
\Ot?u qr\fhF#kXBd$ ]|¾pɅ K{C$ܛ=a { /2xƶlKZ4hV>ǩ>]]3A2OTUW9o_uwyPK:`sL%[Mw8sәR0ܘ'B_q[`[S&y܀EZ/Id`3)q35$ZN-zA%\I53y3\bc@|gy* 67F+=&Nr@ɻ45eySyl-ϳWЫEN=xzkʘo =4I}YBΔW@ʄJܼ*ըo_jJhgbay?"dV@.E|xyop Q@z1Td`q%Nz9o\x8LY^vX5"8v &<lT&&Dl gMp6qoJr vtVBQ`#/s:R`ѳ+c6]VviS#y6zpXܩ%5է#ا35 81O-mi:t|+ .HҮ{I7%x+`wLxw?o ءA`Iv`=ZkHT m*x ױ¼As%fJ!X0e(%6ƝōDBCM–M>:lɇLf8#lqANcy-9%b1<9lyOiޙ^?D6`CRyv d[\Zx'XI4Y 2*>@OJ``xR'$p/cOP\En̷RT wKI&J?{Ur΀v .^pmQpgQf Jd[yQqh79Ӥ%杖] ~x]يDpYv#&4 $lHQ~?C>M%zLDpV 96erw A0Və9IFrpPL&|\~QI|X_~^OA I%u76IĮTs9SU|USr<"&{$%"CR'b{QD )~~A9G$E2$kʜb~{N8(塖E͋9/ŪXFtN,)~4$gOe);L'M@Ge9Cq&Af͒qF>軈H(bӌ³#6`1\:e6K2 ֳl@s^,&'(GHNx+<_[6Z ,{rઁ Ҝڰ-P851)!YC#˚2WX9 =@GG Yy/` 2X7b6zc\ aX C(7껬N]K9r`#e2 O/>HQ"+AU\EAF% @*XkpQ zDQXn4Y>ekDmc$&\S,8X&+q·}3CĦ/Thg@X)`.oRKܲˬ:H܁Y~Ue$SQD|.6p}t\<8S #!7F"Y}& 3v`-" 7`d TZ-?0-ؕ51aE(e5J{ pzI'l>fIL>_@Dh Ƕ@&hsێc`,y/ "`1_OL46D.&WڬGU%n.GnHD7@&L8S4u9EJxkV@@1 Hr7ˬ]"(cY{yBX@r,B'XWkN%*C2\.,ޭKa n68L]-o'q=r%r`P!dp@yRܲÍz-qwL"af紾I] FI/PZMA|DB@0osJ[MV6EbI4lȒVWPt\& '*7@4(H$hrhsQ :p#y/%MOBħ]Q9x,Y.p{^TS 0Z[IXt# XyN=$^TI'kN-Ly;,駼oJ zcL[CȡDN_Z@c^yaћY>'e g$7Z۔J‘%f]r𦉗W.>*9]w7a\ Ys$e%=A"9dI IDATp% o%߻% nǤ^j?-|Z{d> |{e~O=)t?$*Ƥxp(ƤxpW'|/iLWSbI( 1g$5!T-]̧dK}TdpN}Cfq(w"w"Ɲùڮ8^G3 )(Qd0h RfMN?J;#%q&Kt ,%$Y^"5{jg^~~9X:uǪ*p| ~PϼehjK;zkV`u+lr_t,`J#bV9@~exS3ٌ:& (̢2(g|ͿծLQJyDOQѣy'!g&cA@\&ڂV;9s  -؂ ,y, -[}<`2d M2cY,~f1m'A:M#*3 8AD*0jrF}{7H|#|-ٽf@؊&)w^GD@GUevR kr="zȽꋀEYf'YwAa-D, К3yF4VX~dy$$$p)w.ˤfeюx$X68R>*0hN<4u۲cA:b;XerC pQ Zg@ Xx@0ǒS6 HKӓ*j߲~R2S$wsSKMXx9;zZ"P :!OE(&"?$BI}myu/үJv^y"b뎚l)p? ~/ b ɴդD*0E\0< YTy8P0 %mg(&P?T6ؽ>'<7fJ5 i1a-90G3 eY1\P-Z?R@'%I 䀣V_BثjXgUl$ӜXƸv:*,br2v{?vevq6QT@UExw~10){ .̪U~By;|/Ǵ]9*eZ%~+,$+ ,橥*KP1周٩/A^0cXc5<,xP*80q Y\`F IQ'lPDE e-Z˖+P8a` `țJ2 vb^BB!ܭ *L`lYk-ߓ|z}{ w5xw_f^@ed,k.or) }?`Nę" tf - x1}&V3TxT)M.tȻDĝ أ=,S(ܙ6nri)!cttUgҔ'\r6 QOTٟ)wW/"~:"r0Њy8L=uvo6`oEZȲ0඙iBlmj[.LU0MBθ;Ο' M1(WYM2eV4 o3.I癶FM"Fl<4%#D4/MIn2rlؖ$Alld?OX<_9D\}IlK3hbI8l0eu Z"hBn~!6NtĘ~هPk0]%މ9Ci{;#j]~E_GvX=Dnø Ce jrwӿ%{/'k _5Dt#| ca !A)?In7wL#{N\ө3qS. 
rLao#l^IgԩL`O^ Twq꘭e3i1чu9c8eYtiMi2j;;0nsSD tBa@_M]ܑ֙u&ۼ= j(ɹS``("ζ.poLDۑ{!<Ǩ؉ T]ɌLfr2.sJQL|-JNJ/H&SU~F{uSU9))N߻$~P7P˸E\K 8C$eaWoa,8]sB/OK/Cw|N27 J\D !Z֩ 0)Żܲ AU 81|jK,Jq^_҉Md:'sn1F*ʏ=)َ<1VE)|lXQ~]T_RZm*tkr&1D&<67F6Xa Xc[|?}H!J+tCɫGD;F_}I\y( IMxbNy_&Ё4-ĺ猧 E\}GrKlyBEo9D k,*fp>+nǗuw  >Ng(E _82jFkD? ?17SĝD>Nr3Wx$ŕgr n$t-,AJ4d$ =)&ZM6 r,dB[RlIRrsυ/G-tkrnMsL5;6|ml<L9M149UHq:RJORv]4gdDG0R@&aj4/Gf\>G,z4vat@' -w}ۉvcsw"bo(!u@֦$/`=}d5m"#v/"`WF)e#IoN7lOm6s N_~!TK'q{f;vzl EETWS6(XܗgE1G W*ǣGܙugy'Køͪ#om;/p^LvI}E=K1"֛nCJLQ 9ֆFTaaBi7K\9a5h4{X-~NTӃ8" bn?P"uI6<(MsF.c4ޖiHB2TKGg.#&-L:oBa@=ux9N@ilܚBojyS}e֢\[e q9Q]~FnX<깂NLbOj18 Qp=oDA1z>bkC "mQi m$ P΀yeZ&ؓ`\uZ2ZT,,:!O3a[:ܚQ \wo7~!4uYaR \[3 zXG9Z0c1ؖ[)0m\.\HGk $NCl)9\}AB`A{^?5pp9w`>CCz݄gIHW)q:I.c̃=c|,ͫs) m:R/‹s$h6ˣ5mEpV6Jd7$فu*Wa i9PR`e!pЊ^nC<0UbeslꀎFfB:.^ܭ;)z0}Eu% ,(Xuw+wo"?ELs=M .n4A LӦ&l T<`0SbK Nlxj3#]V*f `a%5p;69(ϑx\ Sr3!)D q9'ќ<9{!ۑDL#hFDhFfd<'!""7&{ct4W\XI("RTO´0Lxf!`9!!-[<◄6m|Dx&!K2-DڕðW2&<Cn MM![JldU/]pZ?[RVo*d Ub# \Ee6.S|Fꣲ jQ?Br("n~`ެ6&䡸zt)wG);ɁNJrl-.uC\kIm?&?Wr9G d2 ,S>uӨO3~4 q&rGhE##99;.?1{c _R@%:ƕ1} aZ&F*' 4WQФ'랕sUvH>,$P߄ !0!n4?&Wf^DŽS̈0B0)d6hp<&c_mC1َ֯&2Vo"D~FꊀEY)PCE|)9/paYLΆs_yK:\:NA!Ñw5x}6,Lڙ 3#JŋsL؟v;tZtř-%D;Xq(iUxhs˳"P_Z? G ( X`{ `/!y?1n' dCX1"}XKCe¨Vo߀5Fѝ>\ä=og Ei524*af@dK˲.&)DHGc ;e1V, (w6U4uY8Bޒ NxU#&$xS#6@+<.r>si' \iyֵb# P\7I`_ Iw]`b`¬2+-lb28s:@ )xdls; Dk =NxE*T f6DeqX@90դ'ϓ3%:Mdyp2J[Re&mGfiQQ)Opw`ZS 7zu&)M9&J\_WmUmR+q Yi]s6s^hTllB#45y؁#Z 0=뚽|B:/^̦ZvuCS $i`[+Lcj1}ݘ m:[j XVZ)`w΀Y@{@J hP J&f ITgJ<_EX0isw%nvc<0!ٸx%G(q]M:,&7&T D->IipC]JM)@E_Z l~2o}0g{R@_r ,ڌFAXj'Aw8_d`\?nm5p4`=DP*,`Ad/]}K +%0 ;`nXL0<1j}!!'>>Y"!Dwiu Byl!uqX3`yQZ1Wk#*L֌|ߔhgW{"{ :-6*$*q dUItQ{|^fks*C,H%.*Rrl1o<W{~y_UxXSN JwF`Y`mA]q6Iwr8W +ʧ]&"1͉0ԃ=#E&. 
8du}ZM&_4*:\v 0c:yf%ry$x*)pSY>/–@Gy'lP1m9+59@&q@' 0XYF:K<Ѕ e|W`tI5MpF:Ȭ$,&QV '*sR^GH(@7q`0a$3D@ϒ\/3Qfc4T=L0 7NuPfxgHn~DOGH2[C8q?n'yJ`$<:#^vJ lAp+}ӱ#OXW("V7{I4U %0ʝ`oй*T='Oc.}}W Xzo:I,#$gA<] 8Az^sA,A x]vkQ#Fo:&<I`骁묊qi Fl HJdt䜢5y G  ><-kD* *`]9!R1ۙ$+΀Y;YsQ*~X'u\h1Ld:uX`%&w@Fd u\H G,ZwѮNn IDAT=}#?DHj ķjE#75'9},ZSsfL0P5׮)ڱ(sFS '-&yy;zmYٚZt0iSwqvE@}Fl1Z)[M\bqA6ثYISJ{Z~TTu;=6Kc%2-| IwZ X $8 ppǶ 󈯁nPK JVw8F RrS^!Hv9࡚aIѓK c_e[Dh tQ$ \b q J@QjqH4։Mfi`.'SX1XNWmF9h;'j:5Dp&tVDLKMg"̏F49{is ˢX$+!|*%gT)pE W6)?@xͫsОJ0Ui>s~(PZ`y,,T%%[NӍjt^-XO w"czI9@ؠ/t;sNP@34RQ)_"Үd lG&.;o'O duZ#p0JӞ91@tH`cVS*g|.;kl&lnRD@bI"7p <"̥wZA|:C9"nFi]M:axcG YEG _VAtYl1U*yJ-V/-Xz`UrN+;0h$Dz@/11nϯ6b]ձ b `_YOl%fQ gbRP?Jx0Z` t 0W+$&VQ\ZY 0Y!EV=%y*clQL^lWfm a1M}tg9fw &UDp"\]I C.7Qui=;i'Ņev{s,kF*kOR 2/11EdFRŒKLF d!'"2_Ȑh<#6r9?<=LQv"OzMޔ;QTNj=&e(Yཕ9Y4fedi$R\*rCpU 幯u1e4QQ[L9OgyLE ,X]5+?t@{W9$ۈwM-O SW"z`uvz*/.[+|;0KHu0!a W[W3U g&vY\B|g40&1n=KΥxzqbbB 81ڔgFQs(˓I69<"pw&/KӤ"X Tw{+S|es/,܁([@uVbxAl" 3S.V>]U"(0_3ݮ^E (0]o[.(%% {OOG4q{wd7A"`;ZKa,8LDw4~` gI'G~z8#L0D@+Y@wnh(pDLf! 
pysph-master/docs/tutorial/solutions/000077500000000000000000000000001356347341600203605ustar00rootroot00000000000000
pysph-master/docs/tutorial/solutions/ed.py000066400000000000000000000017511356347341600213260ustar00rootroot00000000000000
import numpy as np

from pysph.base.utils import get_particle_array
from pysph.solver.application import Application
from pysph.sph.scheme import WCSPHScheme


class EllipticalDrop(Application):
    def create_particles(self):
        dx = 0.025
        x, y = np.mgrid[-1.05:1.05:dx, -1.05:1.05:dx]
        mask = x*x + y*y < 1
        x = x[mask]
        y = y[mask]
        rho = 1.0
        h = 1.3*dx
        m = rho*dx*dx
        pa = get_particle_array(
            name='fluid', x=x, y=y, u=-100*x, v=100*y,
            rho=rho, m=m, h=h
        )
        self.scheme.setup_properties([pa])
        return [pa]

    def create_scheme(self):
        s = WCSPHScheme(
            ['fluid'], [], dim=2, rho0=1.0, c0=1400,
            h0=1.3*0.025, hdx=1.3, gamma=7.0, alpha=0.1, beta=0.0
        )
        dt = 5e-6
        tf = 0.0076
        s.configure_solver(
            dt=dt, tf=tf,
        )
        return s


if __name__ == '__main__':
    app = EllipticalDrop(fname='ed')
    app.run()
pysph-master/docs/tutorial/solutions/ed0.py000066400000000000000000000011461356347341600214040ustar00rootroot00000000000000
import numpy as np

from pysph.base.utils import get_particle_array
from pysph.solver.application import Application


class EllipticalDrop(Application):
    def create_particles(self):
        dx = 0.025
        x, y = np.mgrid[-1.05:1.05:dx, -1.05:1.05:dx]
        mask = x*x + y*y < 1
        x = x[mask]
        y = y[mask]
        rho = 1.0
        h = 1.3*dx
        m = rho*dx*dx
        pa = get_particle_array(
            name='fluid', x=x, y=y, u=-100*x, v=100*y,
            rho=rho, m=m, h=h
        )
        return [pa]


if __name__ == '__main__':
    app = EllipticalDrop(fname='ed')
    app.run()
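Both tutorial solutions above carve a circular fluid patch out of a Cartesian lattice with a boolean mask before wrapping the points in a particle array. Stripped of the PySPH-specific parts, the construction can be sketched with plain NumPy (all names below are local to this sketch):

```python
import numpy as np

dx = 0.025
# Cartesian lattice covering the bounding box of the unit circle
x, y = np.mgrid[-1.05:1.05:dx, -1.05:1.05:dx]
# keep only the lattice points strictly inside the unit circle
mask = x * x + y * y < 1.0
x, y = x[mask], y[mask]

rho0 = 1.0
m = rho0 * dx * dx * np.ones_like(x)  # one lattice cell of fluid per particle
h = 1.3 * dx * np.ones_like(x)        # smoothing length, h = hdx * dx
```

The mask retains roughly pi/4 of the lattice points, and the per-particle mass `rho0*dx*dx` makes the summed mass match the patch area.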
pysph-master/docs/tutorial/solutions/ed_no_scheme.py000066400000000000000000000036431356347341600233500ustar00rootroot00000000000000import numpy as np from pysph.base.utils import get_particle_array_wcsph from pysph.base.kernels import CubicSpline from pysph.solver.application import Application from pysph.sph.equation import Group from pysph.sph.basic_equations import XSPHCorrection, ContinuityEquation from pysph.sph.wc.basic import TaitEOS, MomentumEquation from pysph.solver.solver import Solver from pysph.sph.integrator import PECIntegrator from pysph.sph.integrator_step import WCSPHStep class EllipticalDrop(Application): def create_particles(self): dx = 0.025 x, y = np.mgrid[-1.05:1.05:dx, -1.05:1.05:dx] mask = x*x + y*y < 1 x = x[mask] y = y[mask] rho = 1.0 h = 1.3*dx m = rho*dx*dx pa = get_particle_array_wcsph( name='fluid', x=x, y=y, u=-100*x, v=100*y, rho=rho, m=m, h=h ) return [pa] def create_equations(self): equations = [ Group( equations=[ TaitEOS(dest='fluid', sources=None, rho0=1.0, c0=1400, gamma=7.0), ], real=False ), Group( equations=[ ContinuityEquation(dest='fluid', sources=['fluid']), MomentumEquation(dest='fluid', sources=['fluid'], alpha=0.1, beta=0.0, c0=1400), XSPHCorrection(dest='fluid', sources=['fluid']), ] ), ] return equations def create_solver(self): kernel = CubicSpline(dim=2) integrator = PECIntegrator(fluid=WCSPHStep()) dt = 5e-6 tf = 0.0076 solver = Solver( kernel=kernel, dim=2, integrator=integrator, dt=dt, tf=tf ) return solver if __name__ == '__main__': app = EllipticalDrop(fname='ed_no_scheme') app.run() pysph-master/docs/tutorial/solutions/particles_in_disk.py000066400000000000000000000003511356347341600244170ustar00rootroot00000000000000import numpy as np from pysph.base.utils import get_particle_array_wcsph x, y = np.mgrid[-1:1:50j, -1:1:50j] mask = x*x + y*y < 1.0 pa = get_particle_array_wcsph(name='fluid', x=x[mask], y=y[mask]) plt.scatter(pa.x, pa.y, 
marker='.')
pysph-master/docs/tutorial/solutions/plot_pa.py000066400000000000000000000002401356347341600223640ustar00rootroot00000000000000
from pysph.solver.utils import load
import matplotlib.pyplot as plt

data = load('ed_output/ed_1000.hdf5')
f = data['arrays']['fluid']
plt.axis('equal')
plt.scatter(f.x, f.y, c=f.p, marker='.')
pysph-master/old_examples/000077500000000000000000000000001356347341600162025ustar00rootroot00000000000000
pysph-master/old_examples/README.rst000066400000000000000000000005541356347341600176750ustar00rootroot00000000000000
The important examples are shipped along with the PySPH sources and can
be run by typing::

    $ python -m pysph.examples.run

If you know of the particular example to run you can also directly run
it as::

    $ python -m pysph.examples.elliptical_drop

The examples in this directory are just sample examples or are scripts
that do not entirely work correctly.
pysph-master/old_examples/advection_example.py000066400000000000000000000060261356347341600222470ustar00rootroot00000000000000
"""Tests for periodic boundary conditions using a simple advection function

The problem is purely kinematical with a circular fluid patch (R = 0.25) in a
doubly periodic box [0,1] X [0,1] subjected to a velocity profile:

u(x, y) = 1.0
v(x, y) = 1.0

which is divergence free and periodic with period 1 in each coordinate
direction. Running a long loop of the simulation should let us test the
periodic boundary conditions implemented in PySPH.

To ensure the neighbors are appropriately set up in periodic calculations,
the density is estimated by summation for each particle. This should remain
constant through the simulation.
""" # PySPH imports from pysph.base.nnps import DomainManager from pysph.base.utils import get_particle_array_wcsph from pysph.base.kernels import Gaussian, WendlandQuintic, CubicSpline from pysph.solver.solver import Solver from pysph.solver.application import Application from pysph.sph.integrator import EulerIntegrator from pysph.sph.integrator_step import EulerStep # the eqations from pysph.sph.equation import Group from pysph.sph.misc.advection import Advect # numpy import numpy as np # domain and constants a = 0.25; b = 0.75 # Numerical setup nx = 50; dx = 1.0/nx hdx = 1.2 def create_particles(**kwargs): # create the particles _x = np.arange( a, b+1e-3, dx ) x, y = np.meshgrid(_x, _x); x = x.ravel(); y = y.ravel() h = np.ones_like(x) * dx cx = cy = 0.5 indices = [] for i in range(x.size): xi = x[i]; yi = y[i] if ( (xi - cx)**2 + (yi - cy)**2 > 0.25**2 ): indices.append(i) # create the arrays fluid = get_particle_array_wcsph(name='fluid', x=x, y=y, h=h) # remove particles outside the circular patch fluid.remove_particles(indices) # add the requisite arrays for prop in ('color', 'ax', 'ay', 'az'): fluid.add_property(name=prop) print("Advection test :: nfluid = %d"%( fluid.get_number_of_particles())) # setup the particle properties pi = np.pi; cos = np.cos; sin=np.sin # color fluid.color[:] = cos(2*pi*fluid.x) * cos(2*pi*fluid.y) fluid.u[:] = 1.0; fluid.v[:] = 1.0 # mass fluid.m[:] = dx**2 * 1.0 # return the particle list return [fluid,] # domain for periodicity domain = DomainManager(xmin=0, xmax=1.0, ymin=0, ymax=1.0, periodic_in_x=True, periodic_in_y=True) # Create the application. app = Application(domain=domain) # Create the kernel kernel = WendlandQuintic(dim=2) # Create the integrator. integrator = EulerIntegrator(fluid=EulerStep()) # Create a solver. solver = Solver(kernel=kernel, dim=2, integrator=integrator) # Setup default parameters. 
tf = 5 * np.sqrt(2.0)
solver.set_time_step(1e-3)
solver.set_final_time(tf)

equations = [
    # Update velocities and advect
    Group(
        equations=[
            Advect(dest='fluid', sources=None),
        ])
]

# Setup the application and solver. This also generates the particles.
app.setup(solver=solver, equations=equations,
          particle_factory=create_particles)

app.run()
pysph-master/old_examples/advection_mixing.py000066400000000000000000000062011356347341600221020ustar00rootroot00000000000000
"""Mixing/Unmixing problem to test periodicity and/or SPH integrators

The problem is purely kinematical with a fluid in a doubly periodic box
[0,1] X [0,1] subjected to a velocity profile:

u(x, y) = cos(pi t/T)*sin^2(pi x) * sin(2*pi y)
v(x, y) = -cos(pi t/T)*sin^2(pi y) * sin(2*pi x)

which is divergence free and periodic with period T. After a time of one
period, the fluid should return to its original position which can be used
to check for the accuracy of the numerical integration. An additional color
function

phi(x, y) = cos(2 pi x) * cos(4 pi y)

can be used to visualize the mixing/unmixing of this problem.
I picked up this test from the paper "3DFLUX: A high-order fully three-dimensional flux integral solver for the scalar transport equation", Emmanuel Germaine, Laurent Mydlarski, Luca Cortelezzi, JCP (240), 2013, pp 121-144 """ # PyZoltan imports from cyarray.api import LongArray # PySPH imports from pysph.base.nnps import DomainManager from pysph.base.utils import get_particle_array_wcsph from pysph.base.kernels import Gaussian, WendlandQuintic, CubicSpline from pysph.solver.solver import Solver from pysph.solver.application import Application from pysph.sph.integrator import EulerIntegrator from pysph.sph.integrator_step import EulerStep # the eqations from pysph.sph.equation import Group from pysph.sph.misc.advection import Advect, MixingVelocityUpdate # numpy import numpy as np # domain and constants L = 1.0; T = 0.1 # Numerical setup nx = 50; dx = L/nx hdx = 1.2 def create_particles(**kwargs): # create the particles _x = np.arange( dx/2, L, dx ) x, y = np.meshgrid(_x, _x); x = x.ravel(); y = y.ravel() h = np.ones_like(x) * dx # create the arrays fluid = get_particle_array_wcsph(name='fluid', x=x, y=y, h=h) # add the requisite arrays for prop in ('color', 'ax', 'ay', 'az', 'u0', 'v0'): fluid.add_property(name=prop) print("Advection mixing problem :: nfluid = %d"%( fluid.get_number_of_particles())) # setup the particle properties pi = np.pi; cos = np.cos; sin=np.sin # color fluid.color[:] = cos(2*pi*x) * cos(4*pi*y) # velocities fluid.u0[:] = +sin(pi*x)*sin(pi*x) * sin(2*pi*y) fluid.v0[:] = -sin(pi*y)*sin(pi*y) * sin(2*pi*x) # return the particle list return [fluid,] # domain for periodicity domain = DomainManager(xmin=0, xmax=L, ymin=0, ymax=L, periodic_in_x=True, periodic_in_y=True) # Create the application. app = Application(domain=domain) # Create the kernel kernel = WendlandQuintic(dim=2) integrator = EulerIntegrator(fluid=EulerStep()) # Create a solver. solver = Solver(kernel=kernel, dim=2, integrator=integrator) # Setup default parameters. 
solver.set_time_step(1e-3) solver.set_final_time(T) equations = [ # Update velocities and advect Group( equations=[ MixingVelocityUpdate( dest='fluid', sources=None, T=T), Advect(dest='fluid', sources=None) ]) ] # Setup the application and solver. This also generates the particles. app.setup(solver=solver, equations=equations, particle_factory=create_particles) app.run() pysph-master/old_examples/iisph/000077500000000000000000000000001356347341600173165ustar00rootroot00000000000000pysph-master/old_examples/iisph/two_blocks_2body.py000066400000000000000000000027761356347341600231510ustar00rootroot00000000000000""" This simulates two square blocks of water colliding with each other. This example solves exactly the same problem as the two_blocks.py but shows how they can be treated as different fluids. """ import numpy from pysph.examples._db_geometry import create_2D_filled_region from pysph.base.utils import get_particle_array from pysph.sph.iisph import IISPHScheme from pysph.solver.application import Application dx = 0.025 hdx = 1.0 rho0 = 1000 class TwoBlocks2Body(Application): def create_particles(self): x1, y1 = create_2D_filled_region(-1, 0, 0, 1, dx) x2, y2 = create_2D_filled_region(0.5, 0, 1.5, 1, dx) u1 = numpy.ones_like(x1) u2 = -numpy.ones_like(x2) rho = numpy.ones_like(x1)*rho0 h = numpy.ones_like(u1)*hdx*dx m = numpy.ones_like(u1)*dx*dx*rho0 fluid1 = get_particle_array( name='fluid1', x=x1, y=y1, u=u1, rho=rho, m=m, h=h ) fluid2 = get_particle_array( name='fluid2', x=x2, y=y2, u=u2, rho=rho, m=m, h=h ) self.scheme.setup_properties([fluid1, fluid2]) return [fluid1, fluid2] def create_scheme(self): s = IISPHScheme(fluids=['fluid1', 'fluid2'], solids=[], dim=2, rho0=rho0) return s def configure_scheme(self): dt = 2e-3 tf = 1.0 self.scheme.configure_solver( dt=dt, tf=tf, adaptive_timestep=False, pfreq=10 ) if __name__ == '__main__': app = TwoBlocks2Body() app.run() 
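Because the two blocks in `two_blocks_2body.py` carry equal particle masses (`dx*dx*rho0`) and opposite unit velocities, the initial net momentum of the system is exactly zero. That symmetry gives a quick sanity check one can run on the arrays before handing them to the scheme (a plain NumPy sketch; the particle count here is illustrative, not taken from the example):

```python
import numpy as np

dx, rho0 = 0.025, 1000.0
n = 1600                                # particles per block (illustrative)
m = np.full(2 * n, dx * dx * rho0)      # equal mass for every particle
u = np.concatenate([np.full(n, 1.0),    # fluid1 moves right
                    np.full(n, -1.0)])  # fluid2 moves left
net_momentum = np.sum(m * u)            # zero for this symmetric setup
```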
pysph-master/old_examples/summation_density.py000066400000000000000000000071051356347341600223320ustar00rootroot00000000000000"""A simple example demonstrating the use of PySPH as a library for working with particles. The fundamental summation density operation is used as the example. Specifically, a uniform distribution of particles is generated and the expected density via summation is compared with the numerical result. This tutorial illustrates the following: - Creating particles : ParticleArray - Setting up a periodic domain : DomainManager - Nearest Neighbor Particle Searching : NNPS """ # PySPH imports from cyarray.api import UIntArray from pysph.base import utils from pysph.base.nnps import DomainManager, LinkedListNNPS from pysph.base.kernels import CubicSpline, Gaussian, QuinticSpline, WendlandQuintic from pysph.tools.uniform_distribution import uniform_distribution_cubic2D, \ uniform_distribution_hcp2D, get_number_density_hcp # NumPy import numpy # Python timer from time import time # particle spacings dx = 0.01; dxb2 = 0.5 * dx h0 = 1.3*dx # Uniform lattice distribution of particles #x, y, dx, dy, xmin, xmax, ymin, ymax = uniform_distribution_cubic2D( # dx, xmin=0.0, xmax=1.0, ymin=0.0, ymax=1.0) # Uniform hexagonal close packing arrangement of particles x, y, dx, dy, xmin, xmax, ymin, ymax = uniform_distribution_hcp2D( dx, xmin=0.0, xmax=1.0, ymin=0.0, ymax=1.0, adjust=True) # SPH kernel #k = CubicSpline(dim=2) #k = Gaussian(dim=2) #k = QuinticSpline(dim=2) k = WendlandQuintic(dim=2) # for the hexagonal particle spacing, dx*dy is only an approximate # expression for the particle volume. As far as the summation density # test is concerned, the value will be uniform but not equal to 1. 
To # reproduce a density profile of 1, we need to estimate the kernel sum # or number density of the distribution based on the kernel wij_sum_estimate = get_number_density_hcp(dx, dy, k, h0) volume = 1./wij_sum_estimate print('Volume estimates :: dx^2 = %g, Number density = %g'%(dx*dy, volume)) x = x.ravel(); y = y.ravel() h = numpy.ones_like(x) * h0 m = numpy.ones_like(x) * volume wij = numpy.zeros_like(x) # use the helper function get_particle_array to create a ParticleArray pa = utils.get_particle_array(x=x,y=y,h=h,m=m,wij=wij) # the simulation domain used to request periodicity domain = DomainManager( xmin=0., xmax=1., ymin=0., ymax=1.,periodic_in_x=True, periodic_in_y=True) # NNPS object for nearest neighbor queries nps = LinkedListNNPS(dim=2, particles=[pa,], radius_scale=k.radius_scale, domain=domain) # container for neighbors nbrs = UIntArray() # arrays including ghosts x, y, h, m = pa.get('x', 'y', 'h', 'm', only_real_particles=False) # iterate over destination particles t1 = time() max_ngb = -1 for i in range( pa.num_real_particles ): xi = x[i]; yi = y[i]; hi = h[i] # get list of neighbors nps.get_nearest_particles(0, 0, i, nbrs) neighbors = nbrs.get_npy_array() max_ngb = max( neighbors.size, max_ngb ) # iterate over the neighbor set rho_sum = 0.0; wij_sum = 0.0 for j in neighbors: xij = xi - x[j] yij = yi - y[j] rij = numpy.sqrt( xij**2 + yij**2 ) hij = 0.5 * (h[i] + h[j]) _wij = k.kernel( [xij, yij, 0.0], rij, hij) # contribution from this neibhor wij_sum += _wij rho_sum += m[j] * _wij # total contribution to the density and number density pa.rho[i] = rho_sum pa.wij[i] = wij_sum t2 = time()-t1 avg_density = numpy.sum(pa.rho)/pa.num_real_particles print('2D Summation density: %d particles %g seconds, Max %d neighbors'%(pa.num_real_particles, t2, max_ngb)) print("""Average density = %g, Relative error = %g"""%(avg_density, (1-avg_density)*100), '%') 
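The script above loops over NNPS neighbor lists to evaluate rho_i = sum_j m_j W(x_i - x_j, h). The same operation can be sketched in 1D without any PySPH machinery, using a hand-coded cubic spline kernel (a brute-force O(N^2)-style sketch, not the NNPS-accelerated version in the example; all names are local to the sketch):

```python
import numpy as np

def cubic_spline_1d(q, h):
    """Standard 1D cubic spline kernel with compact support 2*h."""
    sigma = 2.0 / (3.0 * h)  # 1D normalization constant
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q * q * (1.0 - 0.5 * q))
    elif q <= 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

dx = 0.01
h = 1.3 * dx                            # h0 = hdx * dx, as in the example
x = np.arange(0.0, 1.0, dx)
m = np.full_like(x, dx)                 # rho0 = 1  =>  m = rho0 * dx

i = x.size // 2                         # pick an interior particle
rho_i = sum(m[j] * cubic_spline_1d(abs(x[i] - x[j]) / h, h)
            for j in range(x.size))     # close to 1.0 on a uniform lattice
```

On a uniform lattice the kernel sum is a near partition of unity, so the summed density comes out within a fraction of a percent of rho0 for interior particles.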
pysph-master/old_examples/wind_tunnel.py000066400000000000000000000161701356347341600211070ustar00rootroot00000000000000"""Demonstrate the windtunnel simulation using inlet and outlet feature in 2D inlet fluid outlet --------- -------- -------- | * * * x | | | | | u | * * * x | | | | | ---> | * * * x | | airfoil| | | | * * * x | | | | | -------- -------- -------- In the figure above, the 'x' are the initial inlet particles. The '*' are the copies of these. The particles are moving to the right and as they do, new fluid particles are added and as the fluid particles flow into the outlet they are converted to the outlet particle array and at last as the particles leave the outlet they are removed from the simulation. The `create_particles` and `create_inlet_outlet` functions may also be passed to the `app.setup` method if needed. This example can be run in parallel. """ import numpy as np from pysph.base.kernels import CubicSpline from pysph.base.utils import get_particle_array_wcsph from pysph.solver.application import Application from pysph.solver.solver import Solver from pysph.sph.integrator import PECIntegrator from pysph.sph.simple_inlet_outlet import SimpleInlet, SimpleOutlet from pysph.sph.integrator_step import InletOutletStep, TransportVelocityStep from pysph.sph.integrator_step import InletOutletStep, WCSPHStep from pysph.sph.scheme import TVFScheme from pysph.sph.scheme import WCSPHScheme from pysph.tools.geometry import get_2d_wall, get_2d_block from pysph.tools.geometry import get_5digit_naca_airfoil from pysph.tools.geometry import get_4digit_naca_airfoil from pysph.tools.geometry import remove_overlap_particles def windtunnel_airfoil_model(dx_wall=0.01, dx_airfoil=0.01, dx_fluid=0.01, r_solid=100.0, r_fluid=100.0, airfoil='2412', hdx=1.1, chord=1.0, h_tunnel=1.0, l_tunnel=10.0): """ Generates a geometry which can be used for wind tunnel like simulations. 
Parameters ---------- dx_wall : a number which is the dx of the wind tunnel wall dx_airfoil : a number which is the dx of the airfoil used dx_fluid : a number which is the dx of the fluid used r_solid : a number which is the initial density of the solid particles r_fluid : a number which is the initial density of the fluid particles airfoil : 4 or 5 digit string which is the airfoil name hdx : a number which is the hdx for the particle arrays chord : a number which is the chord of the airfoil h_tunnel : a number which is the height of the wind tunnel l_tunnel : a number which is the length of the wind tunnel Returns ------- wall : pysph wcsph particle array for the wind tunnel walls wing : pysph wcsph particle array for the airfoil fluid : pysph wcsph particle array for the fluid """ wall_center_1 = np.array([0.0, h_tunnel / 2.]) wall_center_2 = np.array([0.0, -h_tunnel / 2.]) x_wall_1, y_wall_1 = get_2d_wall(dx_wall, wall_center_1, l_tunnel) x_wall_2, y_wall_2 = get_2d_wall(dx_wall, wall_center_2, l_tunnel) x_wall = np.concatenate([x_wall_1, x_wall_2]) y_wall = np.concatenate([y_wall_1, y_wall_2]) y_wall_1 = y_wall_1 + dx_wall y_wall_2 = y_wall_2 - dx_wall y_wall = np.concatenate([y_wall, y_wall_1, y_wall_2]) y_wall_1 = y_wall_1 + dx_wall y_wall_2 = y_wall_2 - dx_wall y_wall = np.concatenate([y_wall, y_wall_1, y_wall_2]) y_wall_1 = y_wall_1 + dx_wall y_wall_2 = y_wall_2 - dx_wall x_wall = np.concatenate([x_wall, x_wall, x_wall, x_wall]) y_wall = np.concatenate([y_wall, y_wall_1, y_wall_2]) h_wall = np.ones_like(x_wall) * dx_wall * hdx rho_wall = np.ones_like(x_wall) * r_solid mass_wall = rho_wall * dx_wall * dx_wall wall = get_particle_array_wcsph(name='wall', x=x_wall, y=y_wall, h=h_wall, rho=rho_wall, m=mass_wall) if len(airfoil) == 4: x_airfoil, y_airfoil = get_4digit_naca_airfoil( dx_airfoil, airfoil, chord) else: x_airfoil, y_airfoil = get_5digit_naca_airfoil( dx_airfoil, airfoil, chord) x_airfoil = x_airfoil - 0.5 h_airfoil = np.ones_like(x_airfoil) * 
dx_airfoil * hdx rho_airfoil = np.ones_like(x_airfoil) * r_solid mass_airfoil = rho_airfoil * dx_airfoil * dx_airfoil wing = get_particle_array_wcsph(name='wing', x=x_airfoil, y=y_airfoil, h=h_airfoil, rho=rho_airfoil, m=mass_airfoil) x_fluid, y_fluid = get_2d_block(dx_fluid, 1.6, h_tunnel) h_fluid = np.ones_like(x_fluid) * dx_fluid * hdx rho_fluid = np.ones_like(x_fluid) * r_fluid mass_fluid = rho_fluid * dx_fluid * dx_fluid fluid = get_particle_array_wcsph(name='fluid', x=x_fluid, y=y_fluid, h=h_fluid, rho=rho_fluid, m=mass_fluid) remove_overlap_particles(fluid, wall, dx_wall, 2) remove_overlap_particles(fluid, wing, dx_airfoil, 2) return wall, wing, fluid class WindTunnel(Application): def add_user_options(self, group): group.add_argument("--speed", action="store", type=float, dest="speed", default=1.0, help="Speed of inlet particles.") def create_particles(self): dx_airfoil = 0.002 dx_wall = 0.002 wall, wing, fluid = windtunnel_airfoil_model( dx_wall=dx_wall, dx_airfoil=dx_airfoil) outlet = get_particle_array_wcsph(name='outlet') dx = 0.01 y = np.linspace(-0.49, 0.49, 99) x = np.zeros_like(y) - 0.81 rho = np.ones_like(x) * 100.0 m = rho * dx * dx h = np.ones_like(x) * dx * 1.1 u = np.ones_like(x) * self.options.speed inlet = get_particle_array_wcsph(name='inlet', x=x, y=y, m=m, h=h, u=u, rho=rho) return [inlet, fluid, wing, outlet, wall] def create_inlet_outlet(self, particle_arrays): fluid_pa = particle_arrays['fluid'] inlet_pa = particle_arrays['inlet'] outlet_pa = particle_arrays['outlet'] inlet = SimpleInlet( inlet_pa, fluid_pa, spacing=0.01, n=19, axis='x', xmin=-1.00, xmax=-0.81, ymin=-0.49, ymax=0.49 ) outlet = SimpleOutlet( outlet_pa, fluid_pa, xmin=3.0, xmax=4.0, ymin=-0.5, ymax=0.5 ) return [inlet, outlet] def create_scheme(self): s = WCSPHScheme(['fluid', 'inlet', 'outlet'], ['wing', 'wall'], dim=2, rho0=100.0, c0=10.0, h0=0.011, hdx=1.1, hg_correction=True) return s def create_solver(self): kernel = CubicSpline(dim=2) integrator = PECIntegrator( 
fluid=WCSPHStep(), inlet=InletOutletStep(), outlet=InletOutletStep() ) dt = 0.00005 tf = 20.0 solver = Solver( kernel=kernel, dim=2, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=False, pfreq=20 ) return solver if __name__ == '__main__': app = WindTunnel() app.run() pysph-master/pysph/000077500000000000000000000000001356347341600146715ustar00rootroot00000000000000pysph-master/pysph/__init__.py000066400000000000000000000045231356347341600170060ustar00rootroot00000000000000# See PEP 440 for more on suitable version numbers. __version__ = '1.0b1.dev0' # Utility functions to determine if Zoltan/MPI are available. _has_zoltan = None _has_opencl = None _has_mpi = None _in_parallel = None try: from pyzoltan import has_mpi # noqa: 402 except ImportError: def has_mpi(): global _has_mpi if _has_mpi is None: try: import mpi4py # noqa: 401 except ImportError: _has_mpi = False else: mpi4py.rc.initialize = False mpi4py.rc.finalize = True return _has_mpi def has_opencl(): """Return True if pyopencl is available. """ global _has_opencl if _has_opencl is None: _has_opencl = True try: import pyopencl # noqa: 401 except ImportError: _has_opencl = False return _has_opencl def has_zoltan(): """Return True if zoltan is available. """ global _has_zoltan if _has_zoltan is None: _has_zoltan = True try: from pyzoltan.core import zoltan # noqa: 401 except ImportError: _has_zoltan = False return _has_zoltan def in_parallel(): """Return true if we're running with MPI and Zoltan support """ global _in_parallel if _in_parallel is None: _in_parallel = has_mpi() and has_zoltan() return _in_parallel # Utility function to determine the possible output files _has_h5py = None _has_pyvisfile = None _has_tvtk = None def has_h5py(): """Return True if h5py is available. """ global _has_h5py if _has_h5py is None: _has_h5py = True try: import h5py # noqa: 401 except ImportError: _has_h5py = False return _has_h5py def has_tvtk(): """Return True if tvtk is available. 
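In `windtunnel_airfoil_model` above, each tunnel wall is built by stacking several rows of particles, offsetting `y` by `dx_wall` per row so the wall is a few particles thick. A simplified sketch of that layering idea (the helper name, arguments, and coarse spacing are illustrative, not part of the PySPH API):

```python
import numpy as np

def layered_wall(dx, y0, length, n_layers=4, outward=1.0):
    """Horizontal wall at height y0, thickened by n_layers rows spaced dx apart."""
    x_row = np.arange(-length / 2.0, length / 2.0 + 1e-9, dx)
    xs, ys = [], []
    for k in range(n_layers):
        xs.append(x_row)
        # each successive row is pushed one spacing further from the fluid
        ys.append(np.full_like(x_row, y0 + outward * k * dx))
    return np.concatenate(xs), np.concatenate(ys)

# top wall grows upward, bottom wall downward (coarse dx for illustration)
x_top, y_top = layered_wall(0.1, 0.5, 1.0, outward=+1.0)
x_bot, y_bot = layered_wall(0.1, -0.5, 1.0, outward=-1.0)
```

Offsetting away from the fluid (sign of `outward`) keeps the innermost row at the nominal wall position, mirroring how the model offsets `y_wall_1` upward and `y_wall_2` downward.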
""" global _has_tvtk if _has_tvtk is None: _has_tvtk = True try: import tvtk # noqa: 401 except ImportError: _has_tvtk = False return _has_tvtk def has_pyvisfile(): """Return True if pyvisfile is available. """ global _has_pyvisfile if _has_pyvisfile is None: _has_pyvisfile = True try: import pyvisfile # noqa: 401 except ImportError: _has_pyvisfile = False return _has_pyvisfile pysph-master/pysph/base/000077500000000000000000000000001356347341600156035ustar00rootroot00000000000000pysph-master/pysph/base/__init__.py000066400000000000000000000000001356347341600177020ustar00rootroot00000000000000pysph-master/pysph/base/box_sort_nnps.pxd000066400000000000000000000014571356347341600212240ustar00rootroot00000000000000from libcpp.map cimport map from nnps_base cimport * from linked_list_nnps cimport * # NNPS using the original gridding algorithm cdef class DictBoxSortNNPS(NNPS): cdef public dict cells # lookup table for the cells cdef list _cell_keys cpdef get_nearest_particles_no_cache(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bint prealloc) cpdef _refresh(self) cpdef _bin(self, int pa_index, UIntArray indices) # NNPS using the linked list approach cdef class BoxSortNNPS(LinkedListNNPS): ############################################################################ # Data Attributes ############################################################################ cdef public map[long, int] cell_to_index # Maps cell ID to an index pysph-master/pysph/base/box_sort_nnps.pyx000066400000000000000000000302121356347341600212400ustar00rootroot00000000000000#cython: embedsignature=True # Library imports. 
import numpy as np cimport numpy as np # Cython imports from cython.operator cimport dereference as deref, preincrement as inc from cython.parallel import parallel, prange, threadid # malloc and friends from libc.stdlib cimport malloc, free from libcpp.map cimport map from libcpp.pair cimport pair from libcpp.vector cimport vector # cpython from cpython.dict cimport PyDict_Clear, PyDict_Contains, PyDict_GetItem from cpython.list cimport PyList_GetItem, PyList_SetItem, PyList_GET_ITEM # Cython for compiler directives cimport cython ############################################################################# cdef class BoxSortNNPS(LinkedListNNPS): """Nearest neighbor query class using the box sort method but which uses the LinkedList algorithm. This makes this very fast but perhaps not as safe as the DictBoxSortNNPS. All this class does is to use a std::map to obtain a linear cell index from the actual flattened cell index. """ #### Private protocol ################################################ cdef long _get_flattened_cell_index(self, cPoint pnt, double cell_size): cdef long cell_id = flatten( find_cell_id(pnt, cell_size), self.ncells_per_dim, self.dim ) return self.cell_to_index[cell_id] @cython.boundscheck(False) @cython.wraparound(False) cdef inline long _get_valid_cell_index(self, int cid_x, int cid_y, int cid_z, int* ncells_per_dim, int dim, int n_cells) nogil: """Return the flattened index for a valid cell""" cdef long ncx = ncells_per_dim[0] cdef long ncy = ncells_per_dim[1] cdef long ncz = ncells_per_dim[2] cdef long cell_index = -1 # basic test for valid indices. Since we bin the particles with # respect to the origin, negative indices can never occur. 
        cdef bint is_valid = (
            (ncx > cid_x > -1) and (ncy > cid_y > -1) and (ncz > cid_z > -1)
        )

        # Given the validity of the cells, return the flattened cell index
        cdef map[long, int].iterator it
        if is_valid:
            cell_id = flatten_raw(cid_x, cid_y, cid_z, ncells_per_dim, dim)
            if cell_id > -1:
                it = self.cell_to_index.find(cell_id)
                if it != self.cell_to_index.end():
                    cell_index = deref(it).second

        return cell_index

    cpdef long _count_occupied_cells(self, long n_cells) except -1:
        cdef list pa_wrappers = self.pa_wrappers
        cdef NNPSParticleArrayWrapper pa_wrapper
        cdef int np
        cdef DoubleArray x, y, z
        cdef DoubleArray xmin = self.xmin
        cdef DoubleArray xmax = self.xmax
        cdef IntArray ncells_per_dim = self.ncells_per_dim
        cdef int narrays = self.narrays
        cdef int dim = self.dim
        cdef double cell_size = self.cell_size

        cdef cPoint pnt = cPoint_new(0, 0, 0)
        cdef long _cid
        cdef map[long, int] _cid_to_index
        cdef pair[long, int] _entry

        # flattened cell index
        cdef int i, j
        cdef long cell_index

        for j in range(narrays):
            pa_wrapper = pa_wrappers[j]
            np = pa_wrapper.get_number_of_particles()
            x = pa_wrapper.x
            y = pa_wrapper.y
            z = pa_wrapper.z

            for i in range(np):
                # the flattened index is considered relative to the
                # minimum along each co-ordinate direction
                pnt.x = x.data[i] - xmin.data[0]
                pnt.y = y.data[i] - xmin.data[1]
                pnt.z = z.data[i] - xmin.data[2]

                # flattened cell index
                _cid = flatten(
                    find_cell_id(pnt, cell_size), ncells_per_dim, dim
                )

                _entry.first = _cid
                _entry.second = 1
                _cid_to_index.insert(_entry)

        cdef map[long, int].iterator it = _cid_to_index.begin()
        cdef int count = 0
        while it != _cid_to_index.end():
            _entry = deref(it)
            _cid_to_index[_entry.first] = count
            count += 1
            inc(it)

        self.cell_to_index = _cid_to_index
        return count


##############################################################################
cdef class DictBoxSortNNPS(NNPS):

    """Nearest neighbor query class using the box-sort algorithm using a
    dictionary.

    NNPS bins all local particles using the box sort algorithm in
    Cells. The cells are stored in a dictionary 'cells' which is keyed
    on the spatial index (IntPoint) of the cell.
    """

    def __init__(self, int dim, list particles, double radius_scale=2.0,
                 int ghost_layers=1, domain=None, cache=False,
                 sort_gids=False):
        """Constructor for NNPS

        Parameters
        ----------

        dim : int
            Number of dimensions.

        particles : list
            The list of particle arrays we are working on.

        radius_scale : double, default (2)
            Optional kernel radius scale. Defaults to 2.

        domain : DomainManager, default (None)
            Optional limits for the domain.

        cache : bint
            Flag to set if we want to cache neighbor calls. This costs
            storage but speeds up neighbor calculations.

        sort_gids : bint, default (False)
            Flag to sort neighbors using gids (if they are available).
            This is useful when comparing parallel results with those
            from a serial run.
        """
        # initialize the base class
        NNPS.__init__(
            self, dim, particles, radius_scale, ghost_layers, domain,
            cache, sort_gids
        )

        # initialize the cells dict
        self.cells = {}

        # compute the initial box sort. First, the Domain Manager's
        # update method is called to compute the maximum smoothing
        # length for particle binning.
        self.domain.update()
        self.update()

        msg = """WARNING: The cache will currently work only if
        find_nearest_neighbors works and can be used without requiring
        the GIL. The DictBoxSort does not work with OpenMP since it
        uses Python objects and cannot release the GIL. Until this is
        fixed, we cannot use caching, or parallel neighbor finding
        using this DictBoxSortNNPS; use the more efficient
        LinkedListNNPS instead. Disabling caching for now.
        """
        if cache:
            print(msg)
        self.use_cache = False

    #### Public protocol ################################################

    cpdef get_nearest_particles_no_cache(self, int src_index, int dst_index,
            size_t d_idx, UIntArray nbrs, bint prealloc):
        """Utility function to get near-neighbors for a particle.

        Parameters
        ----------

        src_index : int
            Index of the source particle array in the particles list.

        dst_index : int
            Index of the destination particle array in the particles list.

        d_idx : int (input)
            Destination particle for which neighbors are sought.

        nbrs : UIntArray (output)
            Neighbors for the requested particle are stored here.

        prealloc : bool
            Specifies if the neighbor array already has pre-allocated space
            for the neighbor list. In this case the neighbors are directly
            set in the given array without resetting or appending to the
            array. This improves performance when the neighbors are cached.
        """
        cdef dict cells = self.cells
        cdef Cell cell

        cdef NNPSParticleArrayWrapper src = self.pa_wrappers[src_index]
        cdef NNPSParticleArrayWrapper dst = self.pa_wrappers[dst_index]

        # Source data arrays
        cdef DoubleArray s_x = src.x
        cdef DoubleArray s_y = src.y
        cdef DoubleArray s_z = src.z
        cdef DoubleArray s_h = src.h
        cdef UIntArray s_gid = src.gid

        # Destination particle arrays
        cdef DoubleArray d_x = dst.x
        cdef DoubleArray d_y = dst.y
        cdef DoubleArray d_z = dst.z
        cdef DoubleArray d_h = dst.h
        cdef UIntArray d_gid = dst.gid

        cdef double radius_scale = self.radius_scale
        cdef double cell_size = self.cell_size
        cdef UIntArray lindices
        cdef size_t indexj, count
        cdef ZOLTAN_ID_TYPE j

        cdef cPoint xi = cPoint_new(d_x.data[d_idx], d_y.data[d_idx],
                                    d_z.data[d_idx])

        cdef cIntPoint _cid = find_cell_id(xi, cell_size)
        cdef IntPoint cid = IntPoint_from_cIntPoint(_cid)
        cdef IntPoint cellid = IntPoint(0, 0, 0)

        cdef double xij2
        cdef double hi2, hj2

        hi2 = radius_scale * d_h.data[d_idx]
        hi2 *= hi2

        cdef int ierr

        # reset the nbr array length. This should avoid a realloc
        if prealloc:
            nbrs.length = 0
        else:
            nbrs.reset()
        count = 0

        cdef int ix, iy, iz
        for ix in [cid.data.x - 1, cid.data.x, cid.data.x + 1]:
            for iy in [cid.data.y - 1, cid.data.y, cid.data.y + 1]:
                for iz in [cid.data.z - 1, cid.data.z, cid.data.z + 1]:
                    cellid.data.x = ix
                    cellid.data.y = iy
                    cellid.data.z = iz

                    ierr = PyDict_Contains(cells, cellid)
                    if ierr == 1:
                        cell = <Cell>PyDict_GetItem(cells, cellid)
                        lindices = <UIntArray>PyList_GetItem(
                            cell.lindices, src_index
                        )

                        for indexj in range(lindices.length):
                            j = lindices.data[indexj]

                            xij2 = norm2(s_x.data[j] - xi.x,
                                         s_y.data[j] - xi.y,
                                         s_z.data[j] - xi.z)

                            hj2 = radius_scale * s_h.data[j]
                            hj2 *= hj2

                            # select neighbor
                            if (xij2 < hi2) or (xij2 < hj2):
                                if prealloc:
                                    nbrs.data[count] = j
                                    count += 1
                                else:
                                    nbrs.append(j)
        if prealloc:
            nbrs.length = count

        if self.sort_gids:
            self._sort_neighbors(nbrs.data, count, s_gid.data)

    #### Private protocol ################################################

    cpdef _refresh(self):
        """Clear the cells dict"""
        cdef dict cells = self.cells
        PyDict_Clear(cells)

    cpdef _bin(self, int pa_index, UIntArray indices):
        """Bin a given particle array with indices.

        Parameters
        ----------

        pa_index : int
            Index of the particle array corresponding to the particles list.

        indices : UIntArray
            Subset of particles to bin.
        """
        cdef NNPSParticleArrayWrapper pa_wrapper = self.pa_wrappers[pa_index]
        cdef DoubleArray x = pa_wrapper.x
        cdef DoubleArray y = pa_wrapper.y
        cdef DoubleArray z = pa_wrapper.z

        cdef dict cells = self.cells
        cdef double cell_size = self.cell_size

        cdef UIntArray lindices, gindices
        cdef size_t num_particles, indexi, i

        cdef cIntPoint _cid
        cdef IntPoint cid

        cdef Cell cell
        cdef int ierr, narrays = self.narrays

        # now begin binning the particles
        num_particles = indices.length
        for indexi in range(num_particles):
            i = indices.data[indexi]

            pnt = cPoint_new(x.data[i], y.data[i], z.data[i])
            _cid = find_cell_id(pnt, cell_size)

            cid = IntPoint_from_cIntPoint(_cid)

            ierr = PyDict_Contains(cells, cid)
            if ierr == 0:
                cell = Cell(cid, cell_size, narrays)
                cells[cid] = cell

            # add this particle to the list of indices
            cell = <Cell>PyDict_GetItem(cells, cid)
            lindices = <UIntArray>PyList_GetItem(cell.lindices, pa_index)

            lindices.append(i)
            #gindices.append( gid.data[i] )

        self.n_cells = len(cells)
        self._cell_keys = list(cells.keys())

pysph-master/pysph/base/c_kernels.pyx

#cython: embedsignature=True
from libc.math cimport *
import numpy as np


cdef class CubicSpline:
    cdef public long dim
    cdef public double fac
    cdef public double radius_scale

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

    cdef inline double dwdq(self, double rij, double h):
        cdef double fac
        cdef double h1
        cdef double q
        cdef double tmp2
        cdef double val
        """Gradient of a kernel is given by

        .. math::
            \nabla W = normalization \frac{dW}{dq} \frac{dq}{dx}

            \nabla W = w_dash \frac{dq}{dx}

        Here we get `w_dash` by using the `dwdq` method.
        """
        h1 = 1.
/ h q = rij * h1 # get the kernel normalizing factor ( sigma ) if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute sigma * dw_dq tmp2 = 2. - q if (rij > 1e-12): if (q > 2.0): val = 0.0 elif (q > 1.0): val = -0.75 * tmp2 * tmp2 else: val = -3.0 * q * (1 - 0.75 * q) else: val = 0.0 return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): return 2. / 3 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double dw cdef double fac cdef double h1 cdef double q cdef double tmp2 cdef double w h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # kernel and gradient evaluated at q tmp2 = 2. - q if (q > 2.0): w = 0.0 dw = 0.0 elif (q > 1.0): w = 0.25 * tmp2 * tmp2 * tmp2 dw = -0.75 * tmp2 * tmp2 else: w = 1 - 1.5 * q * q * (1 - 0.5 * q) dw = -3.0 * q * (1 - 0.75 * q) return -fac * h1 * (dw * q + w * self.dim) cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp2 cdef double val h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 tmp2 = 2. - q if (q > 2.0): val = 0.0 elif (q > 1.0): val = 0.25 * tmp2 * tmp2 * tmp2 else: val = 1 - 1.5 * q * q * (1 - 0.5 * q) return val * fac cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class CubicSplineWrapper: """Reasonably high-performance convenience wrapper for Kernels. """ cdef public CubicSpline kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] cdef class WendlandQuintic: cdef public long dim cdef public double fac cdef public double radius_scale def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) cdef inline double dwdq(self, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -5.0 * q * tmp * tmp * tmp return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): return 0.5 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double dw cdef double fac cdef double h1 cdef double q cdef double tmp cdef double w h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * (2.0 * q + 1.0) dw = -5.0 * q * tmp * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. 
- 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * (2.0 * q + 1.0) return val * fac cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class WendlandQuinticWrapper: """Reasonably high-performance convenience wrapper for Kernels. """ cdef public WendlandQuintic kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] cdef class Gaussian: cdef public long dim cdef public double fac cdef public double radius_scale def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) cdef inline double dwdq(self, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double val h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 if (q < 3.0): if (rij > 1e-12): val = -2.0 * q * exp(-q * q) return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): # The inflection point is at q=1/sqrt(2) # the deltap values for some standard kernels # have been tabulated in sec 3.2 of # http://cfd.mace.manchester.ac.uk/sph/SPH_PhDs/2008/crespo_thesis.pdf return 0.70710678118654746 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double dw cdef double fac cdef double h1 cdef double q cdef double w h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # kernel and gradient evaluated at q w = 0.0 dw = 0.0 if (q < 3.0): w = exp(-q * q) dw = -2.0 * q * w return -fac * h1 * (dw * q + w * self.dim) cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double val h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 if (q < 3.0): val = exp(-q * q) * fac return val cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class GaussianWrapper: """Reasonably high-performance convenience wrapper for Kernels. """ cdef public Gaussian kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] cdef class QuinticSpline: cdef public long dim cdef public double fac cdef public double radius_scale def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) cdef inline double dwdq(self, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp1 cdef double tmp2 cdef double tmp3 cdef double val h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 tmp3 = 3. - q tmp2 = 2. - q tmp1 = 1. 
- q # compute the gradient if (rij > 1e-12): if (q > 3.0): val = 0.0 elif (q > 2.0): val = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 elif (q > 1.0): val = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 val += 30.0 * tmp2 * tmp2 * tmp2 * tmp2 else: val = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 val += 30.0 * tmp2 * tmp2 * tmp2 * tmp2 val -= 75.0 * tmp1 * tmp1 * tmp1 * tmp1 else: val = 0.0 return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): # The inflection points for the polynomial are obtained as # http://www.wolframalpha.com/input/?i=%28%283-x%29%5E5+-+6*%282-x%29%5E5+%2B+15*%281-x%29%5E5%29%27%27 # the only permissible value is taken return 0.759298480738450 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double dw cdef double fac cdef double h1 cdef double q cdef double tmp1 cdef double tmp2 cdef double tmp3 cdef double w h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 tmp3 = 3. - q tmp2 = 2. - q tmp1 = 1. 
- q # compute the kernel & gradient at q if (q > 3.0): w = 0.0 dw = 0.0 elif (q > 2.0): w = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 dw = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 elif (q > 1.0): w = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 w -= 6.0 * tmp2 * tmp2 * tmp2 * tmp2 * tmp2 dw = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 dw += 30.0 * tmp2 * tmp2 * tmp2 * tmp2 else: w = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 w -= 6.0 * tmp2 * tmp2 * tmp2 * tmp2 * tmp2 w += 15. * tmp1 * tmp1 * tmp1 * tmp1 * tmp1 dw = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 dw += 30.0 * tmp2 * tmp2 * tmp2 * tmp2 dw -= 75.0 * tmp1 * tmp1 * tmp1 * tmp1 return -fac * h1 * (dw * q + w * self.dim) cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp1 cdef double tmp2 cdef double tmp3 cdef double val h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 tmp3 = 3. - q tmp2 = 2. - q tmp1 = 1. - q if (q > 3.0): val = 0.0 elif (q > 2.0): val = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 elif (q > 1.0): val = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 val -= 6.0 * tmp2 * tmp2 * tmp2 * tmp2 * tmp2 else: val = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 val -= 6.0 * tmp2 * tmp2 * tmp2 * tmp2 * tmp2 val += 15. * tmp1 * tmp1 * tmp1 * tmp1 * tmp1 return val * fac cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class QuinticSplineWrapper: """Reasonably high-performance convenience wrapper for Kernels. 
""" cdef public QuinticSpline kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] cdef class SuperGaussian: cdef public long dim cdef public double fac cdef public double radius_scale def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) cdef inline double dwdq(self, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double q2 cdef double val h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 if (q < 3.0): if (rij > 1e-12): q2 = q * q val = q * (2.0 * q2 - self.dim - 4) * exp(-q2) return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): # Found inflection point using sympy. if self.dim == 1: return 0.584540507426389 elif self.dim == 2: return 0.6021141014644256 else: return 0.615369528365158 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. 
if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double d cdef double fac cdef double h1 cdef double q cdef double q2 cdef double val h1 = 1. / h q = rij * h1 d = self.dim # get the kernel normalizing factor if d == 1: fac = self.fac * h1 elif d == 2: fac = self.fac * h1 * h1 elif d == 3: fac = self.fac * h1 * h1 * h1 # kernel and gradient evaluated at q val = 0.0 if (q < 3.0): q2 = q * q val = (-d * d * 0.5 + 2.0 * d * q2 - d - 2.0 * q2 * q2 + 4 * q2) * exp(-q2) return -fac * h1 * val cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double q2 cdef double val h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 if (q < 3.0): q2 = q * q val = exp(-q2) * (1.0 + self.dim * 0.5 - q2) * fac return val cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class SuperGaussianWrapper: """Reasonably high-performance convenience wrapper for Kernels. 
""" cdef public SuperGaussian kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] cdef class WendlandQuinticC4: cdef public long dim cdef public double fac cdef public double radius_scale def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) cdef inline double dwdq(self, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = (-14.0 / 3.0) * q * (1 + 2.5 * q) * \ tmp * tmp * tmp * tmp * tmp return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): return 0.47114274 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. 
if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double dw cdef double fac cdef double h1 cdef double q cdef double tmp cdef double w h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * tmp * tmp * \ ((35.0 / 12.0) * q * q + 3.0 * q + 1.0) dw = (-14.0 / 3.0) * q * (1 + 2.5 * q) * \ tmp * tmp * tmp * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * tmp * tmp * \ ((35.0 / 12.0) * q * q + 3.0 * q + 1.0) return val * fac cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class WendlandQuinticC4Wrapper: """Reasonably high-performance convenience wrapper for Kernels. 
""" cdef public WendlandQuinticC4 kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] cdef class WendlandQuinticC6: cdef public long dim cdef public double fac cdef public double radius_scale def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) cdef inline double dwdq(self, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -5.50 * q * tmp * tmp * tmp * tmp * tmp * \ tmp * tmp * (1.0 + 3.5 * q + 4 * q * q) return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): return 0.4305720757 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. 
if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double dw cdef double fac cdef double h1 cdef double q cdef double tmp cdef double w h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * tmp * tmp * tmp * tmp * \ (4.0 * q * q * q + 6.25 * q * q + 4.0 * q + 1.0) dw = -5.50 * q * tmp * tmp * tmp * tmp * tmp * \ tmp * tmp * (1.0 + 3.5 * q + 4 * q * q) return -fac * h1 * (dw * q + w * self.dim) cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * tmp * tmp * tmp * tmp * \ (4.0 * q * q * q + 6.25 * q * q + 4.0 * q + 1.0) return val * fac cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class WendlandQuinticC6Wrapper: """Reasonably high-performance convenience wrapper for Kernels. 
""" cdef public WendlandQuinticC6 kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] cdef class WendlandQuinticC2_1D: cdef public long dim cdef public double fac cdef public double radius_scale def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) cdef inline double dwdq(self, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -3.0 * q * tmp * tmp return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): return 2.0/3 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. 
if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double dw cdef double fac cdef double h1 cdef double q cdef double tmp cdef double w h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * (1.5 * q + 1.0) dw = -3.0 * q * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * (1.5 * q + 1.0) return val * fac cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class WendlandQuinticC2_1DWrapper: """Reasonably high-performance convenience wrapper for Kernels. 
""" cdef public WendlandQuinticC2_1D kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] cdef class WendlandQuinticC4_1D: cdef public long dim cdef public double fac cdef public double radius_scale def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) cdef inline double dwdq(self, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -3.5 * q * (2 * q + 1) * tmp * tmp * tmp * tmp return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): return 0.55195628 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. 
if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double dw cdef double fac cdef double h1 cdef double q cdef double tmp cdef double w h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * tmp * (2 * q * q + 2.5 * q + 1.0) dw = -3.5 * q * (2 * q + 1) * tmp * tmp * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * tmp * (2 * q * q + 2.5 * q + 1.0) return val * fac cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class WendlandQuinticC4_1DWrapper: """Reasonably high-performance convenience wrapper for Kernels. 
""" cdef public WendlandQuinticC4_1D kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] cdef class WendlandQuinticC6_1D: cdef public long dim cdef public double fac cdef public double radius_scale def __init__(self, **kwargs): for key, value in kwargs.items(): setattr(self, key, value) cdef inline double dwdq(self, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -0.5 * q * (26.25 * q * q + 27 * q + 9.0) * \ tmp * tmp * tmp * tmp * tmp * tmp return val * fac cpdef double py_dwdq(self, double rij, double h): return self.dwdq(rij, h) cdef inline double get_deltap(self): return 0.47996698 cpdef double py_get_deltap(self): return self.get_deltap() cdef inline void gradient(self, double* xij, double rij, double h, double* grad): cdef double h1 cdef double tmp cdef double wdash h1 = 1. / h # compute the gradient. 
if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] cpdef py_gradient(self, double[:] xij, double rij, double h, double[:] grad): self.gradient(&xij[0], rij, h, &grad[0]) cdef inline double gradient_h(self, double* xij, double rij, double h): cdef double dw cdef double fac cdef double h1 cdef double q cdef double tmp cdef double w h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * tmp * tmp * tmp * \ (2.625 * q * q * q + 4.75 * q * q + 3.5 * q + 1.0) dw = -0.5 * q * (26.25 * q * q + 27 * q + 9.0) * \ tmp * tmp * tmp * tmp * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) cpdef double py_gradient_h(self, double[:] xij, double rij, double h): return self.gradient_h(&xij[0], rij, h) cdef inline double kernel(self, double* xij, double rij, double h): cdef double fac cdef double h1 cdef double q cdef double tmp cdef double val h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * tmp * tmp * tmp * \ (2.625 * q * q * q + 4.75 * q * q + 3.5 * q + 1.0) return val * fac cpdef double py_kernel(self, double[:] xij, double rij, double h): return self.kernel(&xij[0], rij, h) cdef class WendlandQuinticC6_1DWrapper: """Reasonably high-performance convenience wrapper for Kernels. 
""" cdef public WendlandQuinticC6_1D kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] pysph-master/pysph/base/c_kernels.pyx.mako000066400000000000000000000034071356347341600212440ustar00rootroot00000000000000#cython: embedsignature=True <% from compyle.api import CythonGenerator from kernels import ( CubicSpline, WendlandQuintic, Gaussian, QuinticSpline, SuperGaussian, WendlandQuinticC4, WendlandQuinticC6, WendlandQuinticC2_1D, WendlandQuinticC4_1D, WendlandQuinticC6_1D ) CLASSES = ( CubicSpline, WendlandQuintic, Gaussian, QuinticSpline, SuperGaussian, WendlandQuinticC4, WendlandQuinticC6, WendlandQuinticC2_1D, WendlandQuinticC4_1D, WendlandQuinticC6_1D ) generator = CythonGenerator(python_methods=True) %> from libc.math cimport * import numpy as np % for cls in CLASSES: <% generator.parse(cls()) classname = cls.__name__ %> ${generator.get_code()} cdef class ${classname}Wrapper: """Reasonably high-performance convenience wrapper for Kernels. 
""" cdef public ${classname} kern cdef double[3] xij, grad cdef public double radius_scale cdef public double fac def __init__(self, kern): self.kern = kern self.radius_scale = kern.radius_scale self.fac = kern.fac cpdef double kernel(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) return self.kern.kernel(xij, rij, h) cpdef gradient(self, double xi, double yi, double zi, double xj, double yj, double zj, double h): cdef double* xij = self.xij xij[0] = xi-xj xij[1] = yi-yj xij[2] = zi-zj cdef double rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] +xij[2]*xij[2]) cdef double* grad = self.grad self.kern.gradient(xij, rij, h, grad) return grad[0], grad[1], grad[2] % endfor pysph-master/pysph/base/cell_indexing_nnps.pxd000066400000000000000000000037461356347341600221740ustar00rootroot00000000000000# cython: embedsignature=True from libcpp.map cimport map from libcpp.pair cimport pair from nnps_base cimport * ctypedef unsigned int u_int ctypedef map[u_int, pair[u_int, u_int]] key_to_idx_t cdef extern from 'math.h': double log(double) nogil double log2(double) nogil cdef class CellIndexingNNPS(NNPS): ############################################################################ # Data Attributes ############################################################################ cdef u_int** keys cdef u_int* current_keys cdef key_to_idx_t** key_indices cdef key_to_idx_t* current_indices cdef u_int* I cdef u_int J cdef u_int K cdef double radius_scale2 cdef NNPSParticleArrayWrapper dst, src ########################################################################## # Member functions ########################################################################## cdef inline u_int _get_key(self, u_int n, u_int i, u_int j, u_int k, int pa_index) nogil cdef inline int _get_id(self, u_int key, int pa_index) nogil cdef inline 
int _get_x(self, u_int key, int pa_index) nogil cdef inline int _get_y(self, u_int key, int pa_index) nogil cdef inline int _get_z(self, u_int key, int pa_index) nogil cdef inline int _neighbor_boxes(self, int i, int j, int k, int* x, int* y, int* z) nogil cpdef set_context(self, int src_index, int dst_index) cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil cpdef get_nearest_particles_no_cache(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bint prealloc) cpdef get_spatially_ordered_indices(self, int pa_index, LongArray indices) cdef void fill_array(self, NNPSParticleArrayWrapper pa_wrapper, int pa_index, UIntArray indices, u_int* current_keys, key_to_idx_t* current_indices) nogil cpdef _refresh(self) cpdef _bin(self, int pa_index, UIntArray indices) pysph-master/pysph/base/cell_indexing_nnps.pyx000066400000000000000000000301741356347341600222140ustar00rootroot00000000000000#cython: embedsignature=True # malloc and friends from libc.stdlib cimport malloc, free from libcpp.vector cimport vector from libcpp.map cimport map from cython.operator cimport dereference as deref, preincrement as inc # Cython for compiler directives cimport cython cdef extern from "" namespace "std" nogil: void sort[Iter, Compare](Iter first, Iter last, Compare comp) void sort[Iter](Iter first, Iter last) IF UNAME_SYSNAME == "Windows": @cython.cdivision(True) cdef inline double log2(double n) nogil: return log(n)/log(2) ############################################################################# cdef class CellIndexingNNPS(NNPS): """Find nearest neighbors using cell indexing. Uses a sorted array to access particles in a cell. Gives better cache performance than spatial hash. Ref: J. Onderik, R. Durikovic, Efficient Neighbor Search for Particle-based Fluids, Journal of the Applied Mathematics, Statistics and Informatics 4 (1) (2008) 29–43. 
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.6732&rep=rep1&type=pdf
    """
    def __init__(self, int dim, list particles, double radius_scale = 2.0,
            int ghost_layers = 1, domain=None, bint fixed_h = False,
            bint cache = False, bint sort_gids = False):
        NNPS.__init__(
            self, dim, particles, radius_scale, ghost_layers, domain,
            cache, sort_gids
        )

        self.radius_scale2 = radius_scale*radius_scale

        cdef NNPSParticleArrayWrapper pa_wrapper
        cdef int i, num_particles

        for i from 0<=i<self.narrays:
            pa_wrapper = <NNPSParticleArrayWrapper> self.pa_wrappers[i]
            num_particles = pa_wrapper.get_number_of_particles()
            self.keys[i] = <u_int*> malloc(num_particles*sizeof(u_int))
            self.key_indices[i] = new key_to_idx_t()

        self.src_index = 0
        self.dst_index = 0
        self.sort_gids = sort_gids
        self.domain.update()
        self.update()

    def __cinit__(self, int dim, list particles, double radius_scale = 2.0,
            int ghost_layers = 1, domain=None, bint fixed_h = False,
            bint cache = False, bint sort_gids = False):
        cdef int narrays = len(particles)

        self.keys = <u_int**> malloc(narrays*sizeof(u_int*))
        self.key_indices = <key_to_idx_t**> malloc(narrays*sizeof(key_to_idx_t*))
        self.I = <u_int*> malloc(narrays*sizeof(u_int))

        self.current_keys = NULL
        self.current_indices = NULL

    def __dealloc__(self):
        cdef int i
        for i from 0<=i<self.narrays:
            free(self.keys[i])
            del self.key_indices[i]
        free(self.keys)
        free(self.key_indices)
        free(self.I)

    cpdef set_context(self, int src_index, int dst_index):
        NNPS.set_context(self, src_index, dst_index)
        self.current_keys = self.keys[src_index]
        self.current_indices = self.key_indices[src_index]

        self.dst = <NNPSParticleArrayWrapper> self.pa_wrappers[dst_index]
        self.src = <NNPSParticleArrayWrapper> self.pa_wrappers[src_index]

    cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil:
        """Low level, high-performance non-gil method to find neighbors.
        This requires that `set_context()` be called beforehand.  This method
        does not reset the neighbors array before it appends the neighbors to
        it.
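        To illustrate the scheme described in the class docstring, here is a
        pure-Python sketch of the cell-indexing idea: pack the cell
        coordinates and the particle index into a single integer key, sort
        the keys so a cell's particles are contiguous, and answer a query by
        binary-searching the 27 neighboring cells.  The bit widths and
        helper names below are illustrative only (they are not the PySPH
        API), and cell coordinates are assumed non-negative.

        ```python
        from bisect import bisect_left
        from math import floor, sqrt

        I_BITS = 16          # low bits hold the particle index
        J_BITS = K_BITS = 8  # bits for the cell x and y coordinates

        def make_key(n, cx, cy, cz):
            return (n | (cx << I_BITS) | (cy << (I_BITS + J_BITS))
                    | (cz << (I_BITS + J_BITS + K_BITS)))

        def cell_of(pt, h):
            return tuple(int(floor(c / h)) for c in pt)

        def build(points, h):
            # sorting groups all keys of one cell into a contiguous run
            return sorted(make_key(n, *cell_of(p, h))
                          for n, p in enumerate(points))

        def neighbors(points, keys, q, h):
            cx, cy, cz = cell_of(q, h)
            found = []
            for dz in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        lo = make_key(0, cx + dx, cy + dy, cz + dz)
                        # all keys of this cell lie in [lo, lo + 2**I_BITS)
                        i = bisect_left(keys, lo)
                        while i < len(keys) and keys[i] < lo + (1 << I_BITS):
                            n = keys[i] & ((1 << I_BITS) - 1)
                            d = sqrt(sum((a - b) ** 2
                                         for a, b in zip(points[n], q)))
                            if d <= h:
                                found.append(n)
                            i += 1
            return sorted(found)
        ```

        The Cython class above replaces the binary search with a hash map
        from key to (index, count) pairs, but the key layout and the
        27-cell scan are the same.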
""" cdef double* dst_x_ptr = self.dst.x.data cdef double* dst_y_ptr = self.dst.y.data cdef double* dst_z_ptr = self.dst.z.data cdef double* dst_h_ptr = self.dst.h.data cdef double* src_x_ptr = self.src.x.data cdef double* src_y_ptr = self.src.y.data cdef double* src_z_ptr = self.src.z.data cdef double* src_h_ptr = self.src.h.data cdef double x = dst_x_ptr[d_idx] cdef double y = dst_y_ptr[d_idx] cdef double z = dst_z_ptr[d_idx] cdef double h = dst_h_ptr[d_idx] cdef unsigned int* s_gid = self.src.gid.data cdef int orig_length = nbrs.length cdef int c_x, c_y, c_z cdef double* xmin = self.xmin.data cdef int i, j find_cell_id_raw( x - xmin[0], y - xmin[1], z - xmin[2], self.cell_size, &c_x, &c_y, &c_z ) cdef double xij2 = 0 cdef double hi2 = self.radius_scale2*h*h cdef double hj2 = 0 cdef map[u_int, pair[u_int, u_int]].iterator it cdef int x_boxes[27] cdef int y_boxes[27] cdef int z_boxes[27] cdef int num_boxes = self._neighbor_boxes(c_x, c_y, c_z, x_boxes, y_boxes, z_boxes) cdef pair[u_int, u_int] candidate cdef u_int n, idx for i from 0<=iself._get_id(current_keys[j], pa_index)) cdef void fill_array(self, NNPSParticleArrayWrapper pa_wrapper, int pa_index, UIntArray indices, u_int* current_keys, key_to_idx_t* current_indices) nogil: cdef double* x_ptr = pa_wrapper.x.data cdef double* y_ptr = pa_wrapper.y.data cdef double* z_ptr = pa_wrapper.z.data cdef double* xmin = self.xmin.data cdef int i, n cdef int c_x, c_y, c_z for i from 0<=i> self.I[pa_index]) % (1 << self.J) @cython.cdivision(True) cdef inline int _get_y(self, u_int key, int pa_index) nogil: return (key >> (self.I[pa_index] + self.J)) % (1 << self.K) @cython.cdivision(True) cdef inline int _get_z(self, u_int key, int pa_index) nogil: return key >> (self.I[pa_index] + self.J + self.K) cdef inline int _neighbor_boxes(self, int i, int j, int k, int* x, int* y, int* z) nogil: cdef int length = 0 cdef int p, q, r for p from -1<=p<2: for q from -1<=q<2: for r from -1<=r<2: if i+r>=0 and j+q>=0 and k+p>=0: x[length] 
= i+r y[length] = j+q z[length] = k+p length += 1 return length cpdef _refresh(self): cdef NNPSParticleArrayWrapper pa_wrapper # Only necessary if number of particles in a ParticleArray changes cdef int i, num_particles cdef double* xmax = self.xmax.data cdef double* xmin = self.xmin.data self.J = (1 + log2(ceil((xmax[0] - xmin[0])/self.cell_size))) self.K = (1 + log2(ceil((xmax[1] - xmin[1])/self.cell_size))) for i from 0<=i self.pa_wrappers[i] num_particles = pa_wrapper.get_number_of_particles() self.keys[i] = malloc(num_particles*sizeof(u_int)) self.key_indices[i] = new key_to_idx_t() self.current_keys = self.keys[self.src_index] self.current_indices = self.key_indices[self.src_index] @cython.cdivision(True) cpdef _bin(self, int pa_index, UIntArray indices): cdef NNPSParticleArrayWrapper pa_wrapper = self.pa_wrappers[pa_index] cdef int num_particles = pa_wrapper.get_number_of_particles() self.I[pa_index] = (1 + log2(pa_wrapper.get_number_of_particles())) cdef u_int* current_keys = self.keys[pa_index] cdef key_to_idx_t* current_indices = self.key_indices[pa_index] self.fill_array(pa_wrapper, pa_index, indices, current_keys, current_indices) pysph-master/pysph/base/device_helper.py000066400000000000000000000577511356347341600207720ustar00rootroot00000000000000from __future__ import print_function import logging import numpy as np from pytools import memoize, memoize_method import mako.template as mkt from compyle.config import get_config from compyle.array import get_backend, wrap_array, Array from compyle.parallel import Elementwise, Scan from compyle.api import declare, annotate from compyle.types import dtype_to_ctype from compyle.template import Template import compyle.array as array import pysph.base.particle_array logger = logging.getLogger() class ExtractParticles(Template): def __init__(self, name, prop_names): super(ExtractParticles, self).__init__(name=name) self.prop_names = prop_names def extra_args(self): args = [] for prop in self.prop_names: 
args.append('stride_%s' % prop) args.append('dst_%s' % prop) args.append('src_%s' % prop) return args, {} def template(self, i, indices, start_idx): ''' idx, s_idx, s_i, j, start = declare('int', 5) idx = indices[i] % for prop in obj.prop_names: s_idx = stride_${prop} * idx s_i = stride_${prop} * i start = stride_${prop} * start_idx for j in range(stride_${prop}): dst_${prop}[start + s_i + j] = src_${prop}[s_idx + j] % endfor ''' class DeviceHelper(object): """Manages the arrays contained in a particle array on the device. Note that it converts the data to a suitable type depending on the value of get_config().use_double. Further, note that it assumes that the names of constants and properties do not clash. """ def __init__(self, particle_array, backend=None): self.backend = get_backend(backend) self._particle_array = pa = particle_array self.use_double = get_config().use_double self._dtype = np.float64 if self.use_double else np.float32 self.num_real_particles = pa.num_real_particles self._data = {} self.properties = [] self.constants = [] self._strided_indices_knl = None for prop, ary in pa.properties.items(): self.add_prop(prop, ary) for prop, ary in pa.constants.items(): self.add_const(prop, ary) def _get_array(self, ary): ctype = ary.get_c_type() if ctype in ['float', 'double']: return ary.get_npy_array().astype(self._dtype) else: return ary.get_npy_array() def _get_prop_or_const(self, prop): pa = self._particle_array return pa.properties.get(prop, pa.constants.get(prop)) def _add_prop_or_const(self, name, carray): """Add a new property or constant given the name and carray, note that this assumes that this property is already added to the particle array. 
""" np_array = self._get_array(carray) g_ary = Array(np_array.dtype, n=carray.length, backend=self.backend) g_ary.set(np_array) self._data[name] = g_ary setattr(self, name, g_ary) def get_number_of_particles(self, real=False): if real: return self.num_real_particles else: if len(self.properties) > 0: pname = self.properties[0] stride = self._particle_array.stride.get(pname, 1) prop0 = self._data[pname] return len(prop0.dev) // stride else: return 0 def align(self, indices): '''Note that the indices passed here is a dictionary keyed on the stride. ''' if not isinstance(indices, dict): indices = {1: indices} self._make_strided_indices(indices) for prop in self.properties: stride = self._particle_array.stride.get(prop, 1) self._data[prop] = self._data[prop].align(indices.get(stride)) setattr(self, prop, self._data[prop]) def _make_strided_indices(self, indices_dict): '''Takes the indices in a dict assuming that the indices are for a stride of 1 and makes suitable indices for other possible stride values. ''' indices = indices_dict[1] n = indices.length if not self._strided_indices_knl: self._setup_strided_indices_kernel() for stride in set(self._particle_array.stride.values()): dest = array.empty(n*stride, dtype=np.int32, backend=self.backend) self._strided_indices_knl(indices, dest, stride) indices_dict[stride] = dest def _setup_strided_indices_kernel(self): @annotate(int='i, stride', gintp='indices, dest') def set_indices(i, indices, dest, stride): j = declare('int') for j in range(stride): dest[i*stride + j] = indices[i]*stride + j knl = Elementwise(set_indices, backend=self.backend) self._strided_indices_knl = knl def add_prop(self, name, carray): """Add a new property given the name and carray, note that this assumes that this property is already added to the particle array. 
""" if name in self._particle_array.properties and \ name not in self.properties: self._add_prop_or_const(name, carray) self.properties.append(name) def add_const(self, name, carray): """Add a new constant given the name and carray, note that this assumes that this property is already added to the particle array. """ if name in self._particle_array.constants and \ name not in self.constants: self._add_prop_or_const(name, carray) self.constants.append(name) def update_prop(self, name, dev_array): """Add a new property to DeviceHelper. Note that this property is not added to the particle array itself""" self._data[name] = dev_array setattr(self, name, dev_array) if name not in self.properties: self.properties.append(name) def update_const(self, name, dev_array): """Add a new constant to DeviceHelper. Note that this property is not added to the particle array itself""" self._data[name] = dev_array setattr(self, name, dev_array) if name not in self.constants: self.constants.append(name) def get_device_array(self, prop): if prop in self.properties or prop in self.constants: return self._data[prop] def max(self, arg): return float(array.maximum(getattr(self, arg), backend=self.backend)) def update_minmax_cl(self, props, only_min=False, only_max=False): ary_list = [getattr(self, prop) for prop in props] array.update_minmax_gpu(ary_list, only_min=only_min, only_max=only_max, backend=self.backend) def update_min_max(self, props=None): """Update the min,max values of all properties """ props = props if props else self.properties for prop in props: array = self._data[prop] array.update_min_max() def pull(self, *args): p = self._particle_array if len(args) == 0: args = self._data.keys() for arg in args: arg_cpu = getattr(self, arg).get() if arg in p.properties or arg in p.constants: pa_arr = self._get_prop_or_const(arg) else: if arg in self.properties: p.add_property(arg) if arg in self.constants: p.add_constant(arg, arg_cpu) pa_arr = self._get_prop_or_const(arg) if 
arg_cpu.size != pa_arr.length: pa_arr.resize(arg_cpu.size) pa_arr.set_data(arg_cpu) p.set_num_real_particles(self.num_real_particles) def push(self, *args): if len(args) == 0: args = self._data.keys() for arg in args: dev_arr = array.to_device( self._get_array(self._get_prop_or_const(arg)), backend=self.backend) self._data[arg].set_data(dev_arr) setattr(self, arg, self._data[arg]) def _check_property(self, prop): """Check if a property is present or not """ if prop in self.properties or prop in self.constants: return else: raise AttributeError('property %s not present' % (prop)) def remove_prop(self, name): if name in self.properties: self.properties.remove(name) if name in self._data: del self._data[name] delattr(self, name) def resize(self, new_size): for prop in self.properties: stride = self._particle_array.stride.get(prop, 1) self._data[prop].resize(new_size * stride) setattr(self, prop, self._data[prop]) @memoize_method def _get_align_kernel_without_strides(self): @annotate(i='int', tag_arr='gintp', return_='int') def align_input_expr(i, tag_arr): return tag_arr[i] == 0 @annotate(int='i, item, prev_item, last_item, num_particles', gintp='tag_arr, new_indices, num_real_particles') def align_output_expr(i, item, prev_item, last_item, tag_arr, new_indices, num_particles, num_real_particles): t, idx = declare('int', 2) t = last_item + i - prev_item idx = t if tag_arr[i] else prev_item new_indices[idx] = i if i == num_particles - 1: num_real_particles[0] = last_item align_particles_knl = Scan(align_input_expr, align_output_expr, 'a+b', dtype=np.int32, backend=self.backend) return align_particles_knl @memoize_method def _get_align_kernel_with_strides(self): @annotate(i='int', tag_arr='gintp', return_='int') def align_input_expr(i, tag_arr): return tag_arr[i] == 0 @annotate(int='i, item, prev_item, last_item, stride, num_particles', gintp='tag_arr, new_indices', return_='int') def align_output_expr(i, item, prev_item, last_item, tag_arr, new_indices, num_particles, 
stride): t, idx, j_s = declare('int', 3) t = last_item + i - prev_item idx = t if tag_arr[i] else prev_item for j_s in range(stride): new_indices[stride * idx + j_s] = stride * i + j_s align_particles_knl = Scan(align_input_expr, align_output_expr, 'a+b', dtype=np.int32, backend=self.backend) return align_particles_knl def align_particles(self): tag_arr = self._data['tag'] if len(tag_arr) == 0: self.num_real_particles = 0 return num_particles = len(tag_arr) new_indices = array.empty(num_particles, dtype=np.int32, backend=self.backend) num_real_particles = array.empty(1, dtype=np.int32, backend=self.backend) align_particles_knl = self._get_align_kernel_without_strides() align_particles_knl(tag_arr=tag_arr, new_indices=new_indices, num_particles=num_particles, num_real_particles=num_real_particles) indices = {1: new_indices} for stride in set(self._particle_array.stride.values()): if stride > 1: indices[stride] = self._build_indices_with_strides( tag_arr, stride ) self.align(indices) self.num_real_particles = int(num_real_particles.get()) def _build_indices_with_strides(self, tag_arr, stride): num_particles = len(tag_arr.dev) new_indices = array.empty(num_particles * stride, dtype=np.int32, backend=self.backend) align_particles_knl = self._get_align_kernel_with_strides() align_particles_knl(tag_arr=tag_arr, new_indices=new_indices, num_particles=num_particles, stride=stride) return new_indices @memoize_method def _get_remove_particles_bool_kernels(self): @annotate(i='int', if_remove='gintp', return_='int') def remove_input_expr(i, if_remove): return if_remove[i] @annotate(int='i, item, last_item, num_particles', gintp='if_remove, num_removed_particles', new_indices='guintp') def remove_output_expr(i, item, last_item, if_remove, new_indices, num_removed_particles, num_particles): if not if_remove[i]: new_indices[i - item] = i if i == num_particles - 1: num_removed_particles[0] = last_item remove_knl = Scan(remove_input_expr, remove_output_expr, 'a+b', dtype=np.int32, 
backend=self.backend) @annotate(int='i, size, stride', guintp='indices, new_indices') def stride_knl_elwise(i, indices, new_indices, size, stride): tmp_idx, j_s = declare('unsigned int', 2) for j_s in range(stride): tmp_idx = i * stride + j_s if tmp_idx < size: new_indices[tmp_idx] = indices[i] * stride + j_s stride_knl = Elementwise(stride_knl_elwise, backend=self.backend) return remove_knl, stride_knl def _remove_particles_bool(self, if_remove, align=True): """ Remove particle i if if_remove[i] is True """ num_indices = int(array.sum(if_remove, backend=self.backend)) if num_indices == 0: return num_particles = self.get_number_of_particles() new_indices = Array(np.uint32, n=(num_particles - num_indices), backend=self.backend) num_removed_particles = array.empty(1, dtype=np.int32, backend=self.backend) remove_knl, stride_knl = self._get_remove_particles_bool_kernels() remove_knl(if_remove=if_remove, new_indices=new_indices, num_removed_particles=num_removed_particles, num_particles=num_particles) new_num_particles = num_particles - int(num_removed_particles.get()) strides = set(self._particle_array.stride.values()) s_indices = {1: new_indices} for stride in strides: if stride == 1: continue size = new_num_particles * stride s_index = Array(np.uint32, n=size, backend=self.backend) stride_knl(new_indices, s_index, size, stride) s_indices[stride] = s_index for prop in self.properties: stride = self._particle_array.stride.get(prop, 1) s_index = s_indices[stride] self._data[prop] = self._data[prop].align(s_index) setattr(self, prop, self._data[prop]) if align: self.align_particles() @memoize_method def _get_remove_particles_kernel(self): @annotate(int='i, size', indices='guintp', if_remove='gintp') def fill_if_remove_elwise(i, indices, if_remove, size): if indices[i] < size: if_remove[indices[i]] = 1 fill_if_remove_knl = Elementwise(fill_if_remove_elwise, backend=self.backend) return fill_if_remove_knl def remove_particles(self, indices, align=True): """ Remove 
particles whose indices are given in index_list. Parameters ---------- indices : array an array of indices, this array can be a list, numpy array or a LongArray. """ if len(indices) > self.get_number_of_particles(): msg = 'Number of particles to be removed is greater than' msg += 'number of particles in array' raise ValueError(msg) num_particles = self.get_number_of_particles() if_remove = Array(np.int32, n=num_particles, backend=self.backend) if_remove.fill(0) fill_if_remove_knl = self._get_remove_particles_kernel() fill_if_remove_knl(indices, if_remove, num_particles) self._remove_particles_bool(if_remove, align=align) def remove_tagged_particles(self, tag, align=True): """ Remove particles that have the given tag. Parameters ---------- tag : int the type of particles that need to be removed. """ tag_array = self.tag if_remove = wrap_array((tag_array.dev == tag).astype(np.int32), self.backend) self._remove_particles_bool(if_remove, align=align) def add_particles(self, align=True, **particle_props): """ Add particles in particle_array to self. Parameters ---------- particle_props : dict a dictionary containing cl arrays for various particle properties. Notes ----- - all properties should have same length arrays. - all properties should already be present in this particles array. if new properties are seen, an exception will be raised. properties. """ pa = self._particle_array if len(particle_props) == 0: return # check if the input properties are valid. for prop in particle_props: self._check_property(prop) num_extra_particles = len(list(particle_props.values())[0]) old_num_particles = self.get_number_of_particles() new_num_particles = num_extra_particles + old_num_particles for prop in self.properties: arr = self._data[prop] stride = self._particle_array.stride.get(prop, 1) if prop in particle_props.keys(): s_arr = particle_props[prop] arr.extend(s_arr) else: arr.resize(new_num_particles * stride) # set the properties of the new particles to the default ones. 
arr[old_num_particles * stride:] = pa.default_values[prop] self.update_prop(prop, arr) if num_extra_particles > 0 and align: # make sure particles are aligned properly. self.align_particles() return 0 def extend(self, num_particles): """ Increase the total number of particles by the requested amount New particles are added at the end of the list, you may have to manually call align_particles later. """ if num_particles <= 0: return old_size = self.get_number_of_particles() new_size = old_size + num_particles for prop in self.properties: arr = self._data[prop] stride = self._particle_array.stride.get(prop, 1) arr.resize(new_size * stride) arr[old_size * stride:] = \ self._particle_array.default_values[prop] self.update_prop(prop, arr) def append_parray(self, parray, align=True, update_constants=False): """ Add particles from a particle array properties that are not there in self will be added """ if parray.gpu is None: parray.set_device_helper(DeviceHelper(parray)) if parray.gpu.get_number_of_particles() == 0: return num_extra_particles = parray.gpu.get_number_of_particles() old_num_particles = self.get_number_of_particles() new_num_particles = num_extra_particles + old_num_particles # extend current arrays by the required number of particles self.extend(num_extra_particles) my_stride = self._particle_array.stride for prop_name in parray.gpu.properties: stride = parray.stride.get(prop_name, 1) if stride > 1 and prop_name not in my_stride: my_stride[prop_name] = stride if prop_name in self.properties: arr = self._data[prop_name] source = parray.gpu.get_device_array(prop_name) arr.dev[old_num_particles * stride:] = source.dev else: # meaning this property is not there in self. 
dtype = parray.gpu.get_device_array(prop_name).dtype arr = Array(dtype, n=new_num_particles * stride, backend=self.backend) arr.fill(parray.default_values[prop_name]) self.update_prop(prop_name, arr) # now add the values to the end of the created array dest = self._data[prop_name] source = parray.gpu.get_device_array(prop_name) dest.dev[old_num_particles * stride:] = source.dev if update_constants: for const in parray.gpu.constants: if const not in self.constants: arr = parray.gpu.get_device_array(const) self.update_const(const, arr.copy()) if num_extra_particles > 0 and align: self.align_particles() def empty_clone(self, props=None): if props is None: prop_names = self.properties else: prop_names = props result_array = pysph.base.particle_array.ParticleArray( backend=self._particle_array.backend ) result_array.set_name(self._particle_array.name) result_array.set_device_helper(DeviceHelper(result_array, backend=self.backend)) for prop_name in prop_names: src_arr = self._data[prop_name] stride = self._particle_array.stride.get(prop_name, 1) prop_type = dtype_to_ctype(src_arr.dtype) prop_default = self._particle_array.default_values[prop_name] result_array.add_property(name=prop_name, type=prop_type, default=prop_default, stride=stride) for const in self.constants: result_array.gpu.update_const(const, self._data[const].copy()) if props is None: output_arrays = list(self._particle_array.output_property_arrays) else: output_arrays = list( set(props).intersection( self._particle_array.output_property_arrays ) ) result_array.set_output_arrays(output_arrays) return result_array def extract_particles(self, indices, dest_array=None, align=True, props=None): """Create new particle array for particles with given indices Parameters ---------- indices : Array indices of particles to be extracted. props : list the list of properties to extract, if None all properties are extracted. 
""" if not dest_array: dest_array = self.empty_clone(props=props) if props is None: prop_names = self.properties else: prop_names = props if len(indices) == 0: return dest_array start_idx = dest_array.get_number_of_particles() dest_array.gpu.extend(len(indices)) args_list = [indices, start_idx] for prop in prop_names: stride = self._particle_array.stride.get(prop, 1) src_arr = self._data[prop] dst_arr = dest_array.gpu.get_device_array(prop) args_list.append(stride) args_list.append(dst_arr) args_list.append(src_arr) extract_particles_knl = ExtractParticles('extract_particles_knl', prop_names).function extract_particles_elwise = Elementwise(extract_particles_knl, backend=self.backend) extract_particles_elwise(*args_list) if align: dest_array.gpu.align_particles() return dest_array pysph-master/pysph/base/gpu_domain_manager.py000066400000000000000000000331701356347341600217750ustar00rootroot00000000000000from __future__ import print_function import numpy as np from pysph.base.nnps_base import DomainManagerBase from compyle.config import get_config from compyle.array import Array, get_backend from compyle.parallel import Elementwise, Reduction, Scan from compyle.types import annotate, dtype_to_ctype from pytools import memoize_method class GPUDomainManager(DomainManagerBase): def __init__(self, xmin=-1000., xmax=1000., ymin=0., ymax=0., zmin=0., zmax=0., periodic_in_x=False, periodic_in_y=False, periodic_in_z=False, n_layers=2.0, backend=None, props=None, mirror_in_x=False, mirror_in_y=False, mirror_in_z=False): """Constructor""" DomainManagerBase.__init__(self, xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, zmin=zmin, zmax=zmax, periodic_in_x=periodic_in_x, periodic_in_y=periodic_in_y, periodic_in_z=periodic_in_z, n_layers=n_layers, props=props) self.use_double = get_config().use_double self.dtype = np.float64 if self.use_double else np.float32 self.dtype_max = np.finfo(self.dtype).max self.backend = get_backend(backend) self.ghosts = None def update(self): """General 
method that is called before NNPS can bin particles. This method is responsible for the computation of cell sizes and creation of any ghost particles for periodic or wall boundary conditions. """ # compute the cell sizes self.compute_cell_size_for_binning() # Periodicity is handled by adjusting particles according to a # given cubic domain box. In parallel, it is expected that the # appropriate parallel NNPS is responsible for the creation of # ghost particles. if self.is_periodic and not self.in_parallel: # remove periodic ghost particles from a previous step self._remove_ghosts() # box-wrap current particles for periodicity self._box_wrap_periodic() # create new periodic ghosts self._create_ghosts_periodic() def _compute_cell_size_for_binning(self): """Compute the cell size for the binning. The cell size is chosen as the kernel radius scale times the maximum smoothing length in the local processor. For parallel runs, we would need to communicate the maximum 'h' on all processors to decide on the appropriate binning size. 
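        As a doctest-style illustration of the rule described above (a
        hedged sketch, not the PySPH API: ``compute_cell_size`` is a
        hypothetical standalone helper), the cell size is the kernel
        radius scale times the largest smoothing length seen across the
        particle arrays, with a fallback for degenerate values:

        ```python
        import numpy as np

        # Hypothetical helper mirroring the logic of this method: take
        # radius_scale * max(h) over all arrays, guarding against a
        # vanishing smoothing length exactly as the method below does.
        def compute_cell_size(h_arrays, radius_scale=2.0):
            hmax = max(float(np.max(h)) for h in h_arrays)
            cell_size = radius_scale * hmax
            if cell_size < 1e-6:
                cell_size = 1.0
            return cell_size

        print(compute_cell_size([np.array([0.1, 0.12]), np.array([0.11])]))
        # -> 0.24
        ```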
""" _hmax, hmax = -1.0, -1.0 _hmin, hmin = self.dtype_max, self.dtype_max for pa_wrapper in self.pa_wrappers: h = pa_wrapper.pa.gpu.get_device_array('h') pa_wrapper.pa.gpu.update_minmax_cl(['h']) _hmax = h.maximum _hmin = h.minimum if _hmax > hmax: hmax = _hmax if _hmin < hmin: hmin = _hmin cell_size = self.radius_scale * hmax self.hmin = self.radius_scale * hmin if cell_size < 1e-6: cell_size = 1.0 self.cell_size = cell_size # set the cell size for the DomainManager self.set_cell_size(cell_size) @memoize_method def _get_box_wrap_kernel(self): @annotate def box_wrap(i, x, y, z, xmin, ymin, zmin, xmax, ymax, zmax, xtranslate, ytranslate, ztranslate, periodic_in_x, periodic_in_y, periodic_in_z): if periodic_in_x: if x[i] < xmin: x[i] = x[i] + xtranslate if x[i] > xmax: x[i] = x[i] - xtranslate if periodic_in_y: if y[i] < ymin: y[i] = y[i] + ytranslate if y[i] > ymax: y[i] = y[i] - ytranslate if periodic_in_z: if z[i] < zmin: z[i] = z[i] + ztranslate if z[i] > zmax: z[i] = z[i] - ztranslate return Elementwise(box_wrap, backend=self.backend) ###########################CHANGE FROM HERE############################### def _box_wrap_periodic(self): """Box-wrap particles for periodicity The periodic domain is a rectangular box defined by minimum and maximum values in each coordinate direction. These values are used in turn to define translation values used to box-wrap particles that cross a periodic boundary. 
The periodic domain is specified using the DomainManager object """ # minimum and maximum values of the domain xmin, xmax = self.xmin, self.xmax ymin, ymax = self.ymin, self.ymax zmin, zmax = self.zmin, self.zmax # translations along each coordinate direction xtranslate = self.xtranslate ytranslate = self.ytranslate ztranslate = self.ztranslate # periodicity flags for NNPS periodic_in_x = self.periodic_in_x periodic_in_y = self.periodic_in_y periodic_in_z = self.periodic_in_z box_wrap_knl = self._get_box_wrap_kernel() # iterate over each array and mark for translation for pa_wrapper in self.pa_wrappers: x = pa_wrapper.pa.gpu.x y = pa_wrapper.pa.gpu.y z = pa_wrapper.pa.gpu.z box_wrap_knl(x, y, z, xmin, ymin, zmin, xmax, ymax, zmax, xtranslate, ytranslate, ztranslate, periodic_in_x, periodic_in_y, periodic_in_z) @memoize_method def _get_ghosts_reduction_kernel(self): @annotate def map_func(i, periodic_in_x, periodic_in_y, periodic_in_z, x, y, z, xmin, ymin, zmin, xmax, ymax, zmax, cell_size): x_copies, y_copies, z_copies = declare('int', 3) x_copies = 1 y_copies = 1 z_copies = 1 if periodic_in_x: if (x[i] - xmin) <= cell_size: x_copies += 1 if (xmax - x[i]) <= cell_size: x_copies += 1 if periodic_in_y: if (y[i] - ymin) <= cell_size: y_copies += 1 if (ymax - y[i]) <= cell_size: y_copies += 1 if periodic_in_z: if (z[i] - zmin) <= cell_size: z_copies += 1 if (zmax - z[i]) <= cell_size: z_copies += 1 return x_copies * y_copies * z_copies - 1 return Reduction('a+b', map_func=map_func, dtype_out=np.int32, backend=self.backend) @memoize_method def _get_ghosts_scan_kernel(self): @annotate def inp_fill_ghosts(i, periodic_in_x, periodic_in_y, periodic_in_z, x, y, z, xmin, ymin, zmin, xmax, ymax, zmax, cell_size): x_copies, y_copies, z_copies = declare('int', 3) x_copies = 1 y_copies = 1 z_copies = 1 if periodic_in_x: if (x[i] - xmin) <= cell_size: x_copies += 1 if (xmax - x[i]) <= cell_size: x_copies += 1 if periodic_in_y: if (y[i] - ymin) <= cell_size: y_copies += 1 if (ymax 
- y[i]) <= cell_size: y_copies += 1 if periodic_in_z: if (z[i] - zmin) <= cell_size: z_copies += 1 if (zmax - z[i]) <= cell_size: z_copies += 1 return x_copies * y_copies * z_copies - 1 @annotate def out_fill_ghosts(i, item, prev_item, periodic_in_x, periodic_in_y, periodic_in_z, x, y, z, xmin, ymin, zmin, xmax, ymax, zmax, cell_size, masks, indices): xleft, yleft, zleft = declare('int', 3) xright, yright, zright = declare('int', 3) xleft = 0 yleft = 0 zleft = 0 xright = 0 yright = 0 zright = 0 if periodic_in_x: if (x[i] - xmin) <= cell_size: xright = 1 if (xmax - x[i]) <= cell_size: xleft = -1 if periodic_in_y: if (y[i] - ymin) <= cell_size: yright = 1 if (ymax - y[i]) <= cell_size: yleft = -1 if periodic_in_z: if (z[i] - zmin) <= cell_size: zright = 1 if (zmax - z[i]) <= cell_size: zleft = -1 xp, yp, zp = declare('int', 3) idx, mask = declare('int', 2) idx = prev_item for xp in range(-1, 2): if xp != 0 and ((xleft == 0 and xright == 0) or (xp != xleft and xp != xright)): continue for yp in range(-1, 2): if yp != 0 and ((yleft == 0 and yright == 0) or (yp != yleft and yp != yright)): continue for zp in range(-1, 2): if zp != 0 and ((zleft == 0 and zright == 0) or (zp != zleft and zp != zright)): continue if xp == 0 and yp == 0 and zp == 0: continue mask = (xp + 1) * 9 + (yp + 1) * 3 + (zp + 1) masks[idx] = mask indices[idx] = i idx += 1 return Scan(inp_fill_ghosts, out_fill_ghosts, 'a+b', dtype=np.int32, backend=self.backend) @memoize_method def _get_translate_kernel(self): @annotate def translate(i, x, y, z, tag, xtranslate, ytranslate, ztranslate, masks): xmask, ymask, zmask, mask = declare('int', 4) mask = masks[i] zmask = mask % 3 mask /= 3 ymask = mask % 3 mask /= 3 xmask = mask % 3 x[i] = x[i] + (xmask - 1) * xtranslate y[i] = y[i] + (ymask - 1) * ytranslate z[i] = z[i] + (zmask - 1) * ztranslate tag[i] = 2 return Elementwise(translate, backend=self.backend) def _create_ghosts_periodic(self): """Identify boundary particles and create images. 
We need to find all particles that are within a specified distance from the boundaries and place image copies on the other side of the boundary. Corner reflections need to be accounted for when using domains with multiple periodicity. The periodic domain is specified using the DomainManager object """ copy_props = self.copy_props pa_wrappers = self.pa_wrappers narrays = self.narrays # cell size used to check for periodic ghosts. For summation density # like operations, we need to create two layers of ghost images, this # is configurable via the n_layers argument to the constructor. cell_size = self.n_layers * self.cell_size # periodic domain values xmin, xmax = self.xmin, self.xmax ymin, ymax = self.ymin, self.ymax zmin, zmax = self.zmin, self.zmax xtranslate = self.xtranslate ytranslate = self.ytranslate ztranslate = self.ztranslate # periodicity flags periodic_in_x = self.periodic_in_x periodic_in_y = self.periodic_in_y periodic_in_z = self.periodic_in_z reduce_knl = self._get_ghosts_reduction_kernel() scan_knl = self._get_ghosts_scan_kernel() translate_knl = self._get_translate_kernel() if not self.ghosts: self.ghosts = [paw.pa.empty_clone(props=copy_props[i]) for i, paw in enumerate(pa_wrappers)] else: for ghost_pa in self.ghosts: ghost_pa.resize(0) for i in range(narrays): self.ghosts[i].ensure_properties( pa_wrappers[i].pa, props=copy_props[i] ) for i, pa_wrapper in enumerate(self.pa_wrappers): ghost_pa = self.ghosts[i] x = pa_wrapper.pa.gpu.x y = pa_wrapper.pa.gpu.y z = pa_wrapper.pa.gpu.z num_extra_particles = reduce_knl(periodic_in_x, periodic_in_y, periodic_in_z, x, y, z, xmin, ymin, zmin, xmax, ymax, zmax, cell_size) num_extra_particles = int(num_extra_particles) indices = Array(np.int32, n=num_extra_particles) masks = Array(np.int32, n=num_extra_particles) scan_knl(periodic_in_x=periodic_in_x, periodic_in_y=periodic_in_y, periodic_in_z=periodic_in_z, x=x, y=y, z=z, xmin=xmin, ymin=ymin, zmin=zmin, xmax=xmax, ymax=ymax, zmax=zmax, cell_size=cell_size, 
                     masks=masks, indices=indices)

            pa_wrapper.pa.extract_particles(
                indices, ghost_pa, align=False, props=copy_props[i]
            )

            translate_knl(ghost_pa.gpu.x, ghost_pa.gpu.y, ghost_pa.gpu.z,
                          ghost_pa.gpu.tag, xtranslate, ytranslate,
                          ztranslate, masks)

            pa_wrapper.pa.append_parray(ghost_pa, align=False)


## pysph-master/pysph/base/gpu_helper_functions.mako

//CL//

<%def name="get_helpers()" cached="True">
    #define NORM2(X, Y, Z) ((X)*(X) + (Y)*(Y) + (Z)*(Z))

    #define FIND_CELL_ID(x, y, z, h, c_x, c_y, c_z) \
        c_x = floor((x)/h); c_y = floor((y)/h); c_z = floor((z)/h)

    inline ulong interleave(ulong p, \
            ulong q, ulong r);

    inline int neighbor_boxes(int c_x, int c_y, int c_z, \
            ulong* nbr_boxes);

    inline ulong interleave1(ulong p)
    {
        return p;
    }

    inline ulong interleave2(ulong p, ulong q)
    {
        p = p & 0xffffffff;
        p = (p | (p << 16)) & 0x0000ffff0000ffff;
        p = (p | (p << 8)) & 0x00ff00ff00ff00ff;
        p = (p | (p << 4)) & 0x0f0f0f0f0f0f0f0f;
        p = (p | (p << 2)) & 0x3333333333333333;
        p = (p | (p << 1)) & 0x5555555555555555;

        q = q & 0xffffffff;
        q = (q | (q << 16)) & 0x0000ffff0000ffff;
        q = (q | (q << 8)) & 0x00ff00ff00ff00ff;
        q = (q | (q << 4)) & 0x0f0f0f0f0f0f0f0f;
        q = (q | (q << 2)) & 0x3333333333333333;
        q = (q | (q << 1)) & 0x5555555555555555;

        return (p | (q << 1));
    }

    inline ulong interleave3(ulong p, \
            ulong q, ulong r)
    {
        p = (p | (p << 32)) & 0x1f00000000ffff;
        p = (p | (p << 16)) & 0x1f0000ff0000ff;
        p = (p | (p << 8)) & 0x100f00f00f00f00f;
        p = (p | (p << 4)) & 0x10c30c30c30c30c3;
        p = (p | (p << 2)) & 0x1249249249249249;

        q = (q | (q << 32)) & 0x1f00000000ffff;
        q = (q | (q << 16)) & 0x1f0000ff0000ff;
        q = (q | (q << 8)) & 0x100f00f00f00f00f;
        q = (q | (q << 4)) & 0x10c30c30c30c30c3;
        q = (q | (q << 2)) & 0x1249249249249249;

        r = (r | (r << 32)) & 0x1f00000000ffff;
        r = (r | (r << 16)) & 0x1f0000ff0000ff;
        r = (r | (r << 8)) & 0x100f00f00f00f00f;
        r = (r | (r << 4)) & 0x10c30c30c30c30c3;
        r = (r | (r << 2)) & 0x1249249249249249;

        return (p | (q << 1) | (r << 2));
    }

    inline ulong interleave(ulong p, \
            ulong q, ulong r)
    {
        p = (p | (p << 32)) & 0x1f00000000ffff;
        p = (p | (p << 16)) & 0x1f0000ff0000ff;
        p = (p | (p << 8)) & 0x100f00f00f00f00f;
        p = (p | (p << 4)) & 0x10c30c30c30c30c3;
        p = (p | (p << 2)) & 0x1249249249249249;

        q = (q | (q << 32)) & 0x1f00000000ffff;
        q = (q | (q << 16)) & 0x1f0000ff0000ff;
        q = (q | (q << 8)) & 0x100f00f00f00f00f;
        q = (q | (q << 4)) & 0x10c30c30c30c30c3;
        q = (q | (q << 2)) & 0x1249249249249249;

        r = (r | (r << 32)) & 0x1f00000000ffff;
        r = (r | (r << 16)) & 0x1f0000ff0000ff;
        r = (r | (r << 8)) & 0x100f00f00f00f00f;
        r = (r | (r << 4)) & 0x10c30c30c30c30c3;
        r = (r | (r << 2)) & 0x1249249249249249;

        return (p | (q << 1) | (r << 2));
    }

    inline int find_idx(__global ulong* keys, \
            int num_particles, ulong key)
    {
        int first = 0;
        int last = num_particles - 1;
        int middle = (first + last) / 2;

        while(first <= last)
        {
            if(keys[middle] < key)
                first = middle + 1;
            else if(keys[middle] > key)
                last = middle - 1;
            else if(keys[middle] == key)
            {
                if(middle == 0)
                    return 0;
                if(keys[middle - 1] != key)
                    return middle;
                else
                    last = middle - 1;
            }
            middle = (first + last) / 2;
        }
        return -1;
    }

    inline int neighbor_boxes(int c_x, int c_y, int c_z, \
            ulong* nbr_boxes)
    {
        int nbr_boxes_length = 1;
        int j, k, m;
        ulong key;
        nbr_boxes[0] = interleave(c_x, c_y, c_z);

        #pragma unroll
        for(j=-1; j<2; j++)
        {
            #pragma unroll
            for(k=-1; k<2; k++)
            {
                #pragma unroll
                for(m=-1; m<2; m++)
                {
                    if((j != 0 || k != 0 || m != 0) && c_x+m >= 0 &&
                       c_y+k >= 0 && c_z+j >= 0)
                    {
                        key = interleave(c_x+m, c_y+k, c_z+j);
                        nbr_boxes[nbr_boxes_length] = key;
                        nbr_boxes_length++;
                    }
                }
            }
        }
        return nbr_boxes_length;
    }
</%def>


# pysph-master/pysph/base/gpu_helper_kernels.py

from compyle.api import annotate, Elementwise
from compyle.parallel import Scan
from math import floor

from pytools import memoize


@memoize(key=lambda *args: tuple(args))
def get_elwise(f, backend): return
Elementwise(f, backend=backend) @memoize(key=lambda *args: tuple(args)) def get_scan(inp_f, out_f, dtype, backend): return Scan(input=inp_f, output=out_f, dtype=dtype, backend=backend) @annotate def exclusive_input(i, ary): return ary[i] @annotate def exclusive_output(i, prev_item, ary): ary[i] = prev_item @annotate def norm2(x, y, z): return x * x + y * y + z * z @annotate def find_cell_id(x, y, z, h, c): c[0] = floor((x) / h) c[1] = floor((y) / h) c[2] = floor((z) / h) @annotate(p='ulong', return_='ulong') def interleave1(p): return p @annotate(ulong='p, q', return_='ulong') def interleave2(p, q): p = p & 0xffffffff p = (p | (p << 16)) & 0x0000ffff0000ffff p = (p | (p << 8)) & 0x00ff00ff00ff00ff p = (p | (p << 4)) & 0x0f0f0f0f0f0f0f0f p = (p | (p << 2)) & 0x3333333333333333 p = (p | (p << 1)) & 0x5555555555555555 q = q & 0xffffffff q = (q | (q << 16)) & 0x0000ffff0000ffff q = (q | (q << 8)) & 0x00ff00ff00ff00ff q = (q | (q << 4)) & 0x0f0f0f0f0f0f0f0f q = (q | (q << 2)) & 0x3333333333333333 q = (q | (q << 1)) & 0x5555555555555555 return (p | (q << 1)) @annotate(ulong='p, q, r', return_='ulong') def interleave3(p, q, r): p = (p | (p << 32)) & 0x1f00000000ffff p = (p | (p << 16)) & 0x1f0000ff0000ff p = (p | (p << 8)) & 0x100f00f00f00f00f p = (p | (p << 4)) & 0x10c30c30c30c30c3 p = (p | (p << 2)) & 0x1249249249249249 q = (q | (q << 32)) & 0x1f00000000ffff q = (q | (q << 16)) & 0x1f0000ff0000ff q = (q | (q << 8)) & 0x100f00f00f00f00f q = (q | (q << 4)) & 0x10c30c30c30c30c3 q = (q | (q << 2)) & 0x1249249249249249 r = (r | (r << 32)) & 0x1f00000000ffff r = (r | (r << 16)) & 0x1f0000ff0000ff r = (r | (r << 8)) & 0x100f00f00f00f00f r = (r | (r << 4)) & 0x10c30c30c30c30c3 r = (r | (r << 2)) & 0x1249249249249249 return (p | (q << 1) | (r << 2)) @annotate def find_idx(keys, num_particles, key): first = 0 last = num_particles - 1 middle = (first + last) / 2 while first <= last: if keys[middle] < key: first = middle + 1 elif keys[middle] > key: last = middle - 1 elif 
keys[middle] == key:
        if middle == 0:
            return 0
        if keys[middle - 1] != key:
            return middle
        else:
            last = middle - 1
        middle = (first + last) / 2
    return -1


@annotate
def neighbor_boxes(c_x, c_y, c_z, nbr_boxes):
    nbr_boxes_length = 1
    nbr_boxes[0] = interleave3(c_x, c_y, c_z)
    key = declare('ulong')
    for j in range(-1, 2):
        for k in range(-1, 2):
            for m in range(-1, 2):
                if (j != 0 or k != 0 or m != 0) and \
                        c_x + m >= 0 and c_y + k >= 0 and c_z + j >= 0:
                    key = interleave3(c_x + m, c_y + k, c_z + j)
                    nbr_boxes[nbr_boxes_length] = key
                    nbr_boxes_length += 1
    return nbr_boxes_length


# pysph-master/pysph/base/gpu_nnps.py

from pysph.base.gpu_nnps_base import GPUNeighborCache, GPUNNPS, BruteForceNNPS
from pysph.base.z_order_gpu_nnps import ZOrderGPUNNPS
from pysph.base.stratified_sfc_gpu_nnps import StratifiedSFCGPUNNPS
from pysph.base.gpu_domain_manager import GPUDomainManager
from pysph.base.octree_gpu_nnps import OctreeGPUNNPS


# pysph-master/pysph/base/gpu_nnps_base.pxd

# numpy
cimport numpy as np

cimport cython

from libcpp.map cimport map
from libcpp.vector cimport vector

import pyopencl as cl
import pyopencl.array

# PyZoltan CArrays
from cyarray.carray cimport UIntArray, IntArray, DoubleArray, LongArray

# local imports
from particle_array cimport ParticleArray
from point cimport *

from pysph.base.nnps_base cimport *

cdef extern from 'math.h':
    int abs(int) nogil
    double ceil(double) nogil
    double floor(double) nogil
    double fabs(double) nogil
    double fmax(double, double) nogil
    double fmin(double, double) nogil

cdef extern from 'limits.h':
    cdef unsigned int UINT_MAX
    cdef int INT_MAX

cdef class GPUNeighborCache:
    cdef object backend
    cdef int _dst_index
    cdef int _src_index
    cdef int _narrays
    cdef list _particles
    cdef bint _cached
    cdef public bint _copied_to_cpu
    cdef GPUNNPS _nnps
    cdef public object _neighbors_gpu
    cdef public object _nbr_lengths_gpu
    cdef public object _start_idx_gpu
    cdef object _get_start_indices
    cdef public np.ndarray _neighbors_cpu
    cdef public np.ndarray _nbr_lengths
    cdef public np.ndarray _start_idx
    cdef unsigned int* _neighbors_cpu_ptr
    cdef unsigned int* _nbr_lengths_ptr
    cdef unsigned int* _start_idx_ptr

    cdef void copy_to_cpu(self)
    cdef void get_neighbors_raw(self, size_t d_idx, UIntArray nbrs)
    cdef void get_neighbors_raw_gpu(self)
    cpdef update(self)
    cdef void _find_neighbors(self)
    cpdef get_neighbors(self, int src_index, size_t d_idx, UIntArray nbrs)
    cpdef get_neighbors_gpu(self)

cdef class GPUNNPS(NNPSBase):
    cdef public object backend
    cdef public object queue
    cdef public double radius_scale2
    cdef public GPUNeighborCache current_cache  # The current cache
    cdef public bint sort_gids  # Sort neighbors by their gids.
    cdef public bint use_double
    cdef public dtype
    cdef public dtype_max
    cdef public double _last_domain_size  # last size of domain.
    cdef public np.ndarray xmin
    cdef public np.ndarray xmax

    cpdef get_nearest_particles(self, int src_index, int dst_index,
                                size_t d_idx, UIntArray nbrs)
    cpdef get_nearest_particles_gpu(self, int src_index, int dst_index)
    cpdef spatially_order_particles(self, int pa_index)
    cdef void get_nearest_neighbors(self, size_t d_idx, UIntArray nbrs)
    cdef void find_neighbor_lengths(self, nbr_lengths)
    cdef void find_nearest_neighbors_gpu(self, nbrs, start_indices)
    cdef _compute_bounds(self)
    cpdef update(self)
    cpdef _bin(self, int pa_index)
    cpdef _refresh(self)

cdef class BruteForceNNPS(GPUNNPS):
    cdef NNPSParticleArrayWrapper src, dst  # Current source and destination.
    cdef str preamble

    cpdef set_context(self, int src_index, int dst_index)
    cdef void find_neighbor_lengths(self, nbr_lengths)
    cdef void find_nearest_neighbors_gpu(self, nbrs, start_indices)
    cpdef _refresh(self)


# pysph-master/pysph/base/gpu_nnps_base.pyx

#cython: embedsignature=True

# Library imports.
import numpy as np
cimport numpy as np

# Cython imports
from cython.operator cimport dereference as deref, preincrement as inc
from cython.parallel import parallel, prange, threadid

# malloc and friends
from libc.stdlib cimport malloc, free
from libcpp.map cimport map
from libcpp.pair cimport pair
from libcpp.vector cimport vector

cdef extern from "<algorithm>" namespace "std" nogil:
    void sort[Iter, Compare](Iter first, Iter last, Compare comp)
    void sort[Iter](Iter first, Iter last)

# cpython
from cpython.dict cimport PyDict_Clear, PyDict_Contains, PyDict_GetItem
from cpython.list cimport PyList_GetItem, PyList_SetItem, PyList_GET_ITEM

# Cython for compiler directives
cimport cython

import pyopencl as cl
import pyopencl.array
from pyopencl.elementwise import ElementwiseKernel

from pysph.base.nnps_base cimport *
from pysph.base.device_helper import DeviceHelper

from compyle.config import get_config
from compyle.array import get_backend, Array
from compyle.parallel import Elementwise, Scan
from compyle.types import annotate
from compyle.opencl import (get_context, get_queue, set_context, set_queue)
import compyle.array as array

# Particle Tag information
from cyarray.carray cimport BaseArray, aligned_malloc, aligned_free
from utils import ParticleTAGS

from nnps_base cimport *

from pysph.base.gpu_helper_kernels import (exclusive_input, exclusive_output,
                                           get_scan)


cdef class GPUNeighborCache:
    def __init__(self, GPUNNPS nnps, int dst_index, int src_index,
                 backend=None):
        self.backend = get_backend(backend)
        self._dst_index = dst_index
        self._src_index = src_index
        self._nnps = nnps
        self._particles = nnps.particles
        self._narrays = nnps.narrays
        cdef long n_p = self._particles[dst_index].get_number_of_particles()
        self._get_start_indices = None
        self._cached = False
        self._copied_to_cpu = False
        self._nbr_lengths_gpu = Array(np.uint32, n=n_p, backend=self.backend)
        self._neighbors_gpu = Array(np.uint32, backend=self.backend)

    #### Public protocol
################################################ cdef void get_neighbors_raw_gpu(self): if not self._cached: self._find_neighbors() cdef void get_neighbors_raw(self, size_t d_idx, UIntArray nbrs): self.get_neighbors_raw_gpu() if not self._copied_to_cpu: self.copy_to_cpu() nbrs.c_reset() nbrs.c_set_view(self._neighbors_cpu_ptr + self._start_idx_ptr[d_idx], self._nbr_lengths_ptr[d_idx]) #### Private protocol ################################################ cdef void _find_neighbors(self): self._nnps.find_neighbor_lengths(self._nbr_lengths_gpu) # FIXME: # - Store sum kernel # - don't allocate neighbors_gpu each time. # - Don't allocate _nbr_lengths and start_idx. total_size_gpu = array.sum(self._nbr_lengths_gpu) cdef unsigned long total_size = (total_size_gpu) # Allocate _neighbors_cpu and neighbors_gpu self._neighbors_gpu.resize(total_size) self._start_idx_gpu = self._nbr_lengths_gpu.copy() # Do prefix sum on self._neighbor_lengths for the self._start_idx if self._get_start_indices is None: self._get_start_indices = get_scan(exclusive_input, exclusive_output, dtype=np.uint32, backend=self.backend) self._get_start_indices(ary=self._start_idx_gpu) self._nnps.find_nearest_neighbors_gpu(self._neighbors_gpu, self._start_idx_gpu) self._cached = True cdef void copy_to_cpu(self): self._copied_to_cpu = True self._neighbors_cpu = self._neighbors_gpu.get() self._neighbors_cpu_ptr = self._neighbors_cpu.data self._nbr_lengths = self._nbr_lengths_gpu.get() self._nbr_lengths_ptr = self._nbr_lengths.data self._start_idx = self._start_idx_gpu.get() self._start_idx_ptr = self._start_idx.data cpdef update(self): # FIXME: Don't allocate here unless needed. 
self._cached = False self._copied_to_cpu = False cdef long n_p = self._particles[self._dst_index].get_number_of_particles() self._nbr_lengths_gpu.resize(n_p) cpdef get_neighbors(self, int src_index, size_t d_idx, UIntArray nbrs): self.get_neighbors_raw(d_idx, nbrs) cpdef get_neighbors_gpu(self): self.get_neighbors_raw_gpu() cdef class GPUNNPS(NNPSBase): """Nearest neighbor query class using the box-sort algorithm. NNPS bins all local particles using the box sort algorithm in Cells. The cells are stored in a dictionary 'cells' which is keyed on the spatial index (IntPoint) of the cell. """ def __init__(self, int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bint cache=True, bint sort_gids=False, backend=None): """Constructor for NNPS Parameters ---------- dim : int Dimension (fixme: Not sure if this is really needed) particles : list The list of particle arrays we are working on. radius_scale : double, default (2) Optional kernel radius scale. Defaults to 2 domain : DomainManager, default (None) Optional limits for the domain cache : bint Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations. sort_gids : bint, default (False) Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run. backend : string Backend on which to build NNPS Module """ NNPSBase.__init__(self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids) self.backend = get_backend(backend) self.backend = 'opencl' if self.backend is 'cython' else self.backend self.use_double = get_config().use_double self.dtype = np.float64 if self.use_double else np.float32 self.dtype_max = np.finfo(self.dtype).max self._last_domain_size = 0.0 # Set the device helper if needed. for pa in particles: if pa.gpu is None: pa.set_device_helper(DeviceHelper(pa, backend=self.backend)) # The cache. 
self.use_cache = cache _cache = [] for d_idx in range(len(particles)): for s_idx in range(len(particles)): _cache.append(GPUNeighborCache(self, d_idx, s_idx, backend=self.backend)) self.cache = _cache self.use_double = get_config().use_double cdef void get_nearest_neighbors(self, size_t d_idx, UIntArray nbrs): if self.use_cache: self.current_cache.get_neighbors_raw(d_idx, nbrs) else: nbrs.c_reset() self.find_nearest_neighbors(d_idx, nbrs) cpdef get_nearest_particles_gpu(self, int src_index, int dst_index): cdef int idx = dst_index*self.narrays + src_index if self.src_index != src_index \ or self.dst_index != dst_index: self.set_context(src_index, dst_index) self.cache[idx].get_neighbors_gpu() cpdef spatially_order_particles(self, int pa_index): """Spatially order particles such that nearby particles have indices nearer each other. This may improve pre-fetching on the CPU. """ indices, callback = self.get_spatially_ordered_indices(pa_index) self.particles[pa_index].gpu.align(indices) callback() def set_use_cache(self, bint use_cache): self.use_cache = use_cache if use_cache: for cache in self.cache: cache.update() cdef void find_neighbor_lengths(self, nbr_lengths): raise NotImplementedError("NNPS :: find_neighbor_lengths called") cdef void find_nearest_neighbors_gpu(self, nbrs, start_indices): raise NotImplementedError("NNPS :: find_nearest_neighbors called") cpdef update(self): cdef int i, num_particles cdef ParticleArray pa cdef DomainManager domain = self.domain # use cell sizes computed by the domain. self.cell_size = domain.manager.cell_size self.hmin = domain.manager.hmin # compute bounds and refresh the data structure self._compute_bounds() self._refresh() # indices on which to bin. 
We bin all local particles for i in range(self.narrays): # bin the particles self._bin(pa_index=i) if self.use_cache: for cache in self.cache: cache.update() def update_domain(self): self.domain.update() cdef _compute_bounds(self): """Compute coordinate bounds for the particles""" cdef list pa_wrappers = self.pa_wrappers cdef NNPSParticleArrayWrapper pa_wrapper cdef double domain_size xmax = -self.dtype_max ymax = -self.dtype_max zmax = -self.dtype_max xmin = self.dtype_max ymin = self.dtype_max zmin = self.dtype_max for pa_wrapper in pa_wrappers: x = pa_wrapper.pa.gpu.get_device_array('x') y = pa_wrapper.pa.gpu.get_device_array('y') z = pa_wrapper.pa.gpu.get_device_array('z') pa_wrapper.pa.gpu.update_minmax_cl(['x', 'y', 'z']) # find min and max of variables xmax = np.maximum(x.maximum, xmax) ymax = np.maximum(y.maximum, ymax) zmax = np.maximum(z.maximum, zmax) xmin = np.minimum(x.minimum, xmin) ymin = np.minimum(y.minimum, ymin) zmin = np.minimum(z.minimum, zmin) # Add a small offset to the limits. lx, ly, lz = xmax - xmin, ymax - ymin, zmax - zmin xmin -= lx*0.01; ymin -= ly*0.01; zmin -= lz*0.01 xmax += lx*0.01; ymax += ly*0.01; zmax += lz*0.01 domain_size = fmax(lx, ly) domain_size = fmax(domain_size, lz) if self._last_domain_size > 1e-16 and \ domain_size > 2.0*self._last_domain_size: msg = ( '*'*70 + '\nWARNING: Domain size has increased by a large amount.\n' + 'Particles are probably diverging, please check your code!\n' + '*'*70 ) print(msg) self._last_domain_size = domain_size # If all of the dimensions have very small extent give it a unit size. 
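The `_compute_bounds` method above takes per-axis minima and maxima of the particle coordinates, widens them by 1% of the extent, and warns when the domain suddenly grows. A standalone NumPy sketch of that logic (`compute_padded_bounds` is a hypothetical helper for illustration, not the PySPH API):

```python
import numpy as np

def compute_padded_bounds(x, y, z, pad=0.01):
    # Per-axis minima/maxima of the particle coordinates, widened by
    # 1% of the extent, mirroring _compute_bounds above.
    xyz = np.vstack([x, y, z])
    lo = xyz.min(axis=1)
    hi = xyz.max(axis=1)
    extent = hi - lo
    lo = lo - pad * extent
    hi = hi + pad * extent
    # Degenerate axes (near-zero extent) are given a unit size,
    # as in the small-extent fallback below.
    small = np.abs(hi - lo) < 1e-12
    lo[small] -= 0.5
    hi[small] += 0.5
    return lo, hi
```

With all particles in the z = 0 plane, for example, the z extent collapses and the helper falls back to a unit-sized slab.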
_eps = 1e-12 if (np.abs(xmax - xmin) < _eps) and (np.abs(ymax - ymin) < _eps) \ and (np.abs(zmax - zmin) < _eps): xmin -= 0.5; xmax += 0.5 ymin -= 0.5; ymax += 0.5 zmin -= 0.5; zmax += 0.5 # store the minimum and maximum of physical coordinates self.xmin = np.asarray([xmin, ymin, zmin]) self.xmax = np.asarray([xmax, ymax, zmax]) cpdef _bin(self, int pa_index): raise NotImplementedError("NNPS :: _bin called") cpdef _refresh(self): raise NotImplementedError("NNPS :: _refresh called") cdef class BruteForceNNPS(GPUNNPS): def __init__(self, int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bint cache=True, bint sort_gids=False, backend='opencl'): GPUNNPS.__init__(self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids, backend) self.radius_scale2 = radius_scale*radius_scale self.src_index = -1 self.dst_index = -1 self.sort_gids = sort_gids self.domain.update() self.update() cdef str norm2 = \ """ #define NORM2(X, Y, Z) ((X)*(X) + (Y)*(Y) + (Z)*(Z)) """ self.preamble = norm2 cpdef set_context(self, int src_index, int dst_index): """Setup the context before asking for neighbors. The `dst_index` represents the particles for whom the neighbors are to be determined from the particle array with index `src_index`. Parameters ---------- src_index: int: the source index of the particle array. dst_index: int: the destination index of the particle array. 
""" GPUNNPS.set_context(self, src_index, dst_index) self.src = self.pa_wrappers[ src_index ] self.dst = self.pa_wrappers[ dst_index ] cdef void find_neighbor_lengths(self, nbr_lengths): # IMPORTANT NOTE: pyopencl uses the length of the first argument # to determine the global work size arguments = \ """%(data_t)s* d_x, %(data_t)s* d_y, %(data_t)s* d_z, %(data_t)s* d_h, %(data_t)s* s_x, %(data_t)s* s_y, %(data_t)s* s_z, %(data_t)s* s_h, unsigned int num_particles, unsigned int* nbr_lengths, %(data_t)s radius_scale2 """ % {"data_t" : ("double" if self.use_double else "float")} src = """ unsigned int j; unsigned int length = 0; %(data_t)s dist; %(data_t)s h_i = radius_scale2*d_h[i]*d_h[i]; %(data_t)s h_j; for(j=0; j 2 else True self.src_tpl = Template( filename=os.path.join( os.path.dirname(os.path.realpath(__file__)), tpl_filename), disable_unicode=disable_unicode, ) self.data_t = "double" if use_double else "float" if c_type is not None: self.data_t = c_type helper_tpl = Template( filename=os.path.join( os.path.dirname(os.path.realpath(__file__)), "gpu_helper_functions.mako"), disable_unicode=disable_unicode ) helper_preamble = helper_tpl.get_def("get_helpers").render( data_t=self.data_t ) preamble = self.src_tpl.get_def("preamble").render( data_t=self.data_t ) self.preamble = "\n".join([helper_preamble, preamble]) self.cache = {} self.backend = backend def _get_code(self, kernel_name, **kwargs): arguments = self.src_tpl.get_def("%s_args" % kernel_name).render( data_t=self.data_t, **kwargs) src = self.src_tpl.get_def("%s_src" % kernel_name).render( data_t=self.data_t, **kwargs) return arguments, src def get_kernel(self, kernel_name, **kwargs): key = kernel_name, tuple(kwargs.items()) wgs = kwargs.get('wgs', None) if key in self.cache: return self.cache[key] else: args, src = self._get_code(kernel_name, **kwargs) if wgs is None: knl = get_elwise_kernel(kernel_name, args, src, preamble=self.preamble) else: knl = get_simple_kernel(kernel_name, args, src, wgs, 
preamble=self.preamble) self.cache[key] = knl return knl pysph-master/pysph/base/kernels.py000066400000000000000000001015431356347341600176240ustar00rootroot00000000000000"""Definition of some SPH kernel functions """ from math import pi, sqrt, exp M_1_PI = 1.0 / pi M_2_SQRTPI = 2.0 / sqrt(pi) def get_correction(kernel, h0): rij = kernel.deltap * h0 return kernel.kernel(rij=rij, h=h0) def get_compiled_kernel(kernel): """Given a kernel, return a high performance wrapper kernel. """ from pysph.base import c_kernels cls = getattr(c_kernels, kernel.__class__.__name__) wrapper = getattr(c_kernels, kernel.__class__.__name__ + 'Wrapper') kern = cls(**kernel.__dict__) return wrapper(kern) ############################################################################### # `CubicSpline` class. ############################################################################### class CubicSpline(object): r"""Cubic Spline Kernel: [Monaghan1992]_ .. math:: W(q) = \ &\sigma_3\left[ 1 - \frac{3}{2}q^2\left( 1 - \frac{q}{2} \right) \right], \ & \textrm{for} \ 0 \leq q \leq 1,\\ = \ &\frac{\sigma_3}{4}(2-q)^3, & \textrm{for} \ 1 < q \leq 2,\\ = \ &0, & \textrm{for}\ q>2, \\ where :math:`\sigma_3` is a dimensional normalizing factor for the cubic spline function given by: .. math:: \sigma_3 = \ & \frac{2}{3h^1}, & \textrm{for dim=1}, \\ \sigma_3 = \ & \frac{10}{7\pi h^2}, \ & \textrm{for dim=2}, \\ \sigma_3 = \ & \frac{1}{\pi h^3}, & \textrm{for dim=3}. \\ References ---------- .. [Monaghan1992] `J. Monaghan, Smoothed Particle Hydrodynamics, "Annual Review of Astronomy and Astrophysics", 30 (1992), pp. 543-574. `_ """ def __init__(self, dim=1): self.radius_scale = 2.0 self.dim = dim if dim == 3: self.fac = M_1_PI elif dim == 2: self.fac = 10 * M_1_PI / 7.0 else: self.fac = 2.0 / 3.0 def get_deltap(self): return 2. / 3 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 tmp2 = 2. - q if (q > 2.0): val = 0.0 elif (q > 1.0): val = 0.25 * tmp2 * tmp2 * tmp2 else: val = 1 - 1.5 * q * q * (1 - 0.5 * q) return val * fac def dwdq(self, rij=1.0, h=1.0): """Gradient of a kernel is given by .. math:: \nabla W = normalization \frac{dW}{dq} \frac{dq}{dx} \nabla W = w_dash \frac{dq}{dx} Here we get `w_dash` by using `dwdq` method """ h1 = 1. / h q = rij * h1 # get the kernel normalizing factor ( sigma ) if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute sigma * dw_dq tmp2 = 2. - q if (rij > 1e-12): if (q > 2.0): val = 0.0 elif (q > 1.0): val = -0.75 * tmp2 * tmp2 else: val = -3.0 * q * (1 - 0.75 * q) else: val = 0.0 return val * fac def gradient(self, xij=[0., 0, 0], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # kernel and gradient evaluated at q tmp2 = 2. - q if (q > 2.0): w = 0.0 dw = 0.0 elif (q > 1.0): w = 0.25 * tmp2 * tmp2 * tmp2 dw = -0.75 * tmp2 * tmp2 else: w = 1 - 1.5 * q * q * (1 - 0.5 * q) dw = -3.0 * q * (1 - 0.75 * q) return -fac * h1 * (dw * q + w * self.dim) class WendlandQuinticC2_1D(object): r"""The following is the WendlandQuintic kernel (Wendland C2) kernel for 1D. .. 
math:: W(q) = \ & \alpha_d (1-q/2)^3 (1.5q +1))), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\ where :math:`d` is the number of dimensions and .. math:: \alpha_d = \frac{5}{8h}, \textrm{for dim=1} """ def __init__(self, dim=1): self.radius_scale = 2.0 self.dim = dim if dim == 1: self.fac = 5.0 / 8.0 elif dim == 2: raise ValueError( "WendlandQuinticC2_1D: Dim %d not supported" % dim) elif dim == 3: raise ValueError( "WendlandQuinticC2_1D: Dim %d not supported" % dim) def get_deltap(self): return 2.0/3 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * (1.5 * q + 1.0) return val * fac def dwdq(self, rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -3.0 * q * tmp * tmp return val * fac def gradient(self, xij=[0., 0, 0], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * (1.5 * q + 1.0) dw = -3.0 * q * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) class WendlandQuintic(object): r"""The following is the WendlandQuintic kernel(C2) kernel for 2D and 3D. .. math:: W(q) = \ & \alpha_d (1-q/2)^4(2q +1))), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\ where :math:`d` is the number of dimensions and .. math:: \alpha_d = \ & \frac{7}{4\pi h^2}, \ & \textrm{for dim=2}, \\ \alpha_d = \ & \frac{21}{16\pi h^3}, \ & \textrm{for dim=3} """ def __init__(self, dim=2): self.radius_scale = 2.0 if dim == 1: raise ValueError("WendlandQuintic: Dim %d not supported" % dim) self.dim = dim if dim == 2: self.fac = 7.0 * M_1_PI / 4.0 elif dim == 3: self.fac = M_1_PI * 21.0 / 16.0 def get_deltap(self): return 0.5 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * (2.0 * q + 1.0) return val * fac def dwdq(self, rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -5.0 * q * tmp * tmp * tmp return val * fac def gradient(self, xij=[0., 0, 0], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. 
if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * (2.0 * q + 1.0) dw = -5.0 * q * tmp * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) class WendlandQuinticC4_1D(object): r"""The following is the WendlandQuintic kernel (Wendland C4) kernel for 1D. .. math:: W(q) = \ & \alpha_d (1-q/2)^5 (2q^2 + 2.5q +1))), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\ where :math:`d` is the number of dimensions and .. math:: \alpha_d = \frac{3}{4h}, \ \textrm{for dim=1} """ def __init__(self, dim=1): self.radius_scale = 2.0 self.dim = dim if dim == 1: self.fac = 0.75 if dim == 2: raise ValueError( "WendlandQuinticC4_1D: Dim %d not supported" % dim) elif dim == 3: raise ValueError( "WendlandQuinticC4_1D: Dim %d not supported" % dim) def get_deltap(self): return 0.55195628 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * tmp * (2 * q * q + 2.5 * q + 1.0) return val * fac def dwdq(self, rij=1.0, h=1.0): h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -3.5 * q * (2 * q + 1) * tmp * tmp * tmp * tmp return val * fac def gradient(self, xij=[0., 0., 0.], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * tmp * (2 * q * q + 2.5 * q + 1.0) dw = -3.5 * q * (2 * q + 1) * tmp * tmp * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) class WendlandQuinticC4(object): r"""The following is the WendlandQuintic kernel (Wendland C4) kernel for 2D and 3D. .. math:: W(q) = \ & \alpha_d (1-q/2)^6(\frac{35}{12} q^2 + 3q +1))), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\ where :math:`d` is the number of dimensions and .. 
math:: \alpha_d = \ & \frac{9}{4\pi h^2}, \ & \textrm{for dim=2}, \\ \alpha_d = \ & \frac{495}{256\pi h^3}, \ & \textrm{for dim=3} """ def __init__(self, dim=2): self.radius_scale = 2.0 self.dim = dim if dim == 1: raise ValueError("WendlandQuinticC4: Dim %d not supported" % dim) if dim == 2: self.fac = 9.0 * M_1_PI / 4.0 elif dim == 3: self.fac = M_1_PI * 495.0 / 256.0 def get_deltap(self): return 0.47114274 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * tmp * tmp * \ ((35.0 / 12.0) * q * q + 3.0 * q + 1.0) return val * fac def dwdq(self, rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = (-14.0 / 3.0) * q * (1 + 2.5 * q) * \ tmp * tmp * tmp * tmp * tmp return val * fac def gradient(self, xij=[0., 0., 0.], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * tmp * tmp * \ ((35.0 / 12.0) * q * q + 3.0 * q + 1.0) dw = (-14.0 / 3.0) * q * (1 + 2.5 * q) * \ tmp * tmp * tmp * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) class WendlandQuinticC6_1D(object): r"""The following is the WendlandQuintic kernel (Wendland C6) kernel for 1D. .. math:: W(q) = \ & \alpha_d (1-q/2)^7 (\frac{21}{8} q^3 + \frac{19}{4} q^2 + 3.5q +1))), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\ where :math:`d` is the number of dimensions and .. math:: \alpha_d = \ \frac{55}{64h}, \textrm{for dim=1} """ def __init__(self, dim=1): self.radius_scale = 2.0 self.dim = dim if dim == 1: self.fac = 55.0 / 64.0 if dim == 2: raise ValueError( "WendlandQuinticC6_1D: Dim %d not supported" % dim) elif dim == 3: raise ValueError( "WendlandQuinticC6_1D: Dim %d not supported" % dim) def get_deltap(self): return 0.47996698 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * tmp * tmp * tmp * \ (2.625 * q * q * q + 4.75 * q * q + 3.5 * q + 1.0) return val * fac def dwdq(self, rij=1.0, h=1.0): h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -0.5 * q * (26.25 * q * q + 27 * q + 9.0) * \ tmp * tmp * tmp * tmp * tmp * tmp return val * fac def gradient(self, xij=[0., 0., 0.], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * tmp * tmp * tmp * \ (2.625 * q * q * q + 4.75 * q * q + 3.5 * q + 1.0) dw = -0.5 * q * (26.25 * q * q + 27 * q + 9.0) * \ tmp * tmp * tmp * tmp * tmp * tmp return -fac * h1 * (dw * q + w * self.dim) class WendlandQuinticC6(object): r"""The following is the WendlandQuintic kernel(C6) kernel for 2D and 3D. .. math:: W(q) = \ & \alpha_d (1-q/2)^8 (4 q^3 + 6.25 q^2 + 4q +1))), \ & \textrm{for} \ 0\leq q \leq 2,\\ = \ & 0, & \textrm{for} \ q>2,\\ where :math:`d` is the number of dimensions and .. 
math:: \alpha_d = \ & \frac{78}{28\pi h^2}, \ & \textrm{for dim=2}, \\ \alpha_d = \ & \frac{1365}{512\pi h^3}, \ & \textrm{for dim=3} """ def __init__(self, dim=2): self.radius_scale = 2.0 self.dim = dim if dim == 1: raise ValueError("WendlandQuinticC6: Dim %d not supported" % dim) if dim == 2: self.fac = 78.0 * M_1_PI / 28.0 elif dim == 3: self.fac = M_1_PI * 1365.0 / 512.0 def get_deltap(self): return 0.4305720757 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1.0 / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 tmp = 1. - 0.5 * q if (q < 2.0): val = tmp * tmp * tmp * tmp * tmp * tmp * tmp * tmp * \ (4.0 * q * q * q + 6.25 * q * q + 4.0 * q + 1.0) return val * fac def dwdq(self, rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): if (rij > 1e-12): val = -5.50 * q * tmp * tmp * tmp * tmp * tmp * \ tmp * tmp * (1.0 + 3.5 * q + 4 * q * q) return val * fac def gradient(self, xij=[0., 0., 0.], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the kernel and gradient at q w = 0.0 dw = 0.0 tmp = 1.0 - 0.5 * q if (q < 2.0): w = tmp * tmp * tmp * tmp * tmp * tmp * tmp * tmp * \ (4.0 * q * q * q + 6.25 * q * q + 4.0 * q + 1.0) dw = -5.50 * q * tmp * tmp * tmp * tmp * tmp * \ tmp * tmp * (1.0 + 3.5 * q + 4 * q * q) return -fac * h1 * (dw * q + w * self.dim) class Gaussian(object): r"""Gaussian Kernel: [Liu2010]_ .. math:: W(q) = \ &\sigma_g e^{-q^2}, \ & \textrm{for} \ 0\leq q \leq 3,\\ = \ & 0, & \textrm{for} \ q>3,\\ where :math:`\sigma_g` is a dimensional normalizing factor for the gaussian function given by: .. math:: \sigma_g = \ & \frac{1}{\pi^{1/2} h}, \ & \textrm{for dim=1}, \\ \sigma_g = \ & \frac{1}{\pi h^2}, \ & \textrm{for dim=2}, \\ \sigma_g = \ & \frac{1}{\pi^{3/2} h^3}, & \textrm{for dim=3}. \\ References ---------- .. [Liu2010] `M. Liu, & G. Liu, Smoothed particle hydrodynamics (SPH): an overview and recent developments, "Archives of computational methods in engineering", 17.1 (2010), pp. 25-76. `_ """ def __init__(self, dim=2): self.radius_scale = 3.0 self.dim = dim self.fac = 0.5 * M_2_SQRTPI if dim > 1: self.fac *= 0.5 * M_2_SQRTPI if dim > 2: self.fac *= 0.5 * M_2_SQRTPI def get_deltap(self): # The inflection point is at q=1/sqrt(2) # the deltap values for some standard kernels # have been tabulated in sec 3.2 of # http://cfd.mace.manchester.ac.uk/sph/SPH_PhDs/2008/crespo_thesis.pdf return 0.70710678118654746 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 if (q < 3.0): val = exp(-q * q) * fac return val def dwdq(self, rij=1.0, h=1.0): h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 if (q < 3.0): if (rij > 1e-12): val = -2.0 * q * exp(-q * q) return val * fac def gradient(self, xij=[0., 0., 0.], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0., 0.], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # kernel and gradient evaluated at q w = 0.0 dw = 0.0 if (q < 3.0): w = exp(-q * q) dw = -2.0 * q * w return -fac * h1 * (dw * q + w * self.dim) class SuperGaussian(object): r"""Super Gaussian Kernel: [Monaghan1992]_ .. math:: W(q) = \ &\frac{1}{h^{d}\pi^{d/2}} e^{-q^2} (d/2 + 1 - q^2), \ & \textrm{for} \ 0\leq q \leq 3,\\ = \ & 0, & \textrm{for} \ q>3,\\ where :math:`d` is the number of dimensions. """ def __init__(self, dim=2): self.radius_scale = 3.0 self.dim = dim self.fac = 0.5 * M_2_SQRTPI if dim > 1: self.fac *= 0.5 * M_2_SQRTPI if dim > 2: self.fac *= 0.5 * M_2_SQRTPI def get_deltap(self): # Found inflection point using sympy. if self.dim == 1: return 0.584540507426389 elif self.dim == 2: return 0.6021141014644256 else: return 0.615369528365158 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 val = 0.0 if (q < 3.0): q2 = q * q val = exp(-q2) * (1.0 + self.dim * 0.5 - q2) * fac return val def dwdq(self, rij=1.0, h=1.0): h1 = 1. 
/ h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 # compute the gradient val = 0.0 if (q < 3.0): if (rij > 1e-12): q2 = q * q val = q * (2.0 * q2 - self.dim - 4) * exp(-q2) return val * fac def gradient(self, xij=[0., 0., 0.], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0., 0.], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 d = self.dim # get the kernel normalizing factor if d == 1: fac = self.fac * h1 elif d == 2: fac = self.fac * h1 * h1 elif d == 3: fac = self.fac * h1 * h1 * h1 # kernel and gradient evaluated at q val = 0.0 if (q < 3.0): q2 = q * q val = (-d * d * 0.5 + 2.0 * d * q2 - d - 2.0 * q2 * q2 + 4 * q2) * exp(-q2) return -fac * h1 * val class QuinticSpline(object): r"""Quintic Spline SPH kernel: [Liu2010]_ .. math:: W(q) = \ &\sigma_5\left[ (3-q)^5 - 6(2-q)^5 + 15(1-q)^5 \right], \ & \textrm{for} \ 0\leq q \leq 1,\\ = \ &\sigma_5\left[ (3-q)^5 - 6(2-q)^5 \right], & \textrm{for} \ 1 < q \leq 2,\\ = \ &\sigma_5 \ (3-q)^5 , & \textrm{for} \ 2 < q \leq 3,\\ = \ & 0, & \textrm{for} \ q>3,\\ where :math:`\sigma_5` is a dimensional normalizing factor for the quintic spline function given by: .. math:: \sigma_5 = \ & \frac{1}{120 h^1}, & \textrm{for dim=1}, \\ \sigma_5 = \ & \frac{7}{478\pi h^2}, \ & \textrm{for dim=2}, \\ \sigma_5 = \ & \frac{3}{359\pi h^3}, & \textrm{for dim=3}. 
\\ """ def __init__(self, dim=2): self.radius_scale = 3.0 self.dim = dim if dim == 1: self.fac = 1.0 / 120.0 elif dim == 2: self.fac = M_1_PI * 7.0 / 478.0 elif dim == 3: self.fac = M_1_PI * 3.0 / 359.0 def get_deltap(self): # The inflection points for the polynomial are obtained as # http://www.wolframalpha.com/input/?i=%28%283-x%29%5E5+-+6*%282-x%29%5E5+%2B+15*%281-x%29%5E5%29%27%27 # the only permissible value is taken return 0.759298480738450 def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 tmp3 = 3. - q tmp2 = 2. - q tmp1 = 1. - q if (q > 3.0): val = 0.0 elif (q > 2.0): val = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 elif (q > 1.0): val = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 val -= 6.0 * tmp2 * tmp2 * tmp2 * tmp2 * tmp2 else: val = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 val -= 6.0 * tmp2 * tmp2 * tmp2 * tmp2 * tmp2 val += 15. * tmp1 * tmp1 * tmp1 * tmp1 * tmp1 return val * fac def dwdq(self, rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 tmp3 = 3. - q tmp2 = 2. - q tmp1 = 1. - q # compute the gradient if (rij > 1e-12): if (q > 3.0): val = 0.0 elif (q > 2.0): val = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 elif (q > 1.0): val = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 val += 30.0 * tmp2 * tmp2 * tmp2 * tmp2 else: val = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 val += 30.0 * tmp2 * tmp2 * tmp2 * tmp2 val -= 75.0 * tmp1 * tmp1 * tmp1 * tmp1 else: val = 0.0 return val * fac def gradient(self, xij=[0., 0., 0.], rij=1.0, h=1.0, grad=[0, 0, 0]): h1 = 1. / h # compute the gradient. 
if (rij > 1e-12): wdash = self.dwdq(rij, h) tmp = wdash * h1 / rij else: tmp = 0.0 grad[0] = tmp * xij[0] grad[1] = tmp * xij[1] grad[2] = tmp * xij[2] def gradient_h(self, xij=[0., 0, 0], rij=1.0, h=1.0): h1 = 1. / h q = rij * h1 # get the kernel normalizing factor if self.dim == 1: fac = self.fac * h1 elif self.dim == 2: fac = self.fac * h1 * h1 elif self.dim == 3: fac = self.fac * h1 * h1 * h1 tmp3 = 3. - q tmp2 = 2. - q tmp1 = 1. - q # compute the kernel & gradient at q if (q > 3.0): w = 0.0 dw = 0.0 elif (q > 2.0): w = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 dw = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 elif (q > 1.0): w = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 w -= 6.0 * tmp2 * tmp2 * tmp2 * tmp2 * tmp2 dw = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 dw += 30.0 * tmp2 * tmp2 * tmp2 * tmp2 else: w = tmp3 * tmp3 * tmp3 * tmp3 * tmp3 w -= 6.0 * tmp2 * tmp2 * tmp2 * tmp2 * tmp2 w += 15. * tmp1 * tmp1 * tmp1 * tmp1 * tmp1 dw = -5.0 * tmp3 * tmp3 * tmp3 * tmp3 dw += 30.0 * tmp2 * tmp2 * tmp2 * tmp2 dw -= 75.0 * tmp1 * tmp1 * tmp1 * tmp1 return -fac * h1 * (dw * q + w * self.dim) pysph-master/pysph/base/linalg3.pxd000066400000000000000000000010411356347341600176450ustar00rootroot00000000000000"""Routines for eigen decomposition of symmetric 3x3 matrices. 
""" cdef double det(double a[3][3]) nogil cdef void get_eigenvalues(double a[3][3], double *result) nogil cdef void eigen_decomposition(double A[3][3], double V[3][3], double *d) nogil cdef void transform(double A[3][3], double P[3][3], double res[3][3]) nogil cdef void transform_diag(double *A, double P[3][3], double res[3][3]) nogil cdef void transform_diag_inv(double *A, double P[3][3], double res[3][3]) nogil cdef void get_eigenvalvec(double A[3][3], double *R, double *e) pysph-master/pysph/base/linalg3.pyx000066400000000000000000000355261356347341600177110ustar00rootroot00000000000000#cython: boundscheck=False # # Eigen decomposition code for symmetric 3x3 matrices, some code taken # from the public domain Java Matrix library JAMA from libc.math cimport sqrt, cos, acos, sin, atan2, M_PI from libc.string cimport memcpy from numpy.linalg import eigh cimport numpy import numpy cdef extern: double fabs(double) nogil # this is cython substitute for const values cdef enum: n=3 cdef double EPS = numpy.finfo(float).eps cdef inline double MAX(double a, double b) nogil: return a if a>b else b cdef inline double SQR(double a) nogil: return a*a cdef inline double hypot2(double x, double y) nogil: return sqrt(x*x+y*y) cdef double det(double a[3][3]) nogil: '''Determinant of symmetrix matrix ''' return (a[0][0]*a[1][1]*a[2][2] + 2*a[1][2]*a[0][2]*a[0][1] - a[0][0]*a[1][2]*a[1][2] - a[1][1]*a[0][2]*a[0][2] - a[2][2]*a[0][1]*a[0][1]) cpdef double py_det(double[:,:] m): '''Determinant of symmetrix matrix ''' return det(&m[0][0]) # d,s are diagonal and off-diagonal elements of a symmetric 3x3 matrix # d:11,22,33; s:23,13,12 # d:00,11,22; s:12,02,01 cdef void get_eigenvalues(double a[3][3], double *result) nogil: '''Compute the eigenvalues of symmetric matrix a and return in result array. 
    '''
    cdef double m = (a[0][0] + a[1][1] + a[2][2])/3.0
    cdef double K[3][3]
    memcpy(&K[0][0], &a[0][0], sizeof(double)*9)
    K[0][0], K[1][1], K[2][2] = a[0][0] - m, a[1][1] - m, a[2][2] - m
    cdef double q = det(K)*0.5
    cdef double p = 0
    p += K[0][0]*K[0][0] + 2*K[1][2]*K[1][2]
    p += K[1][1]*K[1][1] + 2*K[0][2]*K[0][2]
    p += K[2][2]*K[2][2] + 2*K[0][1]*K[0][1]
    p /= 6.0
    cdef double pi = M_PI
    cdef double phi = 0.5*pi
    cdef double tmp = p**3 - q**2
    if q == 0.0 and p == 0.0:
        # singular zero matrix
        result[0] = result[1] = result[2] = m
        return
    elif tmp < 0.0 or fabs(tmp) < EPS:
        # eliminate roundoff error
        phi = 0
    else:
        phi = atan2(sqrt(tmp), q)/3.0
    if phi == 0 and q < 0:
        phi = pi

    result[0] = m + 2*sqrt(p)*cos(phi)
    result[1] = m - sqrt(p)*(cos(phi) + sqrt(3)*sin(phi))
    result[2] = m - sqrt(p)*(cos(phi) - sqrt(3)*sin(phi))

cpdef py_get_eigenvalues(double[:,:] m):
    '''Return the eigenvalues of a symmetric matrix.
    '''
    res = numpy.empty(3, float)
    cdef double[:] _res = res
    get_eigenvalues(&m[0][0], &_res[0])
    return res

##############################################################################
cdef void get_eigenvector_np(double A[n][n], double r, double *res):
    '''Eigenvector of a symmetric matrix for a given eigenvalue `r`,
    using numpy.
    '''
    cdef numpy.ndarray[ndim=2, dtype=numpy.float64_t] mat=numpy.empty((3,3)), evec
    cdef numpy.ndarray[ndim=1, dtype=numpy.float64_t] evals
    cdef double[:,:] _mat = mat
    cdef int i, j
    for i in range(3):
        for j in range(3):
            _mat[i,j] = A[i][j]
    evals, evec = eigh(mat)
    cdef int idx = 0
    cdef double di = fabs(evals[0]-r)
    if fabs(evals[1]-r) < di:
        idx = 1
        # keep di in sync so the comparison below picks the closest eigenvalue
        di = fabs(evals[1]-r)
    if fabs(evals[2]-r) < di:
        idx = 2
    for i in range(3):
        res[i] = evec[idx,i]

cdef void get_eigenvector(double A[n][n], double r, double *res):
    '''Get the eigenvector of a symmetric 3x3 matrix for the given
    eigenvalue `r`.

    Uses a fast method to get the eigenvector, with a fallback to numpy.
    '''
    res[0] = A[0][1]*A[1][2] - A[0][2]*(A[1][1]-r)    # a_01 * a_12 - a_02 * (a_11 - r)
    res[1] = A[0][1]*A[0][2] - A[1][2]*(A[0][0]-r)    # a_01 * a_02 - a_12 * (a_00 - r)
    res[2] = (A[0][0]-r)*(A[1][1]-r) - A[0][1]*A[0][1]  # (a_00 - r) * (a_11 - r) - a_01^2
    cdef double norm = sqrt(SQR(res[0]) + SQR(res[1]) + SQR(res[2]))
    if norm*1e7 <= fabs(r):
        # it is numerically zero, let numpy get the answer
        get_eigenvector_np(A, r, res)
    else:
        res[0] /= norm
        res[1] /= norm
        res[2] /= norm

cpdef py_get_eigenvector(double[:,:] A, double r):
    '''Get the eigenvector of a symmetric matrix for the given
    eigenvalue `r`.
    '''
    d = numpy.empty(3, dtype=float)
    cdef double[:] _d = d
    get_eigenvector(&A[0,0], r, &_d[0])
    return d

cdef void get_eigenvec_from_val(double A[n][n], double *R, double *e):
    cdef int i, j
    cdef double res[3]
    for i in range(3):
        get_eigenvector(A, e[i], &res[0])
        for j in range(3):
            R[j*3+i] = res[j]

cdef bint _nearly_diagonal(double A[n][n]) nogil:
    return (
        (SQR(A[0][0]) + SQR(A[1][1]) + SQR(A[2][2])) >
        1e8*(SQR(A[0][1]) + SQR(A[0][2]) + SQR(A[1][2]))
    )

cdef void get_eigenvalvec(double A[n][n], double *R, double *e):
    '''Get the eigenvalues and eigenvectors of a symmetric 3x3 matrix.

    A is the input 3x3 matrix.
    R is the output eigenvector matrix.
    e are the output eigenvalues.
    '''
    cdef bint use_iter = False
    cdef int i, j
    if A[0][1] == A[0][2] == A[1][2] == 0.0:
        # diagonal matrix.
        e[0] = A[0][0]
        e[1] = A[1][1]
        e[2] = A[2][2]
        for i in range(3):
            for j in range(3):
                R[i*3+j] = (i==j)
        return

    # FIXME: implement fast version
    get_eigenvalues(A, e)
    if e[0] != e[1] and e[1] != e[2] and e[0] != e[2]:
        # no repeated eigenvalues
        use_iter = True
    if _nearly_diagonal(A):
        # nearly diagonal matrix
        use_iter = True

    if not use_iter:
        get_eigenvec_from_val(A, R, e)
    else:
        eigen_decomposition( A, &R[0], &e[0] )

def py_get_eigenvalvec(double[:,:] A):
    v = numpy.empty((3,3), dtype=float)
    d = numpy.empty(3, dtype=float)
    cdef double[:,:] _v = v
    cdef double[:] _d = d
    get_eigenvalvec(&A[0,0], &_v[0,0], &_d[0])
    return d, v

##############################################################################
cdef void transform(double A[3][3], double P[3][3], double res[3][3]) nogil:
    '''Compute the transformation P.T*A*P and add it into res.
    '''
    cdef int i, j, k, l
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    res[i][j] += P[k][i]*A[k][l]*P[l][j]  # P.T*A*P

cdef void transform_diag(double *A, double P[3][3], double res[3][3]) nogil:
    '''Compute the transformation P.T*A*P and add it into res.

    A is diagonal and contains the diagonal entries alone.
    '''
    cdef int i, j, k
    for i in range(3):
        for j in range(3):
            for k in range(3):
                res[i][j] += P[k][i]*A[k]*P[k][j]  # P.T*A*P

cdef void transform_diag_inv(double *A, double P[3][3], double res[3][3]) nogil:
    '''Compute the transformation P*A*P.T and set it into res.

    A is diagonal and contains just the diagonal entries.
    '''
    cdef int i, j, k
    for i in range(3):
        for j in range(3):
            res[i][j] = 0.0
    for i in range(3):
        for j in range(3):
            for k in range(3):
                res[i][j] += P[i][k]*A[k]*P[j][k]  # P*A*P.T

def py_transform(double[:,:] A, double[:,:] P):
    res = numpy.zeros((3,3), dtype=float)
    cdef double[:,:] _res = res
    transform( &A[0][0], &P[0][0], &_res[0][0] )
    return res

def py_transform_diag(double[:] A, double[:,:] P):
    res = numpy.zeros((3,3), dtype=float)
    cdef double[:,:] _res = res
    transform_diag( &A[0], &P[0][0], &_res[0][0] )
    return res

def py_transform_diag_inv(double[:] A, double[:,:] P):
    res = numpy.empty((3,3), dtype=float)
    cdef double[:,:] _res = res
    transform_diag_inv( &A[0], &P[0][0], &_res[0][0] )
    return res

cdef double * tred2(double V[n][n], double *d, double *e) nogil:
    '''Symmetric Householder reduction to tridiagonal form.

    This is derived from the Algol procedures tred2 by Bowdler, Martin,
    Reinsch, and Wilkinson, Handbook for Auto. Comp., Vol.ii-Linear
    Algebra, and the corresponding Fortran subroutine in EISPACK.

    d contains the diagonal elements of the tridiagonal matrix.

    e contains the subdiagonal elements of the tridiagonal matrix in its
    last n-1 positions.  e[0] is set to zero.
    '''
    cdef:
        double scale, f, g, h, hh
        int i, j, k

    for j in range(n):
        d[j] = V[n-1][j]

    # Householder reduction to tridiagonal form.
    for i in range(n-1, 0, -1):
        # Scale to avoid under/overflow.
        scale = 0.0
        h = 0.0
        for k in range(i):
            scale += fabs(d[k])
        if (scale == 0.0):
            e[i] = d[i-1]
            for j in range(i):
                d[j] = V[i-1][j]
                V[i][j] = 0.0
                V[j][i] = 0.0
        else:
            # Generate Householder vector.
            for k in range(i):
                d[k] /= scale
                h += d[k] * d[k]
            f = d[i-1]
            g = sqrt(h)
            if f > 0:
                g = -g
            e[i] = scale * g
            h = h - f * g
            d[i-1] = f - g
            for j in range(i):
                e[j] = 0.0

            # Apply similarity transformation to remaining columns.
for j in range(i): f = d[j] V[j][i] = f g = e[j] + V[j][j] * f for k in range(j+1, i): g += V[k][j] * d[k] e[k] += V[k][j] * f e[j] = g f = 0.0 for j in range(i): e[j] /= h f += e[j] * d[j] hh = f / (h + h) for j in range(i): e[j] -= hh * d[j] for j in range(i): f = d[j] g = e[j] for k in range(j,i): V[k][j] -= (f * e[k] + g * d[k]) d[j] = V[i-1][j]; V[i][j] = 0.0; d[i] = h # Accumulate transformations. for i in range(n-1): V[n-1][i] = V[i][i] V[i][i] = 1.0 h = d[i+1] if h != 0.0: for k in range(i+1): d[k] = V[k][i+1] / h for j in range(i+1): g = 0.0 for k in range(i+1): g += V[k][i+1] * V[k][j] for k in range(i+1): V[k][j] -= g * d[k] for k in range(i+1): V[k][i+1] = 0.0 for j in range(n): d[j] = V[n-1][j] V[n-1][j] = 0.0 V[n-1][n-1] = 1.0 e[0] = 0.0 return d cdef void tql2(double V[n][n], double *d, double *e) nogil: '''Symmetric tridiagonal QL algo for eigendecomposition This is derived from the Algol procedures tql2, by Bowdler, Martin, Reinsch, and Wilkinson, Handbook for Auto. Comp., Vol.ii-Linear Algebra, and the corresponding Fortran subroutine in EISPACK. d contains the eigenvalues in ascending order. if an error exit is made, the eigenvalues are correct but unordered for indices 1,2,...,ierr-1. e has been destroyed. ''' cdef: double f, tst1, eps, g, h, p, r, dl1, c, c2, c3, el1, s, s2 int i, j, k, l, m, iter bint cont for i in range(1, n): e[i-1] = e[i] e[n-1] = 0.0 f = 0.0 tst1 = 0.0 eps = 2.0**-52.0 for l in range(n): # Find small subdiagonal element tst1 = MAX(tst1,fabs(d[l]) + fabs(e[l])) m = l while m < n: if fabs(e[m]) <= eps*tst1: break m += 1 # If m == l, d[l] is an eigenvalue, # otherwise, iterate. if m > l: iter = 0 cont = True while cont: iter = iter + 1 # (Could check iteration count here.) # Compute implicit shift g = d[l] p = (d[l+1] - g) / (2.0 * e[l]) r = hypot2(p,1.0) if p < 0: r = -r d[l] = e[l] / (p + r) d[l+1] = e[l] * (p + r) dl1 = d[l+1] h = g - d[l] for i in range(l+2,n): d[i] -= h f += h # Implicit QL transformation. 
p = d[m] c = 1.0 c2 = c c3 = c el1 = e[l+1] s = 0.0 s2 = 0.0 for i in range(m-1,l-1,-1): c3 = c2 c2 = c s2 = s g = c * e[i] h = c * p r = hypot2(p,e[i]) e[i+1] = s * r s = e[i] / r c = p / r p = c * d[i] - s * g d[i+1] = h + s * (c * g + s * d[i]) # Accumulate transformation for k in range(n): h = V[k][i+1] V[k][i+1] = s * V[k][i] + c * h V[k][i] = c * V[k][i] - s * h p = -s * s2 * c3 * el1 * e[l] / dl1 e[l] = s * p d[l] = c * p # Check for convergence cont = bool(fabs(e[l]) > eps*tst1) d[l] += f e[l] = 0.0 # Sort eigenvalues and corresponding vectors. for i in range(n-1): k = i p = d[i] for j in range(i+1,n): if d[j] < p: k = j p = d[j] if k != i: d[k] = d[i] d[i] = p for j in range(n): p = V[j][i] V[j][i] = V[j][k] V[j][k] = p cdef void zero_matrix_case(double V[n][n], double *d) nogil: cdef int i, j for i in range(3): d[i] = 0.0 for j in range(3): V[i][j] = (i==j) cdef void eigen_decomposition(double A[n][n], double V[n][n], double *d) nogil: '''Get eigenvalues and eigenvectors of matrix A. V is output eigenvectors and d are the eigenvalues. ''' cdef double e[n] cdef int i, j # Scale the matrix, as if the matrix is tiny, floating point errors # creep up leading to zero division errors in tql2. This is # specifically tested for with a tiny matrix. 
    cdef double s = 0.0
    for i in range(n):
        for j in range(n):
            V[i][j] = A[i][j]
            s += fabs(V[i][j])
    if s == 0:
        zero_matrix_case(V, d)
    else:
        for i in range(n):
            for j in range(n):
                V[i][j] /= s
        d = tred2(V, d, &e[0])
        tql2(V, d, &e[0])
        for i in range(n):
            d[i] *= s

def py_eigen_decompose_eispack(double[:,:] a):
    v = numpy.empty((3,3), dtype=float)
    d = numpy.empty(3, dtype=float)
    cdef double[:,:] _v = v
    cdef double[:] _d = d
    eigen_decomposition( &a[0,0], &_v[0,0], &_d[0] )
    return d, v
pysph-master/pysph/base/linked_list_nnps.pxd
from libcpp.map cimport map
from libcpp.vector cimport vector

from nnps_base cimport *

# NNPS using the linked list approach
cdef class LinkedListNNPS(NNPS):

    ############################################################################
    # Data Attributes
    ############################################################################
    cdef public IntArray ncells_per_dim     # number of cells in each direction
    cdef public int ncells_tot              # total number of cells
    cdef public bint fixed_h                # Constant cell sizes
    cdef public list heads                  # Head arrays for the cells
    cdef public list nexts                  # Next arrays for the particles
    cdef NNPSParticleArrayWrapper src, dst  # Current source and destination.
    cdef UIntArray next, head               # Current next and head arrays.

    cpdef long _count_occupied_cells(self, long n_cells) except -1
    cpdef long _get_number_of_cells(self) except -1
    cdef long _get_flattened_cell_index(self, cPoint pnt, double cell_size)
    cdef long _get_valid_cell_index(self, int cid_x, int cid_y, int cid_z,
                                    int* ncells_per_dim, int dim,
                                    int n_cells) nogil
    cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil
pysph-master/pysph/base/linked_list_nnps.pyx
#cython: embedsignature=True

# Library imports.
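The `heads`/`nexts` declarations above are a classic linked-list grid: `head[cell]` stores one particle index for each cell, and `next[i]` chains each particle to the previous occupant of its cell. A minimal pure-Python sketch of the same idea (illustrative names only, not the PySPH implementation; a 2D grid with non-negative coordinates is assumed):

```python
import math

def bin_points(points, cell_size, ncx, ncy):
    """Bin 2D points into a flattened cell grid using head/next chains."""
    head = [-1] * (ncx * ncy)   # -1 plays the role of UINT_MAX (empty cell)
    nxt = [-1] * len(points)
    for i, (x, y) in enumerate(points):
        ix = int(math.floor(x / cell_size))
        iy = int(math.floor(y / cell_size))
        c = ix + ncx * iy       # row-major flattened cell index
        nxt[i] = head[c]        # chain to the previous occupant of the cell
        head[c] = i             # particle i becomes the new head
    return head, nxt

def cell_particles(head, nxt, c):
    """Walk the chain of cell c, newest insertion first."""
    out, j = [], head[c]
    while j != -1:
        out.append(j)
        j = nxt[j]
    return out
```

Insertion is O(1) per particle, and a cell's members are recovered by walking the chain until the sentinel is hit, just as `find_nearest_neighbors` does with `UINT_MAX`.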
import numpy as np cimport numpy as np # Cython imports from cython.operator cimport dereference as deref, preincrement as inc from cython.parallel import parallel, prange, threadid # malloc and friends from libc.stdlib cimport malloc, free from libcpp.map cimport map from libcpp.pair cimport pair from libcpp.vector cimport vector # cpython from cpython.dict cimport PyDict_Clear, PyDict_Contains, PyDict_GetItem from cpython.list cimport PyList_GetItem, PyList_SetItem, PyList_GET_ITEM # Cython for compiler directives cimport cython ############################################################################# cdef class LinkedListNNPS(NNPS): """Nearest neighbor query class using the linked list method. """ def __init__(self, int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bint fixed_h=False, bint cache=False, bint sort_gids=False): """Constructor for NNPS Parameters ---------- dim : int Number of dimension. particles : list The list of particle arrays we are working on radius_scale : double, default (2) Optional kernel radius scale. Defaults to 2 ghost_layers : int Optional number of layers to share in parallel domain : DomainManager, default (None) Optional limits for the domain fixed_h : bint Optional flag to use constant cell sizes throughout. cache : bint Flag to set if we want to cache neighbor calls. This costs storage but speeds up neighbor calculations. sort_gids : bint, default (False) Flag to sort neighbors using gids (if they are available). This is useful when comparing parallel results with those from a serial run. 
""" # initialize the base class NNPS.__init__( self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids ) # initialize the head and next for each particle array self.heads = [UIntArray() for i in range(self.narrays)] self.nexts = [UIntArray() for i in range(self.narrays)] # flag for constant smoothing lengths self.fixed_h = fixed_h # defaults self.ncells_per_dim = IntArray(3) self.n_cells = 0 self.sort_gids = sort_gids # compute the intial box sort for all local particles. The # DomainManager.setup_domain method is called to compute the # cell size. self.domain.update() self.update() #### Public protocol ################################################ cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil: """Low level, high-performance non-gil method to find neighbors. This requires that `set_context()` be called beforehand. This method does not reset the neighbors array before it appends the neighbors to it. """ # Number of cells cdef int n_cells = self.n_cells cdef int dim = self.dim # cell shifts cdef int* shifts = self.cell_shifts.data # Source data arrays cdef double* s_x = self.src.x.data cdef double* s_y = self.src.y.data cdef double* s_z = self.src.z.data cdef double* s_h = self.src.h.data cdef unsigned int* s_gid = self.src.gid.data # Destination particle arrays cdef double* d_x = self.dst.x.data cdef double* d_y = self.dst.y.data cdef double* d_z = self.dst.z.data cdef double* d_h = self.dst.h.data cdef unsigned int* d_gid = self.dst.gid.data cdef unsigned int* head = self.head.data cdef unsigned int* next = self.next.data # minimum values for the particle distribution cdef double* xmin = self.xmin.data # cell size and radius cdef double radius_scale = self.radius_scale cdef double cell_size = self.cell_size # locals cdef size_t indexj cdef double xij2 cdef double hi2, hj2 cdef int ierr, nnbrs cdef unsigned int _next cdef int ix, iy, iz # this is the physical position of the particle that will be # used in pairwise 
searching cdef double x = d_x[d_idx] cdef double y = d_y[d_idx] cdef double z = d_z[d_idx] # get the un-flattened index for the destination particle with # respect to the minimum cdef int _cid_x, _cid_y, _cid_z find_cell_id_raw( x - xmin[0], y - xmin[1], z - xmin[2], cell_size, &_cid_x, &_cid_y, &_cid_z ) cdef int cid_x, cid_y, cid_z cdef long cell_index, orig_length cid_x = cid_y = cid_z = 0 # gather search radius hi2 = radius_scale * d_h[d_idx] hi2 *= hi2 orig_length = nbrs.length # Begin search through neighboring cells for ix in range(3): for iy in range(3): for iz in range(3): cid_x = _cid_x + shifts[ix] cid_y = _cid_y + shifts[iy] cid_z = _cid_z + shifts[iz] # Only consider valid cell indices cell_index = self._get_valid_cell_index( cid_x, cid_y, cid_z, self.ncells_per_dim.data, dim, n_cells ) if cell_index > -1: # get the first particle and begin iteration _next = head[ cell_index ] while( _next != UINT_MAX ): hj2 = radius_scale * s_h[_next] hj2 *= hj2 xij2 = norm2( s_x[_next]-x, s_y[_next]-y, s_z[_next]-z ) # select neighbor if ( (xij2 < hi2) or (xij2 < hj2) ): nbrs.c_append(_next) # get the 'next' particle in this cell _next = next[_next] if self.sort_gids: self._sort_neighbors( &nbrs.data[orig_length], nbrs.length - orig_length, s_gid ) cpdef get_spatially_ordered_indices(self, int pa_index, LongArray indices): cdef UIntArray head = self.heads[pa_index] cdef UIntArray next = self.nexts[pa_index] cdef ZOLTAN_ID_TYPE _next indices.reset() cdef long i for i in range(self.n_cells): _next = head.data[i] while (_next != UINT_MAX): indices.append(_next) _next = next.data[_next] cpdef set_context(self, int src_index, int dst_index): """Setup the context before asking for neighbors. The `dst_index` represents the particles for whom the neighbors are to be determined from the particle array with index `src_index`. Parameters ---------- src_index: int: the source index of the particle array. dst_index: int: the destination index of the particle array. 
""" NNPS.set_context(self, src_index, dst_index) # Set the current context. self.src = self.pa_wrappers[ src_index ] self.dst = self.pa_wrappers[ dst_index ] # next and head linked lists self.next = self.nexts[ src_index ] self.head = self.heads[ src_index ] #### Private protocol ################################################ cpdef _bin(self, int pa_index, UIntArray indices): """Bin a given particle array with indices. Parameters ---------- pa_index : int Index of the particle array corresponding to the particles list indices : UIntArray Subset of particles to bin """ cdef NNPSParticleArrayWrapper pa_wrapper = self.pa_wrappers[ pa_index ] cdef DoubleArray x = pa_wrapper.x cdef DoubleArray y = pa_wrapper.y cdef DoubleArray z = pa_wrapper.z cdef DoubleArray xmin = self.xmin cdef DoubleArray xmax = self.xmax cdef IntArray ncells_per_dim = self.ncells_per_dim cdef int dim = self.dim # the head and next arrays for this particle array cdef UIntArray head = self.heads[ pa_index ] cdef UIntArray next = self.nexts[ pa_index ] cdef double cell_size = self.cell_size cdef UIntArray lindices, gindices cdef size_t num_particles, indexi, i # point and flattened index cdef cPoint pnt = cPoint_new(0, 0, 0) cdef int _cid # now bin the particles num_particles = indices.length for indexi in range(num_particles): i = indices.data[indexi] # the flattened index is considered relative to the # minimum along each co-ordinate direction pnt.x = x.data[i] - xmin.data[0] pnt.y = y.data[i] - xmin.data[1] pnt.z = z.data[i] - xmin.data[2] # flattened cell index _cid = self._get_flattened_cell_index(pnt, cell_size) # insert this particle next.data[ i ] = head.data[ _cid ] head.data[_cid] = i cdef long _get_flattened_cell_index(self, cPoint pnt, double cell_size): return flatten( find_cell_id(pnt, cell_size), self.ncells_per_dim, self.dim ) cpdef long _get_number_of_cells(self) except -1: cdef double cell_size = self.cell_size cdef double cell_size1 = 1./cell_size cdef DoubleArray xmin = 
self.xmin cdef DoubleArray xmax = self.xmax cdef int ncx, ncy, ncz cdef long _ncells cdef int dim = self.dim # calculate the number of cells. ncx = ceil( cell_size1*(xmax.data[0] - xmin.data[0]) ) ncy = ceil( cell_size1*(xmax.data[1] - xmin.data[1]) ) ncz = ceil( cell_size1*(xmax.data[2] - xmin.data[2]) ) if ncx < 0 or ncy < 0 or ncz < 0: msg = 'LinkedListNNPS: Number of cells is negative '\ '(%s, %s, %s).'%(ncx, ncy, ncz) raise RuntimeError(msg) return -1 ncx = 1 if ncx == 0 else ncx ncy = 1 if ncy == 0 else ncy ncz = 1 if ncz == 0 else ncz # number of cells along each coordinate direction self.ncells_per_dim.data[0] = ncx self.ncells_per_dim.data[1] = ncy self.ncells_per_dim.data[2] = ncz # total number of cells _ncells = ncx if dim == 2: _ncells = ncx * ncy if dim == 3: _ncells = ncx * ncy * ncz return _ncells @cython.boundscheck(False) @cython.wraparound(False) cdef long _get_valid_cell_index(self, int cid_x, int cid_y, int cid_z, int* ncells_per_dim, int dim, int n_cells) nogil: return get_valid_cell_index( cid_x, cid_y, cid_z, ncells_per_dim, dim, n_cells ) cpdef long _count_occupied_cells(self, long n_cells) except -1: if n_cells < 0 or n_cells > 2**28: # unsigned ints are 4 bytes, which means 2**28 cells requires 1GB. 
msg = "ERROR: LinkedListNNPS requires too many cells (%s)."%n_cells raise RuntimeError(msg) return -1 return n_cells cpdef _refresh(self): """Refresh the head and next arrays locally""" cdef DomainManager domain = self.domain cdef int narrays = self.narrays cdef list pa_wrappers = self.pa_wrappers cdef NNPSParticleArrayWrapper pa_wrapper cdef list heads = self.heads cdef list nexts = self.nexts # locals cdef int i, j, np cdef long _ncells cdef UIntArray head, next _ncells = self._get_number_of_cells() _ncells = self._count_occupied_cells(_ncells) self.n_cells = _ncells # initialize the head and next arrays for i in range(narrays): pa_wrapper = pa_wrappers[i] np = pa_wrapper.get_number_of_particles() # re-size the head and next arrays head = PyList_GetItem(heads, i) next = PyList_GetItem(nexts, i ) head.resize( _ncells ) next.resize( np ) # UINT_MAX is used to indicate an invalid index for j in range(_ncells): head.data[j] = UINT_MAX for j in range(np): next.data[j] = UINT_MAX pysph-master/pysph/base/nnps.py000066400000000000000000000013761356347341600171420ustar00rootroot00000000000000from pysph.base.nnps_base import get_number_of_threads, py_flatten, \ py_unflatten, py_get_valid_cell_index from pysph.base.nnps_base import NNPSParticleArrayWrapper, CPUDomainManager, \ DomainManager, Cell, NeighborCache, NNPSBase, NNPS from pysph.base.linked_list_nnps import LinkedListNNPS from pysph.base.box_sort_nnps import BoxSortNNPS, DictBoxSortNNPS from pysph.base.spatial_hash_nnps import SpatialHashNNPS, \ ExtendedSpatialHashNNPS from pysph.base.cell_indexing_nnps import CellIndexingNNPS from pysph.base.z_order_nnps import ZOrderNNPS from pysph.base.stratified_hash_nnps import StratifiedHashNNPS from pysph.base.stratified_sfc_nnps import StratifiedSFCNNPS from pysph.base.octree_nnps import OctreeNNPS, CompressedOctreeNNPS pysph-master/pysph/base/nnps_base.pxd000066400000000000000000000317371356347341600203030ustar00rootroot00000000000000# numpy cimport numpy as np cimport 
cython

from libcpp.map cimport map
from libcpp.vector cimport vector

# PyZoltan CArrays
from cyarray.carray cimport UIntArray, IntArray, DoubleArray, LongArray

# local imports
from particle_array cimport ParticleArray
from point cimport *

cdef extern from 'math.h':
    int abs(int) nogil
    double ceil(double) nogil
    double floor(double) nogil
    double fabs(double) nogil
    double fmax(double, double) nogil
    double fmin(double, double) nogil

cdef extern from 'limits.h':
    cdef unsigned int UINT_MAX
    cdef int INT_MAX

# ZOLTAN ID TYPE AND PTR
ctypedef unsigned int ZOLTAN_ID_TYPE
ctypedef unsigned int* ZOLTAN_ID_PTR

cdef inline double norm2(double x, double y, double z) nogil:
    return x*x + y*y + z*z

@cython.cdivision(True)
cdef inline int real_to_int(double real_val, double step) nogil:
    """Return the bin index to which the given position belongs.

    Parameters
    ----------
    val -- The coordinate location to bin
    step -- the bin size

    Examples
    --------
    >>> real_to_int(1.5, 1.0)
    1
    >>> real_to_int(-0.5, 1.0)
    -1
    """
    cdef int ret_val = floor( real_val/step )
    return ret_val

cdef inline void find_cell_id_raw(double x, double y, double z,
                                  double cell_size,
                                  int *ix, int *iy, int *iz) nogil:
    """Find the cell index for the corresponding point

    Parameters
    ----------
    x, y, z: double
        the point for which the index is sought
    cell_size : double
        the cell size to use
    ix, iy, iz : int*
        output parameter holding the cell index

    Notes
    -----
    Performs a box sort based on the point and cell size

    Uses the function `real_to_int`
    """
    ix[0] = real_to_int(x, cell_size)
    iy[0] = real_to_int(y, cell_size)
    iz[0] = real_to_int(z, cell_size)

@cython.boundscheck(False)
@cython.wraparound(False)
cdef inline long flatten_raw(int x, int y, int z, int* ncells_per_dim,
                             int dim) nogil:
    """Return a flattened index for a cell

    The flattening is determined using the row-order indexing commonly
    employed in SPH. This would need to be changed for hash functions
    based on alternate orderings.
""" cdef long ncx = ncells_per_dim[0] cdef long ncy = ncells_per_dim[1] return ( x + ncx * y + ncx*ncy * z ) @cython.boundscheck(False) @cython.wraparound(False) cdef inline long flatten(cIntPoint cid, IntArray ncells_per_dim, int dim) nogil: """Return a flattened index for a cell The flattening is determined using the row-order indexing commonly employed in SPH. This would need to be changed for hash functions based on alternate orderings. """ return flatten_raw(cid.x, cid.y, cid.z, ncells_per_dim.data, dim) @cython.boundscheck(False) @cython.wraparound(False) cdef inline long get_valid_cell_index(int cid_x, int cid_y, int cid_z, int* ncells_per_dim, int dim, int n_cells) nogil: """Return the flattened index for a valid cell""" cdef long ncx = ncells_per_dim[0] cdef long ncy = ncells_per_dim[1] cdef long ncz = ncells_per_dim[2] cdef long cell_index = -1 # basic test for valid indices. Since we bin the particles with # respect to the origin, negative indices can never occur. cdef bint is_valid = ( (ncx > cid_x > -1) and (ncy > cid_y > -1) and (ncz > cid_z > -1) ) # Given the validity of the cells, return the flattened cell index if is_valid: cell_index = flatten_raw(cid_x, cid_y, cid_z, ncells_per_dim, dim) if not (-1 < cell_index < n_cells): cell_index = -1 return cell_index cdef cIntPoint find_cell_id(cPoint pnt, double cell_size) cpdef UIntArray arange_uint(int start, int stop=*) # Basic particle array wrapper used for NNPS cdef class NNPSParticleArrayWrapper: cdef public DoubleArray x,y,z,h cdef public UIntArray gid cdef public IntArray tag cdef public ParticleArray pa cdef str name cdef int np # get the number of particles cdef int get_number_of_particles(self) cdef class DomainManager: cdef public object backend cdef public object manager cdef class DomainManagerBase: cdef public double xmin, xmax cdef public double ymin, ymax cdef public double zmin, zmax cdef public double xtranslate cdef public double ytranslate cdef public double ztranslate cdef public 
int dim cdef public bint periodic_in_x, periodic_in_y, periodic_in_z cdef public bint is_periodic cdef public bint mirror_in_x, mirror_in_y, mirror_in_z cdef public bint is_mirror cdef public object props cdef public list copy_props cdef public list pa_wrappers # NNPS particle array wrappers cdef public int narrays # number of arrays cdef public double cell_size # distance to create ghosts cdef public double hmin # minimum h cdef public bint in_parallel # Flag to determine if in parallel cdef public double radius_scale # Radius scale for kernel cdef public double n_layers # Number of layers of ghost particles #cdef double dbl_max # Maximum value of double # remove ghost particles from a previous iteration cpdef _remove_ghosts(self) # Domain limits for the simulation cdef class CPUDomainManager(DomainManagerBase): cdef public bint use_double cdef public object dtype cdef public double dtype_max cdef public list ghosts # box-wrap particles within the physical domain cdef _box_wrap_periodic(self) # Convenience function to add a value to a carray cdef _add_to_array(self, DoubleArray arr, double disp, int start=*) # Convenience function to multiply a value to a carray cdef _mul_to_array(self, DoubleArray arr, double val) # Convenience function to add a carray to a carray elementwise cdef _add_array_to_array(self, DoubleArray arr, DoubleArray translate) # Convenience function to add a value to a carray cdef _change_velocity(self, DoubleArray arr, double disp) # create new periodic ghosts cdef _create_ghosts_periodic(self) # create new mirror ghosts cdef _create_ghosts_mirror(self) # Compute the cell size across processors. 
The cell size is taken # as max(h)*radius_scale cdef _compute_cell_size_for_binning(self) # Cell to hold particle indices cdef class Cell: ############################################################################ # Data Attributes ############################################################################ cdef cIntPoint _cid # Spatial index for the cell cdef public bint is_boundary # Flag to indicate boundary cells cdef int narrays # Number of particle arrays cdef public list lindices # Local indices for particles cdef public list gindices # Global indices for binned particles cdef list nparticles # Number of particles in the cell cdef double cell_size # bin size cdef public cPoint centroid # Centroid computed from indices cdef cPoint boxmin # Bounding box min for the cell cdef cPoint boxmax # Bounding box max for the cell cdef int layers # Layers to compute bounding box cdef IntArray nbrprocs # List of neighboring processors cdef public int size # total number of particles in this cell ############################################################################ # Member functions ############################################################################ # set the indices for the cell cpdef set_indices(self, int index, UIntArray lindices, UIntArray gindices) # compute the bounding box for a cell. Layers is used to determine # the factor times the cell size the bounding box is offset from # the cell. cdef _compute_bounding_box(self, double cell_size, int layers) cdef class NeighborCache: cdef int _dst_index cdef int _src_index cdef int _narrays cdef list _particles cdef int _n_threads cdef NNPS _nnps cdef UIntArray _pid_to_tid cdef UIntArray _start_stop cdef IntArray _cached cdef void **_neighbors # This is made public purely for testing! 
cdef public list _neighbor_arrays cdef int _last_avg_nbr_size cdef void get_neighbors_raw(self, size_t d_idx, UIntArray nbrs) nogil cpdef get_neighbors(self, int src_index, size_t d_idx, UIntArray nbrs) cpdef find_all_neighbors(self) cpdef update(self) cdef void _update_last_avg_nbr_size(self) cdef void _find_neighbors(self, long d_idx) nogil cdef class NNPSBase: ########################################################################## # Data Attributes ########################################################################## cdef public list particles # list of particle arrays cdef public list pa_wrappers # list of particle array wrappers cdef public int narrays # Number of particle arrays cdef bint use_cache # Use cache or not. cdef public list cache # The neighbor cache. cdef int src_index, dst_index # The current source and dest indices cdef public DomainManager domain # Domain manager cdef public bint is_periodic # flag for periodicity cdef public int dim # Dimensionality of the problem cdef public double cell_size # Cell size for binning cdef public double hmin # Minimum h cdef public double radius_scale # Radius scale for kernel cdef IntArray cell_shifts # cell shifts cdef public int n_cells # number of cells # Testing function for brute force neighbor search. 
The return # list is of the same type of the local and global ids (uint) cpdef brute_force_neighbors(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs) cpdef get_nearest_particles_no_cache(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bint prealloc) cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil cpdef get_nearest_particles(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs) cpdef set_context(self, int src_index, int dst_index) # Nearest neighbor locator cdef class NNPS(NNPSBase): ########################################################################## # Data Attributes ########################################################################## cdef public DoubleArray xmin # co-ordinate min values cdef public DoubleArray xmax # co-ordinate max values cdef public NeighborCache current_cache # The current cache cdef public double _last_domain_size # last size of domain. cdef public bint sort_gids # Sort neighbors by their gids. ########################################################################## # Member functions ########################################################################## # Main binning routine for NNPS for local particles. This clears # the current cell data, re-computes the cell size and bins all # particles locally. cpdef update(self) # Index particles given by a list of indices. The indices are # assumed to be of type unsigned int and local to the NNPS object cpdef _bin(self, int pa_index, UIntArray indices) cdef void _sort_neighbors(self, unsigned int* nbrs, size_t length, unsigned int *gids) nogil # compute the min and max for the particle coordinates cdef _compute_bounds(self) cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil cdef void get_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil # Neighbor query function. Returns the list of neighbors for a # requested particle. 
The returned list is assumed to be of type # unsigned int to follow the type of the local and global ids. # This method will never use the cached values. If prealloc is set # to True it will assume that the neighbor array has enough space for # all the new neighbors and directly set the values in the array. cpdef get_nearest_particles_no_cache(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bint prealloc) # Neighbor query function. Returns the list of neighbors for a # requested particle. The returned list is assumed to be of type # unsigned int to follow the type of the local and global ids. cpdef get_nearest_particles(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs) cpdef get_spatially_ordered_indices(self, int pa_index, LongArray indices) cpdef set_context(self, int src_index, int dst_index) cpdef spatially_order_particles(self, int pa_index) # refresh any data structures needed for binning cpdef _refresh(self) pysph-master/pysph/base/nnps_base.pyx000066400000000000000000001637071356347341600203330ustar00rootroot00000000000000#cython: embedsignature=True # Library imports. 
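The cell-index helpers declared in this header flatten an `(ix, iy, iz)` cell id row-major (`flatten_raw`: x varies fastest, then y, then z). The forward and inverse maps can be sketched in plain Python (hypothetical helper names, not the PySPH API):

```python
# Row-major cell-index flattening as used by flatten_raw and unflatten.
# Pure-Python sketch with illustrative names.

def flatten_cell(ix, iy, iz, ncx, ncy):
    # x varies fastest, then y, then z
    return ix + ncx * iy + ncx * ncy * iz

def unflatten_cell(cell_index, ncx, ncy):
    # invert with two divmods: peel off the z slab, then the y row
    iz, rem = divmod(cell_index, ncx * ncy)
    iy, ix = divmod(rem, ncx)
    return ix, iy, iz
```

Two `divmod` calls recover the tuple exactly, mirroring the `unflatten` routine defined below in this module.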
import numpy as np
cimport numpy as np

# Cython imports
from cython.operator cimport dereference as deref, preincrement as inc
from cython.parallel import parallel, prange, threadid

# malloc and friends
from libc.stdlib cimport malloc, free
from libcpp.map cimport map
from libcpp.pair cimport pair
from libcpp.vector cimport vector

cdef extern from "<algorithm>" namespace "std" nogil:
    void sort[Iter, Compare](Iter first, Iter last, Compare comp)
    void sort[Iter](Iter first, Iter last)

# cpython
from cpython.dict cimport PyDict_Clear, PyDict_Contains, PyDict_GetItem
from cpython.list cimport PyList_GetItem, PyList_SetItem, PyList_GET_ITEM

# Cython for compiler directives
cimport cython

from compyle.config import get_config
from compyle.array import get_backend, Array
from compyle.parallel import Elementwise
from compyle.types import annotate

IF OPENMP:
    cimport openmp

    cpdef int get_number_of_threads():
        cdef int i, n
        with nogil, parallel():
            for i in prange(1):
                n = openmp.omp_get_num_threads()
        return n

    cpdef set_number_of_threads(int n):
        openmp.omp_set_num_threads(n)
ELSE:
    cpdef int get_number_of_threads():
        return 1

    cpdef set_number_of_threads(int n):
        print("OpenMP not available, cannot set number of threads.")

IF UNAME_SYSNAME == "Windows":
    cdef inline double fmin(double x, double y) nogil:
        return x if x < y else y

    cdef inline double fmax(double x, double y) nogil:
        return x if x > y else y

# Particle Tag information
from cyarray.carray cimport BaseArray, aligned_malloc, aligned_free
from utils import ParticleTAGS

cdef int Local = ParticleTAGS.Local
cdef int Remote = ParticleTAGS.Remote
cdef int Ghost = ParticleTAGS.Ghost

ctypedef pair[unsigned int, unsigned int] id_gid_pair_t


cdef inline bint _compare_gids(id_gid_pair_t x, id_gid_pair_t y) nogil:
    return y.second > x.second


def py_flatten(IntPoint cid, IntArray ncells_per_dim, int dim):
    """Python wrapper"""
    cdef cIntPoint _cid = cid.data
    cdef int flattened_index = flatten(_cid, ncells_per_dim, dim)
    return flattened_index


def py_get_valid_cell_index(IntPoint cid, IntArray ncells_per_dim, int dim,
                            int n_cells):
    """Return the flattened cell index for a valid cell"""
    return get_valid_cell_index(cid.data.x, cid.data.y, cid.data.z,
                                ncells_per_dim.data, dim, n_cells)


@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef inline cIntPoint unflatten(long cell_index, IntArray ncells_per_dim,
                                int dim):
    """Un-flatten a linear cell index"""
    cdef int ncx = ncells_per_dim.data[0]
    cdef int ncy = ncells_per_dim.data[1]
    cdef cIntPoint cid
    cdef int ix = 0, iy = 0, iz = 0, tmp = 0

    if dim > 1:
        if dim > 2:
            tmp = ncx * ncy
            iz = cell_index/tmp
            cell_index = cell_index - iz * tmp

        iy = cell_index/ncx
        ix = cell_index - (iy * ncx)
    else:
        ix = cell_index

    # return the tuple cell index
    cid = cIntPoint_new(ix, iy, iz)
    return cid


def py_unflatten(long cell_index, IntArray ncells_per_dim, int dim):
    """Python wrapper"""
    cdef cIntPoint _cid = unflatten(cell_index, ncells_per_dim, dim)
    cdef IntPoint cid = IntPoint_from_cIntPoint(_cid)
    return cid


cdef cIntPoint find_cell_id(cPoint pnt, double cell_size):
    """Find the cell index for the corresponding point

    Parameters
    ----------
    pnt -- the point for which the index is sought
    cell_size -- the cell size to use
    id -- output parameter holding the cell index

    Notes
    -----
    Performs a box sort based on the point and cell size.

    Uses the function `real_to_int`
    """
    cdef cIntPoint p = cIntPoint(real_to_int(pnt.x, cell_size),
                                 real_to_int(pnt.y, cell_size),
                                 real_to_int(pnt.z, cell_size))
    return p


cdef inline cPoint _get_centroid(double cell_size, cIntPoint cid):
    """Get the centroid of the cell.

    Parameters
    ----------
    cell_size : double (input)
        Cell size used for binning

    cid : cPoint (input)
        Spatial index for a cell

    Returns
    -------
    centroid : cPoint

    Notes
    -----
    The centroid in any coordinate direction is defined to be the
    origin plus half the cell size in that direction
    """
    centroid = cPoint_new(0.0, 0.0, 0.0)
    centroid.x = (cid.x + 0.5)*cell_size
    centroid.y = (cid.y + 0.5)*cell_size
    centroid.z = (cid.z + 0.5)*cell_size

    return centroid


def get_centroid(double cell_size, IntPoint cid):
    """Get the centroid of the cell.

    Parameters
    ----------
    cell_size : double (input)
        Cell size used for binning

    cid : IntPoint (input)
        Spatial index for a cell

    Returns
    -------
    centroid : Point

    Notes
    -----
    The centroid in any coordinate direction is defined to be the
    origin plus half the cell size in that direction
    """
    cdef cPoint _centroid = _get_centroid(cell_size, cid.data)
    centroid = Point_new(0.0, 0.0, 0.0)

    centroid.data = _centroid
    return centroid


cpdef UIntArray arange_uint(int start, int stop=-1):
    """Utility function to return a numpy.arange for a UIntArray"""
    cdef int size
    cdef UIntArray arange
    cdef int i = 0

    if stop == -1:
        arange = UIntArray(start)
        for i in range(start):
            arange.data[i] = i
    else:
        size = stop - start
        arange = UIntArray(size)
        for i in range(size):
            arange.data[i] = (start + i)

    return arange


##############################################################################
cdef class NNPSParticleArrayWrapper:
    def __init__(self, ParticleArray pa):
        self.pa = pa
        self.name = pa.name

        self.x = pa.get_carray('x')
        self.y = pa.get_carray('y')
        self.z = pa.get_carray('z')
        self.h = pa.get_carray('h')

        self.gid = pa.get_carray('gid')
        self.tag = pa.get_carray('tag')

        self.np = pa.get_number_of_particles()

    cdef int get_number_of_particles(self):
        cdef ParticleArray pa = self.pa
        return pa.get_number_of_particles()

    def remove_tagged_particles(self, int tag):
        cdef ParticleArray pa = self.pa
        pa.remove_tagged_particles(tag)
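The helpers above (`py_flatten`, `py_unflatten`, `get_centroid`) map between 3D cell indices, linear cell ids and cell centroids. The same arithmetic in a minimal pure-Python sketch, using plain ints and tuples instead of `IntPoint`/`IntArray` (function names here are illustrative, not the pysph API):

```python
def flatten(ix, iy, iz, ncx, ncy):
    """Row-major linear cell index from 3D cell indices, given the
    number of cells per dimension in x (ncx) and y (ncy)."""
    return ix + iy * ncx + iz * ncx * ncy

def unflatten(cell_index, ncx, ncy):
    """Invert flatten(): recover (ix, iy, iz) from a linear index."""
    iz, rest = divmod(cell_index, ncx * ncy)
    iy, ix = divmod(rest, ncx)
    return ix, iy, iz

def centroid(ix, iy, iz, cell_size):
    """Cell centroid: origin plus (index + 1/2) * cell_size per axis."""
    return ((ix + 0.5) * cell_size,
            (iy + 0.5) * cell_size,
            (iz + 0.5) * cell_size)
```

A round trip through `flatten` and `unflatten` is the identity for any cell inside the grid, which is what allows the NNPS to store cells in a flat array keyed by the linear index.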
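The `DomainManager` below handles periodic domains by translating any particle that crosses a boundary back by one domain length per axis (see `_box_wrap_periodic` further down). The per-axis rule, sketched in plain Python as an illustration (not the pysph API):

```python
def box_wrap(x, xmin, xmax):
    """Translate a coordinate that has left [xmin, xmax] back into
    the domain by one domain length, independently per periodic axis."""
    translate = xmax - xmin
    if x < xmin:
        x = x + translate
    if x > xmax:
        x = x - translate
    return x
```

A single translation is enough only when a particle moves less than one domain length per step, which is the usual situation for a CFL-limited time step.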
##############################################################################
cdef class DomainManager:
    def __init__(self, double xmin=-1000, double xmax=1000, double ymin=0,
                 double ymax=0, double zmin=0, double zmax=0,
                 periodic_in_x=False, periodic_in_y=False,
                 periodic_in_z=False, double n_layers=2.0, backend=None,
                 props=None, mirror_in_x=False, mirror_in_y=False,
                 mirror_in_z=False):
        """Constructor

        Parameters
        ----------

        xmin, xmax, ymin, ymax, zmin, zmax: double: extents of the domain.
        periodic_in_x, periodic_in_y, periodic_in_z: bool: axis periodicity.
        mirror_in_x, mirror_in_y, mirror_in_z: bool: axis mirror.
        n_layers: double: number of ghost layers as a multiple of
            h_max*radius_scale
        props: list/dict: properties to copy.

            Provide a list or dict with the keys as particle array names.
            Only the specified properties are copied. If not specified,
            all props are copied.
        """
        self.backend = get_backend(backend)
        is_periodic = periodic_in_x or periodic_in_y or periodic_in_z
        is_mirror = mirror_in_x or mirror_in_y or mirror_in_z
        if self.backend == 'opencl' or self.backend == 'cuda':
            if not is_mirror:
                from pysph.base.gpu_domain_manager import GPUDomainManager
                domain_manager = GPUDomainManager
            else:
                print("warning: mirrored boundary conditions not "
                      "supported with GPU backend, using CPUDomainManager")
                domain_manager = CPUDomainManager
        else:
            domain_manager = CPUDomainManager
        self.manager = domain_manager(
            xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, zmin=zmin, zmax=zmax,
            periodic_in_x=periodic_in_x, periodic_in_y=periodic_in_y,
            periodic_in_z=periodic_in_z, n_layers=n_layers,
            backend=self.backend, props=props, mirror_in_x=mirror_in_x,
            mirror_in_y=mirror_in_y, mirror_in_z=mirror_in_z
        )

    def set_pa_wrappers(self, wrappers):
        self.manager.set_pa_wrappers(wrappers)

    def set_cell_size(self, cell_size):
        self.manager.set_cell_size(cell_size)

    def set_in_parallel(self, in_parallel):
        self.manager.set_in_parallel(in_parallel)

    def set_radius_scale(self, radius_scale):
self.manager.set_radius_scale(radius_scale) def compute_cell_size_for_binning(self): """Compute the cell size for the binning. The cell size is chosen as the kernel radius scale times the maximum smoothing length in the local processor. For parallel runs, we would need to communicate the maximum 'h' on all processors to decide on the appropriate binning size. """ self.manager.compute_cell_size_for_binning() def update(self): self.manager.update() ############################################################################## cdef class DomainManagerBase(object): def __init__(self, double xmin=-1000, double xmax=1000, double ymin=0, double ymax=0, double zmin=0, double zmax=0, periodic_in_x=False, periodic_in_y=False, periodic_in_z=False, double n_layers=2.0, props=None, mirror_in_x=False, mirror_in_y=False, mirror_in_z=False): """Constructor The n_layers argument specifies the number of ghost layers as multiples of hmax*radius_scale. props: list/dict: properties to copy. Provide a list or dict with the keys as particle array names. Only the specified properties are copied. If not specified, all props are copied. 
""" self._check_limits(xmin, xmax, ymin, ymax, zmin, zmax) self.xmin = xmin; self.xmax = xmax self.ymin = ymin; self.ymax = ymax self.zmin = zmin; self.zmax = zmax self.props = props # Indicates if the domain is periodic self.periodic_in_x = periodic_in_x self.periodic_in_y = periodic_in_y self.periodic_in_z = periodic_in_z # Indicates if the domain is mirrored self.mirror_in_x = mirror_in_x self.mirror_in_y = mirror_in_y self.mirror_in_z = mirror_in_z self.is_periodic = periodic_in_x or periodic_in_y or periodic_in_z self.is_mirror = mirror_in_x or mirror_in_y or mirror_in_z self.n_layers = n_layers # get the translates in each coordinate direction self.xtranslate = xmax - xmin self.ytranslate = ymax - ymin self.ztranslate = zmax - zmin # empty list of particle array wrappers for now self.pa_wrappers = [] self.narrays = 0 # default value for the cell size self.cell_size = 1.0 self.hmin = 1.0 # default DomainManager in_parallel is set to False self.in_parallel = False def _check_limits(self, xmin, xmax, ymin, ymax, zmin, zmax): """Sanity check on the limits""" if ( (xmax < xmin) or (ymax < ymin) or (zmax < zmin) ): raise ValueError("Invalid domain limits!") def set_pa_wrappers(self, wrappers): self.pa_wrappers = wrappers self.narrays = len(wrappers) copy_props = [] props = self.props for i in range(self.narrays): if props is None: copy_props.append(None) elif isinstance(props, dict): name = wrappers[i].pa.name copy_props.append(props[name]) else: copy_props.append(props) self.copy_props = copy_props def set_cell_size(self, cell_size): self.cell_size = cell_size def set_in_parallel(self, bint in_parallel): self.in_parallel = in_parallel def set_radius_scale(self, double radius_scale): self.radius_scale = radius_scale def compute_cell_size_for_binning(self): self._compute_cell_size_for_binning() cpdef _remove_ghosts(self): """Remove all ghost particles from a previous step While creating periodic neighbors, we create new particles and give them the tag 
utils.ParticleTAGS.Ghost. Before repeating this step in the next iteration, all current particles with this tag are removed. """ cdef list pa_wrappers = self.pa_wrappers cdef int narrays = self.narrays cdef int array_index cdef NNPSParticleArrayWrapper pa_wrapper for array_index in range(narrays): pa_wrapper = pa_wrappers[array_index] pa_wrapper.remove_tagged_particles(Ghost) ############################################################################## cdef class CPUDomainManager(DomainManagerBase): """This class determines the limits of the solution domain. We expect all simulations to have well defined domain limits beyond which we are either not interested or the solution is invalid to begin with. Thus, if a particle leaves the domain, the solution should be considered invalid (at least locally). The initial domain limits could be given explicitly or asked to be computed from the particle arrays. The domain could be periodic. """ def __init__(self, double xmin=-1000, double xmax=1000, double ymin=0, double ymax=0, double zmin=0, double zmax=0, periodic_in_x=False, periodic_in_y=False, periodic_in_z=False, double n_layers=2.0, backend=None, props=None, mirror_in_x=False, mirror_in_y=False, mirror_in_z=False): """Constructor The n_layers argument specifies the number of ghost layers as multiples of hmax*radius_scale. props: list/dict: properties to copy. Provide a list or dict with the keys as particle array names. Only the specified properties are copied. If not specified, all props are copied. 
""" DomainManagerBase.__init__( self, xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, zmin=zmin, zmax=zmax, periodic_in_x=periodic_in_x, periodic_in_y=periodic_in_y, periodic_in_z=periodic_in_z, n_layers=n_layers, props=props, mirror_in_x=mirror_in_x, mirror_in_y=mirror_in_y, mirror_in_z=mirror_in_z ) self.use_double = True self.dtype = float self.dtype_max = np.finfo(self.dtype).max self.ghosts = None #### Public protocol ################################################ def update(self): """General method that is called before NNPS can bin particles. This method is responsible for the computation of cell sizes and creation of any ghost particles for periodic or wall boundary conditions. """ # compute the cell sizes self._compute_cell_size_for_binning() # Periodicity is handled by adjusting particles according to a # given cubic domain box. In parallel, it is expected that the # appropriate parallel NNPS is responsible for the creation of # ghost particles. if (self.is_periodic or self.is_mirror) and not self.in_parallel: self._update_from_gpu() # remove periodic/mirror ghost particles from a previous step self._remove_ghosts() if self.is_periodic: # box-wrap current particles for periodicity self._box_wrap_periodic() # create new periodic ghosts self._create_ghosts_periodic() if self.is_mirror: # create new mirrored ghosts self._create_ghosts_mirror() # Update GPU. 
self._update_gpu() #### Private protocol ############################################### cdef _add_to_array(self, DoubleArray arr, double disp, int start=0): cdef int i for i in range(arr.length - start): arr.data[start + i] += disp cdef _change_velocity(self, DoubleArray arr, double disp): cdef int i for i in range(arr.length): arr.data[i] += -1.0 * disp cdef _add_array_to_array(self, DoubleArray arr, DoubleArray translate): cdef int i for i in range(arr.length): arr.data[i] += translate.data[i] cdef _mul_to_array(self, DoubleArray arr, double val): cdef int i for i in range(arr.length): arr.data[i] *= val cdef _create_ghosts_mirror(self): """Identify boundary particles and create images. We need to find all particles that are within a specified distance from the boundaries and place image copies on the other side of the boundary. Corner reflections need to be accounted for when using domains with multiple periodicity. The periodic domain is specified using the DomainManager object """ cdef list pa_wrappers = self.pa_wrappers cdef int narrays = self.narrays # cell size used to check for periodic ghosts. For summation density # like operations, we need to create two layers of ghost images, this # is configurable via the n_layers argument to the constructor. 
cdef double cell_size = self.n_layers * self.cell_size # mirror domain values cdef double xmin = self.xmin, xmax = self.xmax cdef double ymin = self.ymin, ymax = self.ymax, cdef double zmin = self.zmin, zmax = self.zmax # mirror boundary condition flags cdef bint mirror_in_x = self.mirror_in_x cdef bint mirror_in_y = self.mirror_in_y cdef bint mirror_in_z = self.mirror_in_z # locals cdef NNPSParticleArrayWrapper pa_wrapper cdef ParticleArray pa, added cdef DoubleArray x, y, z cdef double xi, yi, zi cdef int array_index, i, np # temporary indices for particles to be replicated cdef LongArray x_low, x_high, y_high, y_low, z_high, z_low, low, high x_low = LongArray(); x_high = LongArray() y_high = LongArray(); y_low = LongArray() z_high = LongArray(); z_low = LongArray() xt_low = DoubleArray(); xt_high = DoubleArray() yt_high = DoubleArray(); yt_low = DoubleArray() zt_high = DoubleArray(); zt_low = DoubleArray() low = LongArray(); high = LongArray() low_translate = DoubleArray(); high_translate = DoubleArray() for array_index in range(narrays): pa_wrapper = pa_wrappers[ array_index ] pa = pa_wrapper.pa x = pa_wrapper.x; y = pa_wrapper.y; z = pa_wrapper.z # reset the length of the arrays x_low.reset(); x_high.reset(); y_high.reset(); y_low.reset() z_low.reset(); z_high.reset() np = x.length for i in range(np): xi = x.data[i]; yi = y.data[i]; zi = z.data[i] if mirror_in_x: if ((xi - xmin) <= cell_size): x_low.append(i) xt_low.append(-2*(xi - xmin)) if ( (xmax - xi) <= cell_size): x_high.append(i) xt_high.append(2*(xmax - xi)) if mirror_in_y: if ((yi - ymin) <= cell_size): y_low.append(i) yt_low.append(-2*(yi - ymin)) if ( (ymax - yi) <= cell_size ): y_high.append(i) yt_high.append(2*(ymax - yi)) if mirror_in_z: if ((zi - zmin) <= cell_size): z_low.append(i) zt_low.append(-2*(zi - zmin)) if ((zmax - zi) <= cell_size ): z_high.append(i) zt_high.append(2*(zmax - zi)) # now treat each case separately and append to the main array added = ParticleArray(x=None, y=None, z=None) 
x = added.get_carray('x') y = added.get_carray('y') z = added.get_carray('z') if mirror_in_x: # x_low copy = pa.extract_particles( x_low ) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('x'), xt_low) self._mul_to_array(copy.get_carray('u'), -1) added.append_parray(copy) # x_high copy = pa.extract_particles( x_high ) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('x'), xt_high) self._mul_to_array(copy.get_carray('u'), -1) added.append_parray(copy) if mirror_in_y: # Now do the corners from the previous. low.reset(); high.reset() low_translate.reset(); high_translate.reset() np = x.length for i in range(np): yi = y.data[i] if ( (yi - ymin) <= cell_size ): low.append(i) low_translate.append(-2*(yi - ymin)) if ( (ymax - yi) <= cell_size ): high.append(i) high_translate.append(2*(ymax - yi)) copy = added.extract_particles(low) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('y'), low_translate) self._mul_to_array(copy.get_carray('v'), -1) added.append_parray(copy) copy = added.extract_particles(high) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('y'), high_translate) self._mul_to_array(copy.get_carray('v'), -1) added.append_parray(copy) # Add the actual y_high and y_low now. # y_high copy = pa.extract_particles( y_high ) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('y'), yt_high) self._mul_to_array(copy.get_carray('v'), -1) added.append_parray(copy) # y_low copy = pa.extract_particles( y_low ) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('y'), yt_low) self._mul_to_array(copy.get_carray('v'), -1) added.append_parray(copy) if mirror_in_z: # Now do the corners from the previous. 
low.reset(); high.reset() low_translate.reset(); high_translate.reset() np = x.length for i in range(np): zi = z.data[i] if ((zi - zmin) <= cell_size): low.append(i) low_translate.append(-2*(zi - zmin)) if ((zmax - zi) <= cell_size): high.append(i) high_translate.append(2*(zmax - zi)) copy = added.extract_particles(low) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('z'), low_translate) self._mul_to_array(copy.get_carray('w'), -1) added.append_parray(copy) copy = added.extract_particles(high) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('z'), high_translate) self._mul_to_array(copy.get_carray('w'), -1) added.append_parray(copy) # Add the actual z_high and z_low now. # z_high copy = pa.extract_particles( z_high ) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('z'), zt_high) self._mul_to_array(copy.get_carray('w'), -1) added.append_parray(copy) # z_low copy = pa.extract_particles( z_low ) if copy.get_number_of_particles() > 0: self._add_array_to_array(copy.get_carray('z'), zt_low) self._mul_to_array(copy.get_carray('w'), -1) added.append_parray(copy) added.tag[:] = Ghost pa.append_parray(added) cdef _box_wrap_periodic(self): """Box-wrap particles for periodicity The periodic domain is a rectangular box defined by minimum and maximum values in each coordinate direction. These values are used in turn to define translation values used to box-wrap particles that cross a periodic boundary. 
The periodic domain is specified using the DomainManager object """ # minimum and maximum values of the domain cdef double xmin = self.xmin, xmax = self.xmax cdef double ymin = self.ymin, ymax = self.ymax cdef double zmin = self.zmin, zmax = self.zmax # translations along each coordinate direction cdef double xtranslate = self.xtranslate cdef double ytranslate = self.ytranslate cdef double ztranslate = self.ztranslate # periodicity flags for NNPS cdef bint periodic_in_x = self.periodic_in_x cdef bint periodic_in_y = self.periodic_in_y cdef bint periodic_in_z = self.periodic_in_z # locals cdef NNPSParticleArrayWrapper pa_wrapper cdef DoubleArray x, y, z cdef double xi, yi, zi cdef int i, np # iterate over each array and mark for translation for pa_wrapper in self.pa_wrappers: x = pa_wrapper.x; y = pa_wrapper.y; z = pa_wrapper.z np = x.length # iterate over particles and box-wrap for i in range(np): if periodic_in_x: if x.data[i] < xmin : x.data[i] = x.data[i] + xtranslate if x.data[i] > xmax : x.data[i] = x.data[i] - xtranslate if periodic_in_y: if y.data[i] < ymin : y.data[i] = y.data[i] + ytranslate if y.data[i] > ymax : y.data[i] = y.data[i] - ytranslate if periodic_in_z: if z.data[i] < zmin : z.data[i] = z.data[i] + ztranslate if z.data[i] > zmax : z.data[i] = z.data[i] - ztranslate cdef _create_ghosts_periodic(self): """Identify boundary particles and create images. We need to find all particles that are within a specified distance from the boundaries and place image copies on the other side of the boundary. Corner reflections need to be accounted for when using domains with multiple periodicity. The periodic domain is specified using the DomainManager object """ cdef list pa_wrappers = self.pa_wrappers cdef int narrays = self.narrays cdef list copy_props = self.copy_props # cell size used to check for periodic ghosts. 
For summation density # like operations, we need to create two layers of ghost images, this # is configurable via the n_layers argument to the constructor. cdef double cell_size = self.n_layers * self.cell_size # periodic domain values cdef double xmin = self.xmin, xmax = self.xmax cdef double ymin = self.ymin, ymax = self.ymax, cdef double zmin = self.zmin, zmax = self.zmax cdef double xtranslate = self.xtranslate cdef double ytranslate = self.ytranslate cdef double ztranslate = self.ztranslate # periodicity flags cdef bint periodic_in_x = self.periodic_in_x cdef bint periodic_in_y = self.periodic_in_y cdef bint periodic_in_z = self.periodic_in_z # locals cdef NNPSParticleArrayWrapper pa_wrapper cdef ParticleArray pa, ghost_pa cdef DoubleArray x, y, z, xt, yt, zt cdef double xi, yi, zi cdef int array_index, i, np, start # temporary indices for particles to be replicated cdef LongArray x_low, x_high, y_high, y_low, z_high, z_low, low, high x_low = LongArray(); x_high = LongArray() y_high = LongArray(); y_low = LongArray() z_high = LongArray(); z_low = LongArray() low = LongArray(); high = LongArray() if not self.ghosts: self.ghosts = [paw.pa.empty_clone(props=copy_props[i]) for i, paw in enumerate(pa_wrappers)] else: for ghost_pa in self.ghosts: ghost_pa.resize(0) for i in range(narrays): self.ghosts[i].ensure_properties( pa_wrappers[i].pa, props=copy_props[i] ) for array_index in range(narrays): ghost_pa = self.ghosts[array_index] pa_wrapper = pa_wrappers[array_index] pa = pa_wrapper.pa x = pa_wrapper.x; y = pa_wrapper.y; z = pa_wrapper.z # reset the length of the arrays x_low.reset(); x_high.reset(); y_high.reset(); y_low.reset() z_low.reset(); z_high.reset() np = x.length for i in range(np): xi = x.data[i]; yi = y.data[i]; zi = z.data[i] if periodic_in_x: if ( (xi - xmin) <= cell_size ): x_low.append(i) if ( (xmax - xi) <= cell_size ): x_high.append(i) if periodic_in_y: if ( (yi - ymin) <= cell_size ): y_low.append(i) if ( (ymax - yi) <= cell_size ): 
y_high.append(i) if periodic_in_z: if ( (zi - zmin) <= cell_size ): z_low.append(i) if ( (zmax - zi) <= cell_size ): z_high.append(i) # now treat each case separately and append to the main array x = ghost_pa.get_carray('x') y = ghost_pa.get_carray('y') z = ghost_pa.get_carray('z') if periodic_in_x: # x_low start = ghost_pa.get_number_of_particles() pa.extract_particles( x_low, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('x'), xtranslate, start=start) # x_high start = ghost_pa.get_number_of_particles() pa.extract_particles( x_high, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('x'), -xtranslate, start=start) if periodic_in_y: # Now do the corners from the previous. low.reset(); high.reset() np = x.length for i in range(np): yi = y.data[i] if ( (yi - ymin) <= cell_size ): low.append(i) if ( (ymax - yi) <= cell_size ): high.append(i) start = ghost_pa.get_number_of_particles() ghost_pa.extract_particles( low, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('y'), ytranslate, start=start) start = ghost_pa.get_number_of_particles() ghost_pa.extract_particles( high, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('y'), -ytranslate, start=start) # Add the actual y_high and y_low now. # y_high start = ghost_pa.get_number_of_particles() pa.extract_particles( y_high, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('y'), -ytranslate, start=start) # y_low start = ghost_pa.get_number_of_particles() pa.extract_particles( y_low, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('y'), ytranslate, start=start) if periodic_in_z: # Now do the corners from the previous. 
low.reset(); high.reset() np = x.length for i in range(np): zi = z.data[i] if ( (zi - zmin) <= cell_size ): low.append(i) if ( (zmax - zi) <= cell_size ): high.append(i) start = ghost_pa.get_number_of_particles() ghost_pa.extract_particles( low, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('z'), ztranslate, start=start) start = ghost_pa.get_number_of_particles() ghost_pa.extract_particles( high, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('z'), -ztranslate, start=start) # Add the actual z_high and z_low now. # z_high start = ghost_pa.get_number_of_particles() pa.extract_particles( z_high, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('z'), -ztranslate, start=start) # z_low start = ghost_pa.get_number_of_particles() pa.extract_particles( z_low, ghost_pa, align=False, props=copy_props[array_index] ) self._add_to_array(ghost_pa.get_carray('z'), ztranslate, start=start) ghost_pa.set_num_real_particles(ghost_pa.get_number_of_particles()) ghost_pa.tag[:] = Ghost pa.append_parray(ghost_pa, align=False) cdef _compute_cell_size_for_binning(self): """Compute the cell size for the binning. The cell size is chosen as the kernel radius scale times the maximum smoothing length in the local processor. For parallel runs, we would need to communicate the maximum 'h' on all processors to decide on the appropriate binning size. 
""" cdef list pa_wrappers = self.pa_wrappers cdef NNPSParticleArrayWrapper pa_wrapper cdef DoubleArray h cdef double cell_size cdef double _hmax, hmax = -1.0 cdef double _hmin, hmin = self.dtype_max for pa_wrapper in pa_wrappers: h = pa_wrapper.h h.update_min_max() _hmax = h.maximum _hmin = h.minimum if _hmax > hmax: hmax = _hmax if _hmin < hmin: hmin = _hmin cell_size = self.radius_scale * hmax self.hmin = self.radius_scale * hmin if cell_size < 1e-6: cell_size = 1.0 self.cell_size = cell_size # set the cell size for the DomainManager self.set_cell_size(cell_size) def _update_gpu(self): # FIXME: this is just done for correctness. We should really # implement a GPU Domain Manager and use that instead to do all # this directly on the GPU. cdef list pa_wrappers = self.pa_wrappers cdef NNPSParticleArrayWrapper pa_wrapper cdef ParticleArray pa cdef list props for pa_wrapper in pa_wrappers: pa = pa_wrapper.pa if pa.gpu is not None: props = pa.output_property_arrays if len(props) == 0: props = list(pa.properties.keys()) pa.gpu.resize(pa.get_number_of_particles(real=False)) pa.gpu.push(*props) def _update_from_gpu(self): # FIXME: this is just done for correctness. We should really # implement a GPU Domain Manager and use that instead to do all # this directly on the GPU. cdef list pa_wrappers = self.pa_wrappers cdef NNPSParticleArrayWrapper pa_wrapper cdef ParticleArray pa cdef list props for pa_wrapper in pa_wrappers: pa = pa_wrapper.pa if pa.gpu is not None: props = pa.output_property_arrays if len(props) == 0: props = list(pa.properties.keys()) pa.gpu.pull(*props) ############################################################################## cdef class Cell: """Basic indexing structure for the box-sort NNPS. For a spatial indexing based on the box-sort algorithm, this class defines the spatial data structure used to hold particle indices (local and global) that are within this cell. 
""" def __init__(self, IntPoint cid, double cell_size, int narrays, int layers=2): """Constructor Parameters ---------- cid : IntPoint Spatial index (unflattened) for the cell cell_size : double Spatial extent of the cell in each dimension narrays : int Number of arrays being binned layers : int Factor to compute the bounding box """ self._cid = cIntPoint_new(cid.x, cid.y, cid.z) self.cell_size = cell_size self.narrays = narrays self.lindices = [UIntArray() for i in range(narrays)] self.gindices = [UIntArray() for i in range(narrays)] self.nparticles = [lindices.length for lindices in self.lindices] self.is_boundary = False # compute the centroid for the cell self.centroid = _get_centroid(cell_size, cid.data) # cell bounding box self.layers = layers self._compute_bounding_box(cell_size, layers) # list of neighboring processors self.nbrprocs = IntArray(0) # current size of the cell self.size = 0 #### Public protocol ################################################ def get_centroid(self, Point pnt): """Utility function to get the centroid of the cell. Parameters ---------- pnt : Point (input/output) The centroid is cmoputed and stored in this object. The centroid is defined as the origin plus half the cell size in each dimension. """ cdef cPoint centroid = self.centroid pnt.data = centroid def get_bounding_box(self, Point boxmin, Point boxmax, int layers = 1, cell_size=None): """Compute the bounding box for the cell. Parameters ---------- boxmin : Point (output) The bounding box min coordinates are stored here boxmax : Point (output) The bounding box max coordinates are stored here layers : int (input) default (1) Number of offset layers to define the bounding box cell_size : double (input) default (None) Optional cell size to use to compute the bounding box. If not provided, the cell's size will be used. 
""" if cell_size is None: cell_size = self.cell_size self._compute_bounding_box(cell_size, layers) boxmin.data = self.boxmin boxmax.data = self.boxmax cpdef set_indices(self, int index, UIntArray lindices, UIntArray gindices): """Set the global and local indices for the cell""" self.lindices[index] = lindices self.gindices[index] = gindices self.nparticles[index] = lindices.length #### Private protocol ############################################### cdef _compute_bounding_box(self, double cell_size, int layers): self.layers = layers cdef cPoint centroid = self.centroid cdef cPoint boxmin = cPoint_new(0., 0., 0.) cdef cPoint boxmax = cPoint_new(0., 0., 0.) boxmin.x = centroid.x - (layers+0.5) * cell_size boxmax.x = centroid.x + (layers+0.5) * cell_size boxmin.y = centroid.y - (layers+0.5) * cell_size boxmax.y = centroid.y + (layers+0.5) * cell_size boxmin.z = centroid.z - (layers + 0.5) * cell_size boxmax.z = centroid.z + (layers + 0.5) * cell_size self.boxmin = boxmin self.boxmax = boxmax ############################################################################### cdef class NeighborCache: def __init__(self, NNPS nnps, int dst_index, int src_index): self._dst_index = dst_index self._src_index = src_index self._nnps = nnps self._particles = nnps.particles self._narrays = nnps.narrays cdef long n_p = self._particles[dst_index].get_number_of_particles() cdef int nnbr = 10 cdef size_t i if self._nnps.dim == 1: nnbr = 10 elif self._nnps.dim == 2: nnbr = 60 elif self._nnps.dim == 3: nnbr = 120 self._n_threads = get_number_of_threads() self._cached = IntArray(n_p) for i in range(n_p): self._cached.data[i] = 0 self._last_avg_nbr_size = nnbr self._start_stop = UIntArray() self._pid_to_tid = UIntArray() self._neighbor_arrays = [] self._neighbors = aligned_malloc( sizeof(void*)*self._n_threads ) cdef UIntArray _arr for i in range(self._n_threads): _arr = UIntArray() self._neighbor_arrays.append(_arr) self._neighbors[i] = _arr def __dealloc__(self): 
aligned_free(self._neighbors) #### Public protocol ################################################ cdef void get_neighbors_raw(self, size_t d_idx, UIntArray nbrs) nogil: if self._cached.data[d_idx] == 0: self._find_neighbors(d_idx) cdef size_t start, end, tid start = self._start_stop.data[2*d_idx] end = self._start_stop.data[2*d_idx + 1] tid = self._pid_to_tid.data[d_idx] nbrs.c_set_view( &(self._neighbors[tid]).data[start], end - start ) cpdef get_neighbors(self, int src_index, size_t d_idx, UIntArray nbrs): self.get_neighbors_raw(d_idx, nbrs) cpdef find_all_neighbors(self): cdef long d_idx cdef long np = \ self._particles[self._dst_index].get_number_of_particles() with nogil, parallel(): for d_idx in prange(np): if self._cached.data[d_idx] == 0: self._find_neighbors(d_idx) cpdef update(self): self._update_last_avg_nbr_size() cdef int n_threads = self._n_threads cdef int dst_index = self._dst_index cdef size_t i cdef long np = self._particles[dst_index].get_number_of_particles() self._start_stop.resize(np*2) self._pid_to_tid.resize(np) self._cached.resize(np) for i in range(np): self._cached.data[i] = 0 self._start_stop.data[2*i] = 0 self._start_stop.data[2*i+1] = 0 # This is an upper limit for the number of neighbors in a worst # case scenario. 
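The caching scheme above stores all neighbors found by a thread in one growing array and records, per particle, a `(start, stop)` window into it; `_cached` marks which particles have been filled in. A minimal single-threaded sketch of the same idea (illustrative names, not the PySPH API):

```python
class SimpleNeighborCache:
    """Single-threaded sketch of NeighborCache: neighbors are computed
    lazily on first request, appended to one flat list, and served
    later via (start, stop) offsets."""

    def __init__(self, find_neighbors, n_particles):
        self._find = find_neighbors          # callable: idx -> list of ids
        self._flat = []                      # concatenated neighbor ids
        self._start_stop = [None] * n_particles

    def get_neighbors(self, idx):
        if self._start_stop[idx] is None:    # not cached yet
            start = len(self._flat)
            self._flat.extend(self._find(idx))
            self._start_stop[idx] = (start, len(self._flat))
        start, stop = self._start_stop[idx]
        return self._flat[start:stop]
```

The real class additionally shards the flat array per OpenMP thread and returns a zero-copy view (`c_set_view`) instead of a slice.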
        cdef size_t safety = 1024
        cdef UIntArray arr
        for i in range(n_threads):
            arr = <UIntArray>self._neighbors[i]
            arr.c_reset()
            arr.c_reserve(
                self._last_avg_nbr_size*np/n_threads + safety
            )

    #### Private protocol ################################################

    cdef void _update_last_avg_nbr_size(self):
        cdef size_t i
        cdef size_t np = self._pid_to_tid.length
        cdef UIntArray start_stop = self._start_stop
        cdef long total = 0
        for i in range(np):
            total += start_stop.data[2*i + 1] - start_stop.data[2*i]
        if total > 0 and np > 0:
            self._last_avg_nbr_size = int(total/np) + 1

    cdef void _find_neighbors(self, long d_idx) nogil:
        cdef int thread_id = threadid()
        self._pid_to_tid.data[d_idx] = thread_id
        self._start_stop.data[d_idx*2] = \
            (<UIntArray>self._neighbors[thread_id]).length
        self._nnps.find_nearest_neighbors(
            d_idx, <UIntArray>self._neighbors[thread_id]
        )
        self._start_stop.data[d_idx*2+1] = \
            (<UIntArray>self._neighbors[thread_id]).length
        self._cached.data[d_idx] = 1


##############################################################################
cdef class NNPSBase:
    def __init__(self, int dim, list particles, double radius_scale=2.0,
                 int ghost_layers=1, domain=None, bint cache=False,
                 bint sort_gids=False):
        """Constructor for NNPS

        Parameters
        ----------

        dim : int
            Dimension (fixme: Not sure if this is really needed)

        particles : list
            The list of particle arrays we are working on.

        radius_scale : double, default (2)
            Optional kernel radius scale. Defaults to 2

        domain : DomainManager, default (None)
            Optional limits for the domain

        cache : bint
            Flag to set if we want to cache neighbor calls. This costs
            storage but speeds up neighbor calculations.

        sort_gids : bint, default (False)
            Flag to sort neighbors using gids (if they are available).
            This is useful when comparing parallel results with those
            from a serial run.
""" # store the list of particles and number of arrays self.particles = particles self.narrays = len( particles ) # create the particle array wrappers self.pa_wrappers = [NNPSParticleArrayWrapper(pa) for pa in particles] # radius scale and problem dimensionality. self.radius_scale = radius_scale self.dim = dim self.domain = domain if domain is None: self.domain = DomainManager() # set the particle array wrappers for the domain manager self.domain.set_pa_wrappers(self.pa_wrappers) # set the radius scale to determine the cell size self.domain.set_radius_scale(self.radius_scale) # periodicity self.is_periodic = self.domain.manager.is_periodic # The total number of cells. self.n_cells = 0 # cell shifts. Indicates the shifting value for cell indices # in each co-ordinate direction. self.cell_shifts = IntArray(3) self.cell_shifts.data[0] = -1 self.cell_shifts.data[1] = 0 self.cell_shifts.data[2] = 1 cpdef brute_force_neighbors(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs): cdef NNPSParticleArrayWrapper src = self.pa_wrappers[src_index] cdef NNPSParticleArrayWrapper dst = self.pa_wrappers[dst_index] cdef DoubleArray s_x = src.x cdef DoubleArray s_y = src.y cdef DoubleArray s_z = src.z cdef DoubleArray s_h = src.h cdef DoubleArray d_x = dst.x cdef DoubleArray d_y = dst.y cdef DoubleArray d_z = dst.z cdef DoubleArray d_h = dst.h cdef double cell_size = self.cell_size cdef double radius_scale = self.radius_scale cdef size_t num_particles, j num_particles = s_x.length cdef double xi = d_x.data[d_idx] cdef double yi = d_y.data[d_idx] cdef double zi = d_z.data[d_idx] cdef double hi = d_h.data[d_idx] * radius_scale # gather radius cdef double xj, yj, hj, xij2, xij # reset the neighbors nbrs.reset() for j in range(num_particles): xj = s_x.data[j]; yj = s_y.data[j]; zj = s_z.data[j]; hj = radius_scale * s_h.data[j] # scatter radius xij2 = (xi - xj)*(xi - xj) + \ (yi - yj)*(yi - yj) + \ (zi - zj)*(zi - zj) xij = sqrt(xij2) if ( (xij < hi) or (xij < hj) ): 
nbrs.append( j ) cpdef get_nearest_particles_no_cache(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bint prealloc): """Find nearest neighbors for particle id 'd_idx' without cache Parameters ---------- src_index: int Index in the list of particle arrays to which the neighbors belong dst_index: int Index in the list of particle arrays to which the query point belongs d_idx: size_t Index of the query point in the destination particle array nbrs: UIntArray Array to be populated by nearest neighbors of 'd_idx' """ self.set_context(src_index, dst_index) if prealloc: nbrs.length = 0 else: nbrs.c_reset() self.find_nearest_neighbors(d_idx, nbrs) cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil: # Implement this in the subclass to actually do something useful. pass cpdef get_nearest_particles(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs): cdef int idx = dst_index*self.narrays + src_index if self.use_cache: if self.src_index != src_index \ or self.dst_index != dst_index: self.set_context(src_index, dst_index) return self.cache[idx].get_neighbors(src_index, d_idx, nbrs) else: return self.get_nearest_particles_no_cache( src_index, dst_index, d_idx, nbrs, False ) cpdef set_context(self, int src_index, int dst_index): """Setup the context before asking for neighbors. The `dst_index` represents the particles for whom the neighbors are to be determined from the particle array with index `src_index`. Parameters ---------- src_index: int: the source index of the particle array. dst_index: int: the destination index of the particle array. """ cdef int idx = dst_index*self.narrays + src_index self.src_index = src_index self.dst_index = dst_index self.current_cache = self.cache[idx] cdef class NNPS(NNPSBase): """Nearest neighbor query class using the box-sort algorithm. NNPS bins all local particles using the box sort algorithm in Cells. 
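The brute-force search above accepts `j` as a neighbor of `d_idx` if the pair distance is within *either* the gather radius (`radius_scale * h_i`) or the scatter radius (`radius_scale * h_j`). A plain-Python sketch of that O(N) loop (dict-of-lists inputs are an illustrative convenience, not the PySPH API):

```python
import math

def brute_force_neighbors(d_idx, dst, src, radius_scale=2.0):
    """O(N) neighbor search: j is a neighbor of d_idx if the distance
    is below the gather radius of i or the scatter radius of j."""
    xi, yi, zi = dst['x'][d_idx], dst['y'][d_idx], dst['z'][d_idx]
    hi = radius_scale * dst['h'][d_idx]          # gather radius
    nbrs = []
    for j in range(len(src['x'])):
        hj = radius_scale * src['h'][j]          # scatter radius
        rij = math.sqrt((xi - src['x'][j])**2 +
                        (yi - src['y'][j])**2 +
                        (zi - src['z'][j])**2)
        if rij < hi or rij < hj:
            nbrs.append(j)
    return nbrs
```

This is what every accelerated NNPS implementation must reproduce; it is only useful as a correctness reference for tests.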
The cells are stored in a dictionary 'cells' which is keyed on the spatial index (IntPoint) of the cell. """ def __init__(self, int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bint cache=False, bint sort_gids=False): NNPSBase.__init__(self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids) # min and max coordinate values self.xmin = DoubleArray(3) self.xmax = DoubleArray(3) self._last_domain_size = 0.0 # The cache. self.use_cache = cache _cache = [] for d_idx in range(len(particles)): for s_idx in range(len(particles)): _cache.append(NeighborCache(self, d_idx, s_idx)) self.cache = _cache #### Public protocol ################################################# def set_in_parallel(self, bint in_parallel): self.domain.manager.in_parallel = in_parallel def set_use_cache(self, bint use_cache): self.use_cache = use_cache if use_cache: for cache in self.cache: cache.update() def update_domain(self): self.domain.update() cpdef update(self): """Update the local data after particles have moved. For parallel runs, we want the NNPS to be independent of the ParallelManager which is solely responsible for distributing particles across available processors. We assume therefore that after a parallel update, each processor has all the local particle information it needs and this operation is carried out locally. For serial runs, this method should be called when the particles have moved. """ cdef int i, num_particles cdef ParticleArray pa cdef UIntArray indices cdef DomainManager domain = self.domain # use cell sizes computed by the domain. self.cell_size = domain.manager.cell_size self.hmin = domain.manager.hmin # compute bounds and refresh the data structure self._compute_bounds() self._refresh() # indices on which to bin. 
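The per-pair neighbor caches are kept in one flat list: lookups in `get_nearest_particles` and `set_context` use `idx = dst_index*self.narrays + src_index`, and the constructor builds the list with the destination index varying slowest. A small sketch of that layout:

```python
def cache_index(dst_index, src_index, narrays):
    """Flat position of the (dst, src) neighbor cache, matching
    idx = dst_index*narrays + src_index in the code above."""
    return dst_index * narrays + src_index

def build_cache_pairs(narrays):
    """Creation order of the caches: dst index outer, src index inner,
    mirroring the nested loops in NNPS.__init__."""
    return [(d, s) for d in range(narrays) for s in range(narrays)]
```

The point of the layout is that both the construction loop and the index formula agree, so `cache[idx]` always holds the cache for the requested array pair.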
        # We bin all local particles
        for i in range(self.narrays):
            pa = self.particles[i]
            num_particles = pa.get_number_of_particles()
            indices = arange_uint(num_particles)

            # bin the particles
            self._bin( pa_index=i, indices=indices )

        if self.use_cache:
            for cache in self.cache:
                cache.update()

    cdef void get_nearest_neighbors(self, size_t d_idx,
                                    UIntArray nbrs) nogil:
        if self.use_cache:
            self.current_cache.get_neighbors_raw(d_idx, nbrs)
        else:
            nbrs.c_reset()
            self.find_nearest_neighbors(d_idx, nbrs)

    #### Private protocol ################################################

    cdef _compute_bounds(self):
        """Compute coordinate bounds for the particles"""
        cdef list pa_wrappers = self.pa_wrappers
        cdef NNPSParticleArrayWrapper pa_wrapper
        cdef DoubleArray x, y, z
        cdef double xmax = -1e100, ymax = -1e100, zmax = -1e100
        cdef double xmin = 1e100, ymin = 1e100, zmin = 1e100
        cdef double lx, ly, lz, domain_size
        for pa_wrapper in pa_wrappers:
            x = pa_wrapper.x
            y = pa_wrapper.y
            z = pa_wrapper.z

            # find min and max of variables
            x.update_min_max()
            y.update_min_max()
            z.update_min_max()

            xmax = fmax(x.maximum, xmax)
            ymax = fmax(y.maximum, ymax)
            zmax = fmax(z.maximum, zmax)

            xmin = fmin(x.minimum, xmin)
            ymin = fmin(y.minimum, ymin)
            zmin = fmin(z.minimum, zmin)

        # Add a small offset to the limits.
        lx, ly, lz = xmax - xmin, ymax - ymin, zmax - zmin
        xmin -= lx*0.01; ymin -= ly*0.01; zmin -= lz*0.01
        xmax += lx*0.01; ymax += ly*0.01; zmax += lz*0.01

        domain_size = fmax(lx, ly)
        domain_size = fmax(domain_size, lz)
        if self._last_domain_size > 1e-16 and \
           domain_size > 2.0*self._last_domain_size:
            msg = (
                '*'*70 +
                '\nWARNING: Domain size has increased by a large amount.\n' +
                'Particles are probably diverging, please check your code!\n' +
                '*'*70
            )
            print(msg)
        self._last_domain_size = domain_size

        # If all of the dimensions have very small extent give it a unit size.
cdef double _eps = 1e-12 if (fabs(xmax - xmin) < _eps) and (fabs(ymax - ymin) < _eps) \ and (fabs(zmax - zmin) < _eps): xmin -= 0.5; xmax += 0.5 ymin -= 0.5; ymax += 0.5 zmin -= 0.5; zmax += 0.5 # store the minimum and maximum of physical coordinates self.xmin.set_data(np.asarray([xmin, ymin, zmin])) self.xmax.set_data(np.asarray([xmax, ymax, zmax])) cdef void _sort_neighbors(self, unsigned int* nbrs, size_t length, unsigned int *gids) nogil: if length == 0: return cdef id_gid_pair_t _entry cdef vector[id_gid_pair_t] _data cdef vector[unsigned int] _ids cdef int i cdef unsigned int _id if gids[0] == UINT_MAX: # Serial runs will have invalid gids so just compare the ids. _ids.resize(length) for i in range(length): _ids[i] = nbrs[i] sort(_ids.begin(), _ids.end()) for i in range(length): nbrs[i] = _ids[i] else: # Copy the neighbor id and gid data. _data.resize(length) for i in range(length): _id = nbrs[i] _entry.first = _id _entry.second = gids[_id] _data[i] = _entry # Sort it. sort(_data.begin(), _data.end(), &_compare_gids) # Set the sorted neighbors. for i in range(length): nbrs[i] = _data[i].first cpdef _bin(self, int pa_index, UIntArray indices): raise NotImplementedError("NNPS :: _bin called") cpdef _refresh(self): raise NotImplementedError("NNPS :: _refresh called") cpdef get_spatially_ordered_indices(self, int pa_index, LongArray indices): raise NotImplementedError("NNPSBase :: get_spatially_ordered_indices called") cpdef spatially_order_particles(self, int pa_index): """Spatially order particles such that nearby particles have indices nearer each other. This may improve pre-fetching on the CPU. 
""" cdef LongArray indices = LongArray() cdef ParticleArray pa = self.pa_wrappers[pa_index].pa self.get_spatially_ordered_indices(pa_index, indices) cdef BaseArray arr for name, arr in pa.properties.items(): stride = pa.stride.get(name, 1) arr.c_align_array(indices, stride) pysph-master/pysph/base/octree.pxd000066400000000000000000000076321356347341600176110ustar00rootroot00000000000000#cython: embedsignature=True from nnps_base cimport * from libcpp.vector cimport vector cimport cython import numpy as np cimport numpy as np ctypedef unsigned int u_int cdef extern from 'math.h': int abs(int) nogil double ceil(double) nogil double floor(double) nogil double fabs(double) nogil double fmax(double, double) nogil double fmin(double, double) nogil cdef extern from "" namespace "std" nogil: OutputIter copy[InputIter,OutputIter](InputIter,InputIter,OutputIter) cdef struct cOctreeNode: bint is_leaf double length double xmin[3] double hmax int num_particles int level int start_index cOctreeNode* children[8] cOctreeNode* parent cdef class OctreeNode: ########################################################################## # Data Attributes ########################################################################## cdef cOctreeNode* _node cdef public bint is_leaf cdef public double length cdef public np.ndarray xmin cdef public double hmax cdef public int num_particles cdef public int level ########################################################################## # Member functions ########################################################################## cdef void wrap_node(self, cOctreeNode* node) cpdef OctreeNode get_parent(self) cpdef UIntArray get_indices(self, Octree tree) cpdef list get_children(self) cpdef plot(self, ax, color = *) cdef class Octree: ########################################################################## # Data Attributes ########################################################################## cdef cOctreeNode* root cdef vector[cOctreeNode*]* 
leaf_cells cdef u_int* pids cdef int _next_pid cdef public int num_particles cdef public int leaf_max_particles cdef public double hmax cdef public double length cdef public int depth cdef double machine_eps cdef double xmin[3] cdef double xmax[3] ########################################################################## # Member functions ########################################################################## cdef inline double _get_eps(self, double length, double* xmin) nogil cdef inline void _calculate_domain(self, NNPSParticleArrayWrapper pa) cdef inline cOctreeNode* _new_node(self, double* xmin, double length, double hmax = *, int level = *, cOctreeNode* parent = *, int num_particles = *, bint is_leaf = *) nogil cdef inline void _delete_tree(self, cOctreeNode* node) cdef int _c_build_tree(self, NNPSParticleArrayWrapper pa, vector[u_int]* indices, double* xmin, double length, cOctreeNode* node, int level) nogil cdef void _plot_tree(self, OctreeNode node, ax) cdef int c_build_tree(self, NNPSParticleArrayWrapper pa_wrapper) cdef void _c_get_leaf_cells(self, cOctreeNode* node) cdef void c_get_leaf_cells(self) cdef cOctreeNode* c_find_point(self, double x, double y, double z) cpdef int build_tree(self, ParticleArray pa) cpdef delete_tree(self) cpdef OctreeNode get_root(self) cpdef list get_leaf_cells(self) cpdef OctreeNode find_point(self, double x, double y, double z) cpdef plot(self, ax) cdef class CompressedOctree(Octree): ########################################################################## # Data Attributes ########################################################################## cdef double dbl_max ########################################################################## # Member functions ########################################################################## cdef int _c_build_tree(self, NNPSParticleArrayWrapper pa, vector[u_int]* indices, double* xmin, double length, cOctreeNode* node, int level) nogil 
pysph-master/pysph/base/octree.pyx000066400000000000000000000447501356347341600176400ustar00rootroot00000000000000#cython: embedsignature=True from nnps_base cimport * from libc.stdlib cimport malloc, free from libcpp.vector cimport vector cimport cython from cython.operator cimport dereference as deref, preincrement as inc import numpy as np cimport numpy as np # EPS_MAX is maximum value of eps in tree building DEF EPS_MAX = 1e-3 IF UNAME_SYSNAME == "Windows": cdef inline double fmin(double x, double y) nogil: return x if x < y else y cdef inline double fmax(double x, double y) nogil: return x if x > y else y ######################################################################## cdef class OctreeNode: def __init__(self): self.xmin = np.zeros(3, dtype=np.float64) @cython.boundscheck(False) @cython.wraparound(False) def __richcmp__(self, other, int op): # Checks if two nodes are the same. # Two nodes are equal if xmin and length are equal. # op = 2 corresponds to "==" # op = 3 corresponds to "!=" if type(other) != OctreeNode: if op == 2: return False if op == 3: return True return NotImplemented cdef bint equal_xmin, equal_length equal_xmin = True equal_length = (self.length == other.length) cdef int i for i from 0<=i<3: if self.xmin[i] != other.xmin[i]: equal_xmin = False cdef bint equal = equal_xmin and equal_length if op == 2: return equal if op == 3: return not equal return NotImplemented #### Public protocol ################################################ @cython.boundscheck(False) @cython.wraparound(False) cdef void wrap_node(self, cOctreeNode* node): self._node = node self.hmax = node.hmax self.length = node.length self.is_leaf = node.is_leaf self.level = node.level self.num_particles = node.num_particles self.xmin[0] = self._node.xmin[0] self.xmin[1] = self._node.xmin[1] self.xmin[2] = self._node.xmin[2] cpdef OctreeNode get_parent(self): """ Get parent of the node. 
Returns ------- parent : OctreeNode """ if self._node.parent == NULL: return None cdef OctreeNode parent = OctreeNode() parent.wrap_node(self._node.parent) return parent cpdef UIntArray get_indices(self, Octree tree): """ Get indices of a node. Returns empty UIntArray if node is not a leaf. Returns ------- indices : UIntArray """ if not self._node.is_leaf: return UIntArray() cdef int idx = self._node.start_index cdef UIntArray node_indices = UIntArray() cdef u_int* indices = tree.pids node_indices.c_set_view(indices + idx, self._node.num_particles) return node_indices cpdef list get_children(self): """ Get the children of a node. Returns ------- children : list """ if self._node.is_leaf: return [] cdef int i cdef list py_children = [None for i in range(8)] cdef OctreeNode py_node for i from 0<=i<8: if self._node.children[i] != NULL: py_node = OctreeNode() py_node.wrap_node(self._node.children[i]) py_children[i] = py_node return py_children @cython.boundscheck(False) @cython.wraparound(False) cpdef plot(self, ax, color="k"): """ Plots a node. 
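`OctreeNode.plot` draws the node's cube as 12 edges in three groups of four: for each axis, the other two coordinates are fixed at the four corner combinations. The same enumeration in plain Python (a hypothetical helper, not part of PySPH):

```python
import itertools

def cube_edges(xmin, length):
    """Return the 12 edges of an axis-aligned cube as (lo, hi) pairs,
    grouped per axis the way OctreeNode.plot iterates them."""
    edges = []
    for axis in range(3):
        others = [a for a in range(3) if a != axis]
        for bits in itertools.product((0, 1), repeat=2):
            lo = list(xmin)
            for a, b in zip(others, bits):
                lo[a] += b * length
            hi = list(lo)
            hi[axis] += length       # the edge runs along `axis`
            edges.append((tuple(lo), tuple(hi)))
    return edges
```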
Parameters ---------- ax : mpl_toolkits.mplot3d.Axes3D instance color : color hex string/letter, default ("k") """ cdef int i, j, k cdef double x, y, z cdef list ax_points = [0,0] for i from 0<=i<2: for j from 0<=j<2: x = self.xmin[0] + i*self.length y = self.xmin[1] + j*self.length for k from 0<=k<2: ax_points[k] = self.xmin[2] + k*self.length ax.plot([x,x], [y,y], zs=ax_points[:], color=color) for i from 0<=i<2: for k from 0<=k<2: x = self.xmin[0] + i*self.length z = self.xmin[2] + k*self.length for j from 0<=j<2: ax_points[j] = self.xmin[1] + j*self.length ax.plot([x,x], ax_points[:], zs=[z,z], color=color) for j from 0<=j<2: for k from 0<=k<2: y = self.xmin[1] + j*self.length z = self.xmin[2] + k*self.length for i from 0<=i<2: ax_points[i] = self.xmin[0] + i*self.length ax.plot(ax_points[:], [y,y], zs=[z,z], color=color) cdef class Octree: def __init__(self, int leaf_max_particles): self.leaf_max_particles = leaf_max_particles self.depth = 0 self.root = NULL self.leaf_cells = NULL self.machine_eps = 16*np.finfo(float).eps self.pids = NULL def __dealloc__(self): if self.root != NULL: self._delete_tree(self.root) if self.pids != NULL: free(self.pids) if self.leaf_cells != NULL: del self.leaf_cells #### Private protocol ################################################ @cython.cdivision(True) cdef inline double _get_eps(self, double length, double* xmin) nogil: return (self.machine_eps/length)*fmax(length, fmax(fmax(fabs(xmin[0]), fabs(xmin[1])), fabs(xmin[2]))) @cython.cdivision(True) cdef inline void _calculate_domain(self, NNPSParticleArrayWrapper pa_wrapper): cdef int num_particles = pa_wrapper.get_number_of_particles() pa_wrapper.pa.update_min_max() self.xmin[0] = pa_wrapper.x.minimum self.xmin[1] = pa_wrapper.y.minimum self.xmin[2] = pa_wrapper.z.minimum self.xmax[0] = pa_wrapper.x.maximum self.xmax[1] = pa_wrapper.y.maximum self.xmax[2] = pa_wrapper.z.maximum self.hmax = pa_wrapper.h.maximum cdef double x_length = self.xmax[0] - self.xmin[0] cdef double 
y_length = self.xmax[1] - self.xmin[1] cdef double z_length = self.xmax[2] - self.xmin[2] self.length = fmax(x_length, fmax(y_length, z_length)) cdef double eps = self._get_eps(self.length, self.xmin) self.xmin[0] -= self.length*eps self.xmin[1] -= self.length*eps self.xmin[2] -= self.length*eps self.length *= (1 + 2*eps) cdef inline cOctreeNode* _new_node(self, double* xmin, double length, double hmax = 0, int level = 0, cOctreeNode* parent = NULL, int num_particles = 0, bint is_leaf = False) nogil: """Create a new cOctreeNode""" cdef cOctreeNode* node = malloc(sizeof(cOctreeNode)) node.xmin[0] = xmin[0] node.xmin[1] = xmin[1] node.xmin[2] = xmin[2] node.length = length node.hmax = hmax node.num_particles = num_particles node.is_leaf = is_leaf node.level = level node.start_index = -1 node.parent = parent cdef int i for i from 0<=i<8: node.children[i] = NULL return node cdef inline void _delete_tree(self, cOctreeNode* node): """Delete octree""" cdef int i cdef cOctreeNode* temp[8] for i from 0<=i<8: temp[i] = node.children[i] free(node) for i from 0<=i<8: if temp[i] == NULL: continue self._delete_tree(temp[i]) @cython.cdivision(True) cdef int _c_build_tree(self, NNPSParticleArrayWrapper pa, vector[u_int]* indices, double* xmin, double length, cOctreeNode* node, int level) nogil: cdef double* src_x_ptr = pa.x.data cdef double* src_y_ptr = pa.y.data cdef double* src_z_ptr = pa.z.data cdef double* src_h_ptr = pa.h.data cdef double xmin_new[3] cdef double hmax_children[8] cdef int depth_child = 0 cdef int depth_max = 0 cdef int i, j, k cdef u_int p, q for i from 0<=i<8: hmax_children[i] = 0 cdef int oct_id # This is required to fix floating point errors. 
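`_calculate_domain` takes the tight bounding cube of the particles and inflates it by a coordinate-magnitude-aware epsilon so that boundary particles land strictly inside the root node. A sketch of the same arithmetic (the real code reads extrema from particle-array wrappers; `16*eps` mirrors `16*np.finfo(float).eps`):

```python
import sys

def padded_domain(xs, ys, zs, machine_eps=16 * sys.float_info.epsilon):
    """Compute the octree root cube (xmin, length) with the epsilon
    padding used by Octree._calculate_domain."""
    xmin = [min(xs), min(ys), min(zs)]
    xmax = [max(xs), max(ys), max(zs)]
    length = max(b - a for a, b in zip(xmin, xmax))
    # eps scales with the magnitude of the coordinates involved
    eps = (machine_eps / length) * max(length,
                                       max(abs(v) for v in xmin))
    xmin = [v - length * eps for v in xmin]
    length *= (1 + 2 * eps)
    return xmin, length
```

Without the padding, a particle sitting exactly on the maximum coordinate would be classified into a non-existent ninth octant by the floor-based octant computation.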
One such case # is mentioned in pysph.base.tests.test_octree cdef double eps = 2*self._get_eps(length, xmin) if (indices.size() < self.leaf_max_particles) or (eps > EPS_MAX): copy(indices.begin(), indices.end(), self.pids + self._next_pid) node.start_index = self._next_pid self._next_pid += indices.size() node.num_particles = indices.size() del indices node.is_leaf = True return 1 cdef vector[u_int]* new_indices[8] for i from 0<=i<8: new_indices[i] = new vector[u_int]() for p from 0<=pfmax(depth_max, depth_child) return 1 + depth_max cdef void _plot_tree(self, OctreeNode node, ax): node.plot(ax) cdef OctreeNode child cdef list children = node.get_children() for child in children: if child == None: continue self._plot_tree(child, ax) cdef void _c_get_leaf_cells(self, cOctreeNode* node): if node.is_leaf: self.leaf_cells.push_back(node) return cdef int i for i from 0<=i<8: if node.children[i] != NULL: self._c_get_leaf_cells(node.children[i]) #### Public protocol ################################################ cdef int c_build_tree(self, NNPSParticleArrayWrapper pa_wrapper): self._calculate_domain(pa_wrapper) cdef int num_particles = pa_wrapper.get_number_of_particles() cdef vector[u_int]* indices_ptr = new vector[u_int]() self.num_particles = num_particles cdef int i for i from 0<=i malloc(num_particles*sizeof(u_int)) self._next_pid = 0 self.root = self._new_node(self.xmin, self.length, hmax=self.hmax, level=0) self.depth = self._c_build_tree(pa_wrapper, indices_ptr, self.root.xmin, self.root.length, self.root, 0) return self.depth cdef void c_get_leaf_cells(self): if self.leaf_cells != NULL: return self.leaf_cells = new vector[cOctreeNode*]() self._c_get_leaf_cells(self.root) @cython.cdivision(True) cdef cOctreeNode* c_find_point(self, double x, double y, double z): cdef cOctreeNode* node = self.root cdef cOctreeNode* prev = self.root cdef int i, j, k, oct_id while node != NULL: find_cell_id_raw( x - node.xmin[0], y - node.xmin[1], z - node.xmin[2], node.length/2, 
&i, &j, &k ) oct_id = k+2*j+4*i prev = node node = node.children[oct_id] return prev ###################################################################### cpdef int build_tree(self, ParticleArray pa): """ Build tree. Parameters ---------- pa : ParticleArray Returns ------- depth : int Maximum depth of the tree """ cdef NNPSParticleArrayWrapper pa_wrapper = NNPSParticleArrayWrapper(pa) return self.c_build_tree(pa_wrapper) cpdef delete_tree(self): """ Delete tree""" if self.root != NULL: self._delete_tree(self.root) if self.leaf_cells != NULL: del self.leaf_cells self.root = NULL self.leaf_cells = NULL cpdef OctreeNode get_root(self): """ Get root of the tree Returns ------- root : OctreeNode Root of the tree """ cdef OctreeNode py_node = OctreeNode() py_node.wrap_node(self.root) return py_node cpdef list get_leaf_cells(self): """ Get all leaf cells in the tree Returns ------- leaf_cells : list List of leaf cells in the tree """ self.c_get_leaf_cells() cdef int i cdef list py_leaf_cells = [OctreeNode() for i in range(self.leaf_cells.size())] for i from 0<=ipy_leaf_cells[i]).wrap_node(deref(self.leaf_cells)[i]) return py_leaf_cells cpdef OctreeNode find_point(self, double x, double y, double z): """ Get the leaf node to which a point belongs Parameters ---------- x, y, z : double Co-ordinates of the point Returns ------- node : OctreeNode Leaf node to which the point belongs """ cdef cOctreeNode* node = self.c_find_point(x, y, z) cdef OctreeNode py_node = OctreeNode() py_node.wrap_node(node) return py_node cpdef plot(self, ax): """ Plots the tree Parameters ---------- ax : mpl_toolkits.mplot3d.Axes3D instance """ cdef OctreeNode root = self.get_root() self._plot_tree(root, ax) cdef class CompressedOctree(Octree): def __init__(self, int leaf_max_particles): Octree.__init__(self, leaf_max_particles) self.dbl_max = np.finfo(float).max @cython.cdivision(True) cdef int _c_build_tree(self, NNPSParticleArrayWrapper pa, vector[u_int]* indices, double* xmin, double length, 
cOctreeNode* node, int level) nogil: cdef double* src_x_ptr = pa.x.data cdef double* src_y_ptr = pa.y.data cdef double* src_z_ptr = pa.z.data cdef double* src_h_ptr = pa.h.data cdef double xmin_new[8][3] cdef double xmax_new[8][3] cdef double length_new = 0 cdef double hmax_children[8] cdef int depth_child = 0 cdef int depth_max = 0 cdef int i, j, k cdef u_int p, q for i from 0<=i<8: hmax_children[i] = 0 for j from 0<=j<3: xmin_new[i][j] = self.dbl_max xmax_new[i][j] = -self.dbl_max cdef int oct_id if (indices.size() < self.leaf_max_particles): copy(indices.begin(), indices.end(), self.pids + self._next_pid) node.start_index = self._next_pid self._next_pid += indices.size() node.num_particles = indices.size() del indices node.is_leaf = True return 1 cdef vector[u_int]* new_indices[8] for i from 0<=i<8: new_indices[i] = new vector[u_int]() cdef double* xmin_current cdef double* xmax_current for p from 0<=pfmax(depth_max, depth_child) return 1 + depth_max pysph-master/pysph/base/octree_gpu_nnps.pxd000066400000000000000000000017431356347341600215170ustar00rootroot00000000000000from pysph.base.gpu_nnps_base cimport * cdef class OctreeGPUNNPS(GPUNNPS): cdef NNPSParticleArrayWrapper src, dst # Current source and destination. 
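Both `_c_build_tree` variants follow the same recursion: partition a node's particle indices into the 8 octants via `oct_id = k + 2*j + 4*i` (with `i, j, k` the half-cell bin along x, y, z) and recurse until a node holds at most `leaf_max_particles`. A dict-based plain-Python sketch — illustrative only; the Cython version works in place on arrays and also stops on a floating-point epsilon guard (`eps > EPS_MAX`) that this sketch omits:

```python
def build_octree(points, xmin, length, leaf_max_particles=10):
    """Recursive octree build returning a nested dict tree.
    `points` is a list of (x, y, z) tuples."""
    def build(indices, xmin, length):
        if len(indices) <= leaf_max_particles:
            return {'leaf': True, 'indices': indices}
        half = length / 2
        buckets = [[] for _ in range(8)]
        for p in indices:
            x, y, z = points[p]
            i = int(x - xmin[0] >= half)
            j = int(y - xmin[1] >= half)
            k = int(z - xmin[2] >= half)
            buckets[k + 2 * j + 4 * i].append(p)
        children = []
        for oct_id, bucket in enumerate(buckets):
            i, j, k = oct_id >> 2 & 1, oct_id >> 1 & 1, oct_id & 1
            child_xmin = (xmin[0] + i * half, xmin[1] + j * half,
                          xmin[2] + k * half)
            children.append(build(bucket, child_xmin, half))
        return {'leaf': False, 'children': children}
    return build(list(range(len(points))), xmin, length)
```

The compressed variant differs only in shrinking each child's box to the tight extent of the particles it actually receives.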
cdef public list pids cdef public list pid_keys cdef public list cids cdef public list cid_to_idx cdef public list max_cid cdef public object dst_to_src cdef object overflow_cid_to_idx cdef object curr_cid cdef object max_cid_src cdef object helper cdef object radix_sort cdef object make_vec cdef public bint allow_sort cdef bint dst_src cdef public object neighbor_cid_counts cdef public object neighbor_cids cdef public list octrees cdef public bint use_elementwise cdef public bint use_partitions cdef public object leaf_size cpdef _bin(self, int pa_index) cpdef _refresh(self) cdef void find_neighbor_lengths(self, nbr_lengths) cdef void find_nearest_neighbors_gpu(self, nbrs, start_indices) cpdef get_kernel_args(self, c_type) pysph-master/pysph/base/octree_gpu_nnps.pyx000066400000000000000000000127051356347341600215440ustar00rootroot00000000000000#cython: embedsignature=True import numpy as np cimport numpy as np from pysph.base.tree.point_tree import PointTree cdef class OctreeGPUNNPS(GPUNNPS): def __init__(self, int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bint fixed_h=False, bint cache=True, bint sort_gids=False, allow_sort=False, leaf_size=32, bint use_elementwise=False, bint use_partitions=False, backend=None): GPUNNPS.__init__( self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids, backend=backend ) self.src_index = -1 self.dst_index = -1 self.sort_gids = sort_gids self.leaf_size = leaf_size cdef NNPSParticleArrayWrapper pa_wrapper cdef int i, num_particles self.octrees = [] if self.use_double: from warnings import warn warn("Octree NNPS by default uses single precision arithmetic for" "finding neighbors. 
A few particles outside of the original " "search radius might be included.") for i in range(self.narrays): self.octrees.append(PointTree(pa=self.pa_wrappers[i].pa, radius_scale=radius_scale, use_double=self.use_double, leaf_size=leaf_size, dim=dim)) self.use_elementwise = use_elementwise self.use_partitions = use_partitions self.allow_sort = allow_sort self.domain.update() self.update() # Check if device supports required workgroup size, # else default to elementwise nnps if not self.octrees[0]._is_valid_nnps_wgs(): from warnings import warn warn("Octree NNPS with given leaf size (%d) is " "not supported for given device. Switching to a elementwise " "version of the Octree NNPS" % leaf_size) self.use_elementwise = True cpdef _bin(self, int pa_index): self.octrees[pa_index].refresh(self.xmin, self.xmax, self.domain.manager.hmin) self.octrees[pa_index].set_node_bounds() if self.allow_sort: self.spatially_order_particles(pa_index) def get_spatially_ordered_indices(self, int pa_index): def update(): self.octrees[pa_index]._sort() return self.octrees[pa_index].pids, update cpdef _refresh(self): pass cpdef set_context(self, int src_index, int dst_index): """Setup the context before asking for neighbors. The `dst_index` represents the particles for whom the neighbors are to be determined from the particle array with index `src_index`. Parameters ---------- src_index: int: the source index of the particle array. dst_index: int: the destination index of the particle array. 
""" GPUNNPS.set_context(self, src_index, dst_index) self.src_index = src_index self.dst_index = dst_index octree_src = self.octrees[src_index] octree_dst = self.octrees[dst_index] self.dst_src = src_index != dst_index self.neighbor_cid_counts, self.neighbor_cids = octree_dst.find_neighbor_cids( octree_src) cdef void find_neighbor_lengths(self, nbr_lengths): octree_src = self.octrees[self.src_index] octree_dst = self.octrees[self.dst_index] args = [] if self.use_elementwise: find_neighbor_lengths = octree_dst.find_neighbor_lengths_elementwise else: find_neighbor_lengths = octree_dst.find_neighbor_lengths find_neighbor_lengths( self.neighbor_cid_counts, self.neighbor_cids, octree_src, nbr_lengths, *args ) cdef void find_nearest_neighbors_gpu(self, nbrs, start_indices): octree_src = self.octrees[self.src_index] octree_dst = self.octrees[self.dst_index] args = [] if self.use_elementwise: find_neighbors = octree_dst.find_neighbors_elementwise else: find_neighbors = octree_dst.find_neighbors args.append(self.use_partitions) find_neighbors( self.neighbor_cid_counts, self.neighbor_cids, octree_src, start_indices, nbrs, *args ) cpdef get_kernel_args(self, c_type): octree_dst = self.octrees[self.dst_index] octree_src = self.octrees[self.src_index] pa_gpu_dst = octree_dst.pa.gpu pa_gpu_src = octree_src.pa.gpu return [ octree_dst.unique_cids.dev.data, octree_src.pids.dev.data, octree_dst.pids.dev.data, octree_dst.cids.dev.data, octree_src.pbounds.dev.data, octree_dst.pbounds.dev.data, np.float32(octree_dst.radius_scale), self.neighbor_cid_counts.dev.data, self.neighbor_cids.dev.data, pa_gpu_dst.x.dev.data, pa_gpu_dst.y.dev.data, pa_gpu_dst.z.dev.data, pa_gpu_dst.h.dev.data, pa_gpu_src.x.dev.data, pa_gpu_src.y.dev.data, pa_gpu_src.z.dev.data, pa_gpu_src.h.dev.data ], [ (self.leaf_size * octree_dst.unique_cid_count,), (self.leaf_size,) ] pysph-master/pysph/base/octree_nnps.pxd000066400000000000000000000035341356347341600206440ustar00rootroot00000000000000#cython: 
embedsignature=True
from nnps_base cimport *
from octree cimport Octree, CompressedOctree, cOctreeNode

from libcpp.vector cimport vector

cimport cython

ctypedef unsigned int u_int

cdef extern from 'math.h':
    int abs(int) nogil
    double ceil(double) nogil
    double floor(double) nogil
    double fabs(double) nogil
    double fmax(double, double) nogil
    double fmin(double, double) nogil

cdef class OctreeNNPS(NNPS):
    ##########################################################################
    # Data Attributes
    ##########################################################################
    cdef list tree
    cdef cOctreeNode* current_tree
    cdef u_int* current_pids

    cdef double radius_scale2

    cdef NNPSParticleArrayWrapper dst, src

    cdef int leaf_max_particles

    ##########################################################################
    # Member functions
    ##########################################################################
    cdef void find_nearest_neighbors(self, size_t d_idx,
                                     UIntArray nbrs) nogil

    cpdef set_context(self, int src_index, int dst_index)

    cdef void _get_neighbors(self, double q_x, double q_y, double q_z,
                             double q_h, double* src_x_ptr,
                             double* src_y_ptr, double* src_z_ptr,
                             double* src_h_ptr, UIntArray nbrs,
                             cOctreeNode* node) nogil

    cpdef get_spatially_ordered_indices(self, int pa_index,
                                        LongArray indices)

    cpdef _refresh(self)

    cpdef _bin(self, int pa_index, UIntArray indices)


cdef class CompressedOctreeNNPS(OctreeNNPS):
    ##########################################################################
    # Member functions
    ##########################################################################
    cpdef set_context(self, int src_index, int dst_index)

    cpdef _refresh(self)

    cpdef _bin(self, int pa_index, UIntArray indices)

pysph-master/pysph/base/octree_nnps.pyx
#cython: embedsignature=True
from nnps_base cimport *
from octree cimport Octree, CompressedOctree, cOctreeNode

from libcpp.vector cimport vector
from libc.stdlib
cimport malloc, free

cimport cython
from cython.operator cimport dereference as deref, preincrement as inc

IF UNAME_SYSNAME == "Windows":
    cdef inline double fmin(double x, double y) nogil:
        return x if x < y else y
    cdef inline double fmax(double x, double y) nogil:
        return x if x > y else y

#############################################################################
cdef class OctreeNNPS(NNPS):
    """Nearest neighbor search using Octree.
    """
    def __init__(self, int dim, list particles, double radius_scale = 2.0,
            int ghost_layers = 1, domain=None, bint fixed_h = False,
            bint cache = False, bint sort_gids = False,
            int leaf_max_particles = 10):
        NNPS.__init__(
            self, dim, particles, radius_scale, ghost_layers, domain,
            cache, sort_gids
        )

        cdef int i
        self.tree = [Octree(leaf_max_particles) for i in range(self.narrays)]

        self.radius_scale2 = radius_scale*radius_scale

        self.src_index = 0
        self.dst_index = 0
        self.leaf_max_particles = leaf_max_particles
        self.current_tree = NULL
        self.current_pids = NULL

        self.sort_gids = sort_gids
        self.domain.update()
        self.update()

    #### Public protocol ################################################

    cpdef set_context(self, int src_index, int dst_index):
        """Set context for nearest neighbor searches.

        Parameters
        ----------
        src_index: int
            Index in the list of particle arrays to which the neighbors
            belong

        dst_index: int
            Index in the list of particle arrays to which the query point
            belongs
        """
        NNPS.set_context(self, src_index, dst_index)
        self.current_tree = (<Octree>self.tree[src_index]).root
        self.current_pids = (<Octree>self.tree[src_index]).pids

        self.dst = self.pa_wrappers[dst_index]
        self.src = self.pa_wrappers[src_index]

    cdef void find_nearest_neighbors(self, size_t d_idx,
                                     UIntArray nbrs) nogil:
        """Low level, high-performance non-gil method to find neighbors.
        This requires that `set_context()` be called beforehand.  This
        method does not reset the neighbors array before it appends the
        neighbors to it.
        """
        cdef double* dst_x_ptr = self.dst.x.data
        cdef double* dst_y_ptr = self.dst.y.data
        cdef double* dst_z_ptr = self.dst.z.data
        cdef double* dst_h_ptr = self.dst.h.data

        cdef double* src_x_ptr = self.src.x.data
        cdef double* src_y_ptr = self.src.y.data
        cdef double* src_z_ptr = self.src.z.data
        cdef double* src_h_ptr = self.src.h.data

        cdef double x = dst_x_ptr[d_idx]
        cdef double y = dst_y_ptr[d_idx]
        cdef double z = dst_z_ptr[d_idx]
        cdef double h = dst_h_ptr[d_idx]

        cdef u_int* s_gid = self.src.gid.data
        cdef int orig_length = nbrs.length

        self._get_neighbors(x, y, z, h,
                src_x_ptr, src_y_ptr, src_z_ptr, src_h_ptr,
                nbrs, self.current_tree)

        if self.sort_gids:
            self._sort_neighbors(
                &nbrs.data[orig_length], nbrs.length - orig_length, s_gid
            )

    #### Private protocol ################################################

    @cython.cdivision(True)
    cdef void _get_neighbors(self, double q_x, double q_y, double q_z,
            double q_h, double* src_x_ptr, double* src_y_ptr,
            double* src_z_ptr, double* src_h_ptr,
            UIntArray nbrs, cOctreeNode* node) nogil:
        """Find neighbors recursively"""
        cdef double x_centre = node.xmin[0] + node.length/2
        cdef double y_centre = node.xmin[1] + node.length/2
        cdef double z_centre = node.xmin[2] + node.length/2

        cdef u_int i, j, k
        cdef double hi2 = self.radius_scale2*q_h*q_h
        cdef double hj2 = 0

        cdef double xij2 = 0

        cdef double eff_radius = 0.5*(node.length) + \
                fmax(self.radius_scale*q_h, self.radius_scale*node.hmax)

        if fabs(x_centre - q_x) >= eff_radius or \
            fabs(y_centre - q_y) >= eff_radius or \
            fabs(z_centre - q_z) >= eff_radius:
            return

        if node.is_leaf:
            for i from 0<=i<node.num_particles:
                k = self.current_pids[node.start_index + i]
                hj2 = self.radius_scale2*src_h_ptr[k]*src_h_ptr[k]
                xij2 = norm2(
                    src_x_ptr[k] - q_x,
                    src_y_ptr[k] - q_y,
                    src_z_ptr[k] - q_z
                )
                if (xij2 < hi2) or (xij2 < hj2):
                    nbrs.c_append(k)
            return

        for i from 0<=i<8:
            if node.children[i] != NULL:
                self._get_neighbors(q_x, q_y, q_z, q_h,
                        src_x_ptr, src_y_ptr, src_z_ptr, src_h_ptr,
                        nbrs, node.children[i])

    cpdef get_spatially_ordered_indices(self, int pa_index,
                                        LongArray indices):
        indices.reset()
        cdef int num_particles = (<Octree>self.tree[pa_index]).num_particles
        cdef u_int* current_pids = (<Octree>self.tree[pa_index]).pids

        cdef int i
        for i from 0<=i<num_particles:
            indices.c_append(<long>current_pids[i])

    cpdef _refresh(self):
        cdef int i
        for i from 0<=i<self.narrays:
            (<Octree>self.tree[i]).c_build_tree(self.pa_wrappers[i])
        self.current_tree = (<Octree>self.tree[self.src_index]).root
        self.current_pids = (<Octree>self.tree[self.src_index]).pids

    cpdef _bin(self, int pa_index, UIntArray indices):
        pass
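The pruning test in `_get_neighbors` above can be illustrated outside of Cython. The following is a hypothetical standalone sketch in plain Python (not part of the PySPH API): a node is skipped when, on any axis, the distance from the query point to the node centre is at least `eff_radius`, i.e. half the node edge length plus the larger of the two scaled smoothing radii, so the query sphere cannot overlap the node.

```python
# Hypothetical sketch of the octree pruning test used in
# OctreeNNPS._get_neighbors (names here are illustrative, not PySPH's).

def can_prune(node_xmin, node_length, node_hmax, q, q_h, radius_scale=2.0):
    """Return True if the node cannot contain any neighbor of the query."""
    eff_radius = 0.5 * node_length + max(radius_scale * q_h,
                                         radius_scale * node_hmax)
    for axis in range(3):
        centre = node_xmin[axis] + node_length / 2
        if abs(centre - q[axis]) >= eff_radius:
            return True
    return False

# A node far from the query point is pruned...
assert can_prune((10.0, 10.0, 10.0), 1.0, 0.1, (0.0, 0.0, 0.0), 0.1)
# ...while a node containing the query point is not.
assert not can_prune((-0.5, -0.5, -0.5), 1.0, 0.1, (0.0, 0.0, 0.0), 0.1)
```

Because the test is separable per axis, a single comparison can reject a whole subtree before any per-particle distances are computed.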
#############################################################################
cdef class CompressedOctreeNNPS(OctreeNNPS):
    """Nearest neighbor search using Compressed Octree.
    """
    def __init__(self, int dim, list particles, double radius_scale = 2.0,
            int ghost_layers = 1, domain=None, bint fixed_h = False,
            bint cache = False, bint sort_gids = False,
            int leaf_max_particles = 10):
        NNPS.__init__(
            self, dim, particles, radius_scale, ghost_layers, domain,
            cache, sort_gids
        )

        cdef int i
        self.tree = [CompressedOctree(leaf_max_particles)
                     for i in range(self.narrays)]

        self.radius_scale2 = radius_scale*radius_scale

        self.src_index = 0
        self.dst_index = 0
        self.leaf_max_particles = leaf_max_particles
        self.current_tree = NULL
        self.current_pids = NULL

        self.sort_gids = sort_gids
        self.domain.update()
        self.update()

    #### Public protocol ################################################

    cpdef set_context(self, int src_index, int dst_index):
        """Set context for nearest neighbor searches.

        Parameters
        ----------
        src_index: int
            Index in the list of particle arrays to which the neighbors
            belong

        dst_index: int
            Index in the list of particle arrays to which the query point
            belongs
        """
        NNPS.set_context(self, src_index, dst_index)
        self.current_tree = (<CompressedOctree>self.tree[src_index]).root
        self.current_pids = (<CompressedOctree>self.tree[src_index]).pids

        self.dst = self.pa_wrappers[dst_index]
        self.src = self.pa_wrappers[src_index]

    #### Private protocol ################################################

    cpdef _refresh(self):
        cdef int i
        for i from 0<=i<self.narrays:
            (<CompressedOctree>self.tree[i]).c_build_tree(
                self.pa_wrappers[i])
        self.current_tree = \
            (<CompressedOctree>self.tree[self.src_index]).root
        self.current_pids = \
            (<CompressedOctree>self.tree[self.src_index]).pids

    cpdef _bin(self, int pa_index, UIntArray indices):
        pass

pysph-master/pysph/base/particle_array.pxd
cimport numpy as np

from cyarray.carray cimport BaseArray, UIntArray, IntArray, LongArray

# ParticleTag
# Declares various tags for particles, and functions to check them.
# Note that these tags are the ones set in the 'tag' property of the
# particles, in a particle array. To define additional discrete properties,
# one can add another integer property to the particles in the particle
# array while creating them.

# These tags could be considered as 'system tags' used internally to
# distinguish among different kinds of particles. If more tags are needed
# for a particular application, add them as mentioned above.

# The is_* functions defined below are to be used in Python for tests
# etc. Cython modules can directly use the enum name.
cdef enum ParticleTag:
    Local = 0
    Remote
    Ghost

cpdef bint is_local(int tag)
cpdef bint is_remote(int tag)
cpdef bint is_ghost(int tag)

cpdef int get_local_tag()
cpdef int get_remote_tag()
cpdef int get_ghost_tag()

cdef class ParticleArray:
    """ Maintains various properties for particles. """
    cdef public str backend
    # dictionary to hold the properties held per particle
    cdef public dict properties
    cdef public list property_arrays
    cdef public dict stride

    # list of output property arrays
    cdef public list output_property_arrays

    # dictionary to hold the constants for all the particles
    cdef public dict constants

    # default value associated with each property
    cdef public dict default_values

    # name associated with this particle array
    cdef public str name

    # the number of real particles.
    cdef public long num_real_particles

    # a list of props to be used for load balancing
    cdef list lb_props

    ########################################
    # OpenCL/accelerator related attributes.
    # Object that manages the device properties.
    cdef public object gpu

    # time for the particle array
    cdef public double time

    cdef object _create_c_array_from_npy_array(self, np.ndarray arr)
    cdef _check_property(self, str)
    cdef np.ndarray _get_real_particle_prop(self, str prop)

    # set/get the time
    cpdef set_time(self, double time)
    cpdef double get_time(self)

    cpdef set_name(self, str name)

    cpdef get_lb_props(self)

    cpdef set_num_real_particles(self, long value)

    cpdef BaseArray get_carray(self, str prop)

    cpdef int get_number_of_particles(self, bint real=*)
    cpdef remove_particles(self, indices, align=*)
    cpdef remove_tagged_particles(self, int tag, bint align=*)

    # function to add any property
    cpdef add_constant(self, str name, data)
    cpdef add_property(self, str name, str type=*, default=*, data=*,
                       stride=*)
    cpdef remove_property(self, str prop_name)

    # increase the number of particles by num_particles
    cpdef extend(self, int num_particles)

    cpdef has_array(self, str arr_name)

    # aligns all the real particles in contiguous positions starting from 0
    cpdef int align_particles(self) except -1

    # add particles from the parray to self.
    cpdef int append_parray(self, ParticleArray parray, bint align=*,
                            bint update_constants=*) except -1

    cpdef ParticleArray empty_clone(self, props=*)

    cpdef ensure_properties(self, ParticleArray src, list props=*)

    # create a new particle array with the given particles indices and the
    # properties.
    cpdef ParticleArray extract_particles(self, indices,
                        ParticleArray dest_array=*, bint align=*,
                        list props=*)

    # set the tag value for the particles
    cpdef set_tag(self, long tag_value, LongArray indices)

    cpdef copy_properties(self, ParticleArray source, long start_index=*,
                          long end_index=*)

    # copy properties from one set of variables to another
    cpdef copy_over_properties(self, dict props)

    # set the pid for all local particles
    cpdef set_pid(self, int pid)

    # set the specified properties to zero
    cpdef set_to_zero(self, list props)

    # resize all arrays to a new size
    cpdef resize(self, long size)

pysph-master/pysph/base/particle_array.pyx
#cython: embedsignature=True
"""
A `ParticleArray` represents a collection of particles.
"""

# logging imports
import logging
logger = logging.getLogger()

# numpy imports
cimport numpy
import numpy

# PyZoltan imports
from cyarray.carray cimport *

# Local imports
from cpython cimport PyObject
from cpython cimport *
from cython cimport *

from compyle.config import get_config
from compyle.array import Array, get_backend, to_device
from pysph.base.device_helper import DeviceHelper

# Maximum value of an unsigned int
cdef extern from "limits.h":
    cdef unsigned int UINT_MAX

_UINT_MAX = UINT_MAX

# Declares various tags for particles, and functions to check them.
# Note that these tags are the ones set in the 'tag' property of the
# particles, in a particle array. To define additional discrete
# properties, one can add another integer property to the particles in
# the particle array while creating them.

# These tags could be considered as 'system tags' used internally to
# distinguish among different kinds of particles. If more tags are
# needed for a particular application, add them as mentioned above.

# The is_* functions defined below are to be used in Python for tests
# etc. Cython modules can directly use the enum name.
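The `stride` dictionary declared above stores the number of values each particle owns in a given property array. The layout itself is simple and can be sketched in plain Python; this is an illustrative standalone example (the helper name is hypothetical, not part of the PySPH API): a property with stride `k` stores `k` contiguous values per particle, so particle `i` owns the slice `data[i*k:(i+1)*k]`.

```python
# Hypothetical sketch of strided property storage: stride k means k
# contiguous slots per particle in one flat array.

def particle_slice(data, i, stride):
    """Return the values belonging to particle i."""
    return data[i * stride:(i + 1) * stride]

# Three particles, a 3-component property (e.g. one vector per particle).
data = [1, 2, 3,  4, 5, 6,  7, 8, 9]
assert particle_slice(data, 1, 3) == [4, 5, 6]
# The particle count is the flat length divided by the stride.
assert len(data) // 3 == 3
```

This is why methods such as `extend` and `add_property` multiply or divide lengths by the stride when resizing arrays or counting particles.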
cpdef bint is_local(int tag): return tag == Local cpdef bint is_remote(int tag): return tag == Remote cpdef bint is_ghost(int tag): return tag == Ghost cpdef int get_local_tag(): return Local cpdef int get_remote_tag(): return Remote cpdef int get_ghost_tag(): return Ghost cdef class ParticleArray: """ Class to represent a collection of particles. Attributes ---------- name : str name of this particle array. properties : dict dictionary of {prop_name:carray}. constants : dict dictionary of {const_name: carray} Examples -------- There are many ways to create a ParticleArray:: >>> p = ParticleArray(name='fluid', x=[1.,2., 3., 4.]) >>> p.name 'fluid' >>> p.x, p.tag, p.pid, p.gid For a full specification of properties with their type etc.:: >>> p = ParticleArray(name='fluid', ... x=dict(data=[1,2], type='int', default=1)) >>> p.get_carray('x').get_c_type() 'int' The default value is what is used to set the default value when a new particle is added and the arrays auto-resized. To create constants that are not resized with added/removed particles:: >>> p = ParticleArray(name='f', x=[1,2], constants={'c':[0.,0.,0.]}) """ ###################################################################### # `object` interface ###################################################################### def __init__(self, str name='', default_particle_tag=Local, constants=None, backend=None, **props): """Constructor Parameters ---------- name : str name of this particle array. default_particle_tag : int one of `Local`, `Remote` or `Ghost` constants : dict dictionary of constant arrays for the entire particle array. These must be arrays and are not resized when particles are added or removed. These are stored as CArrays internally. props : any additional keyword arguments are taken to be properties, one for each property. 
""" self.backend = get_backend(backend) self.time = 0.0 self.name = name self.properties = {} self.default_values = {'tag':default_particle_tag} self.stride = {} self._initialize(**props) self.constants = {} if constants is not None: for const, data in constants.items(): self.add_constant(name=const, data=data) # default lb_props are all the arrays self.lb_props = None # list of output property arrays self.output_property_arrays = [] if self.backend is not 'cython': h = DeviceHelper(self, backend=self.backend) self.set_device_helper(h) else: self.gpu = None def __getattr__(self, name): """Convenience, to access particle property arrays as an attribute A numpy array is returned. Look at the get() functions documentation for care on using this numpy array. """ if name in self.properties: return self._get_real_particle_prop(name) elif name in self.constants: return self.constants[name].get_npy_array() else: msg = "ParticleArray %s has no property/constant %s."\ %(self.name, name) raise AttributeError(msg) def __setattr__(self, name, value): """Convenience, to set particle property arrays as an attribute """ self.set(**{name:value}) def __reduce__(self): """Implemented to facilitate pickling of extension types """ d = {} d['name'] = self.name props = {} default_values = {} for prop, arr in self.properties.items(): pinfo = {} pinfo['name'] = prop pinfo['type'] = arr.get_c_type() pinfo['data'] = arr.get_npy_array() pinfo['default'] = self.default_values[prop] pinfo['stride'] = self.stride.get(prop, 1) props[prop] = pinfo d['properties'] = props props = {} for prop, arr in self.constants.items(): pinfo = dict(name=prop, data=arr) props[prop] = pinfo d['constants'] = props return (ParticleArray, (), d) def __setstate__(self, d): """ Load the particle array object from the saved dictionary """ self.properties = {} self.constants = {} self.property_arrays = [] self.default_values = {} self.num_real_particles = 0 self.name = d['name'] props = d['properties'] for prop in 
props: self.add_property(**props[prop]) consts = d['constants'] for prop in consts: self.add_constant(**consts[prop]) self.num_real_particles = numpy.sum(props['tag']['data']==Local) def _initialize(self, **props): """Initialize the particle array with the given properties. Parameters ---------- props : dict dictionary containing various property arrays. All these arrays are expected to be numpy arrays or objects that can be converted to numpy arrays. Notes ----- - This will clear any existing data. - As a rule internal arrays will always be either long or double arrays. """ cdef int nprop, nparticles cdef bint tag_present = False cdef numpy.ndarray a, arr, npyarr cdef IntArray tagarr cdef str name self.clear() nprop = len(props) if nprop == 0: return # Iterate over all props to find the maximum number of values passed. nv = 0 _props = {} for name, prop in props.items(): if isinstance(prop, dict): if 'data' in prop: d = prop['data'] stride = prop.get('stride', 1) if d is not None: d = numpy.ravel(prop['data']) prop['data'] = d nv = max(nv, len(d)//stride) _props[name] = prop elif prop is not None: d = numpy.ravel(prop) _props[name] = d nv = max(nv, len(d)) else: _props[name] = prop props.update(_props) # add the properties for name, prop in props.items(): if isinstance(prop, dict): prop_info = prop prop_info['name'] = name data = prop.get('data', None) if data is not None: if nv > 1 and len(prop['data']) == 1: prop_info['data'] = numpy.ones(nv)*data else: if nv > 1 and len(prop) == 1: prop = numpy.ones(nv)*prop prop_info = dict(name=name, data=prop) self.add_property(**prop_info) self.align_particles() ###################################################################### # `Public` interface ###################################################################### def update_backend(self, backend=None): self.backend = get_backend(backend) def set_output_arrays(self, list props): """Set the list of output arrays for this ParticleArray Parameters ---------- props : 
list The list of property arrays Notes ----- In PySPH, the solver obtains the list of property arrays to output by calling the `ParticleArray.get_property_arrays` method. If detailed output is not requested, the `output_property_arrays` attribute is used to determine the arrays that will be written to file """ # first check if the arrays are valid and raise a warning for prop in props: self._check_property(prop) self.output_property_arrays = props def add_output_arrays(self, list props): """Append props to the existing list of output arrays Parameters ---------- props : list The additional list of property arrays to save """ # first check if the arrays are valid and raise a warning for prop in props: self._check_property(prop) # add to the existing list self.output_property_arrays.extend(props) self.output_property_arrays = list( set(self.output_property_arrays) ) def get_property_arrays(self, all=True, only_real=True): """Return a dictionary of arrays held by the `ParticleArray` container. This does not include the constants. Parameters ---------- all : bint Flag to select all arrays only_real : bint Flag to select Local/Remote particles Notes ----- The dictionary returned is keyed on the property name and the value is the NumPy array representing the data. If `all` is set to False, the list of arrays is determined by the `output_property_arrays` data attribute. 
""" # the arrays dictionary to be returned arrays = {} # the list of properties props = self.output_property_arrays if all or len(props) == 0: props = list(self.properties.keys()) # number of particles num_particles = self.get_number_of_particles(only_real) if self.gpu is not None and self.backend is not 'cython': self.gpu.pull(*props) # add the property arrays for prop in props: stride = self.stride.get(prop, 1) prop_array = self.properties[prop].get_npy_array() arrays[prop] = prop_array[:num_particles*stride] return arrays cpdef set_num_real_particles(self, long value): self.num_real_particles = value cpdef has_array(self, str arr_name): """Returns true if the array arr_name is present """ return self.properties.has_key(arr_name) def clear(self): """Clear all data held by this array """ self.properties = {'tag':IntArray(0), 'pid':IntArray(0), 'gid':UIntArray(0)} tag_def_values = self.default_values['tag'] self.default_values.clear() self.default_values = {'tag':tag_def_values, 'pid':0, 'gid':_UINT_MAX} cpdef set_time(self, double time): self.time = time cpdef double get_time(self): return self.time cpdef set_name(self, str name): self.name = name def set_lb_props(self, list lb_props): self.lb_props = lb_props cpdef get_lb_props(self): """Return the properties that are to be load balanced. If none are explicitly set by the user, return all of the properties. 
""" if self.lb_props is None: return list(self.properties.keys()) else: return self.lb_props cpdef int get_number_of_particles(self, bint real=False): """ Return the number of particles """ if self.gpu is not None and self.backend is not 'cython': return self.gpu.get_number_of_particles() if real: return self.num_real_particles else: if 'tag' in self.properties: return self.properties['tag'].length elif len(self.properties) > 0: prop = list(self.properties.keys())[0] stride = self.stride.get(prop, 1) return self.properties[prop].length//stride else: return 0 cpdef remove_particles(self, indices, align=True): """ Remove particles whose indices are given in index_list. We repeatedly interchange the values of the last element and values from the index_list and reduce the size of the array by one. This is done for every property that is being maintained. Parameters ---------- indices : array an array of indices, this array can be a list, numpy array or a LongArray. Notes ----- Pseudo-code for the implementation:: if index_list.length > number of particles raise ValueError sorted_indices <- index_list sorted in ascending order. 
                for every array in property_array
                    array.remove(sorted_indices)
        """
        if self.gpu is not None and self.backend != 'cython':
            if type(indices) != Array:
                if isinstance(indices, BaseArray):
                    indices = indices.get_npy_array()
                else:
                    indices = numpy.asarray(indices)
                indices = to_device(indices.astype(numpy.uint32),
                                    backend=self.backend)
            return self.gpu.remove_particles(indices, align=align)

        cdef BaseArray index_list
        if isinstance(indices, BaseArray):
            index_list = indices
        else:
            indices = numpy.asarray(indices)
            index_list = LongArray(indices.size)
            index_list.set_data(indices)

        cdef str msg
        cdef numpy.ndarray sorted_indices
        cdef BaseArray prop_array
        cdef int num_arrays, i
        cdef list property_arrays

        if index_list.length > self.get_number_of_particles():
            msg = 'Number of particles to be removed is greater than '
            msg += 'number of particles in array'
            raise ValueError(msg)

        sorted_indices = numpy.sort(index_list.get_npy_array())
        num_arrays = len(self.properties)

        for name, prop_array in self.properties.items():
            stride = self.stride.get(name, 1)
            prop_array.remove(sorted_indices, 1, stride)

        if index_list.length > 0 and align:
            self.align_particles()

    cpdef remove_tagged_particles(self, int tag, bint align=True):
        """ Remove particles that have the given tag.

        Parameters
        ----------

        tag : int
            the type of particles that need to be removed.
        """
        if self.gpu is not None and self.backend != 'cython':
            return self.gpu.remove_tagged_particles(tag, align=align)

        cdef LongArray indices = LongArray()
        cdef IntArray tag_array = self.properties['tag']
        cdef int *tagarrptr = tag_array.get_data_ptr()
        cdef int i

        # find the indices of the particles to be removed.
        for i in range(tag_array.length):
            if tagarrptr[i] == tag:
                indices.append(i)

        # remove the particles.
        self.remove_particles(indices, align=align)

    def add_particles(self, align=True, **particle_props):
        """ Add particles in particle_array to self.
Parameters ---------- particle_props : dict a dictionary containing numpy arrays for various particle properties. Notes ----- - all properties should have same length arrays. - all properties should already be present in this particles array. if new properties are seen, an exception will be raised. """ cdef str prop cdef BaseArray arr if len(particle_props) == 0: return 0 # check if the input properties are valid. prop = '' for prop in particle_props: self._check_property(prop) if self.gpu is not None and self.backend is not 'cython': gpu_particle_props = {} for prop, ary in particle_props.items(): if prop in self.gpu.properties: dtype = self.gpu.get_device_array(prop).dtype else: dtype = self.default_values[prop] gpu_particle_props[prop] = to_device( numpy.array(ary, dtype=dtype), backend=self.backend) return self.gpu.add_particles(align=align, **gpu_particle_props) cdef int num_extra_particles, old_num_particles, new_num_particles cdef numpy.ndarray s_arr, nparr if len(prop) == 0: num_extra_particles = 0 else: stride = self.stride.get(prop, 1) num_extra_particles = len(particle_props[prop])//stride old_num_particles = self.get_number_of_particles() new_num_particles = num_extra_particles + old_num_particles for prop in self.properties: arr = PyDict_GetItem(self.properties, prop) stride = self.stride.get(prop, 1) if PyDict_Contains(particle_props, prop)== 1: d_type = arr.get_npy_array().dtype s_arr = numpy.asarray(particle_props[prop], dtype=d_type) arr.extend(s_arr) else: arr.resize(new_num_particles*stride) # set the properties of the new particles to the default ones. nparr = arr.get_npy_array() nparr[old_num_particles*stride:] = self.default_values[prop] if num_extra_particles > 0 and align: # make sure particles are aligned properly. 
self.align_particles() return 0 cpdef int append_parray(self, ParticleArray parray, bint align=True, bint update_constants=False) except -1: """ Add particles from a particle array properties that are not there in self will be added """ if parray.get_number_of_particles() == 0: return 0 if self.gpu is not None and self.backend is not 'cython': self.gpu.append_parray(parray, align=align, update_constants=update_constants) return 0 cdef int num_extra_particles = parray.get_number_of_particles() cdef int old_num_particles = self.get_number_of_particles() cdef int new_num_particles = num_extra_particles + old_num_particles cdef str prop_name cdef BaseArray arr, source, dest cdef numpy.ndarray nparr_dest, nparr_source # extend current arrays by the required number of particles self.extend(num_extra_particles) for prop_name in parray.properties: if PyDict_Contains(self.properties, prop_name): arr = PyDict_GetItem(self.properties, prop_name) else: arr = None stride = self.stride.get(prop_name, 1) if arr is not None: source = PyDict_GetItem(parray.properties, prop_name) nparr_source = source.get_npy_array() nparr_dest = arr.get_npy_array() nparr_dest[old_num_particles*stride:] = nparr_source else: # meaning this property is not there in self. 
stride = parray.stride.get(prop_name, 1) self.add_property(name=prop_name, type=parray.properties[prop_name].get_c_type(), default=parray.default_values[prop_name], stride=stride ) # now add the values to the end of the created array dest = PyDict_GetItem(self.properties, prop_name) nparr_dest = dest.get_npy_array() source = PyDict_GetItem(parray.properties, prop_name) nparr_source = source.get_npy_array() nparr_dest[old_num_particles*stride:] = nparr_source if update_constants: for const in parray.constants: self.constants.setdefault(const, parray.constants[const]) if num_extra_particles > 0 and align: self.align_particles() return 0 cpdef extend(self, int num_particles): """ Increase the total number of particles by the requested amount New particles are added at the end of the list, you will have to manually call align_particles later in order to update the number of particles. """ if num_particles <= 0: return if self.gpu is not None and self.backend is not 'cython': self.gpu.extend(num_particles) return 0 cdef int old_size = self.get_number_of_particles() cdef int new_size = old_size + num_particles cdef BaseArray arr cdef numpy.ndarray nparr cdef int stride for key in self.properties: stride = self.stride.get(key, 1) arr = self.properties[key] arr.resize(new_size*stride) nparr = arr.get_npy_array() nparr[old_size*stride:] = self.default_values[key] cdef numpy.ndarray _get_real_particle_prop(self, str prop_name): """ get the npy array of property corresponding to only real particles No checks are performed. Only call this after making sure that the property required already exists. """ cdef BaseArray prop_array prop_array = self.properties.get(prop_name) stride = self.stride.get(prop_name, 1) if prop_array is not None: return prop_array.get_npy_array()[:self.num_real_particles*stride] else: return None def get(self, *args, only_real_particles=True): """ Return the numpy array/constant for the property names in the arguments. 
Parameters ---------- only_real_particles : bool indicates if properties of only real particles need to be returned or all particles to be returned. By default only real particles will be returned. args : additional args a list of property names. Notes ----- The returned numpy array does **NOT** own its data. Other operations may be performed. Returns ------- Numpy array. """ cdef int nargs = len(args) cdef list result = [] cdef str arg cdef int i, stride cdef BaseArray arg_array if nargs == 0: return if only_real_particles == True: for i in range(nargs): arg = args[i] self._check_property(arg) stride = self.stride.get(arg, 1) if arg in self.properties: arg_array = self.properties[arg] result.append( arg_array.get_npy_array()[:self.num_real_particles*stride]) else: result.append(self.constants[arg].get_npy_array()) else: for i in range(nargs): arg = args[i] self._check_property(arg) if arg in self.properties: arg_array = self.properties[arg] result.append(arg_array.get_npy_array()) else: result.append(self.constants[arg].get_npy_array()) if nargs == 1: return result[0] else: return tuple(result) def set_device_helper(self, gpu): """Set the device helper to push/pull from a hardware accelerator. """ self.gpu = gpu def set(self, **props): """ Set properties from numpy arrays like objects Parameters ---------- props : dict a dictionary of properties containing the arrays to be set. Notes ----- - the properties being set must already be present in the properties dict. - the size of the data should match the array already present. 
""" cdef str prop cdef BaseArray prop_array cdef int nprops = len(props) cdef list prop_names = list(props.keys()) cdef int i for i in range(nprops): prop = prop_names[i] self._check_property(prop) for prop in props: proparr = numpy.asarray(props[prop]) if self.properties.has_key(prop): prop_array = self.properties[prop] prop_array.set_data(proparr) elif self.constants.has_key(prop): prop_array = self.constants[prop] prop_array.set_data(proparr) # if the tag property is being set, the alignment will have to be # changed. cpdef BaseArray get_carray(self, str prop): """Return the c-array for the property or constant. """ if PyDict_Contains(self.properties, prop) == 1: return PyDict_GetItem(self.properties, prop) elif PyDict_Contains(self.constants, prop) == 1: return PyDict_GetItem(self.constants, prop) else: raise KeyError( 'Property/constant "%s" not present in particle array.' % prop ) cpdef add_constant(self, str name, data): """Add a constant property to the particle array. A constant property is an array but has a fixed size in that it is never resized as particles are added or removed. These properties are always stored internally as CArrays. An example of where this is useful is if you need to calculate the center of mass of a solid body or the net force on the body. Parameters ---------- name : str name of the constant data : array-like the value for the data. """ if name in self.constants: raise RuntimeError('Constant called "%s" already exists.'%name) if name in self.properties: raise RuntimeError('Property called "%s" already exists.'%name) array_data = numpy.ravel(data) self.constants[name] = self._create_c_array_from_npy_array(array_data) if self.gpu is not None: self.gpu.add_const(name, self.constants[name]) cpdef add_property(self, str name, str type='double', default=None, data=None, stride=1): """Add a new property to the particle array. If a `default` is not supplied 0 is assumed. The stride is useful when many elements are needed per particle. 
        For example, if the stride is 3, then 3 elements are allocated per
        particle.

        Parameters
        ----------

        name : str
            compulsory name of the property.
        type : str
            specifying the data type of this property ('double', 'int' etc.)
        default : value
            specifying the default value of this property.
        data : ndarray
            specifying the data associated with each particle.
        stride : int
            the number of elements per particle.

        Notes
        -----

        If there are no particles currently in the particle array, and a new
        property with some particles is added, all the remaining properties
        will be resized to the size of the newly added array.

        If there are some particles in the particle array, and a new
        property is added without any particles, then this new property will
        be resized according to the current size.

        If there are some particles in the particle array and a new property
        is added with a different number of particles, then an error will be
        raised.

        Warning
        -------

        - It is best not to add properties with data when you already have
          particles in the particle array.  The reason is that the particles
          in the particle array are stored so that the 'Real' particles are
          at the top of the list, followed by the dummy ones.  The data in
          your property array should be matched to the particles
          appropriately.  This may not always be possible when there are
          particles of different types in the particle array.

        - Properties without any values can be added anytime.

        - While initializing particle arrays only using the add_property
          function, you will have to call align_particles manually to make
          sure the particles are aligned properly.

        """
        cdef str prop_name = name, data_type = type
        cdef bint array_size_proper = False

        if data is not None:
            try:
                len(data)
            except TypeError:
                data = numpy.ones(self.get_number_of_particles()*stride)*data
            else:
                data = numpy.ravel(data)

        # make sure the size of the supplied array is consistent.
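The size-consistency rule enforced above can be restated in a few lines of pure Python. This is a hypothetical helper for illustration only (not PySPH API): data may be empty, or it must supply exactly `stride` values per particle.

```python
def array_size_ok(n_particles, data_len, stride):
    # Hypothetical restatement of add_property's consistency check.
    if n_particles == 0 or data_len == 0:
        return True
    return data_len % stride == 0 and data_len // stride == n_particles

assert array_size_ok(4, 12, 3)       # 4 particles x stride 3
assert not array_size_ok(4, 11, 3)   # not a multiple of the stride
assert not array_size_ok(3, 12, 3)   # wrong particle count
```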
        if (data is None or self.get_number_of_particles() == 0 or
                len(data) == 0):
            array_size_proper = True
        else:
            if (self.get_number_of_particles() == len(data)//stride) and \
               (len(data) % stride == 0):
                array_size_proper = True

        if not array_size_proper:
            msg = 'Array sizes incompatible for property: %s' % name
            logger.error(msg)
            raise ValueError(msg)

        # setup the default values
        if default is None:
            if prop_name not in self.properties:
                default = 0
            else:
                default = self.default_values[prop_name]

        self.default_values[prop_name] = default
        if stride != 1:
            self.stride[name] = stride

        # array sizes are compatible, now resize the required arrays
        # appropriately and add.
        if self.get_number_of_particles() == 0:
            if data is None or len(data) == 0:
                # if a property with that name already exists, do not do
                # anything.
                if prop_name not in self.properties:
                    # just add the property with a zero array.
                    self.properties[prop_name] = self._create_carray(
                        data_type, 0, default)
            else:
                # a new property has been added with some particles, while
                # no particles are currently present. First resize the
                # current properties to this new length, and then add this
                # new property.
                n_elem = len(data)//stride
                for prop in self.properties:
                    arr = self.properties[prop]
                    prop_stride = self.stride.get(prop, 1)
                    arr.resize(n_elem*prop_stride)
                    arr.get_npy_array()[:] = self.default_values[prop]

                if prop_name == 'tag':
                    arr = numpy.asarray(data)
                    self.num_real_particles = numpy.sum(arr == Local)
                else:
                    self.num_real_particles = n_elem

                if prop_name in self.properties:
                    # just add the particles to the already existing array.
                    d_type = self.properties[prop_name].get_npy_array().dtype
                    arr = numpy.asarray(data, dtype=d_type)
                    self.properties[prop_name].set_data(arr)
                else:
                    # now add the new property array.
                    # if a type was specified, create that type of array.
                    if data_type is None:
                        # get an array for this data
                        self.properties[prop_name] = (
                            self._create_c_array_from_npy_array(data))
                    else:
                        arr = self._create_carray(data_type, len(data),
                                                  default)
                        np_arr = arr.get_npy_array()
                        np_arr[:] = data
                        self.properties[prop_name] = arr
        else:
            if data is None or len(data) == 0:
                # a new property is added without any initial data; resize
                # it to the current particle count.
                if prop_name not in self.properties:
                    arr = self._create_carray(
                        data_type, self.get_number_of_particles()*stride,
                        default
                    )
                    self.properties[prop_name] = arr
            else:
                if prop_name in self.properties:
                    d_type = self.properties[prop_name].get_npy_array().dtype
                    arr = numpy.asarray(data, dtype=d_type)
                    self.properties[prop_name].set_data(arr)
                else:
                    if data_type is None:
                        # just add the property array
                        self.properties[prop_name] = (
                            self._create_c_array_from_npy_array(data))
                    else:
                        arr = self._create_carray(data_type, len(data),
                                                  default)
                        np_arr = arr.get_npy_array()
                        np_arr[:] = data
                        self.properties[prop_name] = arr

        if self.gpu is not None:
            self.gpu.add_prop(prop_name, self.properties[prop_name])
            if self.gpu.get_number_of_particles() == 0:
                self.gpu.push()

    ######################################################################
    # Non-public interface
    ######################################################################
    def _create_carray(self, str data_type, int size, default=0):
        """Create a carray of the requested type and of the requested size.

        Parameters
        ----------

        data_type : str
            string representing the 'c' data type - e.g. 'int' for integers.
        size : int
            the size of the requested array.
        default : value
            the default value to initialize the array with.
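The type dispatch in `_create_carray` maps type-name strings to concrete carray classes. A pure-Python sketch of the same dispatch pattern using the stdlib `array` module (a hypothetical stand-in; PySPH returns `DoubleArray`/`IntArray`/... carrays instead):

```python
import array

# Hypothetical stand-in for _create_carray's string -> array-type dispatch.
typecodes = {'double': 'd', 'float': 'f', 'int': 'i', 'long': 'l',
             'unsigned int': 'I'}

def create_array(data_type, size, default=0):
    code = typecodes.get(data_type or 'double')  # None falls back to double
    if code is None:
        raise ValueError('unknown data type: %s' % data_type)
    return array.array(code, [default] * size)

assert list(create_array('int', 4, default=7)) == [7, 7, 7, 7]
assert create_array(None, 2)[0] == 0.0
```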
""" cdef BaseArray arr if data_type == None: arr = DoubleArray(size) elif data_type == 'double': arr = DoubleArray(size) elif data_type == 'long': arr = LongArray(size) elif data_type == 'float': arr = FloatArray(size) elif data_type == 'int': arr = IntArray(size) elif data_type == 'unsigned int': arr = UIntArray(size) else: logger.error('Trying to create carray of unknown ' 'datatype: %s' %data_type) if size > 0: arr.get_npy_array()[:] = default return arr cdef _check_property(self, str prop): """Check if a property is present or not """ if (PyDict_Contains(self.properties, prop) or PyDict_Contains(self.constants, prop)): return else: raise AttributeError, 'property %s not present'%(prop) cdef object _create_c_array_from_npy_array(self, numpy.ndarray np_array): """Create and return a carray array from the given numpy array. Notes ----- - this function is used only when a C array needs to be created (in the initialize function). """ cdef int np = np_array.size cdef object a if np_array.dtype == numpy.int32 or np_array.dtype == numpy.int64: a = LongArray(np) a.set_data(np_array) elif np_array.dtype == numpy.float32: a = FloatArray(np) a.set_data(np_array) elif np_array.dtype == numpy.double: a = DoubleArray(np) a.set_data(np_array) else: msg = 'unknown numpy data type passed %s'%(np_array.dtype) raise TypeError, msg return a cpdef int align_particles(self) except -1: """Moves all 'Local' particles to the beginning of the array This makes retrieving numpy slices of properties of 'Local' particles possible. This facility will be required frequently. Notes ----- Pseudo-code:: index_arr = LongArray(n) next_insert = 0 for i from 0 to n p <- ith particle if p is Local if i != next_insert tmp = index_arr[next_insert] index_arr[next_insert] = i index_arr[i] = tmp next_insert += 1 else index_arr[i] = i next_insert += 1 else index_arr[i] = i # we now have the new index assignment. # swap the required values as needed. 
            for every property array:
                for i from 0 to n:
                    if index_arr[i] != i:
                        tmp = prop[i]
                        prop[i] = prop[index_arr[i]]
                        prop[index_arr[i]] = tmp
        """
        if self.gpu is not None and self.backend != 'cython':
            self.gpu.align_particles()
            return 0

        cdef size_t i, num_particles
        cdef size_t next_insert
        cdef int tmp
        cdef IntArray tag_arr
        cdef LongArray index_array
        cdef BaseArray arr
        cdef long num_real_particles = 0
        cdef long num_moves = 0

        next_insert = 0
        num_particles = self.get_number_of_particles()
        tag_arr = self.get_carray('tag')

        # allocate the new index array
        index_array = LongArray(num_particles)

        for i in range(num_particles):
            if tag_arr.data[i] == Local:
                num_real_particles += 1
                if i != next_insert:
                    tmp = index_array.data[next_insert]
                    index_array.data[next_insert] = i
                    index_array.data[i] = tmp
                    next_insert += 1
                    num_moves += 1
                else:
                    index_array.data[i] = i
                    next_insert += 1
            else:
                index_array.data[i] = i

        self.num_real_particles = num_real_particles

        # we now have the aligned indices. Rearrange the particles
        # accordingly.
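The index permutation described in the pseudo-code above can be exercised in pure Python. This is only a sketch (the `Local` tag value of 0 is assumed for illustration):

```python
Local = 0  # assumed value of the Local tag, for illustration only

def build_align_indices(tags):
    # Sketch of the permutation built by align_particles: Local particles
    # are swapped towards the front, preserving their relative order.
    index = list(range(len(tags)))
    next_insert = 0
    for i, tag in enumerate(tags):
        if tag == Local:
            if i != next_insert:
                index[next_insert], index[i] = i, index[next_insert]
            next_insert += 1
    return index

tags = [1, 0, 0, 1, 0]
idx = build_align_indices(tags)
# applying the permutation moves all Local (0) tags to the front
assert [tags[i] for i in idx] == [0, 0, 0, 1, 1]
```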
        if num_moves > 0:
            for name, arr in self.properties.items():
                stride = self.stride.get(name, 1)
                arr.c_align_array(index_array, stride)

    cpdef ParticleArray empty_clone(self, props=None):
        """Create an empty clone of the particle array.
        """
        if self.gpu is not None and self.backend != 'cython':
            return self.gpu.empty_clone(props=props)

        cdef ParticleArray result_array = ParticleArray()
        cdef list output_arrays
        cdef BaseArray src_arr
        cdef int stride
        cdef str prop_type, prop_name

        if props is None:
            prop_names = self.properties
        else:
            prop_names = props

        for const in self.constants:
            result_array.add_constant(const, data=self.constants[const])

        for prop_name in prop_names:
            prop_type = self.properties[prop_name].get_c_type()
            prop_default = self.default_values[prop_name]
            stride = self.stride.get(prop_name, 1)
            result_array.add_property(name=prop_name, type=prop_type,
                                      default=prop_default, stride=stride)

        result_array.set_name(self.name)

        if props is None:
            output_arrays = list(self.output_property_arrays)
        else:
            output_arrays = list(
                set(props).intersection(self.output_property_arrays)
            )
        result_array.set_output_arrays(output_arrays)
        return result_array

    cpdef ensure_properties(self, ParticleArray src, list props=None):
        """Ensure that this particle array has the same properties as the
        one given.

        Note that this does not check for any constants but only properties.
        If the optional props argument is passed, it only checks for these.
""" prop_names = props if props else src.properties.keys() for prop_name in prop_names: if prop_name not in self.properties: prop_type = src.properties[prop_name].get_c_type() prop_default = src.default_values[prop_name] stride = src.stride.get(prop_name, 1) self.add_property( name=prop_name, type=prop_type, default=prop_default, stride=stride ) cpdef ParticleArray extract_particles(self, indices, ParticleArray dest_array=None, bint align=True, list props=None): """Create new particle array for particles with indices in index_array Parameters ---------- indices : list/array/LongArray indices of particles to be extracted (can be a LongArray or list/numpy array). dest_array: ParticleArray optional Particle array to populate. Note that this array should have the necessary properties. If none is passed a new particle array is created and returned. align: bool Specify if the destination particle array is to be aligned after particle extraction. props : list the list of properties to extract, if None all properties are extracted. Notes ----- The algorithm is as follows: - create a new particle array with the required properties. - resize the new array to the desired length (index_array.length) - copy the properties from the existing array to the new array. 
""" if self.gpu is not None and self.backend is not 'cython': if type(indices) != Array: indices = to_device( numpy.array(indices, dtype=numpy.uint32), backend=self.backend) return self.gpu.extract_particles(indices, dest_array=dest_array, align=align, props=props) if not dest_array: dest_array = self.empty_clone(props=props) cdef BaseArray index_array if isinstance(indices, BaseArray): index_array = indices else: indices = numpy.asarray(indices) index_array = LongArray(indices.size) index_array.set_data(indices) cdef list prop_names, output_arrays cdef BaseArray dst_prop_array, src_prop_array cdef str prop_type, prop cdef int stride, start_idx if props is None: prop_names = list(self.properties.keys()) else: prop_names = props # now we have the result array setup. # resize it if index_array.length == 0: return dest_array start_idx = dest_array.get_number_of_particles() dest_array.extend(index_array.length) # copy the required indices for each property. for prop in prop_names: src_prop_array = self.get_carray(prop) dst_prop_array = dest_array.get_carray(prop) stride = self.stride.get(prop, 1) src_prop_array.copy_values(index_array, dst_prop_array, stride, stride*start_idx) if align: dest_array.align_particles() return dest_array cpdef set_tag(self, long tag_value, LongArray indices): """Set value of tag to tag_value for the particles in indices """ cdef LongArray tag_array = self.get_carray('tag') cdef int i for i in range(indices.length): tag_array.data[indices.data[i]] = tag_value cpdef copy_properties(self, ParticleArray source, long start_index=-1, long end_index=-1): """ Copy properties from source to self Parameters ---------- source : ParticleArray the particle array from where to copy. 
        start_index : long
            the first particle in self which maps to the 0th particle in
            source
        end_index : long
            the index of the first particle from start_index that is not
            copied
        """
        cdef BaseArray src_array, dst_array
        for prop_name in source.properties:
            if prop_name in self.properties:
                src_array = source.get_carray(prop_name)
                dst_array = self.get_carray(prop_name)
                stride = self.stride.get(prop_name, 1)
                dst_array.copy_subset(src_array, start_index, end_index,
                                      stride)

    cpdef copy_over_properties(self, dict props):
        """Copy the properties from one set to another.

        Parameters
        ----------

        props : dict
            A mapping between the properties to be copied.

        Examples
        --------

        To save the properties 'x' and 'y' to say 'x0' and 'y0'::

            >>> pa.copy_over_properties(props={'x': 'x0', 'y': 'y0'})

        """
        cdef DoubleArray dst, src
        cdef str prop
        cdef int stride
        cdef long np = self.get_number_of_particles()
        cdef long i

        for prop in props:
            src = self.get_carray(prop)
            dst = self.get_carray(props[prop])
            stride = self.stride.get(prop, 1)
            for i in range(np*stride):
                dst.data[i] = src.data[i]

    cpdef set_to_zero(self, list props):
        cdef long np = self.get_number_of_particles()
        cdef long i
        cdef int stride
        cdef DoubleArray prop_arr
        cdef str prop

        for prop in props:
            prop_arr = self.get_carray(prop)
            stride = self.stride.get(prop, 1)
            for i in range(np*stride):
                prop_arr.data[i] = 0.0

    cpdef set_pid(self, int pid):
        """Set the processor id for all particles.
        """
        cdef IntArray pid_arr = self.properties['pid']
        cdef long a
        for a in range(pid_arr.length):
            pid_arr.data[a] = pid

    cpdef remove_property(self, str prop_name):
        """Remove the property prop_name from the particle array.
        """
        if prop_name in self.properties:
            self.properties.pop(prop_name)
            self.default_values.pop(prop_name)
        if prop_name in self.output_property_arrays:
            self.output_property_arrays.remove(prop_name)
        if self.gpu is not None:
            return self.gpu.remove_prop(prop_name)

    def update_min_max(self, props=None):
        """Update the min, max values of all properties.
        """
        if self.gpu is not None and self.backend != 'cython':
            backend = self.gpu
        else:
            backend = self
        if props:
            for prop in props:
                array = backend.properties[prop]
                array.update_min_max()
        else:
            for array in backend.properties.values():
                array.update_min_max()

    cpdef resize(self, long size):
        """Resize all arrays to the new size.

        Note that this does not update the number of particles, as this just
        resizes the internal arrays. To do that, you need to call
        `align_particles`.
        """
        if self.gpu is not None and self.backend != 'cython':
            return self.gpu.resize(size)
        for prop, array in self.properties.items():
            stride = self.stride.get(prop, 1)
            array.resize(size*stride)

# End of ParticleArray class
##############################################################################

pysph-master/pysph/base/point.pxd

cimport numpy

cdef extern from "math.h":
    double sqrt(double) nogil
    double ceil(double) nogil

cdef extern from 'limits.h':
    cdef int INT_MAX

cdef struct cPoint:
    double x, y, z

cdef inline cPoint cPoint_new(double x, double y, double z):
    cdef cPoint p = cPoint(x, y, z)
    return p

cdef inline cPoint cPoint_sub(cPoint pa, cPoint pb):
    return cPoint_new(pa.x-pb.x, pa.y-pb.y, pa.z-pb.z)

cdef inline cPoint cPoint_add(cPoint pa, cPoint pb):
    return cPoint_new(pa.x+pb.x, pa.y+pb.y, pa.z+pb.z)

cdef inline double cPoint_dot(cPoint pa, cPoint pb):
    return pa.x*pb.x + pa.y*pb.y + pa.z*pb.z

cdef inline double cPoint_norm(cPoint p):
    return p.x*p.x + p.y*p.y + p.z*p.z

cdef inline double cPoint_distance(cPoint pa, cPoint pb):
    return sqrt((pa.x-pb.x)*(pa.x-pb.x) + (pa.y-pb.y)*(pa.y-pb.y) +
                (pa.z-pb.z)*(pa.z-pb.z))

cdef inline double cPoint_distance2(cPoint pa, cPoint pb):
    return ((pa.x-pb.x)*(pa.x-pb.x) + (pa.y-pb.y)*(pa.y-pb.y) +
            (pa.z-pb.z)*(pa.z-pb.z))

cdef inline double cPoint_length(cPoint pa):
    return sqrt(cPoint_norm(pa))

cdef inline cPoint cPoint_scale(cPoint p, double k):
    return cPoint_new(p.x*k, p.y*k, p.z*k)

cdef inline cPoint
normalized(cPoint p): cdef double norm = cPoint_length(p) return cPoint_new(p.x/norm, p.y/norm, p.z/norm) cdef class Point: """ Class to represent point in 3D. """ cdef cPoint data cpdef set(self, double x, double y, double z) cdef set_from_cPoint(self, cPoint value) cpdef numpy.ndarray asarray(self) cpdef double norm(self) cpdef double length(self) cpdef double dot(self, Point p) cpdef Point cross(self, Point p) cpdef double distance(self, Point p) cdef cPoint to_cPoint(self) cdef inline Point Point_new(double x, double y, double z): cdef Point p = Point.__new__(Point) p.x = x p.y = y p.z = z return p cdef inline Point Point_sub(Point pa, Point pb): return Point_new(pa.x-pb.x, pa.y-pb.y, pa.z-pb.z) cdef inline Point Point_add(Point pa, Point pb): return Point_new(pa.x+pb.x, pa.y+pb.y, pa.z+pb.z) cdef inline double Point_length(Point p): return sqrt(p.x*p.x + p.y*p.y + p.z*p.z) cdef inline double Point_length2(Point p): return p.x*p.x + p.y*p.y + p.z*p.z cdef inline double Point_distance(Point pa, Point pb): return sqrt((pa.x-pb.x)*(pa.x-pb.x) + (pa.y-pb.y)*(pa.y-pb.y) + (pa.z-pb.z)*(pa.z-pb.z) ) cdef inline double Point_distance2(Point pa, Point pb): return ((pa.x-pb.x)*(pa.x-pb.x) + (pa.y-pb.y)*(pa.y-pb.y) + (pa.z-pb.z)*(pa.z-pb.z)) cdef inline Point Point_from_cPoint(cPoint p): return Point_new(p.x, p.y, p.z) cdef struct cIntPoint: int x, y, z cdef inline cIntPoint cIntPoint_new(int x, int y, int z): cdef cIntPoint p = cIntPoint(x,y,z) return p cdef inline cIntPoint cIntPoint_sub(cIntPoint pa, cIntPoint pb): return cIntPoint_new(pa.x-pb.x, pa.y-pb.y, pa.z-pb.z) cdef inline cIntPoint cIntPoint_add(cIntPoint pa, cIntPoint pb): return cIntPoint_new(pa.x+pb.x, pa.y+pb.y, pa.z+pb.z) cdef inline long cIntPoint_dot(cIntPoint pa, cIntPoint pb): return pa.x*pb.x + pa.y*pb.y + pa.z*pb.z cdef inline long cIntPoint_norm(cIntPoint p): return p.x*p.x + p.y*p.y + p.z*p.z cdef inline double cIntPoint_length(cIntPoint pa): return sqrt(cIntPoint_norm(pa)) cdef inline long 
cIntPoint_distance2(cIntPoint pa, cIntPoint pb): return ((pa.x-pb.x)*(pa.x-pb.x) + (pa.y-pb.y)*(pa.y-pb.y) + (pa.z-pb.z)*(pa.z-pb.z)) cdef inline double cIntPoint_distance(cIntPoint pa, cIntPoint pb): return sqrt(cIntPoint_distance2(pa, pb)) cdef inline cIntPoint cIntPoint_scale(cIntPoint p, int k): return cIntPoint_new(p.x*k, p.y*k, p.z*k) cdef inline bint cIntPoint_is_equal(cIntPoint pa, cIntPoint pb): return (pa.x == pb.x and pa.y == pb.y and pa.z == pb.z) cdef class IntPoint: cdef cIntPoint data cpdef numpy.ndarray asarray(self) cdef bint is_equal(self, IntPoint) cdef IntPoint diff(self, IntPoint) cdef tuple to_tuple(self) cdef IntPoint copy(self) cdef inline IntPoint IntPoint_sub(IntPoint pa, IntPoint pb): return IntPoint_new(pa.x-pb.x, pa.y-pb.y, pa.z-pb.z) cdef inline IntPoint IntPoint_add(IntPoint pa, IntPoint pb): return IntPoint_new(pa.x+pb.x, pa.y+pb.y, pa.z+pb.z) cdef inline IntPoint IntPoint_new(int x, int y, int z): cdef IntPoint p = IntPoint.__new__(IntPoint) p.data.x = x p.data.y = y p.data.z = z return p cdef inline IntPoint IntPoint_from_cIntPoint(cIntPoint p): return IntPoint_new(p.x, p.y, p.z) pysph-master/pysph/base/point.pyx000066400000000000000000000201431356347341600174760ustar00rootroot00000000000000#cython: embedsignature=True """A handy set of classes/functions for 3D points. """ from cpython cimport * # numpy imports cimport numpy import numpy # IntPoint's maximum absolute value must be less than `IntPoint_maxint` # this is due to the hash implementation cdef int IntPoint_maxint = 2**20 DTYPE = numpy.float ctypedef numpy.float_t DTYPE_t ############################################################################### # `Point` class. ############################################################################### cdef class Point: """ This class represents a Point in 3D space. """ # Declared in the .pxd file. 
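The `cPoint` helpers declared in `point.pxd` above are simple component-wise formulas. A pure-Python restatement (points modeled as 3-tuples; this is a sketch, not the Cython API):

```python
import math

# Pure-Python restatements of cPoint_dot, cPoint_norm and cPoint_distance.
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def norm(p):
    # squared length, as in cPoint_norm
    return dot(p, p)

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

p, q = (1.0, 2.0, 2.0), (0.0, 0.0, 0.0)
assert norm(p) == 9.0
assert distance(p, q) == 3.0
assert dot(p, q) == 0.0
```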
#cdef public double x, y, z property x: def __get__(Point self): return self.data.x def __set__(Point self, double x): self.data.x = x property y: def __get__(Point self): return self.data.y def __set__(Point self, double y): self.data.y = y property z: def __get__(Point self): return self.data.z def __set__(Point self, double z): self.data.z = z ###################################################################### # `object` interface. ###################################################################### def __init__(Point self, double x=0.0, double y=0.0, double z=0.0): """Constructor for a Point.""" self.data.x = x self.data.y = y self.data.z = z def __reduce__(Point self): """ Implemented to facilitate pickling of the Point extension type. """ d = {} d['x'] = self.data.x d['y'] = self.data.y d['z'] = self.data.z return (Point, (), d) def __setstate__(Point self, d): self.data.x = d['x'] self.data.y = d['y'] self.data.z = d['z'] def __str__(Point self): return '(%f, %f, %f)'%(self.data.x, self.data.y, self.data.z) def __repr__(Point self): return 'Point(%g, %g, %g)'%(self.data.x, self.data.y, self.data.z) def __add__(Point self, Point p): return Point_new(self.data.x + p.data.x, self.data.y + p.data.y, self.data.z + p.data.z) def __sub__(Point self, Point p): return Point_new(self.data.x - p.data.x, self.data.y - p.data.y, self.data.z - p.data.z) def __mul__(Point self, double m): return Point_new(self.data.x*m, self.data.y*m, self.data.z*m) def __div__(Point self, double m): return Point_new(self.data.x/m, self.data.y/m, self.data.z/m) def __abs__(Point self): return cPoint_length(self.data) def __neg__(Point self): return Point_new(-self.data.x, -self.data.y, -self.data.z) def __richcmp__(Point self, Point p, int oper): if oper == 2: # == if self.data.x == p.data.x and self.data.y == p.data.y and self.data.z == p.data.z: return True return False elif oper == 3: # != if self.data.x == p.data.x and self.data.y == p.data.y and self.data.z == p.data.z: return 
False return True else: raise TypeError('No ordering is possible for Points.') def __iadd__(Point self, Point p): self.data.x += p.data.x self.data.y += p.data.y self.data.z += p.data.z return self def __isub__(Point self, Point p): self.data.x -= p.data.x self.data.y -= p.data.y self.data.z -= p.data.z return self def __imul__(Point self, double m): self.data.x *= m self.data.y *= m self.data.z *= m return self def __idiv__(Point self, double m): self.data.x /= m self.data.y /= m self.data.z /= m return self ###################################################################### # `Point` interface. ###################################################################### cpdef set(Point self, double x, double y, double z): """Set the position from a given array. """ self.data.x = x self.data.y = y self.data.z = z cdef set_from_cPoint(Point self, cPoint value): self.data.x = value.x self.data.y = value.y self.data.z = value.z cpdef numpy.ndarray asarray(Point self): """Return a numpy array with the coordinates.""" cdef numpy.ndarray[DTYPE_t, ndim=1] r = numpy.empty(3) r[0] = self.data.x r[1] = self.data.y r[2] = self.data.z return r cpdef double norm(Point self): """Return the square of the Euclidean distance to this point.""" return cPoint_norm(self.data) cpdef double length(Point self): """Return the Euclidean distance to this point.""" return cPoint_length(self.data) cpdef double dot(Point self, Point p): """Return the dot product of this point with another.""" return cPoint_dot(self.data, p.data) cpdef Point cross(Point self, Point p): """Return the cross product of this point with another, i.e. 
`self` cross `p`.""" return Point_new(self.data.y*p.data.z - self.data.z*p.data.y, self.data.z*p.data.x - self.data.x*p.data.z, self.data.x*p.data.y - self.data.y*p.data.x) cpdef double distance(Point self, Point p): """Return the distance between this point and p""" return cPoint_distance(self.data, p.data) cdef cPoint to_cPoint(Point self): return self.data def normalize(self): """ Normalize the point """ cdef double norm = cPoint_length(self.data) self.data.x /= norm self.data.y /= norm self.data.z /= norm cdef class IntPoint: property x: def __get__(self): return self.data.x property y: def __get__(self): return self.data.y property z: def __get__(self): return self.data.z def __init__(self, int x=0, int y=0, int z=0): self.data.x = x self.data.y = y self.data.z = z def __reduce__(self): """ Implemented to facilitate pickling of the IntPoint extension type. """ d = {} d['x'] = self.data.x d['y'] = self.data.y d['z'] = self.data.z return (IntPoint, (), d) def __setstate__(self, d): self.data.x = d['x'] self.data.y = d['y'] self.data.z = d['z'] def __str__(self): return '(%d,%d,%d)'%(self.data.x, self.data.y, self.data.z) def __repr__(self): return 'IntPoint(%d,%d,%d)'%(self.data.x, self.data.y, self.data.z) cdef IntPoint copy(self): return IntPoint_new(self.data.x, self.data.y, self.data.z) cpdef numpy.ndarray asarray(self): cdef numpy.ndarray[ndim=1,dtype=numpy.int_t] arr = numpy.empty(3, dtype=numpy.int) arr[0] = self.data.x arr[1] = self.data.y arr[2] = self.data.z return arr cdef bint is_equal(self, IntPoint p): return cIntPoint_is_equal(self.data, p.data) cdef IntPoint diff(self, IntPoint p): return IntPoint_new(self.data.x-p.data.x, self.data.y-p.data.y, self.data.z-p.data.z) cdef tuple to_tuple(self): cdef tuple t = (self.data.x, self.data.y, self.data.z) return t def __richcmp__(IntPoint self, IntPoint p, int oper): if oper == 2: # == return cIntPoint_is_equal(self.data, p.data) elif oper == 3: # != return not cIntPoint_is_equal(self.data, p.data) else: 
raise TypeError('No ordering is possible for Points.') def __hash__(self): cdef long ret = self.data.x + IntPoint_maxint ret = 2 * IntPoint_maxint * ret + self.data.y + IntPoint_maxint return 2 * IntPoint_maxint * ret + self.data.z + IntPoint_maxint def py_is_equal(self, IntPoint p): return self.is_equal(p) def py_diff(self, IntPoint p): return self.diff(p) def py_copy(self): return self.copy() pysph-master/pysph/base/reduce_array.py000066400000000000000000000036521356347341600206300ustar00rootroot00000000000000"""Functions to reduce array data in serial or parallel. """ import numpy as np from cyarray.carray import BaseArray def _check_operation(op): """Raise an exception if the wrong operation is given. """ valid_ops = ('sum', 'max', 'min', 'prod') msg = "Unsupported operation %s, must be one of %s."%(op, valid_ops) if op not in valid_ops: raise RuntimeError(msg) def _get_npy_array(array_or_carray): """Return a numpy array from given carray or numpy array. """ if isinstance(array_or_carray, BaseArray): return array_or_carray.get_npy_array() else: return array_or_carray def serial_reduce_array(array, op='sum'): """Reduce an array given an array and a suitable reduction operation. Currently, only 'sum', 'max', 'min' and 'prod' are supported. **Parameters** - array: numpy.ndarray: Any numpy array (1D). - op: str: reduction operation, one of ('sum', 'prod', 'min', 'max') """ _check_operation(op) ops = {'sum': np.sum, 'prod': np.prod, 'max': np.max, 'min': np.min} np_array = _get_npy_array(array) return ops[op](np_array) def dummy_reduce_array(array, op='sum'): """Simply returns the array for the serial case. """ return _get_npy_array(array) def mpi_reduce_array(array, op='sum'): """Reduce an array given an array and a suitable reduction operation. Currently, only 'sum', 'max', 'min' and 'prod' are supported. **Parameters** - array: numpy.ndarray: Any numpy array (1D). 
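`serial_reduce_array` in `reduce_array.py` above maps operation names to numpy reduction functions through a small dict. A sketch exercising the same mapping:

```python
import numpy as np

# The op-name -> numpy reduction mapping used by serial_reduce_array.
ops = {'sum': np.sum, 'prod': np.prod, 'max': np.max, 'min': np.min}

a = np.array([1.0, 2.0, 3.0, 4.0])
assert ops['sum'](a) == 10.0
assert ops['max'](a) == 4.0
assert ops['prod'](a) == 24.0
```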
- op: str: reduction operation, one of ('sum', 'prod', 'min', 'max') """ np_array = _get_npy_array(array) from mpi4py import MPI ops = {'sum': MPI.SUM, 'prod': MPI.PROD, 'max': MPI.MAX, 'min': MPI.MIN} return MPI.COMM_WORLD.allreduce(np_array, op=ops[op]) # This is just to keep syntax highlighters happy in editors while writing # equations. parallel_reduce_array = mpi_reduce_array pysph-master/pysph/base/spatial_hash.h000066400000000000000000000064301356347341600204170ustar00rootroot00000000000000#ifndef SPATIAL_HASH_H #define SPATIAL_HASH_H #include #include #include #include // p1, p2 and p3 are large primes used in the hash function // Ref. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.6732&rep=rep1&type=pdf #define p1 73856093 #define p2 19349663 #define p3 83492791 using namespace std; class HashEntry { private: long long int key; public: int c_x, c_y, c_z; double h_max; HashEntry* next; vector indices; HashEntry(long long int key, int idx, double h, int c_x, int c_y, int c_z) { this->key = key; this->indices.push_back(idx); this->h_max = h; this->c_x = c_x; this->c_y = c_y; this->c_z = c_z; this->next = NULL; } inline long long int get_key() { return this->key; } inline vector *get_indices() { return &this->indices; } inline void add(unsigned int idx, double h) { this->indices.push_back(idx); this->h_max = max(this->h_max, h); } }; class HashTable { private: HashEntry** hashtable; public: long long int table_size; HashTable(long long int table_size) { this->table_size = table_size; this->hashtable = new HashEntry*[table_size]; for(int i=0; ihashtable[i] = NULL; } } inline long long int hash(long long int i, long long int j, long long int k) { return ((i*p1)^(j*p2)^(k*p3))%this->table_size; } void add(int i, int j, int k, int idx, double h) { long long int key = this->hash(i,j,k); HashEntry* prev = NULL; HashEntry* entry = this->hashtable[key]; while(entry!=NULL) { if(entry->c_x==i && entry->c_y==j && entry->c_z==k) break; prev = entry; entry = 
                entry->next;
        }

        if(entry!=NULL)
            entry->add(idx, h);
        else
        {
            entry = new HashEntry(key, idx, h, i, j, k);
            if(prev==NULL)
                this->hashtable[key] = entry;
            else
                prev->next = entry;
        }
    }

    HashEntry* get(int i, int j, int k)
    {
        long long int key = this->hash(i,j,k);
        HashEntry* entry = this->hashtable[key];
        while(entry!=NULL)
        {
            if(entry->c_x==i && entry->c_y==j && entry->c_z==k)
                return entry;
            entry = entry->next;
        }
        return NULL;
    }

    int number_of_particles()
    {
        HashEntry* curr = NULL;
        int num_particles = 0;
        for(int i=0; i<this->table_size; i++)
        {
            curr = this->hashtable[i];
            while(curr!=NULL)
            {
                num_particles += curr->indices.size();
                curr = curr->next;
            }
        }
        return num_particles;
    }

    ~HashTable()
    {
        for(int i=0; i<this->table_size; i++)
        {
            HashEntry* entry = this->hashtable[i];
            while(entry!=NULL)
            {
                HashEntry* prev = entry;
                entry = entry->next;
                delete prev;
            }
        }
        delete[] this->hashtable;
    }
};

#endif

pysph-master/pysph/base/spatial_hash_nnps.pxd

from libcpp.vector cimport vector

from nnps_base cimport *

# Imports for SpatialHashNNPS
cdef extern from "spatial_hash.h":
    cdef cppclass HashEntry:
        double h_max
        vector[unsigned int] *get_indices() nogil

    cdef cppclass HashTable:
        HashTable(long long int) nogil except +
        void add(int, int, int, int, double) nogil
        HashEntry* get(int, int, int) nogil

# NNPS using Spatial Hashing algorithm
cdef class SpatialHashNNPS(NNPS):
    ############################################################################
    # Data Attributes
    ############################################################################
    cdef long long int table_size               # Size of hashtable
    cdef double radius_scale2

    cdef HashTable** hashtable
    cdef HashTable* current_hash

    cdef NNPSParticleArrayWrapper dst, src

    ##########################################################################
    # Member functions
    ##########################################################################
    cpdef set_context(self, int src_index, int dst_index)

    cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil

    cdef inline void _add_to_hashtable(self, int hash_id, unsigned int pid,
            double h, int i, int j, int k) nogil

    cdef inline int _neighbor_boxes(self, int i, int j, int k,
            int* x, int* y, int* z) nogil

    cpdef _refresh(self)

    cpdef _bin(self, int pa_index, UIntArray indices)

# NNPS using Extended Spatial Hashing algorithm
cdef class ExtendedSpatialHashNNPS(NNPS):
    ############################################################################
    # Data Attributes
    ############################################################################
    cdef long long int table_size               # Size of hashtable
    cdef double radius_scale2

    cdef HashTable** hashtable
    cdef HashTable* current_hash

    cdef NNPSParticleArrayWrapper dst, src

    cdef int H
    cdef double h_sub
    cdef bint approximate

    ##########################################################################
    # Member functions
    ##########################################################################
    cpdef set_context(self, int src_index, int dst_index)

    cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil

    cdef inline int _h_mask_approx(self, int* x, int* y, int* z) nogil

    cdef inline int _h_mask_exact(self, int* x, int* y, int* z) nogil

    cdef int _neighbor_boxes(self, int i, int j, int k,
            int* x, int* y, int* z, double h) nogil

    cdef inline void _add_to_hashtable(self, int hash_id, unsigned int pid,
            double h, int i, int j, int k) nogil

    cpdef _refresh(self)

    cpdef _bin(self, int pa_index, UIntArray indices)

pysph-master/pysph/base/spatial_hash_nnps.pyx

#cython: embedsignature=True

# malloc and friends
from libc.stdlib cimport malloc, free
from libcpp.vector cimport vector

# Cython for compiler directives
cimport cython

IF UNAME_SYSNAME == "Windows":
    cdef inline double fmin(double x, double y) nogil:
        return x if x < y else y
    cdef inline double fmax(double x, double y) nogil:
        return x if x > y else y
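The cell hashing used by `spatial_hash.h` and the NNPS classes below maps integer cell coordinates to a hashtable slot with three large primes. A minimal Python sketch of that formula follows; the primes, the XOR-multiply hash, and the default table size of 131072 come from the source above, while the function name `hash_cell` is purely illustrative:

```python
# Illustrative sketch of the cell-hashing scheme in spatial_hash.h.
# The three large primes follow the reference cited in that header.
P1, P2, P3 = 73856093, 19349663, 83492791


def hash_cell(i, j, k, table_size=131072):
    """Map integer cell coordinates (i, j, k) to a hashtable slot."""
    return ((i * P1) ^ (j * P2) ^ (k * P3)) % table_size
```

Particles whose cells collide in the table are chained through `HashEntry.next`, which is why `HashTable.add` walks the chain comparing `(c_x, c_y, c_z)` before appending.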
############################################################################# cdef class SpatialHashNNPS(NNPS): """Nearest neighbor particle search using Spatial Hashing algorithm Uses a hashtable to store particles according to cell it belongs to. Ref. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.6732&rep=rep1&type=pdf """ def __init__(self, int dim, list particles, double radius_scale = 2.0, int ghost_layers = 1, domain=None, bint fixed_h = False, bint cache = False, bint sort_gids = False, long long int table_size = 131072): #Initialize base class NNPS.__init__( self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids ) self.src_index = 0 self.dst_index = 0 self.sort_gids = sort_gids self.domain.update() self.update() def __cinit__(self, int dim, list particles, double radius_scale = 2.0, int ghost_layers = 1, domain=None, bint fixed_h = False, bint cache = False, bint sort_gids = False, long long int table_size = 131072): cdef int narrays = len(particles) self.table_size = table_size self.radius_scale2 = radius_scale*radius_scale self.hashtable = malloc(narrays*sizeof(HashTable*)) cdef int i for i from 0<=i self.pa_wrappers[dst_index] self.src = self.pa_wrappers[src_index] cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil: """Low level, high-performance non-gil method to find neighbors. This requires that `set_context()` be called beforehand. This method does not reset the neighbors array before it appends the neighbors to it. 
""" cdef double* dst_x_ptr = self.dst.x.data cdef double* dst_y_ptr = self.dst.y.data cdef double* dst_z_ptr = self.dst.z.data cdef double* dst_h_ptr = self.dst.h.data cdef double* src_x_ptr = self.src.x.data cdef double* src_y_ptr = self.src.y.data cdef double* src_z_ptr = self.src.z.data cdef double* src_h_ptr = self.src.h.data cdef double x = dst_x_ptr[d_idx] cdef double y = dst_y_ptr[d_idx] cdef double z = dst_z_ptr[d_idx] cdef double h = dst_h_ptr[d_idx] cdef unsigned int* s_gid = self.src.gid.data cdef int orig_length = nbrs.length cdef int c_x, c_y, c_z cdef double* xmin = self.xmin.data cdef unsigned int i, j, k cdef HashEntry* candidate_cell cdef vector[unsigned int] *candidates find_cell_id_raw( x - xmin[0], y - xmin[1], z - xmin[2], self.cell_size, &c_x, &c_y, &c_z ) cdef int candidate_size = 0 cdef int x_boxes[27] cdef int y_boxes[27] cdef int z_boxes[27] cdef int num_boxes = self._neighbor_boxes(c_x, c_y, c_z, x_boxes, y_boxes, z_boxes) cdef double xij2 = 0 cdef double hi2 = self.radius_scale2*h*h cdef double hj2 = 0 for i from 0<=i=0 and j+q>=0 and k+r>=0: x[length] = i+p y[length] = j+q z[length] = k+r length += 1 return length cpdef _refresh(self): cdef int i for i from 0<=i malloc(narrays*sizeof(HashTable*)) cdef int i for i from 0<=i self.pa_wrappers[dst_index] self.src = self.pa_wrappers[src_index] cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil: """Low level, high-performance non-gil method to find neighbors. This requires that `set_context()` be called beforehand. This method does not reset the neighbors array before it appends the neighbors to it. 
""" cdef double* dst_x_ptr = self.dst.x.data cdef double* dst_y_ptr = self.dst.y.data cdef double* dst_z_ptr = self.dst.z.data cdef double* dst_h_ptr = self.dst.h.data cdef double* src_x_ptr = self.src.x.data cdef double* src_y_ptr = self.src.y.data cdef double* src_z_ptr = self.src.z.data cdef double* src_h_ptr = self.src.h.data cdef double x = dst_x_ptr[d_idx] cdef double y = dst_y_ptr[d_idx] cdef double z = dst_z_ptr[d_idx] cdef double h = dst_h_ptr[d_idx] cdef unsigned int* s_gid = self.src.gid.data cdef int orig_length = nbrs.length cdef int c_x, c_y, c_z cdef double* xmin = self.xmin.data cdef unsigned int i, j, k cdef HashEntry* candidate_cell cdef vector[unsigned int] *candidates find_cell_id_raw( x - xmin[0], y - xmin[1], z - xmin[2], self.h_sub, &c_x, &c_y, &c_z ) cdef int candidate_size = 0 cdef int mask_len = (2*self.H+1)*(2*self.H+1)*(2*self.H+1) cdef int* x_boxes = malloc(mask_len*sizeof(int)) cdef int* y_boxes = malloc(mask_len*sizeof(int)) cdef int* z_boxes = malloc(mask_len*sizeof(int)) cdef int num_boxes = self._neighbor_boxes(c_x, c_y, c_z, x_boxes, y_boxes, z_boxes, h) cdef double xij2 = 0 cdef double hi2 = self.radius_scale2*h*h cdef double hj2 = 0 for i from 0<=i malloc(mask_len*sizeof(int)) cdef int* y_mask = malloc(mask_len*sizeof(int)) cdef int* z_mask = malloc(mask_len*sizeof(int)) if self.approximate: mask_len = self._h_mask_approx(x_mask, y_mask, z_mask) else: mask_len = self._h_mask_exact(x_mask, y_mask, z_mask) for p from 0<=p= 0 and y_temp >= 0 and z_temp >= 0: cell = self.current_hash.get(x_temp, y_temp, z_temp) if cell == NULL: continue h_local = self.radius_scale*fmax(cell.h_max, h) H = ceil(h_local/self.h_sub) if abs(x_mask[p]) <= H and abs(y_mask[p]) <= H and abs(z_mask[p]) <= H: x[length] = x_temp y[length] = y_temp z[length] = z_temp length += 1 free(x_mask) free(y_mask) free(z_mask) return length cpdef _refresh(self): cdef int i for i from 0<=i y else y 
############################################################################# cdef class StratifiedHashNNPS(NNPS): """Finds nearest neighbors using Spatial Hashing with particles classified according to their support radii. """ def __init__(self, int dim, list particles, double radius_scale = 2.0, int ghost_layers = 1, domain=None, bint fixed_h = False, bint cache = False, bint sort_gids = False, int H = 1, int num_levels = 1, long long int table_size = 131072): NNPS.__init__( self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids ) self.table_size = table_size self.radius_scale2 = radius_scale*radius_scale self.interval_size = 0 self.H = H self.src_index = 0 self.dst_index = 0 self.sort_gids = sort_gids self.domain.update() self.update() def __cinit__(self, int dim, list particles, double radius_scale = 2.0, int ghost_layers = 1, domain=None, bint fixed_h = False, bint cache = False, bint sort_gids = False, int H = 1, int num_levels = 1, long long int table_size = 131072): cdef int narrays = len(particles) cdef HashTable** current_hash if fixed_h: self.num_levels = 1 else: self.num_levels = num_levels self.hashtable = malloc(narrays*sizeof(HashTable**)) self.cell_sizes = malloc(narrays*sizeof(double*)) cdef int i, j for i from 0<=i malloc(self.num_levels*sizeof(HashTable*)) self.cell_sizes[i] = malloc(self.num_levels*sizeof(double)) current_hash = self.hashtable[i] for j from 0<=j self.pa_wrappers[dst_index] self.src = self.pa_wrappers[src_index] @cython.cdivision(True) cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil: """Low level, high-performance non-gil method to find neighbors. This requires that `set_context()` be called beforehand. This method does not reset the neighbors array before it appends the neighbors to it. 
""" cdef double* dst_x_ptr = self.dst.x.data cdef double* dst_y_ptr = self.dst.y.data cdef double* dst_z_ptr = self.dst.z.data cdef double* dst_h_ptr = self.dst.h.data cdef double* src_x_ptr = self.src.x.data cdef double* src_y_ptr = self.src.y.data cdef double* src_z_ptr = self.src.z.data cdef double* src_h_ptr = self.src.h.data cdef double x = dst_x_ptr[d_idx] cdef double y = dst_y_ptr[d_idx] cdef double z = dst_z_ptr[d_idx] cdef double h = dst_h_ptr[d_idx] cdef unsigned int* s_gid = self.src.gid.data cdef int orig_length = nbrs.length cdef int c_x, c_y, c_z cdef double* xmin = self.xmin.data cdef unsigned int i, j, k, n cdef vector[unsigned int] *candidates cdef int candidate_size = 0 cdef double xij2 = 0 cdef double hi2 = self.radius_scale2*h*h cdef double hj2 = 0 cdef double h_max cdef int H, mask_len, num_boxes cdef int* x_boxes cdef int* y_boxes cdef int* z_boxes cdef HashTable* hash_level = NULL cdef HashEntry* candidate_cell = NULL for i from 0<=i ceil(h_max*self.H/self._get_h_max(self.current_cells, i)) mask_len = (2*H+1)*(2*H+1)*(2*H+1) x_boxes = malloc(mask_len*sizeof(int)) y_boxes = malloc(mask_len*sizeof(int)) z_boxes = malloc(mask_len*sizeof(int)) hash_level = self.current_hash[i] find_cell_id_raw( x - xmin[0], y - xmin[1], z - xmin[2], self._get_h_max(self.current_cells, i)/self.H, &c_x, &c_y, &c_z ) num_boxes = self._neighbor_boxes(c_x, c_y, c_z, x_boxes, y_boxes, z_boxes, H) for j from 0<=j floor((self.radius_scale*h - self.hmin)/self.interval_size) cdef inline double _get_h_max(self, double* current_cells, int hash_id) nogil: return self.radius_scale*current_cells[hash_id] @cython.cdivision(True) cdef inline int _h_mask_exact(self, int* x, int* y, int* z, int H) nogil: cdef int length = 0 cdef int s, t, u for s from -H<=s<=H: for t from -H<=t<=H: for u from -H<=u<=H: x[length] = s y[length] = t z[length] = u length += 1 return length cdef inline int _neighbor_boxes(self, int i, int j, int k, int* x, int* y, int* z, int H) nogil: cdef int length = 
0 cdef int p cdef int mask_len = (2*H+1)*(2*H+1)*(2*H+1) cdef int* x_mask = malloc(mask_len*sizeof(int)) cdef int* y_mask = malloc(mask_len*sizeof(int)) cdef int* z_mask = malloc(mask_len*sizeof(int)) mask_len = self._h_mask_exact(x_mask, y_mask, z_mask, H) for p from 0<=p= 0 and j + y_mask[p] >= 0 and k + z_mask[p] >= 0): x[length] = i + x_mask[p] y[length] = j + y_mask[p] z[length] = k + z_mask[p] length += 1 free(x_mask) free(y_mask) free(z_mask) return length cdef inline void _set_h_max(self, double* current_cells, double* src_h_ptr, int num_particles) nogil: cdef double h cdef int i, idx for i from 0<=i inline int get_level(${data_t} radius_scale, ${data_t} hmin, \ ${data_t} interval_size, ${data_t} h) { return (int) floor((radius_scale*h - hmin)/interval_size); } inline int extract_level(ulong key, int max_num_bits) { return key >> max_num_bits; } inline ${data_t} get_cell_size(int level, ${data_t} hmin, ${data_t} interval_size) { return hmin + (level + 1)*interval_size; } <%def name="fill_pids_args(data_t)" cached="True"> ${data_t}* x, ${data_t}* y, ${data_t}* z, ${data_t}* h, ${data_t} interval_size, ${data_t} xmin, ${data_t} ymin, ${data_t} zmin, ${data_t} hmin, ulong* keys, uint* pids, ${data_t} radius_scale, int max_num_bits <%def name="fill_pids_src(data_t)" cached="True"> ulong c_x, c_y, c_z; ulong key; int level = get_level(radius_scale, hmin, interval_size, h[i]); pids[i] = i; FIND_CELL_ID( x[i] - xmin, y[i] - ymin, z[i] - zmin, get_cell_size(level, hmin, interval_size), c_x, c_y, c_z ); key = interleave(c_x, c_y, c_z); ulong mask = level << max_num_bits; keys[i] = key + mask; <%def name="fill_start_indices_args(data_t)" cached="True"> ulong* keys, uint* start_idx_levels, int max_num_bits, uint* num_particles_levels <%def name="fill_start_indices_src(data_t)" cached="True"> int curr_level = extract_level(keys[i], max_num_bits); if(i != 0 && curr_level != extract_level(keys[i-1], max_num_bits)) atomic_min(&start_idx_levels[curr_level], i); else 
atomic_inc(&num_particles_levels[curr_level]) <%def name="find_nbrs_prep(data_t, sorted)", cached="True"> uint qid = i; ${data_t}4 q = (${data_t}4)(d_x[qid], d_y[qid], d_z[qid], d_h[qid]); ${data_t} radius_scale2 = radius_scale*radius_scale; int3 c; int idx, j, k, m; ${data_t} dist; ${data_t} h_i = radius_scale2*q.w*q.w; ${data_t} h_j; ulong key; uint pid; ${data_t} h_max, cell_size_level; int H, level; ulong mask; <%def name="find_nbr_lengths_args(data_t)" cached="True"> ${data_t}* d_x, ${data_t}* d_y, ${data_t}* d_z, ${data_t}* d_h, ${data_t}* s_x, ${data_t}* s_y, ${data_t}* s_z, ${data_t}* s_h, ${data_t}3 min, uint num_particles, ulong* keys, uint* pids_dst, uint* pids_src, uint* nbr_lengths, ${data_t} radius_scale, ${data_t} hmin, ${data_t} interval_size, uint* start_idx_levels, int max_num_bits, int num_levels, uint* num_particles_levels <%def name="find_nbr_lengths_src(data_t, sorted)" cached="True"> ${find_nbrs_prep(data_t, sorted)} unsigned int length = 0; #pragma unroll for(level=0; level <%def name="find_nbrs_args(data_t)" cached="True"> ${data_t}* d_x, ${data_t}* d_y, ${data_t}* d_z, ${data_t}* d_h, ${data_t}* s_x, ${data_t}* s_y, ${data_t}* s_z, ${data_t}* s_h, ${data_t}3 min, uint num_particles, ulong* keys, uint* pids_dst, uint* pids_src, uint* start_indices, uint* nbrs, ${data_t} radius_scale, ${data_t} hmin, ${data_t} interval_size, uint* start_idx_levels, int max_num_bits, int num_levels, uint* num_particles_levels <%def name="find_nbrs_src(data_t, sorted)" cached="True"> ${find_nbrs_prep(data_t, sorted)} ulong start_idx = (ulong) start_indices[qid]; ulong curr_idx = 0; #pragma unroll for(level=0; level pysph-master/pysph/base/stratified_sfc_gpu_nnps.pxd000066400000000000000000000023011356347341600232160ustar00rootroot00000000000000from libcpp.vector cimport vector from libcpp.map cimport map from libcpp.pair cimport pair from pysph.base.gpu_nnps_base cimport * ctypedef unsigned int u_int ctypedef map[u_int, pair[u_int, u_int]] key_to_idx_t 
ctypedef vector[u_int] u_int_vector_t cdef extern from 'math.h': int abs(int) nogil double ceil(double) nogil double floor(double) nogil double fabs(double) nogil double fmax(double, double) nogil double fmin(double, double) nogil cdef extern from 'math.h': double log(double) nogil double log2(double) nogil cdef class StratifiedSFCGPUNNPS(GPUNNPS): cdef NNPSParticleArrayWrapper src, dst # Current source and destination. cdef public list pids cdef public list pid_keys cdef public list start_idx_levels cdef public list num_particles_levels cdef public int max_num_bits cdef int num_levels cdef double interval_size cdef double eps cdef object helper cdef bint _sorted cpdef get_spatially_ordered_indices(self, int pa_index) cpdef _bin(self, int pa_index) cpdef _refresh(self) cdef void find_neighbor_lengths(self, nbr_lengths) cdef void find_nearest_neighbors_gpu(self, nbrs, start_indices) pysph-master/pysph/base/stratified_sfc_gpu_nnps.pyx000066400000000000000000000174221356347341600232550ustar00rootroot00000000000000#cython: embedsignature=True # malloc and friends from libc.stdlib cimport malloc, free from libcpp.vector cimport vector from libcpp.map cimport map from libcpp.pair cimport pair from cython.operator cimport dereference as deref, preincrement as inc from gpu_nnps_helper import GPUNNPSHelper import pyopencl as cl import pyopencl.array import pyopencl.algorithm from pyopencl.scan import GenericScanKernel from pyopencl.elementwise import ElementwiseKernel from compyle.array import Array import compyle.array as array from compyle.opencl import get_context # Cython for compiler directives cimport cython import numpy as np cimport numpy as np IF UNAME_SYSNAME == "Windows": cdef inline double fmin(double x, double y) nogil: return x if x < y else y cdef inline double fmax(double x, double y) nogil: return x if x > y else y @cython.cdivision(True) cdef inline double log2(double n) nogil: return log(n)/log(2) cdef class StratifiedSFCGPUNNPS(GPUNNPS): def 
__init__(self, int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bint fixed_h=False, bint cache=True, bint sort_gids=False, int num_levels=2, backend='opencl'): GPUNNPS.__init__( self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids, backend ) self.radius_scale2 = radius_scale*radius_scale self.helper = GPUNNPSHelper("stratified_sfc_gpu_nnps.mako", use_double=self.use_double, backend=self.backend) self.eps = 16*np.finfo(np.float32).eps self.num_levels = num_levels self.src_index = -1 self.dst_index = -1 self.sort_gids = sort_gids self.domain.update() self.update() @cython.cdivision(True) cpdef _refresh(self): cdef NNPSParticleArrayWrapper pa_wrapper cdef int i, j, num_particles self.pids = [] self.pid_keys = [] self.start_idx_levels = [] self.num_particles_levels = [] self._sorted = False if self.cell_size - self.hmin > self.hmin*self.eps: self.interval_size = \ (self.cell_size - self.hmin)*(1 + self.eps)/self.num_levels else: self.interval_size = self.hmin*self.eps for i from 0<=iself.pa_wrappers[i] num_particles = pa_wrapper.get_number_of_particles() self.pids.append(array.empty(num_particles, dtype=np.uint32, backend=self.backend)) self.pid_keys.append(array.empty(num_particles, dtype=np.uint64, backend=self.backend)) start_idx_i = num_particles + array.zeros(self.num_levels, dtype=np.uint32, backend=self.backend) self.start_idx_levels.append(start_idx_i) self.num_particles_levels.append(array.zeros_like(start_idx_i)) cdef double max_length = fmax(fmax((self.xmax[0] - self.xmin[0]), (self.xmax[1] - self.xmin[1])), (self.xmax[2] - self.xmin[2])) cdef int max_num_cells = ( ceil(max_length/self.hmin)) self.max_num_bits = 1 + 3*( ceil(log2(max_num_cells))) cpdef get_spatially_ordered_indices(self, int pa_index): pass cpdef _bin(self, int pa_index): cdef NNPSParticleArrayWrapper pa_wrapper = self.pa_wrappers[pa_index] fill_pids = self.helper.get_kernel("fill_pids") levels = 
array.empty(pa_wrapper.get_number_of_particles(), dtype=np.int32, backend=self.backend) pa_gpu = pa_wrapper.pa.gpu fill_pids(pa_gpu.x.dev, pa_gpu.y.dev, pa_gpu.z.dev, pa_gpu.h.dev, self.interval_size, self.xmin[0], self.xmin[1], self.xmin[2], self.hmin, self.pid_keys[pa_index].dev, self.pids[pa_index].dev, self.radius_scale, self.max_num_bits) radix_sort = cl.algorithm.RadixSort(get_context(), "unsigned int* pids, unsigned long* keys", scan_kernel=GenericScanKernel, key_expr="keys[i]", sort_arg_names=["pids", "keys"]) cdef int max_num_bits = (self.max_num_bits + \ ceil(log2(self.num_levels))) (sorted_indices, sorted_keys), evnt = radix_sort(self.pids[pa_index].dev, self.pid_keys[pa_index].dev, key_bits=max_num_bits) self.pids[pa_index].set_data(sorted_indices) self.pid_keys[pa_index].set_data(sorted_keys) #FIXME: This will only work on OpenCL and CUDA backends cdef unsigned long long key = (sorted_keys[0].get()) self.start_idx_levels[pa_index][key >> self.max_num_bits] = 0 fill_start_indices = self.helper.get_kernel("fill_start_indices") fill_start_indices(self.pid_keys[pa_index].dev, self.start_idx_levels[pa_index].dev, self.max_num_bits, self.num_particles_levels[pa_index].dev) cpdef set_context(self, int src_index, int dst_index): """Setup the context before asking for neighbors. The `dst_index` represents the particles for whom the neighbors are to be determined from the particle array with index `src_index`. Parameters ---------- src_index: int: the source index of the particle array. dst_index: int: the destination index of the particle array. 
""" GPUNNPS.set_context(self, src_index, dst_index) self.src = self.pa_wrappers[src_index] self.dst = self.pa_wrappers[dst_index] cdef void find_neighbor_lengths(self, nbr_lengths): find_nbr_lengths = self.helper.get_kernel("find_nbr_lengths", sorted=self._sorted) make_vec = cl.cltypes.make_double3 if self.use_double \ else cl.cltypes.make_float3 mask_lengths = array.zeros(self.dst.get_number_of_particles(), dtype=np.int32, backend=self.backend) dst_gpu = self.dst.pa.gpu src_gpu = self.src.pa.gpu find_nbr_lengths(dst_gpu.x.dev, dst_gpu.y.dev, dst_gpu.z.dev, dst_gpu.h.dev, src_gpu.x.dev, src_gpu.y.dev, src_gpu.z.dev, src_gpu.h.dev, make_vec(self.xmin[0], self.xmin[1], self.xmin[2]), self.src.get_number_of_particles(), self.pid_keys[self.src_index].dev, self.pids[self.dst_index].dev, self.pids[self.src_index].dev, nbr_lengths.dev, self.radius_scale, self.hmin, self.interval_size, self.start_idx_levels[self.src_index].dev, self.max_num_bits, self.num_levels, self.num_particles_levels[self.src_index].dev) cdef void find_nearest_neighbors_gpu(self, nbrs, start_indices): find_nbrs = self.helper.get_kernel("find_nbrs", sorted=self._sorted) make_vec = cl.cltypes.make_double3 if self.use_double \ else cl.cltypes.make_float3 dst_gpu = self.dst.pa.gpu src_gpu = self.src.pa.gpu find_nbrs(dst_gpu.x.dev, dst_gpu.y.dev, dst_gpu.z.dev, dst_gpu.h.dev, src_gpu.x.dev, src_gpu.y.dev, src_gpu.z.dev, src_gpu.h.dev, make_vec(self.xmin[0], self.xmin[1], self.xmin[2]), self.src.get_number_of_particles(), self.pid_keys[self.src_index].dev, self.pids[self.dst_index].dev, self.pids[self.src_index].dev, start_indices.dev, nbrs.dev, self.radius_scale, self.hmin, self.interval_size, self.start_idx_levels[self.src_index].dev, self.max_num_bits, self.num_levels, self.num_particles_levels[self.src_index].dev) pysph-master/pysph/base/stratified_sfc_nnps.pxd000066400000000000000000000047761356347341600223650ustar00rootroot00000000000000from libcpp.vector cimport vector from libcpp.map cimport map 
from libcpp.pair cimport pair from nnps_base cimport * cdef extern from 'math.h': int abs(int) nogil double ceil(double) nogil double floor(double) nogil double fabs(double) nogil double fmax(double, double) nogil double fmin(double, double) nogil cdef extern from 'math.h': double log(double) nogil double log2(double) nogil cdef extern from "z_order.h": ctypedef unsigned int uint32_t ctypedef unsigned long long uint64_t inline uint64_t get_key(uint64_t i, uint64_t j, uint64_t k) nogil cdef cppclass CompareSortWrapper: CompareSortWrapper() nogil except + CompareSortWrapper(uint32_t* current_pids, uint64_t* current_keys, int length) nogil except + inline void compare_sort() nogil ctypedef map[uint64_t, pair[uint32_t, uint32_t]] key_to_idx_t cdef class StratifiedSFCNNPS(NNPS): ############################################################################ # Data Attributes ############################################################################ cdef double radius_scale2 cdef public int num_levels cdef int max_num_bits cdef double interval_size cdef uint32_t** pids cdef uint32_t* current_pids cdef uint64_t** keys cdef uint64_t* current_keys cdef key_to_idx_t** pid_indices cdef key_to_idx_t* current_indices cdef double** cell_sizes cdef double* current_cells cdef NNPSParticleArrayWrapper dst, src ########################################################################## # Member functions ########################################################################## cpdef set_context(self, int src_index, int dst_index) cpdef double get_binning_size(self, int interval) cpdef int get_number_of_particles(self, int pa_index, int level) cpdef get_spatially_ordered_indices(self, int pa_index, LongArray indices) cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil cdef inline int _neighbor_boxes(self, int i, int j, int k, int* x, int* y, int* z, int H) nogil cdef void fill_array(self, NNPSParticleArrayWrapper pa_wrapper, int pa_index, UIntArray indices, 
uint32_t* current_pids, uint64_t* current_keys, key_to_idx_t* current_indices, double* current_cells) cdef inline int _get_level(self, double h) nogil cpdef _refresh(self) cpdef _bin(self, int pa_index, UIntArray indices) pysph-master/pysph/base/stratified_sfc_nnps.pyx000066400000000000000000000330151356347341600223760ustar00rootroot00000000000000#cython: embedsignature=True # malloc and friends from libc.stdlib cimport malloc, free from libcpp.vector cimport vector from libcpp.map cimport map from libcpp.pair cimport pair from cython.operator cimport dereference as deref, preincrement as inc from nnps_base cimport * # Cython for compiler directives cimport cython import numpy as np cimport numpy as np DEF EPS = 1e-13 cdef extern from "" namespace "std" nogil: void sort[Iter, Compare](Iter first, Iter last, Compare comp) void sort[Iter](Iter first, Iter last) IF UNAME_SYSNAME == "Windows": cdef inline double fmin(double x, double y) nogil: return x if x < y else y cdef inline double fmax(double x, double y) nogil: return x if x > y else y @cython.cdivision(True) cdef inline double log2(double n) nogil: return log(n)/log(2) ############################################################################# cdef class StratifiedSFCNNPS(NNPS): """Finds nearest neighbors using Space-filling curves with stratified grids""" def __init__(self, int dim, list particles, double radius_scale = 2.0, int ghost_layers = 1, domain=None, bint fixed_h = False, bint cache = False, bint sort_gids = False, int num_levels = 1): NNPS.__init__( self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids ) self.radius_scale2 = radius_scale*radius_scale self.interval_size = 0 self.src_index = 0 self.dst_index = 0 self.sort_gids = sort_gids self.domain.update() self.update() def __cinit__(self, int dim, list particles, double radius_scale = 2.0, int ghost_layers = 1, domain=None, bint fixed_h = False, bint cache = False, bint sort_gids = False, int num_levels = 1): cdef int narrays 
= len(particles) if fixed_h: self.num_levels = 1 else: self.num_levels = num_levels self.pids = malloc(narrays*sizeof(uint32_t*)) self.keys = malloc(narrays*sizeof(uint64_t*)) self.pid_indices = malloc(narrays*sizeof(key_to_idx_t*)) self.cell_sizes = malloc(narrays*sizeof(double*)) cdef int i, j, num_particles for i from 0<=i malloc(self.num_levels*sizeof(double)) self.current_pids = NULL self.current_keys = NULL self.current_indices = NULL self.current_cells = NULL def __dealloc__(self): cdef uint32_t* current_pids cdef uint64_t* current_keys cdef key_to_idx_t* current_indices cdef int i, j for i from 0<=i self.pa_wrappers[dst_index] self.src = self.pa_wrappers[src_index] cpdef get_spatially_ordered_indices(self, int pa_index, LongArray indices): indices.reset() cdef int num_particles = ( \ self.pa_wrappers[pa_index]).get_number_of_particles() cdef uint32_t* current_pids = self.pids[pa_index] cdef int j for j from 0<=j ceil(h_max/(self.radius_scale*self.current_cells[i])) mask_len = (2*H+1)*(2*H+1)*(2*H+1) x_boxes = malloc(mask_len*sizeof(int)) y_boxes = malloc(mask_len*sizeof(int)) z_boxes = malloc(mask_len*sizeof(int)) find_cell_id_raw( x - xmin[0], y - xmin[1], z - xmin[2], self.radius_scale*self.current_cells[i], &c_x, &c_y, &c_z ) num_boxes = self._neighbor_boxes(c_x, c_y, c_z, x_boxes, y_boxes, z_boxes, H) for j from 0<=j \ self.pa_wrappers[pa_index]).get_number_of_particles() cdef uint32_t* current_pids = self.pids[pa_index] cdef uint64_t* current_keys = self.keys[pa_index] while (current_keys[i] >> self.max_num_bits) != level: i += 1 while (current_keys[i] >> self.max_num_bits == \ current_keys[i+1] >> self.max_num_bits): length += 1 i += 1 if i == num_particles - 1: break return length #### Private protocol ################################################ @cython.cdivision(True) cdef inline int _get_level(self, double h) nogil: return floor((self.radius_scale*h - self.hmin)/self.interval_size) cdef inline int _neighbor_boxes(self, int i, int j, int k, 
int* x, int* y, int* z, int H) nogil: cdef int length = 0 cdef int p, q, r for p from -H<=p=0 and j+q>=0 and k+p>=0: x[length] = i+r y[length] = j+q z[length] = k+p length += 1 return length @cython.cdivision(True) cpdef _refresh(self): self.interval_size = ((self.cell_size - self.hmin)/self.num_levels) + EPS cdef double* current_cells cdef int i, j, num_particles for i from 0<=i \ self.pa_wrappers[i]).get_number_of_particles() if self.pids[i] != NULL: free(self.pids[i]) if self.keys[i] != NULL: free(self.keys[i]) if self.pid_indices[i] != NULL: del self.pid_indices[i] self.pids[i] = malloc(num_particles*sizeof(uint32_t)) self.keys[i] = malloc(num_particles*sizeof(uint64_t)) self.pid_indices[i] = new key_to_idx_t() current_cells = self.cell_sizes[i] for j from 0<=j ceil(max_length/self.hmin)) self.max_num_bits = 1 + 3*( ceil(log2(max_num_cells))) cdef void fill_array(self, NNPSParticleArrayWrapper pa_wrapper, int pa_index, UIntArray indices, uint32_t* current_pids, uint64_t* current_keys, key_to_idx_t* current_indices, double* current_cells): cdef double* x_ptr = pa_wrapper.x.data cdef double* y_ptr = pa_wrapper.y.data cdef double* z_ptr = pa_wrapper.z.data cdef double* h_ptr = pa_wrapper.h.data cdef double* xmin = self.xmin.data cdef int c_x, c_y, c_z cdef int i, j, n, level cdef uint64_t level_padded # Finds cell sizes at each level for i from 0<=i self.orig_n) self.assertTrue(xmin < 0.0) self.assertTrue(xmax > self.L) self.assertTrue(ymin < 0.0) self.assertTrue(ymax > self.L) p_expect = self._get_pressure(x, y, 0) diff = np.abs(p - p_expect).max() message = "Pressure not equal, max diff: %s" % diff self.assertTrue(np.allclose(p, p_expect, atol=1e-14), message) class BoxSortPeriodicBox2D(PeriodicBox2DTestCaseCPU): def setUp(self): PeriodicBox2DTestCaseCPU.setUp(self) self.orig_n = self.fluid.get_number_of_particles() self.nnps = BoxSortNNPS( dim=2, particles=[self.fluid], domain=self.domain, radius_scale=self.kernel.radius_scale) def test_summation_density(self): 
        self._check_summation_density()

    def test_box_wrapping(self):
        # Given
        fluid = self.fluid
        fluid.x += 0.35
        fluid.y += 0.35

        self._check_summation_density()

    def test_periodicity(self):
        # Given.
        fluid = self.fluid

        # When
        self.domain.update()

        # Then.
        x, y, p = fluid.get('x', 'y', 'p', only_real_particles=False)
        xmin, xmax, ymin, ymax = x.min(), x.max(), y.min(), y.max()
        new_n = fluid.get_number_of_particles()
        self.assertTrue(new_n > self.orig_n)
        self.assertTrue(xmin < 0.0)
        self.assertTrue(xmax > self.L)
        self.assertTrue(ymin < 0.0)
        self.assertTrue(ymax > self.L)

        p_expect = self._get_pressure(x, y, 0)
        diff = np.abs(p - p_expect).max()
        message = "Pressure not equal, max diff: %s" % diff
        self.assertTrue(np.allclose(p, p_expect, atol=1e-14), message)


class LinkedListPeriodicBox2D(BoxSortPeriodicBox2D):
    def setUp(self):
        PeriodicBox2DTestCaseCPU.setUp(self)
        self.orig_n = self.fluid.get_number_of_particles()
        self.nnps = LinkedListNNPS(
            dim=2, particles=[self.fluid], domain=self.domain,
            radius_scale=self.kernel.radius_scale)


class PeriodicBox3DTestCase(PeriodicBox2DTestCase):
    """Test the periodicity algorithms in the Domain Manager.

    We create a 3D box with periodicity along x, y and z.  We check if this
    produces a constant density with summation density.
    """
    def setUp(self):
        # create the particle arrays
        L = 1.0
        n = 5
        dx = L / n
        hdx = 1.5
        self.L = L
        self.vol = vol = dx * dx * dx

        # fluid particles
        xx, yy, zz = np.mgrid[dx / 2:L:dx, dx / 2:L:dx, dx / 2:L:dx]

        x = xx.ravel()
        y = yy.ravel()
        z = zz.ravel()  # particle positions
        p = self._get_pressure(x, y, z)
        h = np.ones_like(x) * hdx * dx  # smoothing lengths
        m = np.ones_like(x) * vol  # mass
        V = np.zeros_like(x)  # volumes

        fluid = get_particle_array(name='fluid', x=x, y=y, z=z, h=h, m=m,
                                   V=V, p=p)

        # particles and domain
        self.fluid = fluid
        self.domain = DomainManager(
            xmin=0, xmax=L, ymin=0, ymax=L, zmin=0, zmax=L,
            periodic_in_x=True, periodic_in_y=True, periodic_in_z=True
        )
        self.kernel = get_compiled_kernel(Gaussian(dim=3))

        self.orig_n = self.fluid.get_number_of_particles()
        self.nnps = LinkedListNNPS(
            dim=3, particles=[self.fluid], domain=self.domain,
            radius_scale=self.kernel.radius_scale)

    def test_summation_density(self):
        self._check_summation_density()

    def test_box_wrapping(self):
        # Given
        fluid = self.fluid
        fluid.x += 0.35
        fluid.y += 0.35
        fluid.z += 0.35

        self._check_summation_density()

    def test_periodicity(self):
        # Given.
        fluid = self.fluid

        # When
        self.domain.update()

        # Then.
        x, y, z, p = fluid.get('x', 'y', 'z', 'p', only_real_particles=False)
        xmin, xmax, ymin, ymax = x.min(), x.max(), y.min(), y.max()
        zmin, zmax = z.min(), z.max()
        new_n = fluid.get_number_of_particles()
        self.assertTrue(new_n > self.orig_n)
        self.assertTrue(xmin < 0.0)
        self.assertTrue(xmax > self.L)
        self.assertTrue(ymin < 0.0)
        self.assertTrue(ymax > self.L)
        self.assertTrue(zmin < 0.0)
        self.assertTrue(zmax > self.L)

        p_expect = self._get_pressure(x, y, z)
        diff = np.abs(p - p_expect).max()
        message = "Pressure not equal, max diff: %s" % diff
        self.assertTrue(np.allclose(p, p_expect, atol=1e-14), message)


if __name__ == '__main__':
    unittest.main()


pysph-master/pysph/base/tests/test_kernel.py


import numpy as np

try:
    from scipy.integrate import quad
except ImportError:
    quad = None

from unittest import TestCase, main

from pysph.base.kernels import (
    CubicSpline, Gaussian, QuinticSpline, SuperGaussian, WendlandQuintic,
    WendlandQuinticC4, WendlandQuinticC6, WendlandQuinticC2_1D,
    WendlandQuinticC4_1D, WendlandQuinticC6_1D, get_compiled_kernel)


###############################################################################
# `TestKernelBase` class.
###############################################################################
class TestKernelBase(TestCase):
    """Base class for all kernel tests.
""" kernel_factory = None @classmethod def setUpClass(cls): cls.wrapper = get_compiled_kernel(cls.kernel_factory()) cls.kernel = cls.wrapper.kernel cls.gradient = cls.wrapper.gradient def setUp(self): self.kernel = self.__class__.kernel self.gradient = self.__class__.gradient def check_kernel_moment_1d(self, a, b, h, m, xj=0.0): func = self.kernel if m == 0: def f(x): return func(x, 0, 0, xj, 0, 0, h) else: def f(x): return (pow(x, m) * func(x, 0, 0, xj, 0, 0, h)) if quad is None: kern_f = np.vectorize(f) nx = 201 x = np.linspace(a, b, nx) result = np.sum(kern_f(x)) * (b - a) / (nx - 1) else: result = quad(f, a, b)[0] return result def check_grad_moment_1d(self, a, b, h, m, xj=0.0): func = self.gradient if m == 0: def f(x): return func(x, 0, 0, xj, 0, 0, h)[0] else: def f(x): return (pow(x - xj, m) * func(x, 0, 0, xj, 0, 0, h)[0]) if quad is None: kern_f = np.vectorize(f) nx = 201 x = np.linspace(a, b, nx) return np.sum(kern_f(x)) * (b - a) / (nx - 1) else: return quad(f, a, b)[0] def check_kernel_moment_2d(self, m, n): x0, y0, z0 = 0.5, 0.5, 0.0 def func(x, y): fac = pow(x - x0, m) * pow(y - y0, n) return fac * self.kernel(x, y, 0.0, x0, y0, 0.0, 0.15) vfunc = np.vectorize(func) nx, ny = 101, 101 vol = 1.0 / (nx - 1) * 1.0 / (ny - 1) x, y = np.mgrid[0:1:nx * 1j, 0:1:nx * 1j] result = np.sum(vfunc(x, y)) * vol return result def check_gradient_moment_2d(self, m, n): x0, y0, z0 = 0.5, 0.5, 0.0 def func(x, y): fac = pow(x - x0, m) * pow(y - y0, n) return fac*np.asarray(self.gradient(x, y, 0.0, x0, y0, 0.0, 0.15)) vfunc = np.vectorize(func, otypes=[np.ndarray]) nx, ny = 101, 101 vol = 1.0 / (nx - 1) * 1.0 / (ny - 1) x, y = np.mgrid[0:1:nx * 1j, 0:1:nx * 1j] result = np.sum(vfunc(x, y)) * vol return result def check_kernel_moment_3d(self, l, m, n): x0, y0, z0 = 0.5, 0.5, 0.5 def func(x, y, z): fac = pow(x - x0, l) * pow(y - y0, m) * pow(z - z0, n) return fac * self.kernel(x, y, z, x0, y0, z0, 0.15) vfunc = np.vectorize(func) nx, ny, nz = 51, 51, 51 vol = 1.0 / (nx - 1) 
* 1.0 / (ny - 1) * 1.0 / (nz - 1) x, y, z = np.mgrid[0:1:nx * 1j, 0:1:ny * 1j, 0:1:nz * 1j] result = np.sum(vfunc(x, y, z)) * vol return result def check_gradient_moment_3d(self, l, m, n): x0, y0, z0 = 0.5, 0.5, 0.5 def func(x, y, z): fac = pow(x - x0, l) * pow(y - y0, m) * pow(z - z0, n) return fac * np.asarray(self.gradient(x, y, z, x0, y0, z0, 0.15)) vfunc = np.vectorize(func, otypes=[np.ndarray]) nx, ny, nz = 51, 51, 51 vol = 1.0 / (nx - 1) * 1.0 / (ny - 1) * 1.0 / (nz - 1) x, y, z = np.mgrid[0:1:nx * 1j, 0:1:ny * 1j, 0:1:nz * 1j] result = np.sum(vfunc(x, y, z)) * vol return result def check_kernel_at_origin(self, w_0): k = self.kernel(xi=0.0, yi=0.0, zi=0.0, xj=0.0, yj=0.0, zj=0.0, h=1.0) expect = w_0 self.assertAlmostEqual(k, expect, msg='Kernel value %s != %s (expected)' % (k, expect)) k = self.kernel(xi=3.0, yi=0.0, zi=0.0, xj=0.0, yj=0.0, zj=0.0, h=1.0) expect = 0.0 self.assertAlmostEqual(k, expect, msg='Kernel value %s != %s (expected)' % (k, expect)) g = self.gradient(xi=0.0, yi=0.0, zi=0.0, xj=0.0, yj=0.0, zj=0.0, h=1.0) expect = 0.0 self.assertAlmostEqual(g[0], expect, msg='Kernel value %s != %s (expected)' % (g[0], expect)) g = self.gradient(xi=3.0, yi=0.0, zi=0.0, xj=0.0, yj=0.0, zj=0.0, h=1.0) expect = 0.0 self.assertAlmostEqual(g[0], expect, msg='Kernel value %s != %s (expected)' % (g[0], expect)) ############################################################################### # `TestCubicSpline1D` class. ############################################################################### class TestCubicSpline1D(TestKernelBase): kernel_factory = staticmethod(lambda: CubicSpline(dim=1)) def test_simple(self): self.check_kernel_at_origin(2. / 3) def test_zeroth_kernel_moments(self): kh = self.wrapper.radius_scale # zero'th moment r = self.check_kernel_moment_1d(-kh, kh, 1.0, 0, xj=0) self.assertAlmostEqual(r, 1.0, 8) # Use a non-unit h. 
r = self.check_kernel_moment_1d(-kh, kh, 0.5, 0, xj=0) self.assertAlmostEqual(r, 1.0, 8) r = self.check_kernel_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh) self.assertAlmostEqual(r, 1.0, 8) def test_first_kernel_moment(self): kh = self.wrapper.radius_scale r = self.check_kernel_moment_1d(-kh, kh, 1.0, 1, xj=0.0) self.assertAlmostEqual(r, 0.0, 8) def test_zeroth_grad_moments(self): kh = self.wrapper.radius_scale # zero'th moment r = self.check_grad_moment_1d(-kh, kh, 1.0, 0, xj=0) self.assertAlmostEqual(r, 0.0, 8) # Use a non-unit h. r = self.check_grad_moment_1d(-kh, kh, 0.5, 0, xj=0) self.assertAlmostEqual(r, 0.0, 8) r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh) self.assertAlmostEqual(r, 0.0, 8) def test_first_grad_moment(self): kh = self.wrapper.radius_scale r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 1, xj=kh) self.assertAlmostEqual(r, -1.0, 8) ############################################################################### # `TestCubicSpline2D` class. ############################################################################### class TestCubicSpline2D(TestKernelBase): kernel_factory = staticmethod(lambda: CubicSpline(dim=2)) def test_simple(self): self.check_kernel_at_origin(10. 
/ (7 * np.pi)) def test_zeroth_kernel_moments(self): r = self.check_kernel_moment_2d(0, 0) self.assertAlmostEqual(r, 1.0, 7) def test_first_kernel_moment(self): r = self.check_kernel_moment_2d(0, 1) self.assertAlmostEqual(r, 0.0, 7) r = self.check_kernel_moment_2d(1, 0) self.assertAlmostEqual(r, 0.0, 7) r = self.check_kernel_moment_2d(1, 1) self.assertAlmostEqual(r, 0.0, 7) def test_zeroth_grad_moments(self): r = self.check_gradient_moment_2d(0, 0) self.assertAlmostEqual(r[0], 0.0, 7) self.assertAlmostEqual(r[1], 0.0, 7) def test_first_grad_moment(self): r = self.check_gradient_moment_2d(1, 0) self.assertAlmostEqual(r[0], -1.0, 6) self.assertAlmostEqual(r[1], 0.0, 8) r = self.check_gradient_moment_2d(0, 1) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], -1.0, 6) r = self.check_gradient_moment_2d(1, 1) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], 0.0, 8) ############################################################################### # `TestCubicSpline3D` class. ############################################################################### class TestCubicSpline3D(TestKernelBase): kernel_factory = staticmethod(lambda: CubicSpline(dim=3)) def test_simple(self): self.check_kernel_at_origin(1. 
/ np.pi) def test_zeroth_kernel_moments(self): r = self.check_kernel_moment_3d(0, 0, 0) self.assertAlmostEqual(r, 1.0, 6) def test_first_kernel_moment(self): r = self.check_kernel_moment_3d(0, 0, 1) self.assertAlmostEqual(r, 0.0, 7) r = self.check_kernel_moment_3d(0, 1, 0) self.assertAlmostEqual(r, 0.0, 7) r = self.check_kernel_moment_3d(1, 0, 1) self.assertAlmostEqual(r, 0.0, 7) def test_zeroth_grad_moments(self): r = self.check_gradient_moment_3d(0, 0, 0) self.assertAlmostEqual(r[0], 0.0, 7) self.assertAlmostEqual(r[1], 0.0, 7) self.assertAlmostEqual(r[2], 0.0, 7) def test_first_grad_moment(self): r = self.check_gradient_moment_3d(1, 0, 0) self.assertAlmostEqual(r[0], -1.0, 4) self.assertAlmostEqual(r[1], 0.0, 8) self.assertAlmostEqual(r[2], 0.0, 8) r = self.check_gradient_moment_3d(0, 1, 0) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], -1.0, 4) self.assertAlmostEqual(r[2], 0.0, 6) r = self.check_gradient_moment_3d(0, 0, 1) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], 0.0, 8) self.assertAlmostEqual(r[2], -1.0, 4) ############################################################################### # Gaussian kernel class TestGaussian1D(TestCubicSpline1D): kernel_factory = staticmethod(lambda: Gaussian(dim=1)) def test_simple(self): self.check_kernel_at_origin(1.0 / np.sqrt(np.pi)) def test_first_grad_moment(self): kh = self.wrapper.radius_scale r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 1, xj=kh) self.assertAlmostEqual(r, -1.0, 3) def test_zeroth_kernel_moments(self): kh = self.wrapper.radius_scale # zero'th moment r = self.check_kernel_moment_1d(-kh, kh, 1.0, 0, xj=0) self.assertAlmostEqual(r, 1.0, 4) # Use a non-unit h. 
r = self.check_kernel_moment_1d(-kh, kh, 0.5, 0, xj=0) self.assertAlmostEqual(r, 1.0, 4) r = self.check_kernel_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh) self.assertAlmostEqual(r, 1.0, 4) class TestGaussian2D(TestCubicSpline2D): kernel_factory = staticmethod(lambda: Gaussian(dim=2)) def test_simple(self): self.check_kernel_at_origin(1.0 / np.pi) def test_zeroth_kernel_moments(self): r = self.check_kernel_moment_2d(0, 0) self.assertAlmostEqual(r, 1.0, 3) def test_first_grad_moment(self): r = self.check_gradient_moment_2d(1, 0) self.assertAlmostEqual(r[0], -1.0, 2) self.assertAlmostEqual(r[1], 0.0, 8) r = self.check_gradient_moment_2d(0, 1) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], -1.0, 2) r = self.check_gradient_moment_2d(1, 1) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], 0.0, 8) class TestGaussian3D(TestCubicSpline3D): kernel_factory = staticmethod(lambda: Gaussian(dim=3)) def test_simple(self): self.check_kernel_at_origin(1.0 / pow(np.pi, 1.5)) def test_zeroth_kernel_moments(self): r = self.check_kernel_moment_3d(0, 0, 0) self.assertAlmostEqual(r, 1.0, 2) def test_first_grad_moment(self): r = self.check_gradient_moment_3d(1, 0, 0) self.assertAlmostEqual(r[0], -1.0, 2) self.assertAlmostEqual(r[1], 0.0, 8) self.assertAlmostEqual(r[2], 0.0, 8) r = self.check_gradient_moment_3d(0, 1, 0) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], -1.0, 2) self.assertAlmostEqual(r[2], 0.0, 6) r = self.check_gradient_moment_3d(0, 0, 1) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], 0.0, 8) self.assertAlmostEqual(r[2], -1.0, 2) ############################################################################### # Quintic spline kernel class TestQuinticSpline1D(TestCubicSpline1D): kernel_factory = staticmethod(lambda: QuinticSpline(dim=1)) def test_simple(self): self.check_kernel_at_origin(0.55) class TestQuinticSpline2D(TestCubicSpline2D): kernel_factory = staticmethod(lambda: QuinticSpline(dim=2)) def 
test_simple(self): self.check_kernel_at_origin(66.0 * 7.0 / (478.0 * np.pi)) class TestQuinticSpline3D(TestGaussian3D): kernel_factory = staticmethod(lambda: QuinticSpline(dim=3)) def test_simple(self): self.check_kernel_at_origin(66.0 * 3.0 / (359.0 * np.pi)) ############################################################################### # SuperGaussian kernel class TestSuperGaussian1D(TestGaussian1D): kernel_factory = staticmethod(lambda: SuperGaussian(dim=1)) def test_simple(self): self.check_kernel_at_origin(1.5 / np.sqrt(np.pi)) def test_first_grad_moment(self): kh = self.wrapper.radius_scale r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 1, xj=kh) self.assertAlmostEqual(r, -1.0, 2) def test_zeroth_kernel_moments(self): kh = self.wrapper.radius_scale # zero'th moment r = self.check_kernel_moment_1d(-kh, kh, 1.0, 0, xj=0) self.assertAlmostEqual(r, 1.0, 3) # Use a non-unit h. r = self.check_kernel_moment_1d(-kh, kh, 0.5, 0, xj=0) self.assertAlmostEqual(r, 1.0, 3) r = self.check_kernel_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh) self.assertAlmostEqual(r, 1.0, 3) class TestSuperGaussian2D(TestGaussian2D): kernel_factory = staticmethod(lambda: SuperGaussian(dim=2)) def test_simple(self): self.check_kernel_at_origin(2.0 / np.pi) def test_zeroth_kernel_moments(self): r = self.check_kernel_moment_2d(0, 0) self.assertAlmostEqual(r, 1.0, 2) def test_first_grad_moment(self): r = self.check_gradient_moment_2d(1, 0) self.assertAlmostEqual(r[0], -1.0, 1) self.assertAlmostEqual(r[1], 0.0, 8) r = self.check_gradient_moment_2d(0, 1) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], -1.0, 1) r = self.check_gradient_moment_2d(1, 1) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], 0.0, 8) class TestSuperGaussian3D(TestGaussian3D): kernel_factory = staticmethod(lambda: SuperGaussian(dim=3)) def test_simple(self): self.check_kernel_at_origin(5.0 / (2.0 * pow(np.pi, 1.5))) def test_first_grad_moment(self): r = self.check_gradient_moment_3d(1, 0, 0) 
self.assertAlmostEqual(r[0], -1.0, 1) self.assertAlmostEqual(r[1], 0.0, 8) self.assertAlmostEqual(r[2], 0.0, 8) r = self.check_gradient_moment_3d(0, 1, 0) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], -1.0, 1) self.assertAlmostEqual(r[2], 0.0, 6) r = self.check_gradient_moment_3d(0, 0, 1) self.assertAlmostEqual(r[0], 0.0, 8) self.assertAlmostEqual(r[1], 0.0, 8) self.assertAlmostEqual(r[2], -1.0, 1) ############################################################################### # WendlandQuintic C2 kernel class TestWendlandQuintic2D(TestCubicSpline2D): kernel_factory = staticmethod(lambda: WendlandQuintic(dim=2)) def test_simple(self): self.check_kernel_at_origin(7.0 / (4.0 * np.pi)) def test_zeroth_kernel_moments(self): r = self.check_kernel_moment_2d(0, 0) self.assertAlmostEqual(r, 1.0, 6) class TestWendlandQuintic3D(TestGaussian3D): kernel_factory = staticmethod(lambda: WendlandQuintic(dim=3)) def test_simple(self): self.check_kernel_at_origin(21.0 / (16.0 * np.pi)) class TestWendlandQuintic1D(TestCubicSpline1D): kernel_factory = staticmethod(lambda: WendlandQuinticC2_1D(dim=1)) def test_simple(self): self.check_kernel_at_origin(5.0 / 8.0) def test_zeroth_kernel_moments(self): kh = self.wrapper.radius_scale # zero'th moment r = self.check_kernel_moment_1d(-kh, kh, 1.0, 0, xj=0) self.assertAlmostEqual(r, 1.0, 8) # Use a non-unit h. 
r = self.check_kernel_moment_1d(-kh, kh, 0.5, 0, xj=0) self.assertAlmostEqual(r, 1.0, 7) r = self.check_kernel_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh) self.assertAlmostEqual(r, 1.0, 7) def test_first_grad_moment(self): kh = self.wrapper.radius_scale r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 1, xj=kh) self.assertAlmostEqual(r, -1.0, 7) ############################################################################### # WendlandQuintic C4 kernel class TestWendlandQuinticC4_2D(TestCubicSpline2D): kernel_factory = staticmethod(lambda: WendlandQuinticC4(dim=2)) def test_simple(self): self.check_kernel_at_origin(9.0 / (4.0 * np.pi)) class TestWendlandQuinticC4_3D(TestGaussian3D): kernel_factory = staticmethod(lambda: WendlandQuinticC4(dim=3)) def test_simple(self): self.check_kernel_at_origin(495.0 / (256.0 * np.pi)) class TestWendlandQuinticC4_1D(TestCubicSpline1D): kernel_factory = staticmethod(lambda: WendlandQuinticC4_1D(dim=1)) def test_simple(self): self.check_kernel_at_origin(3.0 / 4.0) ############################################################################### # WendlandQuintic C6 kernel class TestWendlandQuinticC6_2D(TestCubicSpline2D): kernel_factory = staticmethod(lambda: WendlandQuinticC6(dim=2)) def test_simple(self): self.check_kernel_at_origin(78.0 / (28.0 * np.pi)) class TestWendlandQuinticC6_3D(TestGaussian3D): kernel_factory = staticmethod(lambda: WendlandQuinticC6(dim=3)) def test_simple(self): self.check_kernel_at_origin(1365.0 / (512.0 * np.pi)) class TestWendlandQuinticC6_1D(TestCubicSpline1D): kernel_factory = staticmethod(lambda: WendlandQuinticC6_1D(dim=1)) def test_simple(self): self.check_kernel_at_origin(55.0 / 64.0) if __name__ == '__main__': main() pysph-master/pysph/base/tests/test_linalg3.py000066400000000000000000000111131356347341600217040ustar00rootroot00000000000000import unittest import numpy as np from pysph.base import linalg3 class TestLinalg3(unittest.TestCase): def setUp(self): self.N = 10 def assertMatricesEqual(self, 
                            result, expected, matrix, info, tol=1e-14):
        diff = result - expected
        msg = "Error for {info} matrix\n{matrix}\n"\
              "result:\n{result}\n"\
              "expected:\n{expected}\n"\
              "difference: {diff}".format(
                  info=info, matrix=matrix, diff=diff, result=result,
                  expected=expected
              )
        self.assertTrue(np.max(np.abs(diff)) < tol, msg)

    def _get_test_matrix(self):
        a = np.random.random((3, 3))
        a += a.T
        return a

    def _get_difficult_matrix(self):
        a = np.identity(3, dtype=float)
        for p in range(3):
            p0 = int(p == 0)
            p2 = int(p == 2)
            a[p0][2 - p2] = p0 * 1e-2
            a[2 - p2][p0] = a[p0][2 - p2]
        print(a)
        return a

    def _get_test_matrices(self):
        nasty = [
            [1.823886368900899e-169, -1.2724997010965309e-169, 0.0],
            [-1.2724997010965309e-169, -3.647772737801798e-169, 0.0],
            [0.0, 0.0, 0.0]
        ]
        data = [
            (np.zeros((3, 3), dtype=float), 'zero'),
            (np.identity(3, dtype=float), 'identity'),
            (np.diag((1., 2., 3.)), 'diagonal'),
            (np.diag((2., 2., 1.)), 'diagonal repeated eigenvalues'),
            (np.ones((3, 3), dtype=float), 'ones'),
            (self._get_test_matrix(), 'random'),
            (self._get_difficult_matrix(), 'difficult'),
            (np.asarray(nasty), 'nasty')
        ]
        return data

    def test_determinant(self):
        for i in range(self.N):
            self._check_determinant()

    def _check_determinant(self):
        # Given/When
        a = self._get_test_matrix()
        # Then
        self.assertAlmostEqual(
            linalg3.py_det(a), np.linalg.det(a), places=14
        )

    def test_eigen_values(self):
        # Given
        data = self._get_test_matrices()
        for matrix, info in data:
            # When
            result = linalg3.py_get_eigenvalues(matrix)
            result.sort()
            # Then
            expected = np.linalg.eigvals(matrix)
            expected.sort()
            self.assertMatricesEqual(result, expected, matrix, info)

    def test_eigen_decompose_eispack(self):
        # Given
        data = self._get_test_matrices()
        for matrix, info in data:
            # When
            val, vec = linalg3.py_eigen_decompose_eispack(matrix)
            # Then
            check = np.dot(np.asarray(vec),
                           np.dot(np.diag(val), np.asarray(vec).T))
            self.assertMatricesEqual(check, matrix, vec, info)

    def test_get_eigenvalvec(self):
        # Given
        data = self._get_test_matrices()
        for matrix, info in data:
            # When
            val, vec = linalg3.py_get_eigenvalvec(matrix)
            # Then
            check = np.dot(np.asarray(vec),
                           np.dot(np.diag(val), np.asarray(vec).T))
            self.assertMatricesEqual(check, matrix, vec, info)

    ##########################################################################
    # Transformation related tests.
    def test_transform_function(self):
        for i in range(self.N):
            self._check_transform_function()

    def _check_transform_function(self):
        # Given
        a = np.random.random((3, 3))
        p = np.random.random((3, 3))
        # When
        res = linalg3.py_transform(a, p)
        # Then
        expected = np.dot(p.T, np.dot(a, p))
        self.assertMatricesEqual(res, expected, (a, p), 'transform')

    def test_transform_diag_function(self):
        for i in range(self.N):
            self._check_transform_diag_function()

    def _check_transform_diag_function(self):
        # Given
        a = np.random.random(3)
        p = np.random.random((3, 3))
        # When
        res = linalg3.py_transform_diag(a, p)
        # Then
        expected = np.dot(p.T, np.dot(np.diag(a), p))
        self.assertMatricesEqual(res, expected, (a, p), 'transform')

    def test_transform_diag_inv_function(self):
        for i in range(self.N):
            self._check_transform_diag_inv_function()

    def _check_transform_diag_inv_function(self):
        # Given
        a = np.random.random(3)
        p = np.random.random((3, 3))
        # When
        res = linalg3.py_transform_diag_inv(a, p)
        # Then
        expected = np.dot(p, np.dot(np.diag(a), p.T))
        self.assertMatricesEqual(res, expected, (a, p), 'transform')


if __name__ == '__main__':
    unittest.main()


pysph-master/pysph/base/tests/test_neighbor_cache.py


"""Tests for neighbor caching.
""" import numpy as np import unittest from pysph.base.nnps import NeighborCache, LinkedListNNPS from pysph.base.utils import get_particle_array from cyarray.carray import UIntArray class TestNeighborCache(unittest.TestCase): def _make_random_parray(self, name, nx=5): x, y, z = np.random.random((3, nx, nx, nx)) x = np.ravel(x) y = np.ravel(y) z = np.ravel(z) h = np.ones_like(x)*0.2 return get_particle_array(name=name, x=x, y=y, z=z, h=h) def test_neighbors_cached_properly(self): # Given pa1 = self._make_random_parray('pa1', 5) pa2 = self._make_random_parray('pa2', 4) particles = [pa1, pa2] nnps = LinkedListNNPS(dim=3, particles=particles) for dst_index in (0, 1): for src_idx in (0, 1): # When cache = NeighborCache(nnps, dst_index, src_idx) cache.update() nb_cached = UIntArray() nb_direct = UIntArray() # Then. for i in range(len(particles[dst_index].x)): nnps.get_nearest_particles_no_cache( src_idx, dst_index, i, nb_direct, False ) cache.get_neighbors(src_idx, i, nb_cached) nb_e = nb_direct.get_npy_array() nb_c = nb_cached.get_npy_array() self.assertTrue(np.all(nb_e == nb_c)) def test_empty_neigbors_works_correctly(self): # Given pa1 = self._make_random_parray('pa1', 5) pa2 = self._make_random_parray('pa2', 2) pa2.x += 10.0 particles = [pa1, pa2] # When nnps = LinkedListNNPS(dim=3, particles=particles) # Cache for neighbors of destination 0. cache = NeighborCache(nnps, dst_index=0, src_index=1) cache.update() # Then nb_cached = UIntArray() nb_direct = UIntArray() for i in range(len(particles[0].x)): nnps.get_nearest_particles_no_cache(1, 0, i,nb_direct, False) # Get neighbors from source 1 on destination 0. 
cache.get_neighbors(src_index=1, d_idx=i, nbrs=nb_cached) nb_e = nb_direct.get_npy_array() nb_c = nb_cached.get_npy_array() self.assertEqual(len(nb_e), 0) self.assertTrue(np.all(nb_e == nb_c)) def test_cache_updates_with_changed_particles(self): # Given pa1 = self._make_random_parray('pa1', 5) particles = [pa1] nnps = LinkedListNNPS(dim=3, particles=particles) cache = NeighborCache(nnps, dst_index=0, src_index=0) cache.update() # When pa2 = self._make_random_parray('pa2', 2) pa1.add_particles(x=pa2.x, y=pa2.y, z=pa2.z) nnps.update() cache.update() nb_cached = UIntArray() nb_direct = UIntArray() for i in range(len(particles[0].x)): nnps.get_nearest_particles_no_cache(0, 0, i, nb_direct, False) cache.get_neighbors(0, i, nb_cached) nb_e = nb_direct.get_npy_array() nb_c = nb_cached.get_npy_array() self.assertTrue(np.all(nb_e == nb_c)) def test_setting_use_cache_does_cache(self): # Given pa = self._make_random_parray('pa1', 3) pa.h[:] = 1.0 nnps = LinkedListNNPS(dim=3, particles=[pa], cache=False) n = pa.get_number_of_particles() # When nnps.set_use_cache(True) nbrs = UIntArray() nnps.set_context(0, 0) for i in range(n): nnps.get_nearest_particles(0, 0, i, nbrs) # Then self.assertEqual(nbrs.length, n) # Find the length of all cached neighbors, # in this case, each particle has n neighbors, # so we should have n*n neighbors in all. 
total_length = sum( x.length for x in nnps.cache[0]._neighbor_arrays ) self.assertEqual(total_length, n*n) if __name__ == '__main__': unittest.main() pysph-master/pysph/base/tests/test_nnps.py000066400000000000000000001220011356347341600213300ustar00rootroot00000000000000"""unittests for the serial NNPS You can run the tests like so: $ pytest -v test_nnps.py """ import numpy from numpy import random # PySPH imports from pysph.base.point import IntPoint, Point from pysph.base.utils import get_particle_array from pysph.base import nnps from compyle.config import get_config # Carrays from PyZoltan from cyarray.carray import UIntArray, IntArray # Python testing framework import unittest import pytest from pytest import importorskip class SimpleNNPSTestCase(unittest.TestCase): """Simplified NNPS test case We distribute particles manually and perform sanity checks on NNPS """ def setUp(self): """Default set-up used by all the tests Particles with the following coordinates (x, y, z) are placed in a box 0 : -1.5 , 0.25 , 0.5 1 : 0.33 , -0.25, 0.25 2 : 1.25 , -1.25, 1.25 3 : 0.05 , 1.25 , -0.5 4 : -0.5 , 0.5 , -1.25 5 : -0.75, 0.75 , -1.25 6 : -1.25, 0.5 , 0.5 7 : 0.5 , 1.5 , -0.5 8 : 0.5 , -0.5 , 0.5 9 : 0.5 , 1.75 , -0.75 The cell size is set to 1. 
        Valid cell indices and the particles they contain are given below:

        (-2, 0, 0) : particle 0, 6
        (0, -1, 0) : particle 1, 8
        (1, -2, 1) : particle 2
        (0, 1, -1) : particle 3, 7, 9
        (-1, 0, -2): particle 4, 5
        """
        x = numpy.array([
            -1.5, 0.33, 1.25, 0.05, -0.5, -0.75, -1.25, 0.5, 0.5, 0.5])
        y = numpy.array([
            0.25, -0.25, -1.25, 1.25, 0.5, 0.75, 0.5, 1.5, -0.5, 1.75])
        z = numpy.array([
            0.5, 0.25, 1.25, -0.5, -1.25, -1.25, 0.5, -0.5, 0.5, -0.75])

        # using a degenerate (h=0) array will set cell size to 1 for NNPS
        h = numpy.zeros_like(x)

        pa = get_particle_array(x=x, y=y, z=z, h=h)

        self.dict_box_sort_nnps = nnps.DictBoxSortNNPS(
            dim=3, particles=[pa], radius_scale=1.0
        )

        self.box_sort_nnps = nnps.BoxSortNNPS(
            dim=3, particles=[pa], radius_scale=1.0
        )

        self.ll_nnps = nnps.LinkedListNNPS(
            dim=3, particles=[pa], radius_scale=1.0
        )

        self.sp_hash_nnps = nnps.SpatialHashNNPS(
            dim=3, particles=[pa], radius_scale=1.0
        )

        self.ext_sp_hash_nnps = nnps.ExtendedSpatialHashNNPS(
            dim=3, particles=[pa], radius_scale=1.0
        )

        self.strat_radius_nnps = nnps.StratifiedHashNNPS(
            dim=3, particles=[pa], radius_scale=1.0
        )

        # these are the expected cells
        self.expected_cells = {
            IntPoint(-2, 0, 0): [0, 6],
            IntPoint(0, -1, 0): [1, 8],
            IntPoint(1, -2, 1): [2],
            IntPoint(0, 1, -1): [3, 7, 9],
            IntPoint(-1, 0, -2): [4, 5]
        }

    def test_cell_size(self):
        "SimpleNNPS :: test cell_size"
        nnps = self.dict_box_sort_nnps
        self.assertAlmostEqual(nnps.cell_size, 1.0, 14)

        nnps = self.box_sort_nnps
        self.assertAlmostEqual(nnps.cell_size, 1.0, 14)

        nnps = self.ll_nnps
        self.assertAlmostEqual(nnps.cell_size, 1.0, 14)

        nnps = self.sp_hash_nnps
        self.assertAlmostEqual(nnps.cell_size, 1.0, 14)

        nnps = self.ext_sp_hash_nnps
        self.assertAlmostEqual(nnps.cell_size, 1.0, 14)

        nnps = self.strat_radius_nnps
        self.assertAlmostEqual(nnps.cell_size, 1.0, 14)

    def test_cells(self):
        "SimpleNNPS :: test cells"
        nnps = self.dict_box_sort_nnps
        cells = self.expected_cells

        # check each cell for its contents
        for key in cells:
            self.assertTrue(key in nnps.cells)

            cell = nnps.cells.get(key)

            cell_indices = list(cell.lindices[0].get_npy_array())
            expected_indices = cells.get(key)

            self.assertTrue(cell_indices == expected_indices)


class NNPS2DTestCase(unittest.TestCase):
    def setUp(self):
        """Default set-up used by all the tests

        Two sets of particle arrays (a & b) are created and neighbors are
        checked from a -> b, b -> a, a -> a and b -> b
        """
        numpy.random.seed(123)
        self.numPoints1 = numPoints1 = 1 << 11
        self.numPoints2 = numPoints2 = 1 << 10

        self.pa1 = pa1 = self._create_random(numPoints1)
        self.pa2 = pa2 = self._create_random(numPoints2)

        # the list of particles
        self.particles = [pa1, pa2]

    def _create_random(self, numPoints):
        # average particle spacing and volume in the unit cube
        dx = pow(1.0 / numPoints, 1. / 2.)

        # create random points in the interval [-1, 1]^3
        x1, y1 = random.random((2, numPoints)) * 2.0 - 1.0
        z1 = numpy.zeros_like(x1)

        h1 = numpy.ones_like(x1) * 1.2 * dx
        gid1 = numpy.arange(numPoints).astype(numpy.uint32)

        # first particle array
        pa = get_particle_array(
            x=x1, y=y1, z=z1, h=h1, gid=gid1)

        return pa

    def _assert_neighbors(self, nbrs_nnps, nbrs_brute_force):
        # ensure that the lengths of the arrays are the same
        self.assertEqual(nbrs_nnps.length, nbrs_brute_force.length)
        nnbrs = nbrs_nnps.length

        _nbrs1 = nbrs_nnps.get_npy_array()
        _nbrs2 = nbrs_brute_force.get_npy_array()

        # sort the neighbors
        nbrs1 = _nbrs1[:nnbrs]
        nbrs1.sort()
        nbrs2 = _nbrs2
        nbrs2.sort()

        # check each neighbor
        for i in range(nnbrs):
            self.assertEqual(nbrs1[i], nbrs2[i])

    def _test_neighbors_by_particle(self, src_index, dst_index,
                                    dst_numPoints):
        # nnps and the two neighbor lists
        nps = self.nps
        nbrs1 = UIntArray()
        nbrs2 = UIntArray()

        nps.set_context(src_index, dst_index)

        # get the neighbors and sort the result
        for i in range(dst_numPoints):
            nps.get_nearest_particles(src_index, dst_index, i, nbrs1)
            nps.brute_force_neighbors(src_index, dst_index, i, nbrs2)

            # ensure that the neighbor lists are the same
            self._assert_neighbors(nbrs1, nbrs2)


class DictBoxSortNNPS2DTestCase(NNPS2DTestCase):
    """Test for the original box-sort algorithm"""

    def setUp(self):
        NNPS2DTestCase.setUp(self)
        self.nps = nnps.DictBoxSortNNPS(
            dim=2, particles=self.particles, radius_scale=2.0
        )

    def test_neighbors_aa(self):
        self._test_neighbors_by_particle(src_index=0, dst_index=0,
                                         dst_numPoints=self.numPoints1)

    def test_neighbors_ab(self):
        self._test_neighbors_by_particle(src_index=0, dst_index=1,
                                         dst_numPoints=self.numPoints2)

    def test_neighbors_ba(self):
        self._test_neighbors_by_particle(src_index=1, dst_index=0,
                                         dst_numPoints=self.numPoints1)

    def test_neighbors_bb(self):
        self._test_neighbors_by_particle(src_index=1, dst_index=1,
                                         dst_numPoints=self.numPoints2)

    def test_repeated(self):
        self.test_neighbors_aa()
        self.test_neighbors_ab()
        self.test_neighbors_ba()
        self.test_neighbors_bb()


class OctreeGPUNNPS2DTestCase(DictBoxSortNNPS2DTestCase):
    """Test for Z-Order SFC based OpenCL algorithm"""

    def setUp(self):
        NNPS2DTestCase.setUp(self)
        cl = importorskip("pyopencl")
        from pysph.base import gpu_nnps
        cfg = get_config()
        self._orig_use_double = cfg.use_double
        cfg.use_double = False
        self.nps = gpu_nnps.OctreeGPUNNPS(
            dim=2, particles=self.particles, radius_scale=2.0,
            backend='opencl'
        )

    def tearDown(self):
        super(OctreeGPUNNPS2DTestCase, self).tearDown()
        get_config().use_double = self._orig_use_double


class OctreeGPUNNPSDouble2DTestCase(DictBoxSortNNPS2DTestCase):
    """Test for Z-Order SFC based OpenCL algorithm"""

    def setUp(self):
        NNPS2DTestCase.setUp(self)
        cl = importorskip("pyopencl")
        from pysph.base import gpu_nnps
        cfg = get_config()
        self._orig_use_double = cfg.use_double
        cfg.use_double = True
        self.nps = gpu_nnps.OctreeGPUNNPS(
            dim=2, particles=self.particles, radius_scale=2.0,
            backend='opencl'
        )

    def tearDown(self):
        super(OctreeGPUNNPSDouble2DTestCase, self).tearDown()
        get_config().use_double = self._orig_use_double


class NNPSTestCase(unittest.TestCase):
    """Standard nearest neighbor queries and comparison with the brute force
    approach.

    We randomly distribute particles in 3-space and compare the list of
    neighbors using the NNPS algorithms and the brute force approach.

    The following particle arrays are set up for testing

    1) pa1, pa2: uniformly distributed distribution with a constant h. pa1
       and pa2 have a different number of particles and hence, a different h.
    2) pa3: Uniformly distributed distribution for both the coordinates and
       for h.
    3) pa4: h varies along with spatial coordinates.
    """

    def setUp(self):
        """Default set-up used by all the tests
        """
        numpy.random.seed(123)
        # Datasets with constant h
        self.numPoints1 = numPoints1 = 1 << 11
        self.numPoints2 = numPoints2 = 1 << 10

        # Datasets with varying h
        self.numPoints3 = numPoints3 = 1 << 10

        # FIXME: Tests fail with m4=9
        # Looks like the issue arises due to rounding errors which should be
        # acceptable to a degree. Need to modify tests or brute force NNPS to
        # handle such cases appropriately
        m4 = 8
        self.numPoints4 = numPoints4 = m4 ** 3

        self.pa1 = pa1 = self._create_random(numPoints1)
        self.pa2 = pa2 = self._create_random(numPoints2)
        self.pa3 = pa3 = self._create_random_variable_h(numPoints3)
        self.pa4 = pa4 = self._create_linear_radius(0.1, 0.4, m4)

        # the list of particles
        self.particles = [pa1, pa2, pa3, pa4]

    def _create_random(self, numPoints):
        # average particle spacing and volume in the unit cube
        dx = pow(1.0 / numPoints, 1. / 3.)

        # create random points in the interval [-1, 1]^3
        x1, y1, z1 = random.random((3, numPoints)) * 2.0 - 1.0
        h1 = numpy.ones_like(x1) * 1.2 * dx
        gid1 = numpy.arange(numPoints).astype(numpy.uint32)

        # first particle array
        pa = get_particle_array(
            x=x1, y=y1, z=z1, h=h1, gid=gid1)

        return pa

    def _create_linear_radius(self, dx_min, dx_max, m):
        n = m ** 3
        base = numpy.linspace(1., (dx_max / dx_min), m)
        hl = base * dx_min
        xl = numpy.cumsum(hl)
        x, y, z = numpy.meshgrid(xl, xl, xl)
        x, y, z = x.ravel(), y.ravel(), z.ravel()
        h1, h2, h3 = numpy.meshgrid(hl, hl, hl)
        h = (h1 ** 2 + h2 ** 2 + h3 ** 2) ** 0.5
        h = h.ravel()
        gid = numpy.arange(n).astype(numpy.uint32)
        pa = get_particle_array(
            x=x, y=y, z=z, h=h, gid=gid
        )
        return pa

    def _create_random_variable_h(self, num_points):
        # average particle spacing and volume in the unit cube
        dx = pow(1.0 / num_points, 1. / 3.)

        # create random points in the interval [-1, 1]^3
        x1, y1, z1 = random.random((3, num_points)) * 2.0 - 1.0
        h1 = numpy.ones_like(x1) * \
            numpy.random.uniform(1, 4, size=num_points) * 1.2 * dx
        gid1 = numpy.arange(num_points).astype(numpy.uint32)

        # first particle array
        pa = get_particle_array(
            x=x1, y=y1, z=z1, h=h1, gid=gid1)

        return pa

    def _assert_neighbors(self, nbrs_nnps, nbrs_brute_force):
        # ensure that the lengths of the arrays are the same
        if nbrs_nnps.length != nbrs_brute_force.length:
            print(nbrs_nnps.get_npy_array(),
                  nbrs_brute_force.get_npy_array())
        self.assertEqual(nbrs_nnps.length, nbrs_brute_force.length)
        nnbrs = nbrs_nnps.length

        _nbrs1 = nbrs_nnps.get_npy_array()
        _nbrs2 = nbrs_brute_force.get_npy_array()

        # sort the neighbors
        nbrs1 = _nbrs1[:nnbrs]
        nbrs1.sort()
        nbrs2 = _nbrs2
        nbrs2.sort()

        # check each neighbor
        for i in range(nnbrs):
            self.assertEqual(nbrs1[i], nbrs2[i])

    def _test_neighbors_by_particle(self, src_index, dst_index,
                                    dst_numPoints):
        # nnps and the two neighbor lists
        nps = self.nps
        nbrs1 = UIntArray()
        nbrs2 = UIntArray()

        nps.set_context(src_index, dst_index)

        # get the neighbors and sort the result
        for i in range(dst_numPoints):
            nps.get_nearest_particles(src_index, dst_index, i, nbrs1)
            nps.brute_force_neighbors(src_index, dst_index, i, nbrs2)

            # ensure that the neighbor lists are the same
            self._assert_neighbors(nbrs1, nbrs2)


class DictBoxSortNNPSTestCase(NNPSTestCase):
    """Test for the original box-sort algorithm"""

    def setUp(self):
        """Default setup and tests used for 3D NNPS tests

        We run the tests on the following pairs of particle arrays:

        Set 1) Same particle arrays. Both have constant h.
            1) a -> a
            2) b -> b

        Set 2) Different particle arrays with constant h.
            1) a -> b
            2) b -> a

        Set 3) Variable h
            1) c -> c
            2) d -> d

        We then repeat the above tests again to ensure that we get the
        correct results even when running NNPS repeatedly
        """
        NNPSTestCase.setUp(self)
        self.nps = nnps.DictBoxSortNNPS(
            dim=3, particles=self.particles, radius_scale=2.0
        )

    def test_neighbors_aa(self):
        self._test_neighbors_by_particle(src_index=0, dst_index=0,
                                         dst_numPoints=self.numPoints1)

    def test_neighbors_ab(self):
        self._test_neighbors_by_particle(src_index=0, dst_index=1,
                                         dst_numPoints=self.numPoints2)

    def test_neighbors_ba(self):
        self._test_neighbors_by_particle(src_index=1, dst_index=0,
                                         dst_numPoints=self.numPoints1)

    def test_neighbors_bb(self):
        self._test_neighbors_by_particle(src_index=1, dst_index=1,
                                         dst_numPoints=self.numPoints2)

    def test_neighbors_cc(self):
        self._test_neighbors_by_particle(src_index=2, dst_index=2,
                                         dst_numPoints=self.numPoints3)

    def test_neighbors_dd(self):
        self._test_neighbors_by_particle(src_index=3, dst_index=3,
                                         dst_numPoints=self.numPoints4)

    def test_repeated(self):
        self.test_neighbors_aa()
        self.test_neighbors_ab()
        self.test_neighbors_ba()
        self.test_neighbors_bb()
        self.test_neighbors_cc()
        self.test_neighbors_dd()


class BoxSortNNPSTestCase(DictBoxSortNNPSTestCase):
    """Test for the original box-sort algorithm"""

    def setUp(self):
        NNPSTestCase.setUp(self)
        self.nps = nnps.BoxSortNNPS(
            dim=3, particles=self.particles, radius_scale=2.0
        )


class SpatialHashNNPSTestCase(DictBoxSortNNPSTestCase):
    """Test for Spatial Hash algorithm"""

    def setUp(self):
        NNPSTestCase.setUp(self)
        self.nps = nnps.SpatialHashNNPS(
            dim=3, particles=self.particles, radius_scale=2.0
        )


class SingleLevelStratifiedHashNNPSTestCase(DictBoxSortNNPSTestCase):
    """Test for Stratified hash algorithm with num_levels = 1"""

    def setUp(self):
        NNPSTestCase.setUp(self)
        self.nps = nnps.StratifiedHashNNPS(
            dim=3, particles=self.particles, radius_scale=2.0
        )


class MultipleLevelsStratifiedHashNNPSTestCase(DictBoxSortNNPSTestCase):
    """Test for Stratified hash algorithm with num_levels = 2"""

    def setUp(self):
        NNPSTestCase.setUp(self)
        self.nps = nnps.StratifiedHashNNPS(
            dim=3, particles=self.particles, radius_scale=2.0,
            num_levels=2
        )


class SingleLevelStratifiedSFCNNPSTestCase(DictBoxSortNNPSTestCase):
    """Test for Stratified SFC algorithm with num_levels = 1"""

    def setUp(self):
        NNPSTestCase.setUp(self)
        self.nps = nnps.StratifiedSFCNNPS(
            dim=3, particles=self.particles, radius_scale=2.0
        )


class MultipleLevelsStratifiedSFCNNPSTestCase(DictBoxSortNNPSTestCase):
    """Test for Stratified SFC algorithm with num_levels = 2"""

    def setUp(self):
        NNPSTestCase.setUp(self)
        self.nps = nnps.StratifiedSFCNNPS(
            dim=3, particles=self.particles, radius_scale=2.0,
            num_levels=2
        )


class ExtendedSpatialHashNNPSTestCase(DictBoxSortNNPSTestCase):
    """Test for Extended Spatial Hash algorithm"""

    def setUp(self):
        NNPSTestCase.setUp(self)
        self.nps = nnps.ExtendedSpatialHashNNPS(
            dim=3, particles=self.particles, radius_scale=2.0
        )


class OctreeNNPSTestCase(DictBoxSortNNPSTestCase):
    """Test for Octree based algorithm"""

    def setUp(self):
        NNPSTestCase.setUp(self)
        self.nps = nnps.OctreeNNPS(
            dim=3, particles=self.particles, radius_scale=2.0
        )


class CellIndexingNNPSTestCase(DictBoxSortNNPSTestCase):
    """Test for Cell Indexing based algorithm"""

    def setUp(self):
        NNPSTestCase.setUp(self)
        self.nps = nnps.CellIndexingNNPS(
            dim=3, particles=self.particles, radius_scale=2.0
        )


class
ZOrderNNPSTestCase(DictBoxSortNNPSTestCase): """Test for Z-Order SFC based algorithm""" def setUp(self): NNPSTestCase.setUp(self) self.nps = nnps.ZOrderNNPS( dim=3, particles=self.particles, radius_scale=2.0 ) class ZOrderGPUNNPSTestCase(DictBoxSortNNPSTestCase): """Test for Z-Order SFC based OpenCL algorithm""" def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pyopencl") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = False self.nps = gpu_nnps.ZOrderGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, backend='opencl' ) def tearDown(self): super(ZOrderGPUNNPSTestCase, self).tearDown() get_config().use_double = self._orig_use_double class ZOrderGPUNNPSTestCaseCUDA(ZOrderGPUNNPSTestCase): def setUp(self): NNPSTestCase.setUp(self) importorskip("pycuda") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = False self.nps = gpu_nnps.ZOrderGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, backend='cuda' ) def tearDown(self): super(ZOrderGPUNNPSTestCase, self).tearDown() get_config().use_double = self._orig_use_double class BruteForceNNPSTestCase(DictBoxSortNNPSTestCase): """Test for OpenCL brute force algorithm""" def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pyopencl") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = False self.nps = gpu_nnps.BruteForceNNPS( dim=3, particles=self.particles, radius_scale=2.0, backend='opencl' ) def tearDown(self): super(BruteForceNNPSTestCase, self).tearDown() get_config().use_double = self._orig_use_double class OctreeGPUNNPSTestCase(DictBoxSortNNPSTestCase): """Test for Z-Order SFC based OpenCL algorithm""" def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pyopencl") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = False self.nps = gpu_nnps.OctreeGPUNNPS( 
dim=3, particles=self.particles, radius_scale=2.0, backend='opencl' ) def tearDown(self): super(OctreeGPUNNPSTestCase, self).tearDown() get_config().use_double = self._orig_use_double class ZOrderGPUDoubleNNPSTestCase(DictBoxSortNNPSTestCase): """Test for Z-Order SFC based OpenCL algorithm""" def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pyopencl") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = True self.nps = gpu_nnps.ZOrderGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, backend='opencl' ) def tearDown(self): super(ZOrderGPUDoubleNNPSTestCase, self).tearDown() get_config().use_double = self._orig_use_double class ZOrderGPUDoubleNNPSTestCaseCUDA(ZOrderGPUDoubleNNPSTestCase): """Test for Z-Order SFC based OpenCL algorithm""" def setUp(self): NNPSTestCase.setUp(self) importorskip("pycuda") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = True self.nps = gpu_nnps.ZOrderGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, backend='cuda' ) def tearDown(self): super(ZOrderGPUDoubleNNPSTestCase, self).tearDown() get_config().use_double = self._orig_use_double class OctreeGPUDoubleNNPSTestCase(DictBoxSortNNPSTestCase): """Test for Octree based OpenCL algorithm""" def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pyopencl") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = True self.nps = gpu_nnps.OctreeGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, backend='opencl' ) def tearDown(self): super(OctreeGPUDoubleNNPSTestCase, self).tearDown() get_config().use_double = self._orig_use_double class TestZOrderGPUNNPSWithSorting(DictBoxSortNNPSTestCase): def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pyopencl") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = False self.nps = 
gpu_nnps.ZOrderGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, backend='opencl' ) self.nps.spatially_order_particles(0) self.nps.spatially_order_particles(1) for pa in self.particles: pa.gpu.pull() def tearDown(self): super(TestZOrderGPUNNPSWithSorting, self).tearDown() get_config().use_double = self._orig_use_double class TestZOrderGPUNNPSWithSortingCUDA(TestZOrderGPUNNPSWithSorting): def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pycuda") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = False self.nps = gpu_nnps.ZOrderGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, backend='cuda' ) self.nps.spatially_order_particles(0) self.nps.spatially_order_particles(1) for pa in self.particles: pa.gpu.pull() def tearDown(self): super(TestZOrderGPUNNPSWithSorting, self).tearDown() get_config().use_double = self._orig_use_double class OctreeGPUNNPSWithSortingTestCase(DictBoxSortNNPSTestCase): def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pyopencl") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = False self.nps = gpu_nnps.OctreeGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, backend='opencl' ) self.nps.spatially_order_particles(0) self.nps.spatially_order_particles(1) for pa in self.particles: pa.gpu.pull() def tearDown(self): super(OctreeGPUNNPSWithSortingTestCase, self).tearDown() get_config().use_double = self._orig_use_double class OctreeGPUNNPSWithPartitioningTestCase(DictBoxSortNNPSTestCase): def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pyopencl") from pysph.base import gpu_nnps cfg = get_config() self._orig_use_double = cfg.use_double cfg.use_double = False self.nps = gpu_nnps.OctreeGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, use_partitions=True, backend='opencl' ) for pa in self.particles: pa.gpu.pull() def tearDown(self): 
super(OctreeGPUNNPSWithPartitioningTestCase, self).tearDown() get_config().use_double = self._orig_use_double class StratifiedSFCGPUNNPSTestCase(DictBoxSortNNPSTestCase): """Test for Stratified SFC based OpenCL algorithm""" def setUp(self): NNPSTestCase.setUp(self) cl = importorskip("pyopencl") from pysph.base import gpu_nnps self.nps = gpu_nnps.StratifiedSFCGPUNNPS( dim=3, particles=self.particles, radius_scale=2.0, num_levels=2, backend='opencl' ) @pytest.mark.xfail(reason="StratifiedSFCGPUNNPS failing for \ variable h cases") def test_neighbors_dd(self): self._test_neighbors_by_particle(src_index=3, dst_index=3, dst_numPoints=self.numPoints4) @pytest.mark.xfail(reason="StratifiedSFCGPUNNPS failing for \ variable h cases") def test_repeated(self): self.test_neighbors_aa() self.test_neighbors_ab() self.test_neighbors_ba() self.test_neighbors_bb() self.test_neighbors_cc() self.test_neighbors_dd() class CompressedOctreeNNPSTestCase(DictBoxSortNNPSTestCase): """Test for Compressed Octree based algorithm""" def setUp(self): NNPSTestCase.setUp(self) self.nps = nnps.CompressedOctreeNNPS( dim=3, particles=self.particles, radius_scale=2.0 ) class LinkedListNNPSTestCase(DictBoxSortNNPSTestCase): """Test for the original box-sort algorithm""" def setUp(self): NNPSTestCase.setUp(self) self.nps = nnps.LinkedListNNPS( dim=3, particles=self.particles, radius_scale=2.0 ) def test_cell_index_positivity(self): nps = self.nps ncells_tot = nps.ncells_tot ncells_per_dim = nps.ncells_per_dim dim = nps.dim # cell indices should be positive. 
We iterate over the # flattened indices, get the unflattened version and check # that each component remains positive for cell_index in range(ncells_tot): cid = nnps.py_unflatten(cell_index, ncells_per_dim, dim) self.assertTrue(cid.x > -1) self.assertTrue(cid.y > -1) self.assertTrue(cid.z > -1) class TestNNPSOnLargeDomain(unittest.TestCase): def _make_particles(self, nx=20): x, y, z = numpy.random.random((3, nx, nx, nx)) x = numpy.ravel(x) y = numpy.ravel(y) z = numpy.ravel(z) h = numpy.ones_like(x) * 1.3 / nx pa = get_particle_array(name='fluid', x=x, y=y, z=z, h=h) # Place one particle far far away # On Linux and OSX this works even if sz is 100000. # However, on Windows this fails but works with 1000, # hence we set it to 1000. sz = 1000.0 pa.add_particles(x=[sz], y=[sz], z=[sz]) return pa def test_linked_list_nnps_raises_exception_for_large_domain(self): # Given/When pa = self._make_particles(20) # Then self.assertRaises( RuntimeError, nnps.LinkedListNNPS, dim=3, particles=[pa], cache=True ) def test_box_sort_works_for_large_domain(self): # Given pa = self._make_particles(20) # We turn on cache so it computes all the neighbors quickly for us. nps = nnps.BoxSortNNPS(dim=3, particles=[pa], cache=True) nbrs = UIntArray() direct = UIntArray() nps.set_context(0, 0) for i in range(pa.get_number_of_particles()): nps.get_nearest_particles(0, 0, i, nbrs) nps.brute_force_neighbors(0, 0, i, direct) x = nbrs.get_npy_array() y = direct.get_npy_array() x.sort() y.sort() assert numpy.all(x == y) def test_spatial_hash_works_for_large_domain(self): # Given pa = self._make_particles(20) # We turn on cache so it computes all the neighbors quickly for us. 
        nps = nnps.SpatialHashNNPS(dim=3, particles=[pa], cache=True)
        nbrs = UIntArray()
        direct = UIntArray()
        nps.set_context(0, 0)
        for i in range(pa.get_number_of_particles()):
            nps.get_nearest_particles(0, 0, i, nbrs)
            nps.brute_force_neighbors(0, 0, i, direct)
            x = nbrs.get_npy_array()
            y = direct.get_npy_array()
            x.sort()
            y.sort()
            assert numpy.all(x == y)

    def test_extended_spatial_hash_works_for_large_domain(self):
        # Given
        pa = self._make_particles(20)
        # We turn on cache so it computes all the neighbors quickly for us.
        nps = nnps.ExtendedSpatialHashNNPS(dim=3, particles=[pa], cache=True)
        nbrs = UIntArray()
        direct = UIntArray()
        nps.set_context(0, 0)
        for i in range(pa.get_number_of_particles()):
            nps.get_nearest_particles(0, 0, i, nbrs)
            nps.brute_force_neighbors(0, 0, i, direct)
            x = nbrs.get_npy_array()
            y = direct.get_npy_array()
            x.sort()
            y.sort()
            assert numpy.all(x == y)

    def test_octree_works_for_large_domain(self):
        # Given
        pa = self._make_particles(20)
        # We turn on cache so it computes all the neighbors quickly for us.
        nps = nnps.OctreeNNPS(dim=3, particles=[pa], cache=True)
        nbrs = UIntArray()
        direct = UIntArray()
        nps.set_context(0, 0)
        for i in range(pa.get_number_of_particles()):
            nps.get_nearest_particles(0, 0, i, nbrs)
            nps.brute_force_neighbors(0, 0, i, direct)
            x = nbrs.get_npy_array()
            y = direct.get_npy_array()
            x.sort()
            y.sort()
            assert numpy.all(x == y)

    def test_compressed_octree_works_for_large_domain(self):
        # Given
        pa = self._make_particles(20)
        # We turn on cache so it computes all the neighbors quickly for us.
        nps = nnps.CompressedOctreeNNPS(dim=3, particles=[pa], cache=True)
        nbrs = UIntArray()
        direct = UIntArray()
        nps.set_context(0, 0)
        for i in range(pa.get_number_of_particles()):
            nps.get_nearest_particles(0, 0, i, nbrs)
            nps.brute_force_neighbors(0, 0, i, direct)
            x = nbrs.get_npy_array()
            y = direct.get_npy_array()
            x.sort()
            y.sort()
            assert numpy.all(x == y)


class TestLinkedListNNPSWithSorting(unittest.TestCase):

    def _make_particles(self, nx=20):
        x = numpy.linspace(0, 1, nx)
        h = numpy.ones_like(x) / (nx - 1)
        pa = get_particle_array(name='fluid', x=x, h=h)

        nps = nnps.LinkedListNNPS(dim=1, particles=[pa], sort_gids=True)
        return pa, nps

    def test_nnps_sorts_without_gids(self):
        # Given
        pa, nps = self._make_particles(10)

        # When
        nps.set_context(0, 0)
        # Test that the gids are actually huge and invalid.
        self.assertEqual(numpy.max(pa.gid), numpy.min(pa.gid))
        self.assertTrue(numpy.max(pa.gid) > pa.gid.size)

        # Then
        nbrs = UIntArray()
        for i in range(pa.get_number_of_particles()):
            nps.get_nearest_particles(0, 0, i, nbrs)
            nb = nbrs.get_npy_array()
            sorted_nbrs = nb.copy()
            sorted_nbrs.sort()
            self.assertTrue(numpy.all(nb == sorted_nbrs))

    def test_nnps_sorts_with_valid_gids(self):
        # Given
        pa, nps = self._make_particles(10)
        pa.gid[:] = numpy.arange(pa.x.size)
        nps.update()

        # When
        nps.set_context(0, 0)
        # Test that the gids are actually valid.
        self.assertEqual(numpy.max(pa.gid), pa.gid.size - 1)
        self.assertEqual(numpy.min(pa.gid), 0)

        # Then
        nbrs = UIntArray()
        for i in range(pa.get_number_of_particles()):
            nps.get_nearest_particles(0, 0, i, nbrs)
            nb = nbrs.get_npy_array()
            sorted_nbrs = nb.copy()
            sorted_nbrs.sort()
            self.assertTrue(numpy.all(nb == sorted_nbrs))


class TestSpatialHashNNPSWithSorting(TestLinkedListNNPSWithSorting):

    def _make_particles(self, nx=20):
        x = numpy.linspace(0, 1, nx)
        h = numpy.ones_like(x) / (nx - 1)
        pa = get_particle_array(name='fluid', x=x, h=h)

        nps = nnps.SpatialHashNNPS(dim=1, particles=[pa], sort_gids=True)
        return pa, nps


class TestMultipleLevelsStratifiedHashNNPSWithSorting(
        TestLinkedListNNPSWithSorting):

    def _make_particles(self, nx=20):
        x = numpy.linspace(0, 1, nx)
        h = numpy.ones_like(x) / (nx - 1)
        pa = get_particle_array(name='fluid', x=x, h=h)

        nps = nnps.StratifiedHashNNPS(dim=1, particles=[pa], num_levels=2,
                                      sort_gids=True)
        return pa, nps


class TestMultipleLevelsStratifiedSFCNNPSWithSorting(
        TestLinkedListNNPSWithSorting):

    def _make_particles(self, nx=20):
        x = numpy.linspace(0, 1, nx)
        h = numpy.ones_like(x) / (nx - 1)
        pa = get_particle_array(name='fluid', x=x, h=h)

        nps = nnps.StratifiedSFCNNPS(dim=1, particles=[pa], num_levels=2,
                                     sort_gids=True)
        return pa, nps


def test_large_number_of_neighbors_linked_list():
    x = numpy.random.random(1 << 14) * 0.1
    y = x.copy()
    z = x.copy()
    h = numpy.ones_like(x)
    pa = get_particle_array(name='fluid', x=x, y=y, z=z, h=h)

    nps = nnps.LinkedListNNPS(dim=3, particles=[pa], cache=False)
    nbrs = UIntArray()
    nps.get_nearest_particles(0, 0, 0, nbrs)
    # print(nbrs.length)
    assert nbrs.length == len(x)


def test_neighbor_cache_update_doesnt_leak():
    # Given
    x, y, z = numpy.random.random((3, 1000))
    pa = get_particle_array(name='fluid', x=x, y=y, z=z, h=0.05)

    nps = nnps.LinkedListNNPS(dim=3, particles=[pa], cache=True)
    nps.set_context(0, 0)
    nps.cache[0].find_all_neighbors()
    old_length = sum(x.length for x in nps.cache[0]._neighbor_arrays)

    # When
    nps.update()
    nps.set_context(0, 0)
    nps.cache[0].find_all_neighbors()

    # Then
    new_length = sum(x.length for x in nps.cache[0]._neighbor_arrays)
    assert new_length == old_length


nnps_classes = [
    nnps.BoxSortNNPS,
    nnps.CellIndexingNNPS,
    nnps.CompressedOctreeNNPS,
    nnps.ExtendedSpatialHashNNPS,
    nnps.LinkedListNNPS,
    nnps.OctreeNNPS,
    nnps.SpatialHashNNPS,
    nnps.StratifiedHashNNPS,
    nnps.StratifiedSFCNNPS,
    nnps.ZOrderNNPS
]


@pytest.mark.parametrize("cls", nnps_classes)
def test_corner_case_1d_few_cells(cls):
    x, y, z = [0.131, 0.359], [1.544, 1.809], [-3.6489999, -2.8559999]
    pa = get_particle_array(name='fluid', x=x, y=y, z=z, h=1.0)
    nbrs = UIntArray()
    bf_nbrs = UIntArray()
    nps = cls(dim=3, particles=[pa], radius_scale=0.7)
    for i in range(2):
        nps.get_nearest_particles(0, 0, i, nbrs)
        nps.brute_force_neighbors(0, 0, i, bf_nbrs)
        assert sorted(nbrs) == sorted(bf_nbrs), 'Failed for particle: %d' % i


def test_use_2d_for_1d_data_with_llnps():
    y = numpy.array([1.0, 1.5])
    h = numpy.ones_like(y)
    pa = get_particle_array(name='fluid', y=y, h=h)

    nps = nnps.LinkedListNNPS(dim=2, particles=[pa], cache=False)
    nbrs = UIntArray()
    nps.get_nearest_particles(0, 0, 0, nbrs)
    print(nbrs.length)
    assert nbrs.length == len(y)


def test_use_3d_for_1d_data_with_llnps():
    y = numpy.array([1.0, 1.5])
    h = numpy.ones_like(y)
    pa = get_particle_array(name='fluid', y=y, h=h)

    nps = nnps.LinkedListNNPS(dim=3, particles=[pa], cache=False)
    nbrs = UIntArray()
    nps.get_nearest_particles(0, 0, 0, nbrs)
    print(nbrs.length)
    assert nbrs.length == len(y)


def test_large_number_of_neighbors_spatial_hash():
    x = numpy.random.random(1 << 14) * 0.1
    y = x.copy()
    z = x.copy()
    h = numpy.ones_like(x)
    pa = get_particle_array(name='fluid', x=x, y=y, z=z, h=h)

    nps = nnps.SpatialHashNNPS(dim=3, particles=[pa], cache=False)
    nbrs = UIntArray()
    nps.get_nearest_particles(0, 0, 0, nbrs)
    # print(nbrs.length)
    assert nbrs.length == len(x)


def test_large_number_of_neighbors_octree():
    x = numpy.random.random(1 << 14) * 0.1
    y = x.copy()
    z = x.copy()
    h = numpy.ones_like(x)
    pa = get_particle_array(name='fluid', x=x, y=y, z=z, h=h)

    nps = nnps.OctreeNNPS(dim=3, particles=[pa], cache=False)
    nbrs = UIntArray()
    nps.get_nearest_particles(0, 0, 0, nbrs)
    # print(nbrs.length)
    assert nbrs.length == len(x)


def test_flatten_unflatten():
    # first consider the 2D case where we assume a 4 X 5 grid of cells
    dim = 2
    ncells_per_dim = IntArray(3)
    ncells_per_dim[0] = 4
    ncells_per_dim[1] = 5
    ncells_per_dim[2] = 0

    # valid un-flattened cell indices
    cids = [[i, j] for i in range(4) for j in range(5)]
    for _cid in cids:
        cid = IntPoint(_cid[0], _cid[1], 0)
        flattened = nnps.py_flatten(cid, ncells_per_dim, dim)
        unflattened = nnps.py_unflatten(flattened, ncells_per_dim, dim)

        # the unflattened index should match with cid
        assert (cid == unflattened)

    # 3D
    dim = 3
    ncells_per_dim = IntArray(3)
    ncells_per_dim[0] = 4
    ncells_per_dim[1] = 5
    ncells_per_dim[2] = 2

    # valid un-flattened indices
    cids = [[i, j, k] for i in range(4) for j in range(5) for k in range(2)]
    for _cid in cids:
        cid = IntPoint(_cid[0], _cid[1], _cid[2])
        flattened = nnps.py_flatten(cid, ncells_per_dim, dim)
        unflattened = nnps.py_unflatten(flattened, ncells_per_dim, dim)

        # the unflattened index should match with cid
        assert (cid == unflattened)


def test_1D_get_valid_cell_index():
    dim = 1

    # simulate a dummy distribution such that 10 cells are along the
    # 'x' direction
    n_cells = 10
    ncells_per_dim = IntArray(3)
    ncells_per_dim[0] = n_cells
    ncells_per_dim[1] = 1
    ncells_per_dim[2] = 1

    # target cell
    cx = 1
    cy = cz = 0

    # as long as cy and cz are 0, the function should return the valid
    # flattened cell index for the cell
    for i in [-1, 0, 1]:
        index = nnps.py_get_valid_cell_index(
            IntPoint(cx + i, cy, cz), ncells_per_dim, dim, n_cells)
        assert index != -1

    # index should be -1 whenever cy and cz are > 1. This is
    # specifically the case that was failing earlier.
    for j in [-1, 1]:
        for k in [-1, 1]:
            index = nnps.py_get_valid_cell_index(
                IntPoint(cx, cy + j, cz + k), ncells_per_dim, dim, n_cells)
            assert index == -1

    # when cx is outside the grid it should also be invalid
    for i in [-2, -1, n_cells, n_cells + 1]:
        index = nnps.py_get_valid_cell_index(
            IntPoint(i, cy, cz), ncells_per_dim, dim, n_cells)
        assert index == -1


def test_get_centroid():
    cell = nnps.Cell(IntPoint(0, 0, 0), cell_size=0.1, narrays=1)
    centroid = Point()
    cell.get_centroid(centroid)

    assert (abs(centroid.x - 0.05) < 1e-10)
    assert (abs(centroid.y - 0.05) < 1e-10)
    assert (abs(centroid.z - 0.05) < 1e-10)

    cell = nnps.Cell(IntPoint(1, 2, 3), cell_size=0.5, narrays=1)
    cell.get_centroid(centroid)

    assert (abs(centroid.x - 0.75) < 1e-10)
    assert (abs(centroid.y - 1.25) < 1e-10)
    assert (abs(centroid.z - 1.75) < 1e-10)


def test_get_bbox():
    cell_size = 0.1
    cell = nnps.Cell(IntPoint(0, 0, 0), cell_size=cell_size, narrays=1)
    centroid = Point()
    boxmin = Point()
    boxmax = Point()

    cell.get_centroid(centroid)
    cell.get_bounding_box(boxmin, boxmax)

    assert (abs(boxmin.x - (centroid.x - 1.5 * cell_size)) < 1e-10)
    assert (abs(boxmin.y - (centroid.y - 1.5 * cell_size)) < 1e-10)
    assert (abs(boxmin.z - (centroid.z - 1.5 * cell_size)) < 1e-10)

    assert (abs(boxmax.x - (centroid.x + 1.5 * cell_size)) < 1e-10)
    assert (abs(boxmax.y - (centroid.y + 1.5 * cell_size)) < 1e-10)
    assert (abs(boxmax.z - (centroid.z + 1.5 * cell_size)) < 1e-10)

    cell_size = 0.5
    cell = nnps.Cell(IntPoint(1, 2, 0), cell_size=cell_size, narrays=1)

    cell.get_centroid(centroid)
    cell.get_bounding_box(boxmin, boxmax)

    assert (abs(boxmin.x - (centroid.x - 1.5 * cell_size)) < 1e-10)
    assert (abs(boxmin.y - (centroid.y - 1.5 * cell_size)) < 1e-10)
    assert (abs(boxmin.z - (centroid.z - 1.5 * cell_size)) < 1e-10)

    assert (abs(boxmax.x - (centroid.x + 1.5 * cell_size)) < 1e-10)
    assert (abs(boxmax.y - (centroid.y + 1.5 * cell_size)) < 1e-10)
    assert (abs(boxmax.z - (centroid.z + 1.5 * cell_size)) < 1e-10)


if __name__ == '__main__':
    unittest.main()


###############################################################################
# pysph-master/pysph/base/tests/test_octree.py
###############################################################################
"""Unittests for Octree

You can run the tests like so:

    $ pytest -v run_tests.py
"""
import numpy as np

# PySPH imports
from pysph.base.utils import get_particle_array
from pysph.base.octree import Octree, CompressedOctree

# Python testing framework
import unittest
from pytest import importorskip


def test_single_level_octree():
    N = 50
    x, y, z = np.mgrid[0:1:N*1j, 0:1:N*1j, 0:1:N*1j]
    x = x.ravel()
    y = y.ravel()
    z = z.ravel()
    h = np.ones_like(x)

    pa = get_particle_array(x=x, y=y, z=z, h=h)

    # For maximum leaf particles greater than total number
    # of particles
    tree = Octree(pa.get_number_of_particles() + 1)
    tree.build_tree(pa)

    # Test that depth of the tree is 1
    assert tree.depth == 1


def test_compressed_octree_has_lesser_depth_than_octree():
    N = 50
    x, y, z = np.mgrid[0:1:N*1j, 0:1:N*1j, 0:1:N*1j]
    x = x.ravel()
    y = y.ravel()
    z = z.ravel()
    h = np.ones_like(x)

    pa = get_particle_array(x=x, y=y, z=z, h=h)

    x1 = np.array([20])
    y1 = np.array([20])
    z1 = np.array([20])

    # For a dataset where one particle is far away from a
    # cluster of particles
    pa.add_particles(x=x1, y=y1, z=z1)

    tree = Octree(10)
    comp_tree = CompressedOctree(10)

    depth_tree = tree.build_tree(pa)
    depth_comp_tree = comp_tree.build_tree(pa)

    # Test that the depth of the compressed octree for the same
    # leaf_max_particles is lesser than that of the octree
    assert depth_comp_tree < depth_tree


def test_single_level_compressed_octree():
    N = 50
    x, y, z = np.mgrid[0:1:N*1j, 0:1:N*1j, 0:1:N*1j]
    x = x.ravel()
    y = y.ravel()
    z = z.ravel()
    h = np.ones_like(x)

    pa = get_particle_array(x=x, y=y, z=z, h=h)

    # For maximum leaf particles greater than total number
    # of particles
    tree = CompressedOctree(pa.get_number_of_particles() + 1)
    tree.build_tree(pa)

    # Test that depth of the tree is 1
    assert tree.depth == 1


class SimpleOctreeTestCase(unittest.TestCase):
    """Simple test case for Octree"""

    def setUp(self):
        N = 50
        x, y, z = np.mgrid[0:1:N*1j, 0:1:N*1j, 0:1:N*1j]
        self.x = x.ravel()
        self.y = y.ravel()
        self.z = z.ravel()
        self.h = np.ones_like(self.x)
        self.tree = Octree(10)

    def test_levels_in_tree(self):
        pa = get_particle_array(x=self.x, y=self.y, z=self.z, h=self.h)
        self.tree.build_tree(pa)
        root = self.tree.get_root()

        def _check_levels(node, level):
            # Test that levels for nodes are correctly set
            # starting from 0 at root
            self.assertTrue(node.level == level)
            children = node.get_children()
            for child in children:
                if child is None:
                    continue
                _check_levels(child, level + 1)

        _check_levels(root, 0)
        self.tree.delete_tree()

    def test_parent_for_node(self):
        pa = get_particle_array(x=self.x, y=self.y, z=self.z, h=self.h)
        self.tree.build_tree(pa)
        root = self.tree.get_root()

        # Test that parent of root is 'None'
        self.assertTrue(root.get_parent() is None)

        def _check_parent(node):
            children = node.get_children()
            for child in children:
                if child is None:
                    continue
                # Test that the parent is set correctly for all nodes
                self.assertTrue(child.get_parent() == node)
                _check_parent(child)

        _check_parent(root)
        self.tree.delete_tree()

    def test_sum_of_indices_lengths_equals_total_number_of_particles(self):
        pa = get_particle_array(x=self.x, y=self.y, z=self.z, h=self.h)
        self.tree.build_tree(pa)
        root = self.tree.get_root()
        sum_indices = [0]

        def _calculate_sum(node, sum_indices):
            indices = node.get_indices(self.tree)
            sum_indices[0] += indices.length
            children = node.get_children()
            for child in children:
                if child is None:
                    continue
                _calculate_sum(child, sum_indices)

        _calculate_sum(root, sum_indices)

        # Test that sum of lengths of all indices is equal to total
        # number of particles
        self.assertTrue(pa.get_number_of_particles() == sum_indices[0])
        self.tree.delete_tree()

    def test_plot_root(self):
        pa = get_particle_array(x=self.x, y=self.y, z=self.z, h=self.h)
        self.tree.build_tree(pa)
        root = self.tree.get_root()

        mpl = importorskip("matplotlib")
        mpl.use('Agg')
        import matplotlib.pyplot as plt
        from mpl_toolkits.mplot3d import Axes3D

        fig = plt.figure()
        ax = Axes3D(fig)

        root.plot(ax)
        for line in ax.lines:
            xs = line.get_xdata()
            ys = line.get_ydata()
            line_length = (xs[0] - xs[1])**2 + (ys[0] - ys[1])**2
            # Test that the lengths of sides of the 2D projection of the
            # node are equal to the length of the side of the node
            self.assertTrue(line_length == root.length**2 or
                            line_length == 0)
        self.tree.delete_tree()


class TestOctreeFor2DDataset(SimpleOctreeTestCase):
    """Test Octree for 2D dataset"""

    def setUp(self):
        N = 500
        x, y = np.mgrid[0:1:N*1j, 0:1:N*1j]
        self.x = x.ravel()
        self.y = y.ravel()
        self.z = np.zeros_like(self.x)
        self.h = np.ones_like(self.x)
        self.tree = Octree(10)


class TestOctreeFor1DDataset(SimpleOctreeTestCase):
    """Test Octree for 1D dataset"""

    def setUp(self):
        N = int(1e5)
        self.x = np.linspace(0, 1, num=N)
        self.y = np.zeros_like(self.x)
        self.z = np.zeros_like(self.x)
        self.h = np.ones_like(self.x)
        self.tree = Octree(100)


class TestOctreeForFloatingPointError(SimpleOctreeTestCase):
    """Test Octree for floating point error"""

    def setUp(self):
        N = 50
        x, y, z = np.mgrid[-1:1:N*1j, -1:1:N*1j, -1:1:N*1j]
        self.x = x.ravel()
        self.y = y.ravel()
        self.z = z.ravel()

        x1 = np.array([-1e-20])
        y1 = np.array([1e-20])
        z1 = np.array([1e-20])

        self.x = np.concatenate([self.x, x1])
        self.y = np.concatenate([self.y, y1])
        self.z = np.concatenate([self.z, z1])

        self.h = np.ones_like(self.x)
        self.tree = Octree(10)


class SimpleCompressedOctreeTestCase(SimpleOctreeTestCase):
    """Simple test case for Compressed Octree"""

    def setUp(self):
        N = 50
        x, y, z = np.mgrid[0:1:N*1j, 0:1:N*1j, 0:1:N*1j]
        self.x = x.ravel()
        self.y = y.ravel()
        self.z = z.ravel()
        self.h = np.ones_like(self.x)
        self.tree = CompressedOctree(10)


class TestCompressedOctreeFor1DDataset(SimpleOctreeTestCase):
    """Test Compressed Octree for 1D dataset"""

    def setUp(self):
        N = int(1e5)
        self.x = np.linspace(0, 1, num=N)
        self.y = np.zeros_like(self.x)
        self.z = np.zeros_like(self.x)
        self.h = np.ones_like(self.x)
        self.tree = CompressedOctree(100)


class TestCompressedOctreeFor2DDataset(SimpleOctreeTestCase):
    """Test Compressed Octree for 2D dataset"""

    def setUp(self):
        N = 500
        x, y = np.mgrid[0:1:N*1j, 0:1:N*1j]
        self.x = x.ravel()
        self.y = y.ravel()
        self.z = np.zeros_like(self.x)
        self.h = np.ones_like(self.x)
        self.tree = CompressedOctree(10)


class TestCompressedOctreeForFloatingPointError(SimpleOctreeTestCase):
    """Test Compressed Octree for floating point error"""

    def setUp(self):
        N = 50
        x, y, z = np.mgrid[-1:1:N*1j, -1:1:N*1j, -1:1:N*1j]
        self.x = x.ravel()
        self.y = y.ravel()
        self.z = z.ravel()

        x1 = np.array([-1e-20])
        y1 = np.array([1e-20])
        z1 = np.array([1e-20])

        self.x = np.concatenate([self.x, x1])
        self.y = np.concatenate([self.y, y1])
        self.z = np.concatenate([self.z, z1])

        self.h = np.ones_like(self.x)
        self.tree = CompressedOctree(10)


if __name__ == '__main__':
    unittest.main()


###############################################################################
# pysph-master/pysph/base/tests/test_particle_array.py
###############################################################################
"""Tests for the particle array module."""

# standard imports
import unittest
import numpy

# local imports
from pysph.base import particle_array
from pysph.base import utils
from cyarray.carray import LongArray, IntArray, DoubleArray

import pickle
import pytest
from compyle.config import get_config


def check_array(x, y):
    """Check if two arrays are equal with an absolute tolerance of 1e-16."""
    return numpy.allclose(x, y, atol=1e-16, rtol=0)


###############################################################################
# `ParticleArrayTest` class.
###############################################################################
class ParticleArrayTest(object):
    """Tests for the particle array class."""

    def test_constructor(self):
        # Default constructor test.
        p = particle_array.ParticleArray(name='test_particle_array',
                                         backend=self.backend)

        self.assertEqual(p.name, 'test_particle_array')
        self.assertEqual('tag' in p.properties, True)
        self.assertEqual(p.properties['tag'].length, 0)

        # Constructor with some properties.
        x = [1, 2, 3, 4.]
        y = [0., 1., 2., 3.]
        z = [0., 0., 0., 0.]
        m = [1., 1., 1., 1.]
        h = [.1, .1, .1, .1]

        p = particle_array.ParticleArray(x={'data': x}, y={'data': y},
                                         z={'data': z}, m={'data': m},
                                         h={'data': h},
                                         backend=self.backend)

        self.assertEqual(p.name, '')
        self.assertEqual('x' in p.properties, True)
        self.assertEqual('y' in p.properties, True)
        self.assertEqual('z' in p.properties, True)
        self.assertEqual('m' in p.properties, True)
        self.assertEqual('h' in p.properties, True)

        # get the properties and check if they are the same
        xarr = p.properties['x'].get_npy_array()
        self.assertEqual(check_array(xarr, x), True)

        yarr = p.properties['y'].get_npy_array()
        self.assertEqual(check_array(yarr, y), True)

        zarr = p.properties['z'].get_npy_array()
        self.assertEqual(check_array(zarr, z), True)

        marr = p.properties['m'].get_npy_array()
        self.assertEqual(check_array(marr, m), True)

        harr = p.properties['h'].get_npy_array()
        self.assertEqual(check_array(harr, h), True)

        # check if the 'tag' array was added.
        self.assertEqual('tag' in p.properties, True)
        self.assertEqual(list(p.properties.values())[0].length == len(x),
                         True)

        # Constructor with tags
        tags = [0, 1, 0, 1]
        p = particle_array.ParticleArray(x={'data': x}, y={'data': y},
                                         z={'data': z},
                                         tag={'data': tags, 'type': 'int'},
                                         backend=self.backend)

        self.assertEqual(check_array(p.get('tag', only_real_particles=False),
                                     [0, 0, 1, 1]), True)
        self.assertEqual(check_array(p.get('x', only_real_particles=False),
                                     [1, 3, 2, 4]), True)
        self.assertEqual(check_array(p.get('y', only_real_particles=False),
                                     [0, 2, 1, 3]), True)
        self.assertEqual(check_array(p.get('z', only_real_particles=False),
                                     [0, 0, 0, 0]), True)

        # trying to create particle array without any values but some
        # properties.
p = particle_array.ParticleArray(x={}, y={}, z={}, h={}, backend=self.backend) self.assertEqual(p.get_number_of_particles(), 0) self.assertEqual('x' in p.properties, True) self.assertEqual('y' in p.properties, True) self.assertEqual('z' in p.properties, True) self.assertEqual('tag' in p.properties, True) # now trying to supply some properties with values and others without p = particle_array.ParticleArray( x={'default': 10.0}, y={'data': [1.0, 2.0]}, z={}, h={'data': [0.1, 0.1]}, backend=self.backend ) self.assertEqual(p.get_number_of_particles(), 2) self.assertEqual(check_array(p.x, [10., 10.]), True) self.assertEqual(check_array(p.y, [1., 2.]), True) self.assertEqual(check_array(p.z, [0, 0]), True) self.assertEqual(check_array(p.h, [0.1, 0.1]), True) def test_constructor_works_with_strides(self): # Given x = [1, 2, 3, 4.] rho = 10.0 data = numpy.arange(8) # When p = particle_array.ParticleArray( x=x, rho=rho, data={'data': data, 'stride': 2}, name='fluid', backend=self.backend ) # Then self.assertEqual(p.name, 'fluid') self.assertEqual('x' in p.properties, True) self.assertEqual('rho' in p.properties, True) self.assertEqual('data' in p.properties, True) self.assertEqual('tag' in p.properties, True) self.assertEqual('pid' in p.properties, True) self.assertEqual('gid' in p.properties, True) # get the properties are check if they are the same self.assertEqual(check_array(p.x, x), True) self.assertEqual(check_array(p.rho, numpy.ones(4) * rho), True) self.assertEqual(check_array(p.data, numpy.ravel(data)), True) def test_constructor_works_with_simple_props(self): # Given x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] 
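The stride test above stores two values per particle in one flat array (`data={'data': data, 'stride': 2}`). A minimal sketch of that layout with plain lists, illustrating how particle `i` maps to a slice of the flat array (names here are illustrative, not PySPH API):

```python
# One flat array holds `stride` values per particle; particle i owns the
# contiguous slice [i * stride, (i + 1) * stride).
stride = 2
data = list(range(8))  # 4 particles x 2 components


def get_components(data, i, stride):
    """Return the components belonging to particle i."""
    return data[i * stride:(i + 1) * stride]


print(get_components(data, 0, stride))  # [0, 1]
print(get_components(data, 3, stride))  # [6, 7]
```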
rho = 10.0 data = numpy.diag((2, 2)) # When p = particle_array.ParticleArray( x=x, y=y, rho=rho, data=data, name='fluid', backend=self.backend ) # Then self.assertEqual(p.name, 'fluid') self.assertEqual('x' in p.properties, True) self.assertEqual('y' in p.properties, True) self.assertEqual('rho' in p.properties, True) self.assertEqual('data' in p.properties, True) self.assertEqual('tag' in p.properties, True) self.assertEqual('pid' in p.properties, True) self.assertEqual('gid' in p.properties, True) # get the properties are check if they are the same self.assertEqual(check_array(p.x, x), True) self.assertEqual(check_array(p.y, y), True) self.assertEqual(check_array(p.rho, numpy.ones(4) * rho), True) self.assertEqual(check_array(p.data, numpy.ravel(data)), True) def test_get_number_of_particles(self): x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] h = [.1, .1, .1, .1] A = numpy.arange(12) p = particle_array.ParticleArray( x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, A={'data': A, 'stride': 3}, backend=self.backend ) self.assertEqual(p.get_number_of_particles(), 4) def test_get(self): x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] h = [.1, .1, .1, .1] A = numpy.arange(12) p = particle_array.ParticleArray( x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, A={'data': A, 'stride': 3}, backend=self.backend ) self.assertEqual(check_array(x, p.get('x')), True) self.assertEqual(check_array(y, p.get('y')), True) self.assertEqual(check_array(z, p.get('z')), True) self.assertEqual(check_array(m, p.get('m')), True) self.assertEqual(check_array(h, p.get('h')), True) self.assertEqual(check_array(A.ravel(), p.get('A')), True) def test_clear(self): x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] 
h = [.1, .1, .1, .1] p = particle_array.ParticleArray(x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, backend=self.backend) p.clear() self.assertEqual(len(p.properties), 3) self.assertEqual('tag' in p.properties, True) self.assertEqual(p.properties['tag'].length, 0) self.assertEqual('pid' in p.properties, True) self.assertEqual(p.properties['pid'].length, 0) self.assertEqual('gid' in p.properties, True) self.assertEqual(p.properties['gid'].length, 0) def test_getattr(self): x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] h = [.1, .1, .1, .1] A = numpy.arange(12) p = particle_array.ParticleArray( x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, A={'data': A, 'stride': 3}, backend=self.backend ) self.assertEqual(check_array(x, p.x), True) self.assertEqual(check_array(y, p.y), True) self.assertEqual(check_array(z, p.z), True) self.assertEqual(check_array(m, p.m), True) self.assertEqual(check_array(h, p.h), True) self.assertEqual(check_array(A.ravel(), p.get('A')), True) # try getting an non-existant attribute self.assertRaises(AttributeError, p.__getattr__, 'a') def test_setattr(self): x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] h = [.1, .1, .1, .1] A = numpy.arange(12) p = particle_array.ParticleArray( x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, A={'data': A, 'stride': 3}, backend=self.backend ) p.x = p.x * 2.0 self.assertEqual(check_array(p.get('x'), [2., 4, 6, 8]), True) p.x = p.x + 3.0 * p.x self.assertEqual(check_array(p.get('x'), [8., 16., 24., 32.]), True) p.A = p.A*2 self.assertEqual(check_array(p.get('A').ravel(), A*2), True) def test_remove_particles(self): x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] 
h = [.1, .1, .1, .1] A = numpy.arange(12) p = particle_array.ParticleArray( x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, A={'data': A, 'stride': 3}, backend=self.backend ) remove_arr = LongArray(0) remove_arr.append(0) remove_arr.append(1) p.remove_particles(remove_arr) self.pull(p) self.assertEqual(p.get_number_of_particles(), 2) self.assertEqual(check_array(p.x, [3., 4.]), True) self.assertEqual(check_array(p.y, [2., 3.]), True) self.assertEqual(check_array(p.z, [0., 0.]), True) self.assertEqual(check_array(p.m, [1., 1.]), True) self.assertEqual(check_array(p.h, [.1, .1]), True) self.assertEqual(check_array(p.A, numpy.arange(6, 12)), True) # now try invalid operations to make sure errors are raised. remove_arr.resize(10) self.assertRaises(ValueError, p.remove_particles, remove_arr) # now try to remove a particle with index more that particle # length. remove_arr = [2] p.remove_particles(remove_arr) self.pull(p) # make sure no change occurred. self.assertEqual(p.get_number_of_particles(), 2) self.assertEqual(check_array(p.x, [3., 4.]), True) self.assertEqual(check_array(p.y, [2., 3.]), True) self.assertEqual(check_array(p.z, [0., 0.]), True) self.assertEqual(check_array(p.m, [1., 1.]), True) self.assertEqual(check_array(p.h, [.1, .1]), True) self.assertEqual(check_array(p.A, numpy.arange(6, 12)), True) def test_add_particles(self): x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] 
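`test_remove_particles` above removes indices 0 and 1 and expects every property array to shrink consistently. A sketch of that index-based removal with plain dict-of-list properties (a simplification of the real carray-backed implementation):

```python
def remove_particles(props, indices):
    """Drop the listed particle indices from every property array."""
    drop = set(indices)
    n = len(next(iter(props.values())))
    keep = [i for i in range(n) if i not in drop]
    return {name: [arr[i] for i in keep] for name, arr in props.items()}


props = {'x': [1.0, 2.0, 3.0, 4.0], 'y': [0.0, 1.0, 2.0, 3.0]}
props = remove_particles(props, [0, 1])
print(props['x'])  # [3.0, 4.0]
print(props['y'])  # [2.0, 3.0]
```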
h = [.1, .1, .1, .1] A = numpy.arange(12) p = particle_array.ParticleArray( x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, A=dict(data=A, stride=3), backend=self.backend ) new_particles = {} new_particles['x'] = numpy.array([5., 6, 7], dtype=numpy.float32) new_particles['y'] = numpy.array([4., 5, 6], dtype=numpy.float32) new_particles['z'] = numpy.array([0., 0, 0], dtype=numpy.float32) p.add_particles(**new_particles) self.pull(p) self.assertEqual(p.get_number_of_particles(), 7) self.assertEqual(check_array(p.x, [1., 2, 3, 4, 5, 6, 7]), True) self.assertEqual(check_array(p.y, [0., 1, 2, 3, 4, 5, 6]), True) self.assertEqual(check_array(p.z, [0., 0, 0, 0, 0, 0, 0]), True) expect = numpy.zeros(21, dtype=A.dtype) expect[:12] = A numpy.testing.assert_array_equal(p.A, expect) # make sure the other arrays were resized self.assertEqual(len(p.h), 7) self.assertEqual(len(p.m), 7) # try adding an empty particle list p.add_particles(**{}) self.pull(p) self.assertEqual(p.get_number_of_particles(), 7) self.assertEqual(check_array(p.x, [1., 2, 3, 4, 5, 6, 7]), True) self.assertEqual(check_array(p.y, [0., 1, 2, 3, 4, 5, 6]), True) self.assertEqual(check_array(p.z, [0., 0, 0, 0, 0, 0, 0]), True) self.assertEqual(check_array(p.A, expect), True) # make sure the other arrays were resized self.assertEqual(len(p.h), 7) self.assertEqual(len(p.m), 7) # adding particles with tags p = particle_array.ParticleArray(x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, backend=self.backend) p.add_particles(x=[5, 6, 7, 8], tag=[1, 1, 0, 0]) self.pull(p) self.assertEqual(p.get_number_of_particles(), 8) self.assertEqual(check_array(p.x, [1, 2, 3, 4, 7, 8]), True) self.assertEqual(check_array(p.y, [0, 1, 2, 3, 0, 0]), True) self.assertEqual(check_array(p.z, [0, 0, 0, 0, 0, 0]), True) def test_remove_tagged_particles(self): x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] 
h = [.1, .1, .1, .1] A = numpy.arange(12) tag = [1, 1, 1, 0] p = particle_array.ParticleArray( x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, tag={'data': tag}, A={'data': A, 'stride': 3}, backend=self.backend ) numpy.testing.assert_array_equal( p.get('A'), numpy.arange(9, 12) ) p.remove_tagged_particles(0) self.pull(p) self.assertEqual(p.get_number_of_particles(), 3) self.assertEqual( check_array( numpy.sort(p.get('x', only_real_particles=False)), [1., 2., 3.]), True ) self.assertEqual( check_array( numpy.sort(p.get('y', only_real_particles=False)), [0., 1., 2.] ), True ) self.assertEqual( check_array( numpy.sort(p.get('z', only_real_particles=False)), [0., 0, 0] ), True ) self.assertEqual( check_array( numpy.sort(p.get('h', only_real_particles=False)), [0.1, 0.1, 0.1] ), True ) self.assertEqual( check_array(p.get('m', only_real_particles=False), [1., 1., 1.]), True ) if p.gpu is None: numpy.testing.assert_array_equal( p.get('A', only_real_particles=False), numpy.arange(9) ) else: numpy.testing.assert_array_equal( p.get('A', only_real_particles=False), list(range(3, 9)) + [0., 1, 2] ) self.assertEqual(check_array(p.x, []), True) self.assertEqual(check_array(p.y, []), True) self.assertEqual(check_array(p.z, []), True) self.assertEqual(check_array(p.h, []), True) self.assertEqual(check_array(p.m, []), True) self.assertEqual(check_array(p.A, []), True) def test_add_property(self): x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] h = [.1, .1, .1, .1] tag = [0, 0, 0, 0] p = particle_array.ParticleArray(x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, tag={'data': tag}, backend=self.backend) p.add_property(**{'name': 'x'}) # make sure the current 'x' property is intact. 
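`test_remove_tagged_particles` above builds tags `[1, 1, 1, 0]` and removes tag 0, leaving the three tag-1 particles. A sketch of that tag filter over plain lists (again a simplification of the carray-backed version):

```python
def remove_tagged(props, tags, tag_to_remove):
    """Drop every particle whose tag equals tag_to_remove."""
    keep = [i for i, t in enumerate(tags) if t != tag_to_remove]
    new_tags = [tags[i] for i in keep]
    new_props = {k: [v[i] for i in keep] for k, v in props.items()}
    return new_props, new_tags


props = {'x': [1.0, 2.0, 3.0, 4.0]}
props, tags = remove_tagged(props, [1, 1, 1, 0], 0)
print(props['x'])  # [1.0, 2.0, 3.0]
print(tags)        # [1, 1, 1]
```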
self.assertEqual(check_array(p.x, x), True) # add a property with complete specification p.add_property(**{'name': 'f1', 'data': [1, 1, 2, 3], 'type': 'int', 'default': 4}) self.assertEqual(check_array(p.f1, [1, 1, 2, 3]), True) self.assertEqual(type(p.properties['f1']), IntArray) self.assertEqual(p.default_values['f1'], 4) # add a property with stride. data = [1, 1, 2, 2, 3, 3, 4, 4] p.add_property(**{'name': 'm1', 'data': data, 'type': 'int', 'stride': 2}) self.assertEqual(check_array(p.m1, data), True) self.assertEqual(type(p.properties['m1']), IntArray) # add a property without specifying the type p.add_property(**{'name': 'f2', 'data': [1, 1, 2, 3], 'default': 4.0}) self.assertEqual(type(p.properties['f2']), DoubleArray) self.assertEqual(check_array(p.f2, [1, 1, 2, 3]), True) p.add_property(**{'name': 'f3'}) self.assertEqual(type(p.properties['f3']), DoubleArray) self.assertEqual(p.properties['f3'].length, p.get_number_of_particles()) self.assertEqual(check_array(p.f3, [0, 0, 0, 0]), True) p.add_property(**{'name': 'f4', 'default': 3.0}) self.assertEqual(type(p.properties['f4']), DoubleArray) self.assertEqual(p.properties['f4'].length, p.get_number_of_particles()) self.assertEqual(check_array(p.f4, [3, 3, 3, 3]), True) p.add_property('f5', data=10.0) self.assertEqual(type(p.properties['f5']), DoubleArray) self.assertEqual(p.properties['f5'].length, p.get_number_of_particles()) self.assertEqual(check_array(p.f5, [10.0, 10.0, 10.0, 10.0]), True) p.add_property('m2', data=10.0, stride=2) self.assertEqual(type(p.properties['m2']), DoubleArray) self.assertEqual(p.properties['m2'].length, p.get_number_of_particles()*2) self.assertEqual(check_array(p.m2, [10.0]*8), True) def test_extend(self): # Given p = particle_array.ParticleArray(default_particle_tag=10, x={}, y={'default': -1.}, backend=self.backend) p.add_property('A', default=5.0, stride=2) # When p.extend(5) p.align_particles() self.pull(p) # Then self.assertEqual(p.get_number_of_particles(), 5) 
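`test_extend` above grows the array by 5 particles, filling each property with its default; a property with stride `k` grows by `5 * k` entries. A sketch of that growth rule with plain lists (illustrative names, not the real API):

```python
def extend(arrays, defaults, strides, n):
    """Grow every property by n particles, honouring per-property strides."""
    for name, arr in arrays.items():
        arr.extend([defaults[name]] * (n * strides.get(name, 1)))


arrays = {'y': [], 'A': []}
extend(arrays, defaults={'y': -1.0, 'A': 5.0}, strides={'A': 2}, n=5)
print(arrays['y'])       # [-1.0, -1.0, -1.0, -1.0, -1.0]
print(len(arrays['A']))  # 10
```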
self.assertEqual(check_array(p.get( 'x', only_real_particles=False), [0, 0, 0, 0, 0]), True) self.assertEqual(check_array(p.get('y', only_real_particles=False), [-1., -1., -1., -1., -1.]), True) self.assertEqual(check_array(p.get('tag', only_real_particles=False), [10, 10, 10, 10, 10]), True) self.assertEqual(check_array(p.get('A', only_real_particles=False), [5.0]*10), True) # Given p = particle_array.ParticleArray( A={'data': [10.0, 10.0], 'stride': 2, 'default': -1.}, backend=self.backend ) # When p.extend(5) p.align_particles() self.pull(p) # Then self.assertEqual(check_array(p.get('A', only_real_particles=False), [10.0, 10.0] + [-1.0]*10), True) def test_resize(self): # Given p = particle_array.ParticleArray( A={'data': [10.0, 10.0], 'stride': 2, 'default': -1.}, x=[1.0], backend=self.backend ) # When p.resize(4) p.align_particles() self.pull(p) # Then self.assertEqual(p.get_carray('x').length, 4) self.assertEqual(p.get_carray('A').length, 8) def test_align_particles(self): # Given p = particle_array.ParticleArray(backend=self.backend) p.add_property(**{'name': 'x', 'data': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]}) p.add_property(**{'name': 'y', 'data': [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]}) a = numpy.arange(10) + 1 A = numpy.zeros(20) A[::2] = a A[1::2] = a p.add_property('A', data=A, stride=2) print(A) p.set(**{'tag': [0, 0, 1, 1, 1, 0, 4, 0, 1, 5]}) # When self.push(p) p.align_particles() self.pull(p) # Then x_new = p.get('x', only_real_particles=False) y_new = p.get('y', only_real_particles=False) A_new = p.get('A', only_real_particles=False) print(A_new) # check the local particles self.assertEqual(check_array(x_new[:4], [1, 2, 6, 8]), True) self.assertEqual(check_array(y_new[:4], [10, 9, 5, 3]), True) self.assertEqual(check_array(A_new[:8:2], [1, 2, 6, 8]), True) self.assertEqual(check_array(A_new[1:8:2], [1, 2, 6, 8]), True) # check the remaining particles self.assertEqual( check_array(numpy.sort(x_new[4:]), [3, 4, 5, 7, 9, 10]), True ) self.assertEqual( 
check_array(numpy.sort(y_new[4:]), [1, 2, 4, 6, 7, 8]), True ) self.assertEqual( check_array(numpy.sort(A_new[8::2]), [3, 4, 5, 7, 9, 10]), True ) self.assertEqual( check_array(numpy.sort(A_new[9::2]), [3, 4, 5, 7, 9, 10]), True ) p.set(**{'tag': [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]}) self.push(p) p.align_particles() self.pull(p) x_new = p.get('x', only_real_particles=False) y_new = p.get('y', only_real_particles=False) # check the remaining particles self.assertEqual(check_array(x_new[:4], [1, 2, 6, 8]), True) self.assertEqual(check_array(y_new[:4], [10, 9, 5, 3]), True) self.assertEqual(check_array(A_new[:8:2], [1, 2, 6, 8]), True) self.assertEqual(check_array(A_new[1:8:2], [1, 2, 6, 8]), True) # check the remaining particles self.assertEqual( check_array(numpy.sort(x_new[4:]), [3, 4, 5, 7, 9, 10]), True ) self.assertEqual( check_array(numpy.sort(y_new[4:]), [1, 2, 4, 6, 7, 8]), True ) self.assertEqual( check_array(numpy.sort(A_new[8::2]), [3, 4, 5, 7, 9, 10]), True ) self.assertEqual( check_array(numpy.sort(A_new[9::2]), [3, 4, 5, 7, 9, 10]), True ) def test_append_parray(self): # Given p1 = particle_array.ParticleArray(backend=self.backend) p1.add_property(**{'name': 'x', 'data': [1, 2, 3]}) p1.add_property('A', data=2.0, stride=2) p2 = particle_array.ParticleArray(x={'data': [4, 5, 6]}, y={'data': [1, 2, 3]}, tag={'data': [1, 0, 1]}, backend=self.backend) # When p1.append_parray(p2) self.pull(p1) # Then self.assertEqual(p1.get_number_of_particles(), 6) self.assertEqual(check_array(p1.x, [1, 2, 3, 5]), True) self.assertEqual(check_array(p1.y, [0, 0, 0, 2]), True) numpy.testing.assert_array_equal(p1.A, [2.0]*6 + [0.0]*2) self.assertEqual(check_array(p1.tag, [0, 0, 0, 0]), True) # Given p1 = particle_array.ParticleArray(backend=self.backend) p1.add_property(**{'name': 'x', 'data': [1, 2, 3]}) # In this case the new strided prop is in the second parray. 
p2 = particle_array.ParticleArray(x={'data': [4, 5, 6]}, y={'data': [1, 2, 3]}, tag={'data': [1, 0, 1]}, backend=self.backend) p2.add_property('A', data=2.0, stride=2) # When p1.append_parray(p2) self.pull(p1) # Then self.assertEqual(p1.get_number_of_particles(), 6) self.assertEqual(check_array(p1.x, [1, 2, 3, 5]), True) self.assertEqual(check_array(p1.y, [0, 0, 0, 2]), True) self.assertEqual(check_array(p1.A, [0.0]*6 + [2.0]*2), True) self.assertEqual(check_array(p1.tag, [0, 0, 0, 0]), True) def test_copy_properties(self): # Given p1 = particle_array.ParticleArray(backend=self.backend) p1.add_property(**{'name': 'x', 'data': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]}) p1.add_property(name='y') p1.add_property(name='t') p1.add_property('A', data=2.0, stride=2) p2 = particle_array.ParticleArray(backend=self.backend) p2.add_property(name='t', data=[-1, -1, -1, -1]) p2.add_property(name='s', data=[2, 3, 4, 5]) p2.add_property('A', data=3.0, stride=2) # When p1.copy_properties(p2, start_index=5, end_index=9) # Then self.assertEqual(check_array(p1.t, [0, 0, 0, 0, 0, -1, -1, -1, -1, 0]), True) numpy.testing.assert_array_equal( p1.A, [2.0]*10 + [3.0]*8 + [2.0]*2 ) # When p1.add_property('s') p1.copy_properties(p2, start_index=5, end_index=9) # Then self.assertEqual(check_array(p1.t, [0, 0, 0, 0, 0, -1, -1, -1, -1, 0]), True) self.assertEqual( check_array(p1.s, [0, 0, 0, 0, 0, 2, 3, 4, 5, 0]), True ) numpy.testing.assert_array_equal( p1.A, [2.0]*10 + [3.0]*8 + [2.0]*2 ) def test_that_constants_can_be_added(self): # Given p = particle_array.ParticleArray(backend=self.backend) nprop = len(p.properties) self.assertEqual(len(p.constants), 0) # When p.add_constant('s', 0.0) p.add_constant('ii', 0) p.add_constant('v', [0.0, 1.0, 2.0]) # Then self.assertEqual(len(p.constants), 3) self.assertEqual(len(p.properties), nprop) self.assertEqual(p.s[0], 0.0) self.assertEqual(p.ii[0], 0) self.assertTrue(str(p.ii[0].dtype).startswith('int')) self.assertTrue(check_array(p.v, [0.0, 1.0, 2.0])) def 
test_that_constants_can_be_set_in_constructor(self): # Given # When p = particle_array.ParticleArray( constants=dict(s=0.0, v=[0.0, 1.0, 2.0]), backend=self.backend ) nprop = len(p.properties) # Then self.assertEqual(len(p.constants), 2) self.assertEqual(len(p.properties), nprop) self.assertEqual(p.s[0], 0.0) self.assertTrue(check_array(p.v, [0.0, 1.0, 2.0])) def test_that_get_works_on_constants(self): # Given p = particle_array.ParticleArray(name='f', x=[1, 2, 3], backend=self.backend) # When p.add_constant('s', 0.0) p.add_constant('v', [0.0, 1.0, 2.0]) # Then self.assertTrue(check_array(p.get('s'), [0.0])) self.assertTrue(check_array(p.get('v'), [0.0, 1.0, 2.0])) def test_that_constants_are_not_resized_when_particles_are_added(self): # Given p = particle_array.ParticleArray(name='f', x=[1.0], backend=self.backend) # When p.add_constant('v', [0.0, 1.0]) p.add_particles(x=[2.0, 3.0]) self.pull(p) # Then self.assertTrue(check_array(p.v, [0.0, 1.0])) self.assertTrue(check_array(p.x, [1.0, 2.0, 3.0])) def test_that_set_works_on_constants(self): # Given constants = dict(v=[0.0, 0.0, 0.0], c=[0.0, 0.0, 0.0]) p = particle_array.ParticleArray(name='f', constants=constants, backend=self.backend) # When p.set(v=[0.0, 1.0, 2.0]) p.c = [0.0, 1.0, 2.0] # Then self.assertEqual(len(p.constants), 2) self.assertTrue(check_array(p.get('v'), [0.0, 1.0, 2.0])) self.assertTrue(check_array(p.get('c'), [0.0, 1.0, 2.0])) def test_that_get_carray_works_with_constants(self): # Given p = particle_array.ParticleArray(backend=self.backend) v = [0.0, 1.0, 2.0] # When p.add_constant('v', v) a = p.get_carray('v') # Then self.assertEqual(a.get_c_type(), 'double') self.assertTrue(check_array(a.get_npy_array(), v)) def test_empty_clone_works_without_specific_props(self): # Given p = particle_array.ParticleArray(name='f', x=[1, 2, 3], backend=self.backend) p.add_property('A', data=numpy.arange(6), stride=2) p.set_output_arrays(['x', 'A']) v = [0.0, 1.0, 2.0] p.add_constant('v', v) # When clone = 
p.empty_clone() self.pull(clone) # Then self.assertTrue(check_array(clone.v, v)) self.assertEqual(sorted(clone.output_property_arrays), sorted(p.output_property_arrays)) def test_empty_clone_works_with_specific_props(self): # Given p = particle_array.ParticleArray(name='f', x=[1, 2, 3], backend=self.backend) p.add_property('A', data=numpy.arange(6), stride=2) p.set_output_arrays(['x', 'A']) v = [0.0, 1.0, 2.0] p.add_constant('v', v) # When clone = p.empty_clone(props=['x', 'A']) self.pull(clone) # Then self.assertTrue(check_array(clone.v, v)) self.assertEqual(sorted(clone.output_property_arrays), sorted(p.output_property_arrays)) self.assertFalse('y' in clone.properties) self.assertTrue('x' in clone.properties) self.assertTrue('A' in clone.properties) def test_extract_particles_works_without_specific_props_without_dest(self): # Given p = particle_array.ParticleArray(name='f', x=[1, 2, 3], backend=self.backend) p.add_property('A', data=numpy.arange(6), stride=2) # When. n = p.extract_particles([1]) self.pull(n) # Then. self.assertEqual(len(p.x), 3) self.assertEqual(len(p.A), 6) self.assertEqual(len(n.x), 1) self.assertEqual(len(n.A), 2) self.assertEqual(n.x[0], 2.0) numpy.testing.assert_array_equal(n.A, [2, 3]) def test_extract_particles_works_with_specific_props_without_dest(self): # Given p = particle_array.ParticleArray(name='f', x=[1, 2, 3], y=[0, 0, 0], backend=self.backend) p.add_property('A', data=numpy.arange(6), stride=2) # When. n = p.extract_particles([1, 2], props=['x', 'A']) self.pull(n) # Then. self.assertEqual(len(p.x), 3) self.assertEqual(len(n.x), 2) self.assertEqual(n.x[0], 2.0) self.assertEqual(n.x[0], 2.0) numpy.testing.assert_array_equal(n.A, [2, 3, 4, 5]) self.assertFalse('y' in n.properties) def test_extract_particles_works_without_specific_props_with_dest(self): # Given p = particle_array.ParticleArray(name='f', x=[1, 2, 3], backend=self.backend) p.add_property('A', data=numpy.arange(6), stride=2) n = p.empty_clone() # When. 
p.extract_particles([1], dest_array=n) self.pull(n) # Then. self.assertEqual(len(p.x), 3) self.assertEqual(len(p.A), 6) self.assertEqual(len(n.x), 1) self.assertEqual(len(n.A), 2) self.assertEqual(n.x[0], 2.0) numpy.testing.assert_array_equal(n.A, [2, 3]) def test_extract_particles_works_with_non_empty_dest(self): # Given p = particle_array.ParticleArray(name='f', x=[1, 2, 3], backend=self.backend) p.add_property('A', data=numpy.arange(6), stride=2) n = p.empty_clone() n.extend(2) # When. p.extract_particles([1], dest_array=n) self.pull(n) # Then. self.assertEqual(len(p.x), 3) self.assertEqual(len(p.A), 6) self.assertEqual(len(n.x), 3) self.assertEqual(len(n.A), 6) numpy.testing.assert_array_equal(n.x, [0.0, 0.0, 2.0]) numpy.testing.assert_array_equal(n.A, [0, 0, 0, 0, 2, 3]) def test_extract_particles_works_with_specific_props_with_dest(self): # Given p = particle_array.ParticleArray(name='f', x=[1, 2, 3], y=[0, 0, 0], backend=self.backend) p.add_property('A', data=numpy.arange(6), stride=2) n = p.empty_clone(props=['x', 'A']) # When. p.extract_particles([1, 2], dest_array=n, props=['x', 'A']) self.pull(n) # Then. self.assertEqual(len(p.x), 3) self.assertEqual(len(n.x), 2) self.assertEqual(n.x[0], 2.0) self.assertEqual(n.x[0], 2.0) numpy.testing.assert_array_equal(n.A, [2, 3, 4, 5]) self.assertFalse('y' in n.properties) def test_that_remove_property_also_removes_output_arrays(self): # Given p = particle_array.ParticleArray(name='f', x=[1, 2, 3], y=[0, 0, 0], backend=self.backend) p.add_property('test') p.set_output_arrays(['x', 'y', 'test']) # When p.remove_property('test') # Then self.assertEqual(p.output_property_arrays, ['x', 'y']) class ParticleArrayUtils(unittest.TestCase): def setUp(self): get_config().use_opencl = False def test_that_get_particles_info_works(self): # Given. p = particle_array.ParticleArray(name='f', x=[1, 2, 3]) p.add_property('A', data=numpy.arange(6), stride=2) c = [1.0, 2.0] p.add_constant('c', c) # When. 
info = utils.get_particles_info([p]) pas = utils.create_dummy_particles(info) dummy = pas[0] # Then. self.assertTrue(check_array(dummy.c, c)) self.assertEqual(dummy.name, 'f') self.assertTrue('x' in dummy.properties) self.assertTrue('A' in dummy.properties) self.assertTrue('A' in dummy.stride) self.assertEqual(dummy.stride['A'], 2) def test_get_particle_array_takes_scalars(self): # Given/when x = [1, 2, 3, 4] data = numpy.diag((2, 2)) pa = utils.get_particle_array(x=x, y=0, rho=1, data=data) # Then self.assertTrue(numpy.allclose(x, pa.x)) self.assertTrue(numpy.allclose(numpy.zeros(4), pa.y)) self.assertTrue(numpy.allclose(numpy.ones(4), pa.rho)) self.assertTrue(numpy.allclose(numpy.ravel(data), pa.data)) class ParticleArrayTestCPU(unittest.TestCase, ParticleArrayTest): """ Tests for the particle array class. """ def setUp(self): get_config().use_opencl = False self.backend = None def pull(self, p): pass def push(self, p): pass def test_pickle(self): """ Tests the pickle and unpickle functions """ p1 = particle_array.ParticleArray() p1.add_property('x', data=numpy.arange(10)) p1.add_property('y', data=numpy.arange(10)) p1.add_constant('c', [0.0, 1.0]) p1.align_particles() s = pickle.dumps(p1) p2 = pickle.loads(s) self.assertEqual(len(p1.x), len(p2.x)) check_array(p1.x, p2.x) self.assertEqual(len(p1.y), len(p2.y)) check_array(p1.y, p2.y) self.assertEqual(len(p1.c), len(p2.c)) check_array(p1.c, p2.c) def test_set(self): """ Tests the set function. """ x = [1, 2, 3, 4.] y = [0., 1., 2., 3.] z = [0., 0., 0., 0.] m = [1., 1., 1., 1.] 
h = [.1, .1, .1, .1] p = particle_array.ParticleArray(x={'data': x}, y={'data': y}, z={'data': z}, m={'data': m}, h={'data': h}, backend=self.backend) # set the x array with new values p.set(**{'x': [4., 3, 2, 1], 'h': [0.2, 0.2, 0.2, 0.2]}) self.assertEqual(check_array(p.get('x'), [4., 3, 2, 1]), True) self.assertEqual(check_array(p.get('h'), [0.2, 0.2, 0.2, 0.2]), True) # trying to set the tags p.set(**{'tag': [0, 1, 1, 1]}) p.align_particles() self.pull(p) self.assertEqual( check_array(p.get('tag', only_real_particles=False), [0, 1, 1, 1]), True ) self.assertEqual(check_array(p.get('tag'), [0]), True) # try setting array with smaller length array. p.set(**{'x': [5, 6, 7]}) self.assertEqual(check_array(p.get('x', only_real_particles=False), [5, 6, 7, 1]), True) # try setting array with longer array. self.assertRaises(ValueError, p.set, **{'x': [1., 2, 3, 5, 6]}) class ParticleArrayTestOpenCL(unittest.TestCase, ParticleArrayTest): def setUp(self): ocl = pytest.importorskip("pyopencl") cfg = get_config() self.orig_use_double = cfg.use_double cfg.use_double = True self.backend = 'opencl' def tearDown(self): get_config().use_double = self.orig_use_double def pull(self, p): p.gpu.pull() def push(self, p): p.gpu.push() class ParticleArrayTestCUDA(unittest.TestCase, ParticleArrayTest): def setUp(self): cu = pytest.importorskip("pycuda") cfg = get_config() self.orig_use_double = cfg.use_double cfg.use_double = True self.backend = 'cuda' def tearDown(self): get_config().use_double = self.orig_use_double def pull(self, p): p.gpu.pull() def push(self, p): p.gpu.push() if __name__ == '__main__': import logging logger = logging.getLogger() ch = logging.StreamHandler() logger.addHandler(ch) unittest.main() pysph-master/pysph/base/tests/test_periodic_nnps.py000066400000000000000000000234251356347341600232200ustar00rootroot00000000000000"""Tests for the periodicity algorithms in NNPS""" # NumPy import numpy as np # PySPH imports from pysph.base.nnps import DomainManager, 
BoxSortNNPS, LinkedListNNPS, \ SpatialHashNNPS, ExtendedSpatialHashNNPS from pysph.base.utils import get_particle_array from pysph.base.kernels import Gaussian, get_compiled_kernel import pysph.tools.geometry as G # PyZoltan CArrays from cyarray.carray import UIntArray # Python unit testing framework import unittest class PeriodicChannel2DTestCase(unittest.TestCase): """Test the periodicity algorithms in NNPS. A channel like set-up is used in 2D with fluid particles between parallel flat plates. Periodic boundary conditions are imposed along the 'x' direction and summation density is used to check for the density of the fluid particles. """ def setUp(self): # create the particle arrays L = 1.0 n = 100 dx = L / n hdx = 1.5 _x = np.arange(dx / 2, L, dx) self.vol = vol = dx * dx # fluid particles xx, yy = np.meshgrid(_x, _x) x = xx.ravel() y = yy.ravel() # particle positions h = np.ones_like(x) * hdx * dx # smoothing lengths m = np.ones_like(x) * vol # mass V = np.zeros_like(x) # volumes fluid = get_particle_array(name='fluid', x=x, y=y, h=h, m=m, V=V) # channel particles _y = np.arange(L + dx / 2, L + dx / 2 + 10 * dx, dx) xx, yy = np.meshgrid(_x, _y) xtop = xx.ravel() ytop = yy.ravel() _y = np.arange(-dx / 2, -dx / 2 - 10 * dx, -dx) xx, yy = np.meshgrid(_x, _y) xbot = xx.ravel() ybot = yy.ravel() x = np.concatenate((xtop, xbot)) y = np.concatenate((ytop, ybot)) h = np.ones_like(x) * hdx * dx m = np.ones_like(x) * vol V = np.zeros_like(x) channel = get_particle_array(name='channel', x=x, y=y, h=h, m=m, V=V) # particles and domain self.particles = [fluid, channel] self.domain = DomainManager(xmin=0, xmax=L, periodic_in_x=True) self.kernel = get_compiled_kernel(Gaussian(dim=2)) def _test_periodicity_flags(self): "NNPS :: checking for periodicity flags" nnps = self.nnps domain = self.domain self.assertTrue(nnps.is_periodic) self.assertTrue(domain.manager.periodic_in_x) self.assertTrue(not domain.manager.periodic_in_y) self.assertTrue(not domain.manager.periodic_in_z) 
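The summation-density helper in this test case evaluates rho_i = sum_j m_j W(|x_i - x_j|, h) over neighbours. A hedged 1D stdlib sketch of that sum with a truncated Gaussian kernel — the compiled 2D Gaussian used by the test differs in normalisation and neighbour search, so this only illustrates the structure of the double loop:

```python
import math


def gaussian_w(q, h):
    """1D Gaussian kernel, truncated at q = 3 (illustrative form only)."""
    sigma = 1.0 / (math.sqrt(math.pi) * h)
    return sigma * math.exp(-q * q) if q <= 3.0 else 0.0


def summation_density(x, m, h):
    """Brute-force rho_i = sum_j m_j * W(|x_i - x_j| / h, h)."""
    rho = []
    for xi in x:
        s = 0.0
        for xj, mj in zip(x, m):
            s += mj * gaussian_w(abs(xi - xj) / h, h)
        rho.append(s)
    return rho


dx = 0.1
x = [i * dx for i in range(50)]
m = [dx] * 50  # uniform unit density
rho = summation_density(x, m, h=1.5 * dx)
# Away from the ends of the line the computed density approaches 1.
print(rho[25])
```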
def _test_summation_density(self): "NNPS :: testing for summation density" fluid, channel = self.particles nnps = self.nnps kernel = self.kernel # get the fluid arrays fx, fy, fh, frho, fV, fm = fluid.get( 'x', 'y', 'h', 'rho', 'V', 'm', only_real_particles=True) # initialize the fluid density and volume frho[:] = 0.0 fV[:] = 0.0 # compute density on the fluid nbrs = UIntArray() for i in range(fluid.num_real_particles): hi = fh[i] # compute density from the fluid from the source arrays nnps.get_nearest_particles(src_index=0, dst_index=0, d_idx=i, nbrs=nbrs) nnbrs = nbrs.length # the source arrays. First source is also the fluid sx, sy, sh, sm = fluid.get('x', 'y', 'h', 'm', only_real_particles=False) for indexj in range(nnbrs): j = nbrs[indexj] hij = 0.5 * (hi + sh[j]) frho[i] += sm[j] * \ kernel.kernel(fx[i], fy[i], 0.0, sx[j], sy[j], 0.0, hij) fV[i] += kernel.kernel(fx[i], fy[i], 0.0, sx[j], sy[j], 0.0, hij) # compute density from the channel nnps.get_nearest_particles( src_index=1, dst_index=0, d_idx=i, nbrs=nbrs) nnbrs = nbrs.length sx, sy, sh, sm = channel.get( 'x', 'y', 'h', 'm', only_real_particles=False) for indexj in range(nnbrs): j = nbrs[indexj] hij = 0.5 * (hi + sh[j]) frho[i] += sm[j] * \ kernel.kernel(fx[i], fy[i], 0.0, sx[j], sy[j], 0.0, hij) fV[i] += kernel.kernel(fx[i], fy[i], 0.0, sx[j], sy[j], 0.0, hij) # check the number density and density by summation voli = 1. 
/ fV[i] self.assertAlmostEqual(voli, self.vol, 6) self.assertAlmostEqual(frho[i], fm[i] / voli, 6) class PeriodicChannel2DBoxSort(PeriodicChannel2DTestCase): def setUp(self): PeriodicChannel2DTestCase.setUp(self) self.nnps = BoxSortNNPS( dim=2, particles=self.particles, domain=self.domain, radius_scale=self.kernel.radius_scale) def test_periodicity_flags(self): "BoxSortNNPS :: test periodicity flags" self._test_periodicity_flags() def test_summation_density(self): "BoxSortNNPS :: test summation density" self._test_summation_density() class PeriodicChannel2DLinkedList(PeriodicChannel2DTestCase): def setUp(self): PeriodicChannel2DTestCase.setUp(self) self.nnps = LinkedListNNPS( dim=2, particles=self.particles, domain=self.domain, radius_scale=self.kernel.radius_scale) def test_periodicity_flags(self): "LinkedListNNPS :: test periodicity flags" self._test_periodicity_flags() def test_summation_density(self): "LinkedListNNPS :: test summation density" self._test_summation_density() def test_add_property_after_creation_works(self): # Given particles = self.particles fluid = particles[0] # When fluid.add_property('junk') # Then self.nnps.update_domain() self._test_summation_density() class PeriodicChannel2DSpatialHash(PeriodicChannel2DTestCase): def setUp(self): PeriodicChannel2DTestCase.setUp(self) self.nnps = SpatialHashNNPS( dim=2, particles=self.particles, domain=self.domain, radius_scale=self.kernel.radius_scale) def test_periodicity_flags(self): "SpatialHashNNPS :: test periodicity flags" self._test_periodicity_flags() def test_summation_density(self): "SpatialHashNNPS :: test summation density" self._test_summation_density() class PeriodicChannel2DExtendedSpatialHash(PeriodicChannel2DTestCase): def setUp(self): PeriodicChannel2DTestCase.setUp(self) self.nnps = ExtendedSpatialHashNNPS( dim=2, particles=self.particles, domain=self.domain, radius_scale=self.kernel.radius_scale) def test_periodicity_flags(self): self._test_periodicity_flags() def 
test_summation_density(self): self._test_summation_density() class TestPeriodicChannel3D(unittest.TestCase): def setUp(self): self.l = l = 1.0 n = 20 dx = l / n hdx = 1.5 x, y, z = G.get_3d_block(dx, l, l, l) h = np.ones_like(x) * hdx * dx m = np.ones_like(x) * dx * dx * dx V = np.zeros_like(x) fluid = get_particle_array(name='fluid', x=x, y=y, z=z, h=h, m=m, V=V) x, y = G.get_2d_block(dx, l, l) z = np.ones_like(x) * (l + 5 * dx) / 2.0 z = np.concatenate([z, -z]) x = np.tile(x, 2) y = np.tile(y, 2) m = np.ones_like(x) * dx * dx * dx h = np.ones_like(x) * hdx * dx V = np.zeros_like(x) channel = get_particle_array( name='channel', x=x, y=y, z=z, h=h, m=m, V=V) self.particles = [fluid, channel] self.kernel = get_compiled_kernel(Gaussian(dim=3)) def _test_periodic_flags(self, bool1, bool2, bool3): nnps = self.nnps domain = self.domain.manager self.assertTrue(nnps.is_periodic) self.assertTrue(domain.periodic_in_x == bool1) self.assertTrue(domain.periodic_in_y == bool2) self.assertTrue(domain.periodic_in_z == bool3) class TestPeriodicXYZ3D(TestPeriodicChannel3D): def setUp(self): TestPeriodicChannel3D.setUp(self) l = self.l self.domain = DomainManager( xmin=-l / 2.0, xmax=l / 2.0, ymin=-l / 2.0, ymax=l / 2.0, zmin=-l / 2.0, zmax=l / 2.0, periodic_in_x=True, periodic_in_y=True, periodic_in_z=True) self.nnps = LinkedListNNPS( dim=3, particles=self.particles, domain=self.domain, radius_scale=self.kernel.radius_scale) def test_periodicity_flags(self): self._test_periodic_flags(True, True, True) class TestPeriodicZ3D(TestPeriodicChannel3D): def setUp(self): TestPeriodicChannel3D.setUp(self) l = self.l self.domain = DomainManager( zmin=-l / 2.0, zmax=l / 2.0, periodic_in_z=True) self.nnps = LinkedListNNPS( dim=3, particles=self.particles, domain=self.domain, radius_scale=self.kernel.radius_scale) def test_periodicity_flags(self): self._test_periodic_flags(False, False, True) class TestPeriodicXY3D(TestPeriodicChannel3D): def setUp(self): TestPeriodicChannel3D.setUp(self) l = 
self.l
        self.domain = DomainManager(
            xmin=-l / 2.0, xmax=l / 2.0, ymin=-l / 2.0, ymax=l / 2.0,
            periodic_in_x=True, periodic_in_y=True)
        self.nnps = LinkedListNNPS(
            dim=3, particles=self.particles, domain=self.domain,
            radius_scale=self.kernel.radius_scale)

    def test_periodicity_flags(self):
        self._test_periodic_flags(True, True, False)


if __name__ == '__main__':
    unittest.main()

pysph-master/pysph/base/tests/test_reduce_array.py

import numpy as np
from unittest import TestCase, main

from pysph.base.reduce_array import serial_reduce_array, dummy_reduce_array


class TestSerialReduceArray(TestCase):
    def test_reduce_sum_works(self):
        x = np.linspace(0, 10, 100)
        expect = np.sum(x)
        result = serial_reduce_array(x, 'sum')
        self.assertAlmostEqual(result, expect)

    def test_reduce_prod_works(self):
        x = np.linspace(0, 10, 100)
        expect = np.prod(x)
        result = serial_reduce_array(x, 'prod')
        self.assertAlmostEqual(result, expect)

    def test_reduce_max_works(self):
        x = np.linspace(0, 10, 100)
        expect = np.max(x)
        result = serial_reduce_array(x, 'max')
        self.assertAlmostEqual(result, expect)

    def test_reduce_min_works(self):
        x = np.linspace(0, 10, 100)
        expect = np.min(x)
        result = serial_reduce_array(x, 'min')
        self.assertAlmostEqual(result, expect)

    def test_reduce_raises_error_for_wrong_op(self):
        x = np.linspace(0, 10, 100)
        self.assertRaises(RuntimeError, serial_reduce_array, x, 'foo')

    def test_dummy_reduce_array_does_nothing(self):
        x = np.array([1.0, 2.0])
        expect = x
        result = dummy_reduce_array(x, 'min')
        self.assertTrue(np.alltrue(result == expect))


if __name__ == '__main__':
    main()

pysph-master/pysph/base/tests/test_utils.py

from unittest import TestCase, main

from ..utils import is_overloaded_method


class TestUtils(TestCase):
    def test_is_overloaded_method_works_for_simple_overloads(self):
        # Given
        class A(object):
            def f(self):
                pass

        class B(A):
pass # When/Then b = B() self.assertFalse(is_overloaded_method(b.f)) class C(A): def f(self): pass # When/Then c = C() self.assertTrue(is_overloaded_method(c.f)) def test_is_overloaded_method_works_for_parent_overloads(self): # Given class A(object): def f(self): pass class B(A): def f(self): pass class C(B): pass # When/Then c = C() self.assertTrue(is_overloaded_method(c.f)) if __name__ == '__main__': main() pysph-master/pysph/base/tree/000077500000000000000000000000001356347341600165425ustar00rootroot00000000000000pysph-master/pysph/base/tree/__init__.py000066400000000000000000000000001356347341600206410ustar00rootroot00000000000000pysph-master/pysph/base/tree/helpers.py000066400000000000000000000101401356347341600205520ustar00rootroot00000000000000import numpy as np import pyopencl as cl import pyopencl.array import pyopencl.cltypes from pyopencl.elementwise import ElementwiseKernel from pytools import memoize from compyle.opencl import get_context from pysph.base.gpu_nnps_helper import GPUNNPSHelper from compyle.array import Array make_vec_dict = { 'float': { 1: np.float32, 2: cl.cltypes.make_float2, 3: cl.cltypes.make_float3 }, 'double': { 1: np.float64, 2: cl.cltypes.make_double2, 3: cl.cltypes.make_double3 } } @memoize def get_helper(src_file, c_type=None): # ctx and c_type are the only parameters that # change here return GPUNNPSHelper(src_file, backend='opencl', c_type=c_type) @memoize def get_copy_kernel(ctx, dtype1, dtype2, varnames): arg_list = [('%(data_t1)s *%(v)s1' % dict(data_t1=dtype1, v=v)) for v in varnames] arg_list += [('%(data_t2)s *%(v)s2' % dict(data_t2=dtype2, v=v)) for v in varnames] args = ', '.join(arg_list) operation = '; '.join(('%(v)s2[i] = (%(data_t2)s)%(v)s1[i];' % dict(v=v, data_t2=dtype2)) for v in varnames) return ElementwiseKernel(ctx, args, operation=operation) _vector_dtypes = { 'uint': { 2: cl.cltypes.uint2, 4: cl.cltypes.uint4, 8: cl.cltypes.uint8 }, 'float': { 1: cl.cltypes.float, 2: cl.cltypes.float2, 3: cl.cltypes.float3, 
}, 'double': { 1: cl.cltypes.double, 2: cl.cltypes.double2, 3: cl.cltypes.double3 } } def get_vector_dtype(ctype, dim): try: return _vector_dtypes[ctype][dim] except KeyError: raise ValueError("Vector datatype of type %(ctype)s with %(dim)s items" " is not supported" % dict(ctype=ctype, dim=dim)) c2d = { 'half': np.float16, 'float': np.float32, 'double': np.float64 } def ctype_to_dtype(ctype): return c2d[ctype] class GPUParticleArrayWrapper(object): def __init__(self, pa_gpu, c_type_src, c_type, varnames): self.c_type = c_type self.c_type_src = c_type_src self.varnames = varnames self._allocate_memory(pa_gpu) self.sync(pa_gpu) def _gpu_copy(self, pa_gpu): copy_kernel = get_copy_kernel(get_context(), self.c_type_src, self.c_type, self.varnames) args = [getattr(pa_gpu, v).dev for v in self.varnames] args += [getattr(self, v).dev for v in self.varnames] copy_kernel(*args) def _allocate_memory(self, pa_gpu): shape = getattr(pa_gpu, self.varnames[0]).dev.shape[0] for v in self.varnames: setattr(self, v, Array(ctype_to_dtype(self.c_type), n=shape, backend='opencl')) def _gpu_sync(self, pa_gpu): v0 = self.varnames[0] if getattr(self, v0).dev.shape != getattr(pa_gpu, v0).dev.shape: self._allocate_memory(pa_gpu) self._gpu_copy(pa_gpu) def sync(self, pa_gpu): self._gpu_sync(pa_gpu) class ParticleArrayWrapper(object): """A loose wrapper over Particle Array Objective is to transparently maintain a copy of the original particle array's position properties (x, y, z, h) """ def __init__(self, pa, c_type_src, c_type, varnames): self._pa = pa # If data types are different, then make a copy of the # underlying data stored on the device if c_type_src != c_type: self._pa_gpu_is_copy = True self._gpu = GPUParticleArrayWrapper(pa.gpu, c_type_src, c_type, varnames) else: self._pa_gpu_is_copy = False self._gpu = None def get_number_of_particles(self): return self._pa.get_number_of_particles() @property def gpu(self): if self._pa_gpu_is_copy: return self._gpu else: return self._pa.gpu def 
sync(self): if self._pa_gpu_is_copy: self._gpu.sync(self._pa.gpu) pysph-master/pysph/base/tree/point_tree.mako000066400000000000000000000221401356347341600215620ustar00rootroot00000000000000//CL// <%def name="preamble(data_t)" cached="True"> #define IN_BOUNDS(X, MIN, MAX) ((X >= MIN) && (X < MAX)) #define NORM2(X, Y, Z) ((X)*(X) + (Y)*(Y) + (Z)*(Z)) #define MIN(X, Y) ((X) < (Y) ? (X) : (Y)) #define MAX(X, Y) ((X) > (Y) ? (X) : (Y)) #define SQR(X) ((X) * (X)) typedef struct { union { float s0; float x; }; } float1; typedef struct { union { double s0; double x; }; } double1; <%def name="fill_particle_data_args(data_t, xvars, dim)" cached="False"> % for v in xvars: ${data_t}* ${v}, % endfor ${data_t} cell_size, ${data_t}${dim} min, unsigned long* keys, unsigned int* pids <%def name="fill_particle_data_src(data_t, xvars, dim)" cached="False"> % for v in xvars: unsigned long c_${v} = floor((${v}[i] - min.${v}) / cell_size); % endfor unsigned long key; ## For 3D: key = interleave3(c_x, c_y, c_z); key = interleave${dim}(${', '.join('c_' + v for v in xvars)}); keys[i] = key; pids[i] = i; <%def name="find_neighbors_template(data_t, sorted, wgs)" cached="False"> /* * Property owners * tree_dst: cids, unique_cids * tree_src: neighbor_cid_offset, neighbor_cids * self evident: xsrc, ysrc, zsrc, hsrc, xdst, ydst, zdst, hdst, pbounds_src, pbounds_dst, */ int idx = i / ${wgs}; // Fetch dst particles ${data_t} xd, yd, zd, hd; int cid_dst = unique_cids[idx]; uint2 pbound_here = pbounds_dst[cid_dst]; char svalid = (pbound_here.s0 + lid < pbound_here.s1); int pid_dst; if (svalid) { % if sorted: pid_dst = pbound_here.s0 + lid; % else: pid_dst = pids_dst[pbound_here.s0 + lid]; % endif xd = xdst[pid_dst]; yd = ydst[pid_dst]; zd = zdst[pid_dst]; hd = hdst[pid_dst]; } // Set loop parameters int cid_src, pid_src; int offset_src = neighbor_cid_offset[idx]; int offset_lim = neighbor_cid_offset[idx + 1]; uint2 pbound_here2; int m; local ${data_t} xs[${wgs}]; local ${data_t} ys[${wgs}]; local 
${data_t} zs[${wgs}]; local ${data_t} hs[${wgs}]; ${data_t} r2; ${caller.pre_loop()} while (offset_src < offset_lim) { cid_src = neighbor_cids[offset_src]; pbound_here2 = pbounds_src[cid_src]; offset_src++; while (pbound_here2.s0 < pbound_here2.s1) { // Copy src data if (pbound_here2.s0 + lid < pbound_here2.s1) { %if sorted: pid_src = pbound_here2.s0 + lid; % else: pid_src = pids_src[pbound_here2.s0 + lid]; %endif xs[lid] = xsrc[pid_src]; ys[lid] = ysrc[pid_src]; zs[lid] = zsrc[pid_src]; hs[lid] = hsrc[pid_src]; } m = min(pbound_here2.s1, pbound_here2.s0 + ${wgs}) - pbound_here2.s0; barrier(CLK_LOCAL_MEM_FENCE); // Everything this point forward is done independently // by each thread. if (svalid) { for (int j=0; j < m; j++) { % if sorted: pid_src= pbound_here2.s0 + j; % else: pid_src = pids_src[pbound_here2.s0 + j]; % endif ${data_t} dist2 = NORM2(xs[j] - xd, ys[j] - yd, zs[j] - zd); r2 = MAX(hs[j], hd) * radius_scale; r2 *= r2; if (dist2 < r2) { ${caller.query()} } } } pbound_here2.s0 += ${wgs}; barrier(CLK_LOCAL_MEM_FENCE); } } ${caller.post_loop()} <%def name="find_neighbor_counts_args(data_t, sorted, wgs)" cached="False"> int *unique_cids, int *pids_src, int *pids_dst, int *cids, uint2 *pbounds_src, uint2 *pbounds_dst, ${data_t} *xsrc, ${data_t} *ysrc, ${data_t} *zsrc, ${data_t} *hsrc, ${data_t} *xdst, ${data_t} *ydst, ${data_t} *zdst, ${data_t} *hdst, ${data_t} radius_scale, int *neighbor_cid_offset, int *neighbor_cids, int *neighbor_counts <%def name="find_neighbor_counts_src(data_t, sorted, wgs)" cached="False"> <%self:find_neighbors_template data_t="${data_t}" sorted="${sorted}" wgs="${wgs}"> <%def name="pre_loop()"> int count = 0; <%def name="query()"> count++; <%def name="post_loop()"> if(svalid) neighbor_counts[pid_dst] = count; <%def name="find_neighbors_args(data_t, sorted, wgs)" cached="False"> int *unique_cids, int *pids_src, int *pids_dst, int *cids, uint2 *pbounds_src, uint2 *pbounds_dst, ${data_t} *xsrc, ${data_t} *ysrc, ${data_t} *zsrc, ${data_t} 
*hsrc, ${data_t} *xdst, ${data_t} *ydst, ${data_t} *zdst, ${data_t} *hdst, ${data_t} radius_scale, int *neighbor_cid_offset, int *neighbor_cids, int *neighbor_counts, int *neighbors <%def name="find_neighbors_src(data_t, sorted, wgs)" cached="False"> <%self:find_neighbors_template data_t="${data_t}" sorted="${sorted}" wgs="${wgs}"> <%def name="pre_loop()"> int offset; if (svalid) offset = neighbor_counts[pid_dst]; <%def name="query()"> if (svalid) neighbors[offset++] = pid_src; <%def name="post_loop()"> <%def name="find_neighbors_elementwise_template(data_t, sorted)" cached="False"> /* * Property owners * tree_dst: cids, unique_cid_idx * tree_src: neighbor_cid_offset, neighbor_cids * self evident: xsrc, ysrc, zsrc, hsrc, xdst, ydst, zdst, hdst, pbounds_src, pbounds_dst, */ int idx = unique_cids_map[i]; // Fetch dst particles ${data_t} xd, yd, zd, hd; int pid_dst; % if sorted: pid_dst = i; % else: pid_dst = pids_dst[i]; % endif xd = xdst[pid_dst]; yd = ydst[pid_dst]; zd = zdst[pid_dst]; hd = hdst[pid_dst]; // Set loop parameters int cid_src, pid_src; int offset_src = neighbor_cid_offset[idx]; int offset_lim = neighbor_cid_offset[idx + 1]; uint2 pbound_here2; ${data_t} r2; ${caller.pre_loop()} while (offset_src < offset_lim) { cid_src = neighbor_cids[offset_src]; pbound_here2 = pbounds_src[cid_src]; offset_src++; for (int j=pbound_here2.s0; j < pbound_here2.s1; j++) { % if sorted: pid_src= j; % else: pid_src = pids_src[j]; % endif ${data_t} dist2 = NORM2(xsrc[pid_src] - xd, ysrc[pid_src] - yd, zsrc[pid_src] - zd); r2 = MAX(hsrc[pid_src], hd) * radius_scale; r2 *= r2; if (dist2 < r2) { ${caller.query()} } } } ${caller.post_loop()} <%def name="find_neighbor_counts_elementwise_args(data_t, sorted)" cached="False"> int *unique_cids_map, int *pids_src, int *pids_dst, int *cids, uint2 *pbounds_src, uint2 *pbounds_dst, ${data_t} *xsrc, ${data_t} *ysrc, ${data_t} *zsrc, ${data_t} *hsrc, ${data_t} *xdst, ${data_t} *ydst, ${data_t} *zdst, ${data_t} *hdst, ${data_t} 
radius_scale, int *neighbor_cid_offset, int *neighbor_cids, int *neighbor_counts <%def name="find_neighbor_counts_elementwise_src(data_t, sorted)" cached="False"> <%self:find_neighbors_elementwise_template data_t="${data_t}" sorted="${sorted}"> <%def name="pre_loop()"> int count = 0; <%def name="query()"> count++; <%def name="post_loop()"> neighbor_counts[pid_dst] = count; <%def name="find_neighbors_elementwise_args(data_t, sorted)" cached="False"> int *unique_cids_map, int *pids_src, int *pids_dst, int *cids, uint2 *pbounds_src, uint2 *pbounds_dst, ${data_t} *xsrc, ${data_t} *ysrc, ${data_t} *zsrc, ${data_t} *hsrc, ${data_t} *xdst, ${data_t} *ydst, ${data_t} *zdst, ${data_t} *hdst, ${data_t} radius_scale, int *neighbor_cid_offset, int *neighbor_cids, int *neighbor_counts, int *neighbors <%def name="find_neighbors_elementwise_src(data_t, sorted)" cached="False"> <%self:find_neighbors_elementwise_template data_t="${data_t}" sorted="${sorted}"> <%def name="pre_loop()"> int offset = neighbor_counts[pid_dst]; <%def name="query()"> neighbors[offset++] = pid_src; <%def name="post_loop()"> pysph-master/pysph/base/tree/point_tree.py000066400000000000000000000630231356347341600212700ustar00rootroot00000000000000from pysph.base.tree.tree import Tree from pysph.base.tree.helpers import ParticleArrayWrapper, get_helper, \ make_vec_dict, ctype_to_dtype, get_vector_dtype from compyle.opencl import profile_kernel, DeviceWGSException, get_queue, \ named_profile, get_context from compyle.array import Array from pytools import memoize import sys import numpy as np import pyopencl as cl from pyopencl.scan import GenericScanKernel import pyopencl.tools from mako.template import Template # For Mako disable_unicode = False if sys.version_info.major > 2 else True class IncompatibleTreesException(Exception): pass @named_profile('neighbor_count_prefix_sum') @memoize def _get_neighbor_count_prefix_sum_kernel(ctx): return GenericScanKernel(ctx, np.int32, arguments="__global int *ary", 
input_expr="ary[i]", scan_expr="a+b", neutral="0", output_statement="ary[i] = prev_item") @memoize def _get_macros_preamble(c_type, sorted, dim): result = Template(""" #define IN_BOUNDS(X, MIN, MAX) ((X >= MIN) && (X < MAX)) #define NORM2(X, Y, Z) ((X)*(X) + (Y)*(Y) + (Z)*(Z)) #define NORM2_2D(X, Y) ((X)*(X) + (Y)*(Y)) #define MIN(X, Y) ((X) < (Y) ? (X) : (Y)) #define MAX(X, Y) ((X) > (Y) ? (X) : (Y)) #define AVG(X, Y) (((X) + (Y)) / 2) #define ABS(X) ((X) > 0 ? (X) : -(X)) #define SQR(X) ((X) * (X)) typedef struct { union { float s0; float x; }; } float1; typedef struct { union { double s0; double x; }; } double1; % if sorted: #define PID(idx) (idx) % else: #define PID(idx) (pids[idx]) % endif char contains(${data_t}${dim} node_xmin1, ${data_t}${dim} node_xmax1, ${data_t}${dim} node_xmin2, ${data_t}${dim} node_xmax2) { // Check if node n1 contains node n2 char res = 1; % for i in range(dim): res = res && (node_xmin1.s${i} <= node_xmin2.s${i}) && (node_xmax1.s${i} >= node_xmax2.s${i}); % endfor return res; } char contains_search(${data_t}${dim} node_xmin1, ${data_t}${dim} node_xmax1, ${data_t} node_hmax1, ${data_t}${dim} node_xmin2, ${data_t}${dim} node_xmax2) { // Check if node n1 contains node n2 with n1 having // its search radius extension ${data_t} h = node_hmax1; char res = 1; %for i in range(dim): res = res & (node_xmin1.s${i} - h <= node_xmin2.s${i}) & (node_xmax1.s${i} + h >= node_xmax2.s${i}); %endfor return res; } char intersects(${data_t}${dim} node_xmin1, ${data_t}${dim} node_xmax1, ${data_t} node_hmax1, ${data_t}${dim} node_xmin2, ${data_t}${dim} node_xmax2, ${data_t} node_hmax2) { // Check if node n1 'intersects' node n2 ${data_t} cdist; ${data_t} w1, w2, wavg = 0; char res = 1; ${data_t} h = MAX(node_hmax1, node_hmax2); % for i in range(dim): cdist = fabs((node_xmin1.s${i} + node_xmax1.s${i}) / 2 - (node_xmin2.s${i} + node_xmax2.s${i}) / 2); w1 = fabs(node_xmin1.s${i} - node_xmax1.s${i}); w2 = fabs(node_xmin2.s${i} - node_xmax2.s${i}); wavg = 
AVG(w1, w2); res &= (cdist - wavg <= h); % endfor return res; } """, disable_unicode=disable_unicode).render(data_t=c_type, sorted=sorted, dim=dim) return result @memoize def _get_node_bound_kernel_parameters(dim, data_t, xvars): result = {} result['setup'] = Template( r""" ${data_t} xmin[${dim}] = {${', '.join(['INFINITY'] * dim)}}; ${data_t} xmax[${dim}] = {${', '.join(['-INFINITY'] * dim)}}; ${data_t} hmax = 0; """, disable_unicode=disable_unicode).render(dim=dim, data_t=data_t) result['args'] = Template( r"""int *pids, % for v in xvars: ${data_t} *${v}, % endfor ${data_t} *h, ${data_t} radius_scale, ${data_t}${dim} *node_xmin, ${data_t}${dim} *node_xmax, ${data_t} *node_hmax """, disable_unicode=disable_unicode).render(dim=dim, data_t=data_t, xvars=xvars) result['leaf_operation'] = Template( r""" for (int j=pbound.s0; j < pbound.s1; j++) { int pid = PID(j); % for d in range(dim): xmin[${d}] = fmin(xmin[${d}], ${xvars[d]}[pid]); xmax[${d}] = fmax(xmax[${d}], ${xvars[d]}[pid]); % endfor hmax = fmax(h[pid] * radius_scale, hmax); } """, disable_unicode=disable_unicode).render(dim=dim, xvars=xvars) result['node_operation'] = Template( r""" % for i in range(2 ** dim): % for d in range(dim): xmin[${d}] = fmin( xmin[${d}], node_xmin[child_offset + ${i}].s${d} ); xmax[${d}] = fmax( xmax[${d}], node_xmax[child_offset + ${i}].s${d} ); % endfor hmax = fmax(hmax, node_hmax[child_offset + ${i}]); % endfor """, disable_unicode=disable_unicode).render(dim=dim) result['output_expr'] = Template( """ % for d in range(dim): node_xmin[node_idx].s${d} = xmin[${d}]; node_xmax[node_idx].s${d} = xmax[${d}]; % endfor node_hmax[node_idx] = hmax; """, disable_unicode=disable_unicode).render(dim=dim) return result @memoize def _get_leaf_neighbor_kernel_parameters(data_t, dim, args, setup, operation, output_expr): result = { 'setup': Template(r""" ${data_t}${dim} node_xmin1; ${data_t}${dim} node_xmax1; ${data_t} node_hmax1; ${data_t}${dim} node_xmin2; ${data_t}${dim} node_xmax2; ${data_t} 
node_hmax2; node_xmin1 = node_xmin_dst[cid_dst]; node_xmax1 = node_xmax_dst[cid_dst]; node_hmax1 = node_hmax_dst[cid_dst]; %(setup)s; """ % dict(setup=setup), disable_unicode=disable_unicode).render( data_t=data_t, dim=dim), 'node_operation': Template(""" node_xmin2 = node_xmin_src[cid_src]; node_xmax2 = node_xmax_src[cid_src]; node_hmax2 = node_hmax_src[cid_src]; if (!intersects(node_xmin1, node_xmax1, node_hmax1, node_xmin2, node_xmax2, node_hmax2) && !contains(node_xmin2, node_xmax2, node_xmin1, node_xmax1)) { flag = 0; break; } """, disable_unicode=disable_unicode).render(data_t=data_t), 'leaf_operation': Template(""" node_xmin2 = node_xmin_src[cid_src]; node_xmax2 = node_xmax_src[cid_src]; node_hmax2 = node_hmax_src[cid_src]; if (intersects(node_xmin1, node_xmax1, node_hmax1, node_xmin2, node_xmax2, node_hmax2) || contains_search(node_xmin1, node_xmax1, node_hmax1, node_xmin2, node_xmax2)) { %(operation)s; } """ % dict(operation=operation), disable_unicode=disable_unicode).render(), 'output_expr': output_expr, 'args': Template(""" ${data_t}${dim} *node_xmin_src, ${data_t}${dim} *node_xmax_src, ${data_t} *node_hmax_src, ${data_t}${dim} *node_xmin_dst, ${data_t}${dim} *node_xmax_dst, ${data_t} *node_hmax_dst, """ + args, disable_unicode=disable_unicode).render(data_t=data_t, dim=dim) } return result # Support for 1D def register_custom_pyopencl_ctypes(): cl.tools.get_or_register_dtype('float1', np.dtype([('s0', np.float32)])) cl.tools.get_or_register_dtype('double1', np.dtype([('s0', np.float64)])) register_custom_pyopencl_ctypes() class PointTree(Tree): def __init__(self, pa, dim=2, leaf_size=32, radius_scale=2.0, use_double=False, c_type='float'): super(PointTree, self).__init__(pa.get_number_of_particles(), 2 ** dim, leaf_size) assert (1 <= dim <= 3) self.max_depth = None self.dim = dim self.powdim = 2 ** self.dim self.xvars = ('x', 'y', 'z')[:dim] self.c_type = c_type self.c_type_src = 'double' if use_double else 'float' if use_double and c_type == 'float': 
# Extend the search radius a little to account for rounding errors radius_scale = radius_scale * (1 + 2e-7) # y and z coordinates need to be present for 1D and z for 2D # This is because the NNPS implementation below assumes them to be # just set to 0. self.pa = ParticleArrayWrapper(pa, self.c_type_src, self.c_type, ('x', 'y', 'z', 'h')) self.radius_scale = radius_scale self.use_double = use_double self.helper = get_helper('tree/point_tree.mako', self.c_type) self.xmin = None self.xmax = None self.hmin = None self.make_vec = make_vec_dict[c_type][self.dim] self.ctx = get_context() def set_index_function_info(self): self.index_function_args = ["sfc"] self.index_function_arg_ctypes = ["ulong"] self.index_function_arg_dtypes = [np.uint64] self.index_function_consts = ['mask', 'rshift'] self.index_function_const_ctypes = ['ulong', 'char'] self.index_code = "((sfc[i] & mask) >> rshift)" def _calc_cell_size_and_depth(self): self.cell_size = self.hmin * self.radius_scale * (1. + 1e-3) # Logic from gpu_domain_manager.py if self.cell_size < 1e-6: self.cell_size = 1 # This lets the tree grow up to log2(128) = 7 layers beyond what it # could have previously. Pretty arbitrary. 
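# --- Annotation (not part of the original source) ---
# The cell-size/depth logic of ``_calc_cell_size_and_depth`` can be read as a
# free-standing function.  ``calc_max_depth`` below is a hypothetical mirror of
# that method, assuming the same 1e-3 padding, the 1e-6 degenerate-h guard and
# the divide-by-128 growth headroom:

```python
import numpy as np


def calc_max_depth(hmin, radius_scale, max_width):
    # Free-standing mirror of PointTree._calc_cell_size_and_depth
    # (illustrative only; the real method works on instance state).
    cell_size = hmin * radius_scale * (1. + 1e-3)
    if cell_size < 1e-6:      # guard against a degenerate (zero) h
        cell_size = 1.0
    cell_size /= 128          # headroom: up to log2(128) = 7 extra levels
    return int(np.ceil(np.log2(max_width / cell_size))) + 1


# e.g. h = 0.01, radius scale 2, unit-width domain:
print(calc_max_depth(0.01, 2.0, 1.0))  # 14
```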
self.cell_size /= 128 max_width = max((self.xmax[i] - self.xmin[i]) for i in range(self.dim)) self.max_depth = int(np.ceil(np.log2(max_width / self.cell_size))) + 1 def _bin(self): dtype = ctype_to_dtype(self.c_type) fill_particle_data = self.helper.get_kernel("fill_particle_data", dim=self.dim, xvars=self.xvars) pa_gpu = self.pa.gpu args = [getattr(pa_gpu, v).dev for v in self.xvars] args += [dtype(self.cell_size), self.make_vec(*[self.xmin[i] for i in range(self.dim)]), self.sfc.dev, self.pids.dev] fill_particle_data(*args) def get_index_constants(self, depth): rshift = np.uint8(self.dim * (self.max_depth - depth - 1)) mask = np.uint64((2 ** self.dim - 1) << rshift) return mask, rshift def _adjust_domain_width(self): # Convert width of domain to a power of 2 multiple of cell size # (Optimal width for cells) # Note that this makes sure the width in _all_ dimensions is the # same. We want our nodes to be cubes ideally. cell_size = self.hmin * self.radius_scale * (1. + 1e-5) max_width = np.max(self.xmax - self.xmin) new_width = cell_size * 2.0 ** int( np.ceil(np.log2(max_width / cell_size))) diff = (new_width - (self.xmax - self.xmin)) / 2 self.xmin -= diff self.xmax += diff def setup_build(self, xmin, xmax, hmin): self._setup_build() self.pa.sync() self.xmin = np.array(xmin) self.xmax = np.array(xmax) self.hmin = hmin self._adjust_domain_width() self._calc_cell_size_and_depth() self._bin() def build(self, fixed_depth=None): self._build(self.max_depth if fixed_depth is None else fixed_depth) self._get_unique_cids_and_count() def refresh(self, xmin, xmax, hmin, fixed_depth=None): self.setup_build(xmin, xmax, hmin) self.build(fixed_depth) def _sort(self): """Set tree as being sorted The particle array needs to be aligned by the caller! 
""" if not self.sorted: self.pa.sync() self.sorted = 1 ########################################################################### # General algorithms ########################################################################### def set_node_bounds(self): vector_data_t = get_vector_dtype(self.c_type, self.dim) dtype = ctype_to_dtype(self.c_type) self.node_xmin = self.allocate_node_prop(vector_data_t) self.node_xmax = self.allocate_node_prop(vector_data_t) self.node_hmax = self.allocate_node_prop(dtype) params = _get_node_bound_kernel_parameters(self.dim, self.c_type, self.xvars) set_node_bounds = self.tree_bottom_up( params['args'], params['setup'], params['leaf_operation'], params['node_operation'], params['output_expr'], preamble=_get_macros_preamble(self.c_type, self.sorted, self.dim) ) set_node_bounds = profile_kernel(set_node_bounds, 'set_node_bounds') pa_gpu = self.pa.gpu dtype = ctype_to_dtype(self.c_type) args = [self, self.pids.dev] args += [getattr(pa_gpu, v).dev for v in self.xvars] args += [pa_gpu.h.dev, dtype(self.radius_scale), self.node_xmin.dev, self.node_xmax.dev, self.node_hmax.dev] set_node_bounds(*args) ########################################################################### # Nearest Neighbor Particle Search (NNPS) ########################################################################### def _leaf_neighbor_operation(self, tree_src, args, setup, operation, output_expr): # Template for finding neighboring cids of a cell. 
params = _get_leaf_neighbor_kernel_parameters(self.c_type, self.dim, args, setup, operation, output_expr) kernel = tree_src.leaf_tree_traverse( params['args'], params['setup'], params['node_operation'], params['leaf_operation'], params['output_expr'], preamble=_get_macros_preamble(self.c_type, self.sorted, self.dim) ) def callable(*args): return kernel(tree_src, self, tree_src.node_xmin.dev, tree_src.node_xmax.dev, tree_src.node_hmax.dev, self.node_xmin.dev, self.node_xmax.dev, self.node_hmax.dev, *args) return callable def find_neighbor_cids(self, tree_src): neighbor_cid_count = Array(np.uint32, n=self.unique_cid_count + 1, backend='opencl') find_neighbor_cid_counts = self._leaf_neighbor_operation( tree_src, args="uint2 *pbounds, int *cnt", setup="int count=0", operation=""" if (pbounds[cid_src].s0 < pbounds[cid_src].s1) count++; """, output_expr="cnt[i] = count;" ) find_neighbor_cid_counts = profile_kernel( find_neighbor_cid_counts, 'find_neighbor_cid_count') find_neighbor_cid_counts(tree_src.pbounds.dev, neighbor_cid_count.dev) neighbor_psum = _get_neighbor_count_prefix_sum_kernel(self.ctx) neighbor_psum(neighbor_cid_count.dev) total_neighbors = int(neighbor_cid_count.dev[-1].get()) neighbor_cids = Array(np.uint32, n=total_neighbors, backend='opencl') find_neighbor_cids = self._leaf_neighbor_operation( tree_src, args="uint2 *pbounds, int *cnt, int *neighbor_cids", setup="int offset=cnt[i];", operation=""" if (pbounds[cid_src].s0 < pbounds[cid_src].s1) neighbor_cids[offset++] = cid_src; """, output_expr="" ) find_neighbor_cids = profile_kernel( find_neighbor_cids, 'find_neighbor_cids') find_neighbor_cids(tree_src.pbounds.dev, neighbor_cid_count.dev, neighbor_cids.dev) return neighbor_cid_count, neighbor_cids # TODO?: 1D and 2D NNPS not properly supported here. # Just assuming the other spatial coordinates (y and z in case of 1D, # and z in case of 2D) are set to 0. 
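# --- Annotation (not part of the original source) ---
# All of the ``find_neighbor*`` kernels in point_tree.mako share a single
# acceptance test: dist^2 < (radius_scale * max(h_src, h_dst))^2.  A
# brute-force host-side reference of just that criterion is handy for
# cross-checking small cases (the function and sample values below are
# hypothetical, not part of the pysph API):

```python
import numpy as np


def brute_force_neighbors(xs, ys, zs, hs, xd, yd, zd, hd, radius_scale):
    # O(n_dst * n_src) reference for the kernels' acceptance test:
    # dist^2 < (radius_scale * max(h_src, h_dst))^2.
    nbrs = []
    for i in range(len(xd)):
        r = radius_scale * np.maximum(hs, hd[i])
        dist2 = (xs - xd[i]) ** 2 + (ys - yd[i]) ** 2 + (zs - zd[i]) ** 2
        nbrs.append(np.where(dist2 < r * r)[0])
    return nbrs


# Three particles on a line; each particle is its own neighbor since
# dist^2 = 0 always passes the strict inequality's right-hand side.
x = np.array([0.0, 0.5, 2.0])
h = np.array([0.3, 0.3, 0.3])
z = np.zeros_like(x)
print(brute_force_neighbors(x, z, z, h, x, z, z, h, 2.0))
```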
def find_neighbor_lengths_elementwise(self, neighbor_cid_count, neighbor_cids, tree_src, neighbor_count): self.check_nnps_compatibility(tree_src) pa_gpu_dst = self.pa.gpu pa_gpu_src = tree_src.pa.gpu dtype = ctype_to_dtype(self.c_type) find_neighbor_counts = self.helper.get_kernel( 'find_neighbor_counts_elementwise', sorted=self.sorted ) find_neighbor_counts(self.unique_cids_map.dev, tree_src.pids.dev, self.pids.dev, self.cids.dev, tree_src.pbounds.dev, self.pbounds.dev, pa_gpu_src.x.dev, pa_gpu_src.y.dev, pa_gpu_src.z.dev, pa_gpu_src.h.dev, pa_gpu_dst.x.dev, pa_gpu_dst.y.dev, pa_gpu_dst.z.dev, pa_gpu_dst.h.dev, dtype(self.radius_scale), neighbor_cid_count.dev, neighbor_cids.dev, neighbor_count.dev) def find_neighbors_elementwise(self, neighbor_cid_count, neighbor_cids, tree_src, start_indices, neighbors): self.check_nnps_compatibility(tree_src) pa_gpu_dst = self.pa.gpu pa_gpu_src = tree_src.pa.gpu dtype = ctype_to_dtype(self.c_type) find_neighbors = self.helper.get_kernel( 'find_neighbors_elementwise', sorted=self.sorted) find_neighbors(self.unique_cids_map.dev, tree_src.pids.dev, self.pids.dev, self.cids.dev, tree_src.pbounds.dev, self.pbounds.dev, pa_gpu_src.x.dev, pa_gpu_src.y.dev, pa_gpu_src.z.dev, pa_gpu_src.h.dev, pa_gpu_dst.x.dev, pa_gpu_dst.y.dev, pa_gpu_dst.z.dev, pa_gpu_dst.h.dev, dtype(self.radius_scale), neighbor_cid_count.dev, neighbor_cids.dev, start_indices.dev, neighbors.dev) def _is_valid_nnps_wgs(self): # Max work group size can only be found by building the # kernel. 
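# --- Annotation (not part of the original source) ---
# ``_is_valid_nnps_wgs`` relies on the fact that a kernel's work-group-size
# limit only surfaces once the kernel is actually built (hence the
# try/except on DeviceWGSException below).  The generic probe-and-fall-back
# pattern looks like this; ``fake_build`` is a stand-in for a real device
# kernel build:

```python
def largest_valid_wgs(build, candidates=(128, 64, 32)):
    # Try progressively smaller work group sizes; the first one that
    # builds without raising is the largest usable size.
    for wgs in candidates:
        try:
            build(wgs)
        except Exception:
            continue
        return wgs
    return None


def fake_build(wgs):
    # Stand-in for a device kernel build that rejects large work groups.
    if wgs > 64:
        raise ValueError("work group size too large for this device")


print(largest_valid_wgs(fake_build))  # 64
```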
try: find_neighbor_counts = self.helper.get_kernel( 'find_neighbor_counts', sorted=self.sorted, wgs=self.leaf_size ) find_neighbor = self.helper.get_kernel( 'find_neighbors', sorted=self.sorted, wgs=self.leaf_size ) except DeviceWGSException: return False else: return True def find_neighbor_lengths(self, neighbor_cid_count, neighbor_cids, tree_src, neighbor_count, use_partitions=False): self.check_nnps_compatibility(tree_src) wgs = self.leaf_size pa_gpu_dst = self.pa.gpu pa_gpu_src = tree_src.pa.gpu dtype = ctype_to_dtype(self.c_type) def find_neighbor_counts_for_partition(partition_cids, partition_size, partition_wgs, q=None): find_neighbor_counts = self.helper.get_kernel( 'find_neighbor_counts', sorted=self.sorted, wgs=wgs ) find_neighbor_counts(partition_cids.dev, tree_src.pids.dev, self.pids.dev, self.cids.dev, tree_src.pbounds.dev, self.pbounds.dev, pa_gpu_src.x.dev, pa_gpu_src.y.dev, pa_gpu_src.z.dev, pa_gpu_src.h.dev, pa_gpu_dst.x.dev, pa_gpu_dst.y.dev, pa_gpu_dst.z.dev, pa_gpu_dst.h.dev, dtype(self.radius_scale), neighbor_cid_count.dev, neighbor_cids.dev, neighbor_count.dev, gs=(partition_wgs * partition_size,), ls=(partition_wgs,), queue=(get_queue() if q is None else q)) if use_partitions and wgs > 32: if wgs < 128: wgs1 = 32 else: wgs1 = 64 m1, n1 = self.get_leaf_size_partitions(0, wgs1) find_neighbor_counts_for_partition(m1, n1, min(wgs, wgs1)) m2, n2 = self.get_leaf_size_partitions(wgs1, wgs) find_neighbor_counts_for_partition(m2, n2, wgs) else: find_neighbor_counts_for_partition( self.unique_cids, self.unique_cid_count, wgs) def find_neighbors(self, neighbor_cid_count, neighbor_cids, tree_src, start_indices, neighbors, use_partitions=False): self.check_nnps_compatibility(tree_src) wgs = self.leaf_size if self.leaf_size % 32 == 0 else \ self.leaf_size + 32 - self.leaf_size % 32 pa_gpu_dst = self.pa.gpu pa_gpu_src = tree_src.pa.gpu dtype = ctype_to_dtype(self.c_type) def find_neighbors_for_partition(partition_cids, partition_size, partition_wgs, 
q=None): find_neighbors = self.helper.get_kernel('find_neighbors', sorted=self.sorted, wgs=wgs) find_neighbors(partition_cids.dev, tree_src.pids.dev, self.pids.dev, self.cids.dev, tree_src.pbounds.dev, self.pbounds.dev, pa_gpu_src.x.dev, pa_gpu_src.y.dev, pa_gpu_src.z.dev, pa_gpu_src.h.dev, pa_gpu_dst.x.dev, pa_gpu_dst.y.dev, pa_gpu_dst.z.dev, pa_gpu_dst.h.dev, dtype(self.radius_scale), neighbor_cid_count.dev, neighbor_cids.dev, start_indices.dev, neighbors.dev, gs=(partition_wgs * partition_size,), ls=(partition_wgs,), queue=(get_queue() if q is None else q)) if use_partitions and wgs > 32: if wgs < 128: wgs1 = 32 else: wgs1 = 64 m1, n1 = self.get_leaf_size_partitions(0, wgs1) fraction = (n1 / int(self.unique_cid_count)) if fraction > 0.3: find_neighbors_for_partition(m1, n1, wgs1) m2, n2 = self.get_leaf_size_partitions(wgs1, wgs) assert (n1 + n2 == self.unique_cid_count) find_neighbors_for_partition(m2, n2, wgs) return else: find_neighbors_for_partition( self.unique_cids, self.unique_cid_count, wgs) def check_nnps_compatibility(self, tree): """Check if tree types and parameters are compatible for NNPS Two trees must satisfy a few conditions so that NNPS can be performed on one tree using the other as reference. 
In this case, the following conditions must be satisfied - 1) Currently both should be instances of point_tree.PointTree 2) Both must have the same sortedness 3) Both must use the same floating-point datatype 4) Both must have the same leaf sizes """ if not isinstance(tree, PointTree): raise IncompatibleTreesException( "Both trees must be of the same type for NNPS" ) if self.sorted != tree.sorted: raise IncompatibleTreesException( "Tree sortedness needs to be the same for NNPS" ) if self.c_type != tree.c_type or self.use_double != tree.use_double: raise IncompatibleTreesException( "Tree floating-point data types need to be the same for NNPS" ) if self.leaf_size != tree.leaf_size: raise IncompatibleTreesException( "Tree leaf sizes need to be the same for NNPS (%d != %d)" % ( self.leaf_size, tree.leaf_size) ) return pysph-master/pysph/base/tree/tests/test_point_tree.py import numpy as np import unittest from pytest import importorskip cl = importorskip('pyopencl') import pysph.base.particle_array from pysph.base.device_helper import DeviceHelper # noqa: E402 from pysph.base.utils import get_particle_array # noqa: E402 from pysph.base.tree.point_tree import PointTree # noqa: E402 def _gen_uniform_dataset_2d(n, h, seed=None): if seed is not None: np.random.seed(seed) u = np.random.uniform pa = get_particle_array(x=u(size=n), y=u(size=n), h=h) h = DeviceHelper(pa, backend='opencl') pa.set_device_helper(h) return pa def _gen_uniform_dataset(n, h, seed=None): if seed is not None: np.random.seed(seed) u = np.random.uniform pa = get_particle_array(x=u(size=n), y=u(size=n), z=u(size=n), h=h) h = DeviceHelper(pa, backend='opencl') pa.set_device_helper(h) return pa def _dfs_find_leaf(tree): leaf_id_count = tree.allocate_leaf_prop(np.int32) dfs_find_leaf = tree.leaf_tree_traverse( "int
*leaf_id_count", setup="leaf_id_count[i] = 0;", node_operation="if (cid_dst == cid_src) leaf_id_count[i]++", leaf_operation="if (cid_dst == cid_src) leaf_id_count[i]++", output_expr="" ) dfs_find_leaf(tree, tree, leaf_id_count.dev) return leaf_id_count.dev.get() def _check_children_overlap_2d(node_xmin, node_xmax, child_offset): for j in range(4): nxmin1 = node_xmin[child_offset + j] nxmax1 = node_xmax[child_offset + j] for k in range(4): nxmin2 = node_xmin[child_offset + k] nxmax2 = node_xmax[child_offset + k] if j != k: assert (nxmax1[0] <= nxmin2[0] or nxmax2[0] <= nxmin1[0] or nxmax1[1] <= nxmin2[1] or nxmax2[1] <= nxmin1[1]) def _check_children_overlap(node_xmin, node_xmax, child_offset): for j in range(8): nxmin1 = node_xmin[child_offset + j] nxmax1 = node_xmax[child_offset + j] for k in range(8): nxmin2 = node_xmin[child_offset + k] nxmax2 = node_xmax[child_offset + k] if j != k: assert (nxmax1[0] <= nxmin2[0] or nxmax2[0] <= nxmin1[0] or nxmax1[1] <= nxmin2[1] or nxmax2[1] <= nxmin1[1] or nxmax1[2] <= nxmin2[2] or nxmax2[2] <= nxmin1[2]) def _test_tree_structure(tree, k): # Traverse tree and check if max depth is correct # Additionally check if particle sets of siblings is disjoint # and union of particle sets of a nodes children = nodes own children # # This effectively also checks that no particle is present in two nodes of # the same level s = [0, ] d = [0, ] offsets = tree.offsets.dev.get() pbounds = tree.pbounds.dev.get() max_depth = tree.depth max_depth_here = 0 pids = set() while len(s) != 0: n = s[0] depth = d[0] max_depth_here = max(max_depth_here, depth) pbound = pbounds[n] assert (depth <= max_depth) del s[0] del d[0] if offsets[n] == -1: for i in range(pbound[0], pbound[1]): pids.add(i) continue # Particle ranges of children are contiguous # and are contained within parent's particle range start = pbound[0] for i in range(k): child_idx = offsets[n] + i assert (pbounds[child_idx][0] == start) assert (pbounds[child_idx][0] <= 
pbounds[child_idx][1]) start = pbounds[child_idx][1] assert (child_idx < len(offsets)) s.append(child_idx) d.append(depth + 1) assert (start == pbound[1]) class QuadtreeTestCase(unittest.TestCase): def setUp(self): use_double = False self.N = 3000 pa = _gen_uniform_dataset_2d(self.N, 0.2, seed=0) self.quadtree = PointTree(pa, radius_scale=1., use_double=use_double, leaf_size=32, dim=2) self.leaf_size = 32 self.quadtree.refresh(np.array([0., 0.]), np.array([1., 1.]), np.min(pa.h)) self.pa = pa def test_pids(self): pids = self.quadtree.pids.dev.get() s = set() for i in range(len(pids)): if 0 <= pids[i] < self.N: s.add(pids[i]) assert (len(s) == self.N) def test_depth_and_inclusiveness(self): _test_tree_structure(self.quadtree, 4) def test_node_bounds(self): self.quadtree.set_node_bounds() pids = self.quadtree.pids.dev.get() offsets = self.quadtree.offsets.dev.get() pbounds = self.quadtree.pbounds.dev.get() node_xmin = self.quadtree.node_xmin.dev.get() node_xmax = self.quadtree.node_xmax.dev.get() node_hmax = self.quadtree.node_hmax.dev.get() x = self.pa.x[pids] y = self.pa.y[pids] h = self.pa.h[pids] for i in range(len(offsets)): nxmin = node_xmin[i] nxmax = node_xmax[i] nhmax = node_hmax[i] for j in range(pbounds[i][0], pbounds[i][1]): assert (nxmin[0] <= np.float32(x[j]) <= nxmax[0]) assert (nxmin[1] <= np.float32(y[j]) <= nxmax[1]) assert (np.float32(h[j]) <= nhmax) # Check that children nodes don't overlap if offsets[i] != -1: _check_children_overlap_2d(node_xmin, node_xmax, offsets[i]) def test_dfs_traversal(self): leaf_id_count = _dfs_find_leaf(self.quadtree) np.testing.assert_array_equal( np.ones(self.quadtree.unique_cid_count, dtype=np.int32), leaf_id_count ) def test_get_leaf_size_partitions(self): a, b = np.random.randint(0, self.leaf_size, size=2) a, b = min(a, b), max(a, b) pbounds = self.quadtree.pbounds.dev.get() offsets = self.quadtree.offsets.dev.get() mapping, count = self.quadtree.get_leaf_size_partitions(a, b) mapping = mapping.dev.get() 
map_set_gpu = {mapping[i] for i in range(count)} map_set_here = {i for i in range(len(offsets)) if offsets[i] == -1 and a < (pbounds[i][1] - pbounds[i][0]) <= b} assert (map_set_gpu == map_set_here) def tearDown(self): del self.quadtree class OctreeTestCase(unittest.TestCase): def setUp(self): use_double = False self.N = 3000 pa = _gen_uniform_dataset(self.N, 0.2, seed=0) self.octree = PointTree(pa, dim=3, radius_scale=1., use_double=use_double, leaf_size=64) self.leaf_size = 64 self.octree.refresh(np.array([0., 0., 0.]), np.array([1., 1., 1.]), np.min(pa.h)) self.pa = pa def test_pids(self): pids = self.octree.pids.dev.get() s = set() for i in range(len(pids)): if 0 <= pids[i] < self.N: s.add(pids[i]) assert (len(s) == self.N) def test_depth_and_inclusiveness(self): _test_tree_structure(self.octree, 8) def test_node_bounds(self): self.octree.set_node_bounds() print(self.octree.node_hmax.dev.get()) pids = self.octree.pids.dev.get() offsets = self.octree.offsets.dev.get() pbounds = self.octree.pbounds.dev.get() node_xmin = self.octree.node_xmin.dev.get() node_xmax = self.octree.node_xmax.dev.get() node_hmax = self.octree.node_hmax.dev.get() x = self.pa.x[pids] y = self.pa.y[pids] z = self.pa.z[pids] h = self.pa.h[pids] for i in range(len(offsets)): nxmin = node_xmin[i] nxmax = node_xmax[i] nhmax = node_hmax[i] for j in range(pbounds[i][0], pbounds[i][1]): assert (nxmin[0] <= np.float32(x[j]) <= nxmax[0]) assert (nxmin[1] <= np.float32(y[j]) <= nxmax[1]) assert (nxmin[2] <= np.float32(z[j]) <= nxmax[2]) assert (np.float32(h[j]) <= nhmax) # Check that children nodes don't overlap if offsets[i] != -1: _check_children_overlap(node_xmin, node_xmax, offsets[i]) def test_dfs_traversal(self): leaf_id_count = _dfs_find_leaf(self.octree) np.testing.assert_array_equal( np.ones(self.octree.unique_cid_count, dtype=np.int32), leaf_id_count ) def test_get_leaf_size_partitions(self): a, b = np.random.randint(0, self.leaf_size, size=2) a, b = min(a, b), max(a, b) pbounds = 
self.octree.pbounds.dev.get() offsets = self.octree.offsets.dev.get() mapping, count = self.octree.get_leaf_size_partitions(a, b) mapping = mapping.dev.get() map_set_gpu = {mapping[i] for i in range(count)} map_set_here = {i for i in range(len(offsets)) if offsets[i] == -1 and a < (pbounds[i][1] - pbounds[i][0]) <= b} assert (map_set_gpu == map_set_here) def tearDown(self): del self.octree pysph-master/pysph/base/tree/tree.mako000066400000000000000000000057761356347341600203710ustar00rootroot00000000000000//CL// <%def name="preamble(data_t)" cached="False"> char eye_index(ulong sfc, ulong mask, char rshift) { return ((sfc & mask) >> rshift); } <%def name="reorder_particles_args(k, data_vars, data_var_ctypes, const_vars, const_var_ctypes, index_code)" cached="False"> int *pids, int *cids, char *seg_flag, uint2 *pbounds, int *offsets, uint${k} *octant_vector, int *pids_next, int *cids_next, % for var, ctype in zip(data_vars, data_var_ctypes): ${ctype} *${var}, % endfor % for var, ctype in zip(data_vars, data_var_ctypes): ${ctype} *${var}_next, % endfor % for var, ctype in zip(const_vars, const_var_ctypes): ${ctype} ${var}, % endfor uint csum_nodes_prev <%def name="reorder_particles_src(k, data_vars, data_var_ctypes, const_vars, const_var_ctypes, index_code)" cached="False"> int curr_cid = cids[i] - csum_nodes_prev; if (curr_cid < 0 || offsets[curr_cid] == -1) { cids_next[i] = cids[i]; pids_next[i] = pids[i]; % for var in data_vars: ${var}_next[i] = ${var}[i]; % endfor } else { uint2 pbound_here = pbounds[curr_cid]; char octant = (${index_code}); global uint *octv = (global uint *)(octant_vector + i); int sum = octv[octant]; sum -= (octant == 0) ? 0 : octv[octant - 1]; octv = (global uint *)(octant_vector + pbound_here.s1 - 1); sum += (octant == 0) ? 
0 : octv[octant - 1]; uint new_index = pbound_here.s0 + sum - 1; pids_next[new_index] = pids[i]; cids_next[new_index] = offsets[curr_cid] + octant; % for var in data_vars: ${var}_next[new_index] = ${var}[i]; % endfor } <%def name="append_layer_args()" cached="False"> int *offsets_next, uint2 *pbounds_next, int *offsets, uint2 *pbounds, int curr_offset, char is_last_level <%def name="append_layer_src()" cached="False"> pbounds[curr_offset + i] = pbounds_next[i]; offsets[curr_offset + i] = is_last_level ? -1 : offsets_next[i]; <%def name="set_node_data_args(k)", cached="False"> int *offsets_prev, uint2 *pbounds_prev, int *offsets, uint2 *pbounds, char *seg_flag, uint${k} *octant_vector, uint csum_nodes, uint N <%def name="set_node_data_src(k)", cached="False"> uint2 pbound_here = pbounds_prev[i]; int child_offset = offsets_prev[i]; if (child_offset == -1) { PYOPENCL_ELWISE_CONTINUE; } child_offset -= csum_nodes; uint${k} octv = octant_vector[pbound_here.s1 - 1]; % for i in range(k): % if i == 0: pbounds[child_offset] = (uint2)(pbound_here.s0, pbound_here.s0 + octv.s0); % else: pbounds[child_offset + ${i}] = (uint2)(pbound_here.s0 + octv.s${i - 1}, pbound_here.s0 + octv.s${i}); if (pbound_here.s0 + octv.s${i - 1} < N) seg_flag[pbound_here.s0 + octv.s${i - 1}] = 1; % endif % endfor pysph-master/pysph/base/tree/tree.py000066400000000000000000000633361356347341600200660ustar00rootroot00000000000000import os import numpy as np from pytools import memoize import pyopencl as cl import pyopencl.cltypes from pyopencl.elementwise import ElementwiseKernel from pyopencl.scan import GenericScanKernel from compyle.opencl import get_context, get_queue, named_profile from compyle.array import Array from pysph.base.tree.helpers import get_vector_dtype, get_helper NODE_KERNEL_TEMPLATE = r""" uint node_idx = i; %(setup)s; int child_offset = offsets[node_idx]; if (child_offset == -1) { uint2 pbound = pbounds[node_idx]; if (pbound.s0 < pbound.s1) { %(leaf_operation)s; } } else { 
%(node_operation)s; } %(output_expr)s; """ LEAF_DFS_TEMPLATE = r""" /* * Owner of properties - * dst tree: cids, unique_cids * src tree: offsets */ int cid_dst = unique_cids[i]; /* * Assuming max depth of 21 * stack_idx is also equal to current layer of tree * child_idx = number of children iterated through * idx_stack = current node */ char child_stack[%(max_depth)s]; int cid_stack[%(max_depth)s]; char idx = 0; child_stack[0] = 0; cid_stack[0] = 1; char flag; int cid_src; int child_offset; %(setup)s; while (idx >= 0) { // Recurse to find either leaf node or invalid node cid_src = cid_stack[idx]; child_offset = offsets[cid_src]; %(common_operation)s; while (child_offset != -1) { %(node_operation)s; idx++; cid_src = child_offset; cid_stack[idx] = cid_src; child_stack[idx] = 0; child_offset = offsets[cid_src]; } if (child_offset == -1) { %(leaf_operation)s; } // Recurse back to find node with a valid neighbor while (child_stack[idx] >= %(k)s-1 && idx >= 0) idx--; // Iterate to next neighbor if (idx >= 0) { cid_stack[idx]++; child_stack[idx]++; } } %(output_expr)s; """ POINT_DFS_TEMPLATE = r""" /* * Owner of properties - * dst tree: cids, unique_cids * src tree: offsets */ int cid_dst = cids[i]; /* * Assuming max depth of 21 * stack_idx is also equal to current layer of tree * child_idx = number of children iterated through * idx_stack = current node */ char child_stack[%(max_depth)s]; int cid_stack[%(max_depth)s]; char idx = 0; child_stack[0] = 0; cid_stack[0] = 1; char flag; int cid_src; int child_offset; %(setup)s; while (idx >= 0) { // Recurse to find either leaf node or invalid node cid_src = cid_stack[idx]; child_offset = offsets[cid_src]; %(common_operation)s; while (child_offset != -1) { %(node_operation)s; idx++; cid_src = child_offset; cid_stack[idx] = cid_src; child_stack[idx] = 0; child_offset = offsets[cid_src]; } if (child_offset == -1) { %(leaf_operation)s; } // Recurse back to find node with a valid neighbor while (child_stack[idx] >= %(k)s-1 && idx >= 
0) idx--; // Iterate to next neighbor if (idx >= 0) { cid_stack[idx]++; child_stack[idx]++; } } %(output_expr)s; """ def get_M_array_initialization(k): result = "uint%(k)s constant M[%(k)s] = {" % dict(k=k) for i in range(k): result += "(uint%(k)s)(%(init_value)s)," % dict( k=k, init_value=','.join((np.arange(k) >= i).astype( int).astype(str)) ) result += "};" return result # This segmented scan is the core of the tree construction algorithm. # Please look at a few segmented scan examples here if you haven't already # https://en.wikipedia.org/wiki/Segmented_scan # # This scan generates an array `v` of k-dimensional vectors where v[i][j] # corresponds to how many particles are in the same node as particle `i`, # are ordered before it and are going to lie in childs 0..j. # # For example, consider the first layer. In this case v[23][4] gives the # particles numbered from 0 to 23 which are going to lie in children 0 to 4 # of the root node. The segment flag bits just let us extend this to further # layers. # # The array of vectors M are just a math trick. M[j] gives a vector with j # zeros and k - j ones (With k = 4, M[1] = {0, 1, 1, 1}). Adding this up in # the prefix sum directly gives us the required result. @named_profile('particle_reordering') @memoize def _get_particle_kernel(ctx, k, args, index_code): return GenericScanKernel( ctx, get_vector_dtype('uint', k), neutral="0", arguments=r"""__global char *seg_flag, __global uint%(k)s *prefix_sum_vector, """ % dict(k=k) + args, input_expr="M[%(index_code)s]" % dict(index_code=index_code), scan_expr="(across_seg_boundary ? b : a + b)", is_segment_start_expr="seg_flag[i]", output_statement=r"""prefix_sum_vector[i]=item;""", preamble=get_M_array_initialization(k) ) # The offset of a node's child is given by: # offset of first child in next layer + k * (number of non-leaf nodes before # given node). # If the node is a leaf, we set this value to be -1. 
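The M-array trick described in the comment above can be checked on the CPU. The following is a minimal NumPy sketch (not part of PySPH; `segmented_child_scan` is a hypothetical helper) of what the segmented `GenericScanKernel` computes: an inclusive, per-segment sum of `M[octant]` vectors, so that each particle's entry counts how many particles in its node, up to and including itself, fall in children 0..j.

```python
import numpy as np

def segmented_child_scan(octants, seg_flag, k=4):
    """Inclusive segmented scan of M[octant] vectors (CPU sketch)."""
    # M[j] has j zeros followed by k - j ones, e.g. M[1] = [0, 1, 1, 1].
    M = np.array([(np.arange(k) >= j).astype(int) for j in range(k)])
    v = np.zeros((len(octants), k), dtype=int)
    for i, octant in enumerate(octants):
        # A set segment flag resets the running sum (across_seg_boundary).
        prev = v[i - 1] if (i > 0 and not seg_flag[i]) else 0
        v[i] = prev + M[octant]
    return v

# One segment (a single node); particle i goes to child octants[i].
octants = [2, 0, 1, 2]
seg_flag = [1, 0, 0, 0]   # a set flag marks the start of a segment
v = segmented_child_scan(octants, seg_flag)
# Last row is [1, 2, 4, 4]: 1 particle in child 0, 1 in child 1,
# 2 in child 2 and 0 in child 3.
```

Subtracting adjacent components of a particle's vector (as the reorder kernel does with `octv[octant] - octv[octant - 1]`) then gives each particle's position among the particles headed to the same child.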
@named_profile('set_offset') @memoize def _get_set_offset_kernel(ctx, k, leaf_size): return GenericScanKernel( ctx, np.int32, neutral="0", arguments=r"""__global uint2 *pbounds, __global uint *offsets, __global int *leaf_count, int csum_nodes_next""", input_expr="(pbounds[i].s1 - pbounds[i].s0 > %(leaf_size)s)" % { 'leaf_size': leaf_size}, scan_expr="a + b", output_statement=r"""{ offsets[i] = ((pbounds[i].s1 - pbounds[i].s0 > %(leaf_size)s) ? csum_nodes_next + (%(k)s * (item - 1)) : -1); if (i == N - 1) { *leaf_count = (N - item); } }""" % {'leaf_size': leaf_size, 'k': k} ) # Each particle belongs to a given leaf / last layer node and this cell is # indexed by the cid array. # The unique_cids algorithm # 1) unique_cids_map: unique_cids_map[i] gives the number of unique_cids # before index i. # 2) unique_cids: List of unique cids. # 3) unique_cids_count: Number of unique cids # # Note that unique_cids also gives us the list of leaves / last layer nodes # which are not empty. @named_profile('unique_cids') @memoize def _get_unique_cids_kernel(ctx): return GenericScanKernel( ctx, np.int32, neutral="0", arguments=r"""int *cids, int *unique_cids_map, int *unique_cids, int *unique_cids_count""", input_expr="(i == 0 || cids[i] != cids[i-1])", scan_expr="a + b", output_statement=r""" if (item != prev_item) { unique_cids[item - 1] = cids[i]; } unique_cids_map[i] = item - 1; if (i == N - 1) *unique_cids_count = item; """ ) # A lot of leaves are going to be empty. Not really sure if this guy is of # any use. 
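Since the cids of the reordered particles are grouped into runs of equal values, the unique_cids scan described above reduces to a run-length style prefix sum. A minimal NumPy sketch of the same computation (`unique_cids_scan` is a hypothetical helper, not the actual kernel):

```python
import numpy as np

def unique_cids_scan(cids):
    """CPU sketch of the unique_cids GenericScanKernel."""
    cids = np.asarray(cids)
    # True wherever a new run of equal cids begins (cf. input_expr above).
    starts = np.ones(len(cids), dtype=bool)
    starts[1:] = cids[1:] != cids[:-1]
    item = np.cumsum(starts)            # inclusive prefix sum
    unique_cids_map = item - 1          # number of unique cids before i
    unique_cids = cids[starts]          # first element of each run
    return unique_cids, unique_cids_map, int(item[-1])

cids = [3, 3, 7, 7, 7, 9]   # cids of consecutive particles
u, m, count = unique_cids_scan(cids)
# u -> [3, 7, 9], m -> [0, 0, 1, 1, 1, 2], count -> 3
```

As in the kernel, `unique_cids` doubles as the list of non-empty leaves, since every particle belongs to exactly one leaf.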
@named_profile('leaves') @memoize def _get_leaves_kernel(ctx, leaf_size): return GenericScanKernel( ctx, np.int32, neutral="0", arguments="int *offsets, uint2 pbounds, int *leaf_cids, " "int *num_leaves", input_expr="(pbounds[i].s1 - pbounds[i].s0 <= %(leaf_size)s)" % dict( leaf_size=leaf_size ), scan_expr="a+b", output_statement=r""" if (item != prev_item) { leaf_cids[item - 1] = i; } if (i == N - 1) *num_leaves = item; """ ) @named_profile("group_cids") @memoize def _get_cid_groups_kernel(ctx): return GenericScanKernel( ctx, np.uint32, neutral="0", arguments="""int *unique_cids, uint2 *pbounds, int *group_cids, int *group_count, int gmin, int gmax""", input_expr="pass(pbounds[unique_cids[i]], gmin, gmax)", scan_expr="(a + b)", output_statement=r""" if (item != prev_item) { group_cids[item - 1] = unique_cids[i]; } if (i == N - 1) *group_count = item; """, preamble=""" char pass(uint2 pbound, int gmin, int gmax) { int leaf_size = pbound.s1 - pbound.s0; return (leaf_size > gmin && leaf_size <= gmax); } """ ) @memoize def tree_bottom_up(ctx, args, setup, leaf_operation, node_operation, output_expr, preamble=""): operation = NODE_KERNEL_TEMPLATE % dict( setup=setup, leaf_operation=leaf_operation, node_operation=node_operation, output_expr=output_expr ) args = ', '.join(["int *offsets, uint2 *pbounds", args]) kernel = ElementwiseKernel(ctx, args, operation=operation, preamble=preamble) def callable(tree, *args): csum_nodes = tree.total_nodes out = None for i in range(tree.depth, -1, -1): csum_nodes_next = csum_nodes csum_nodes -= tree.num_nodes[i] out = kernel(tree.offsets.dev, tree.pbounds.dev, *args, slice=slice(csum_nodes, csum_nodes_next)) return out return callable @memoize def leaf_tree_traverse(ctx, k, args, setup, node_operation, leaf_operation, output_expr, common_operation, preamble=""): # FIXME: variable max_depth operation = LEAF_DFS_TEMPLATE % dict( setup=setup, leaf_operation=leaf_operation, node_operation=node_operation, 
common_operation=common_operation, output_expr=output_expr, max_depth=21, k=k ) args = ', '.join(["int *unique_cids, int *cids, int *offsets", args]) kernel = ElementwiseKernel( ctx, args, operation=operation, preamble=preamble) def callable(tree_src, tree_dst, *args): return kernel( tree_dst.unique_cids.dev[:tree_dst.unique_cid_count], tree_dst.cids.dev, tree_src.offsets.dev, *args ) return callable @memoize def point_tree_traverse(ctx, k, args, setup, node_operation, leaf_operation, output_expr, common_operation, preamble=""): # FIXME: variable max_depth operation = POINT_DFS_TEMPLATE % dict( setup=setup, leaf_operation=leaf_operation, node_operation=node_operation, common_operation=common_operation, output_expr=output_expr, max_depth=21, k=k ) args = ', '.join(["int *cids, int *offsets", args]) kernel = ElementwiseKernel( ctx, args, operation=operation, preamble=preamble) def callable(tree_src, tree_dst, *args): return kernel( tree_dst.cids.dev, tree_src.offsets.dev, *args ) return callable class Tree(object): """k-ary Tree """ def __init__(self, n, k=8, leaf_size=32): self.ctx = get_context() self.queue = get_queue() self.sorted = False self.main_helper = get_helper(os.path.join('tree', 'tree.mako')) self.initialized = False self.preamble = "" self.leaf_size = leaf_size self.k = k self.n = n self.sorted = False self.depth = 0 self.index_function_args = [] self.index_function_arg_ctypes = [] self.index_function_arg_dtypes = [] self.index_function_consts = [] self.index_function_const_ctypes = [] self.index_code = "" self.set_index_function_info() def set_index_function_info(self): raise NotImplementedError def get_data_args(self): return [getattr(self, v) for v in self.index_function_args] def get_index_constants(self, depth): raise NotImplementedError def _initialize_data(self): self.sorted = False num_particles = self.n self.pids = Array(np.uint32, n=num_particles, backend='opencl') self.cids = Array(np.uint32, n=num_particles, backend='opencl') 
self.cids.fill(0) for var, dtype in zip(self.index_function_args, self.index_function_arg_dtypes): setattr(self, var, Array(dtype, n=num_particles, backend='opencl')) # Filled after tree built self.pbounds = None self.offsets = None self.initialized = True def _reinitialize_data(self): self.sorted = False num_particles = self.n self.pids.resize(num_particles) self.cids.resize(num_particles) self.cids.fill(0) for var in self.index_function_args: getattr(self, var).resize(num_particles) # Filled after tree built self.pbounds = None self.offsets = None def _setup_build(self): if not self.initialized: self._initialize_data() else: self._reinitialize_data() def _build(self, fixed_depth=None): self._build_tree(fixed_depth) ########################################################################### # Core construction algorithm and helper functions ########################################################################### # A little bit of manual book-keeping for temporary variables. # More specifically, these temporary variables would otherwise be thrown # away after building each layer of the tree. 
# We could instead just allocate new arrays after building each layer and # and let the GC take care of stuff but I'm guessing this is a # a better approach to save on memory def _create_temp_vars(self, temp_vars): n = self.n temp_vars['pids'] = Array(np.uint32, n=n, backend='opencl') for var, dtype in zip(self.index_function_args, self.index_function_arg_dtypes): temp_vars[var] = Array(dtype, n=n, backend='opencl') temp_vars['cids'] = Array(np.uint32, n=n, backend='opencl') def _exchange_temp_vars(self, temp_vars): for k in temp_vars.keys(): t = temp_vars[k] temp_vars[k] = getattr(self, k) setattr(self, k, t) def _clean_temp_vars(self, temp_vars): for k in list(temp_vars.keys()): del temp_vars[k] def _get_temp_data_args(self, temp_vars): result = [temp_vars[v] for v in self.index_function_args] return result def _reorder_particles(self, depth, child_count_prefix_sum, offsets_parent, pbounds_parent, seg_flag, csum_nodes_prev, temp_vars): # Scan args = [('__global ' + ctype + ' *' + v) for v, ctype in zip(self.index_function_args, self.index_function_arg_ctypes)] args += [(ctype + ' ' + v) for v, ctype in zip(self.index_function_consts, self.index_function_const_ctypes)] args = ', '.join(args) particle_kernel = _get_particle_kernel(self.ctx, self.k, args, self.index_code) args = [seg_flag.dev, child_count_prefix_sum.dev] args += [x.dev for x in self.get_data_args()] args += self.get_index_constants(depth) particle_kernel(*args) # Reorder particles reorder_particles = self.main_helper.get_kernel( 'reorder_particles', k=self.k, data_vars=tuple(self.index_function_args), data_var_ctypes=tuple(self.index_function_arg_ctypes), const_vars=tuple(self.index_function_consts), const_var_ctypes=tuple(self.index_function_const_ctypes), index_code=self.index_code ) args = [self.pids.dev, self.cids.dev, seg_flag.dev, pbounds_parent.dev, offsets_parent.dev, child_count_prefix_sum.dev, temp_vars['pids'].dev, temp_vars['cids'].dev] args += [x.dev for x in self.get_data_args()] args 
+= [x.dev for x in self._get_temp_data_args(temp_vars)] args += self.get_index_constants(depth) args += [np.uint32(csum_nodes_prev)] reorder_particles(*args) self._exchange_temp_vars(temp_vars) def _merge_layers(self, offsets_temp, pbounds_temp): curr_offset = 0 total_nodes = 0 for i in range(self.depth + 1): total_nodes += self.num_nodes[i] self.offsets = Array(np.int32, n=total_nodes, backend='opencl') self.pbounds = Array(cl.cltypes.uint2, n=total_nodes, backend='opencl') append_layer = self.main_helper.get_kernel('append_layer') self.total_nodes = total_nodes for i in range(self.depth + 1): append_layer( offsets_temp[i].dev, pbounds_temp[i].dev, self.offsets.dev, self.pbounds.dev, np.int32(curr_offset), np.uint8(i == self.depth) ) curr_offset += self.num_nodes[i] def _update_node_data(self, offsets_prev, pbounds_prev, offsets, pbounds, seg_flag, child_count_prefix_sum, csum_nodes, csum_nodes_next, n): """Update node data and return number of children which are leaves.""" # Update particle-related data of children set_node_data = self.main_helper.get_kernel("set_node_data", k=self.k) set_node_data(offsets_prev.dev, pbounds_prev.dev, offsets.dev, pbounds.dev, seg_flag.dev, child_count_prefix_sum.dev, np.uint32(csum_nodes), np.uint32(n)) # Set children offsets leaf_count = Array(np.uint32, n=1, backend='opencl') set_offsets = _get_set_offset_kernel(self.ctx, self.k, self.leaf_size) set_offsets(pbounds.dev, offsets.dev, leaf_count.dev, np.uint32(csum_nodes_next)) return leaf_count.dev[0].get() def _build_tree(self, fixed_depth=None): # We build the tree one layer at a time. We stop building new # layers after either all the # nodes are leaves or after reaching the target depth (fixed_depth). # At this point, the information for each layer is segmented / not # contiguous in memory, and so we run a merge_layers procedure to # move the data for all layers into a single array. 
# # The procedure for building each layer can be split up as follows # 1) Determine which child each particle is going to belong to in the # next layer # 2) Perform a kind of segmented scan over this. This gives us the # new order of the particles so that consecutive particles lie in # the same child # 3) Reorder the particles based on this order # 4) Create a new layer and set the node data for the new layer. We # get to know which particles belong to each node directly from the # results of step 2 # 5) Set the predicted offsets of the children of the nodes in the # new layer. If a node has fewer than leaf_size particles, it's a # leaf. A kind of prefix sum over this directly lets us know the # predicted offsets. # Rinse and repeat for building more layers. # # Note that after building the last layer, the predicted offsets for # the children might not be correct since we're not going to build # more layers. The _merge_layers procedure sets the offsets in the # last layer to -1 to correct this.
num_leaves_here = 0 n = self.n temp_vars = {} self.depth = 0 self.num_nodes = [1] # Cumulative sum of nodes in the previous layers csum_nodes_prev = 0 csum_nodes = 1 # Initialize temporary data (but persistent across layers) self._create_temp_vars(temp_vars) child_count_prefix_sum = Array(get_vector_dtype('uint', self.k), n=n, backend='opencl') seg_flag = Array(cl.cltypes.char, n=n, backend='opencl') seg_flag.fill(0) seg_flag.dev[0] = 1 offsets_temp = [Array(np.int32, n=1, backend='opencl')] offsets_temp[-1].fill(1) pbounds_temp = [Array(cl.cltypes.uint2, n=1, backend='opencl')] pbounds_temp[-1].dev[0].set(cl.cltypes.make_uint2(0, n)) # FIXME: Depths above 20 possible and feasible for binary / quad trees loop_lim = min(fixed_depth, 20) for depth in range(1, loop_lim): num_nodes = self.k * (self.num_nodes[-1] - num_leaves_here) if num_nodes == 0: break else: self.depth += 1 self.num_nodes.append(num_nodes) # Allocate new layer offsets_temp.append(Array(np.int32, n=self.num_nodes[-1], backend='opencl')) pbounds_temp.append(Array(cl.cltypes.uint2, n=self.num_nodes[-1], backend='opencl')) # Generate particle index and reorder the particles self._reorder_particles(depth, child_count_prefix_sum, offsets_temp[-2], pbounds_temp[-2], seg_flag, csum_nodes_prev, temp_vars) num_leaves_here = self._update_node_data( offsets_temp[-2], pbounds_temp[-2], offsets_temp[-1], pbounds_temp[-1], seg_flag, child_count_prefix_sum, csum_nodes, csum_nodes + self.num_nodes[-1], n ) csum_nodes_prev = csum_nodes csum_nodes += self.num_nodes[-1] self._merge_layers(offsets_temp, pbounds_temp) self._clean_temp_vars(temp_vars) ########################################################################### # Misc ########################################################################### def _get_unique_cids_and_count(self): n = self.n self.unique_cids = Array(np.uint32, n=n, backend='opencl') self.unique_cids_map = Array(np.uint32, n=n, backend='opencl') uniq_count = Array(np.uint32, n=1, 
backend='opencl') unique_cids_kernel = _get_unique_cids_kernel(self.ctx) unique_cids_kernel(self.cids.dev, self.unique_cids_map.dev, self.unique_cids.dev, uniq_count.dev) self.unique_cid_count = uniq_count.dev[0].get() def get_leaves(self): leaves = Array(np.uint32, n=self.offsets.dev.shape[0], backend='opencl') num_leaves = Array(np.uint32, n=1, backend='opencl') leaves_kernel = _get_leaves_kernel(self.ctx, self.leaf_size) leaves_kernel(self.offsets.dev, self.pbounds.dev, leaves.dev, num_leaves.dev) num_leaves = num_leaves.dev[0].get() return leaves.dev[:num_leaves], num_leaves def _sort(self): """Set tree as being sorted The particle array needs to be aligned by the caller! """ if not self.sorted: self.sorted = 1 ########################################################################### # Tree API ########################################################################### def allocate_node_prop(self, dtype): return Array(dtype, n=self.total_nodes, backend='opencl') def allocate_leaf_prop(self, dtype): return Array(dtype, n=int(self.unique_cid_count), backend='opencl') def get_preamble(self): if self.sorted: return "#define PID(idx) (idx)" else: return "#define PID(idx) (pids[idx])" def get_leaf_size_partitions(self, group_min, group_max): """Partition leaves based on leaf size Parameters ---------- group_min Minimum leaf size group_max Maximum leaf size Returns ------- groups : Array An array which contains the cell ids of leaves with leaf size > group_min and leaf size <= group_max group_count : int The number of leaves which satisfy the given condition on the leaf size """ groups = Array(np.uint32, n=int(self.unique_cid_count), backend='opencl') group_count = Array(np.uint32, n=1, backend='opencl') get_cid_groups = _get_cid_groups_kernel(self.ctx) get_cid_groups(self.unique_cids.dev[:self.unique_cid_count], self.pbounds.dev, groups.dev, group_count.dev, np.int32(group_min), np.int32(group_max)) result = groups, int(group_count.dev[0].get()) return result def 
tree_bottom_up(self, args, setup, leaf_operation, node_operation, output_expr, preamble=""): return tree_bottom_up(self.ctx, args, setup, leaf_operation, node_operation, output_expr, preamble) def leaf_tree_traverse(self, args, setup, node_operation, leaf_operation, output_expr, common_operation="", preamble=""): """ Traverse this (source) tree. One thread for each leaf of destination tree. """ return leaf_tree_traverse(self.ctx, self.k, args, setup, node_operation, leaf_operation, output_expr, common_operation, preamble) def point_tree_traverse(self, args, setup, node_operation, leaf_operation, output_expr, common_operation="", preamble=""): """ Traverse this (source) tree. One thread for each particle of destination tree. """ return point_tree_traverse(self.ctx, self.k, args, setup, node_operation, leaf_operation, output_expr, common_operation, preamble) pysph-master/pysph/base/utils.py try: from collections import OrderedDict except ImportError: from ordereddict import OrderedDict import numpy from .particle_array import ParticleArray, \ get_local_tag, get_remote_tag, get_ghost_tag from cyarray.api import LongArray UINT_MAX = (1 << 32) - 1 # Internal tags used in PySPH (defined in particle_array.pxd) class ParticleTAGS: Local = get_local_tag() Remote = get_remote_tag() Ghost = get_ghost_tag() def arange_long(start, stop=-1): """ Creates a LongArray that works like the builtin range with up to 2 arguments, both expected to be positive """ if stop == -1: arange = LongArray(start) for i in range(start): arange.data[i] = i return arange else: size = stop-start arange = LongArray(size) for i in range(size): arange.data[i] = start + i return arange # A collection of default properties for all SPH arrays.
DEFAULT_PROPS = set( ('x', 'y', 'z', 'u', 'v', 'w', 'm', 'h', 'rho', 'p', 'au', 'av', 'aw', 'gid', 'pid', 'tag') ) def get_particle_array(additional_props=None, constants=None, backend=None, **props): """Create and return a particle array with default properties. The default properties are ['x', 'y', 'z', 'u', 'v', 'w', 'm', 'h', 'rho', 'p', 'au', 'av', 'aw', 'gid', 'pid', 'tag'], this set is available in `DEFAULT_PROPS`. Parameters ---------- additional_props : list If specified, add these properties. constants : dict Any constants to be added to the particle array. Other Parameters ---------------- props : dict Additional keywords passed are set as the property arrays. Examples -------- >>> x = linspace(0,1,10) >>> pa = get_particle_array(name='fluid', x=x) >>> pa.properties.keys() ['x', 'z', 'rho', 'pid', 'v', 'tag', 'm', 'p', 'gid', 'au', 'aw', 'av', 'y', 'u', 'w', 'h'] >>> pa1 = get_particle_array(name='fluid', additional_props=['xx', 'yy']) >>> pa = get_particle_array(name='fluid', x=x, constants={'alpha': 1.0}) >>> pa.constants.keys() ['alpha'] """ # handle the name separately if 'name' in props: name = props['name'] props.pop('name') else: name = "array" # default properties for an SPH particle default_props = set(DEFAULT_PROPS) # add any additional props to the default_props if additional_props: default_props = default_props.union(additional_props) np = 0 prop_dict = {} for prop in props.keys(): data = numpy.asarray(props[prop]) np = data.size if prop in ['tag', 'pid']: prop_dict[prop] = {'data': data, 'type': 'int', 'name': prop} elif prop in ['gid']: prop_dict[prop] = {'data': data.astype(numpy.uint32), 'type': 'unsigned int', 'name': prop} else: prop_dict[prop] = {'data': data, 'type': 'double', 'name': prop} # Add the default props for prop in default_props: if prop not in prop_dict: if prop in ["pid"]: prop_dict[prop] = {'name': prop, 'type': 'int', 'default': 0} elif prop in ['tag']: prop_dict[prop] = {'name': prop, 'type': 'int', 'default': 
ParticleTAGS.Local} elif prop in ['gid']: data = numpy.ones(shape=np, dtype=numpy.uint32) data[:] = UINT_MAX prop_dict[prop] = {'name': prop, 'type': 'unsigned int', 'data': data, 'default': UINT_MAX} else: prop_dict[prop] = {'name': prop, 'type': 'double', 'default': 0} # create the particle array pa = ParticleArray(name=name, constants=constants, backend=backend, **prop_dict) # default property arrays to save out. Any reasonable SPH particle # should define these pa.set_output_arrays(['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'm', 'h', 'pid', 'gid', 'tag']) return pa def get_particle_array_wcsph(constants=None, **props): """Return a particle array for the WCSPH formulation. This sets the default properties to be:: ['x', 'y', 'z', 'u', 'v', 'w', 'h', 'rho', 'm', 'p', 'cs', 'ax', 'ay', 'az', 'au', 'av', 'aw', 'x0','y0', 'z0','u0', 'v0','w0', 'arho', 'rho0', 'div', 'gid','pid', 'tag'] Parameters ---------- constants : dict Dictionary of constants Other Parameters ---------------- props : dict Additional keywords passed are set as the property arrays. See Also -------- get_particle_array """ wcsph_props = ['cs', 'ax', 'ay', 'az', 'arho', 'x0', 'y0', 'z0', 'u0', 'v0', 'w0', 'rho0', 'div', 'dt_cfl', 'dt_force'] pa = get_particle_array( constants=constants, additional_props=wcsph_props, **props ) # default property arrays to save out. pa.set_output_arrays([ 'x', 'y', 'z', 'u', 'v', 'w', 'rho', 'm', 'h', 'pid', 'gid', 'tag', 'p' ]) return pa def get_particle_array_iisph(constants=None, **props): """Get a particle array for the IISPH formulation. 
The default properties are:: ['x', 'y', 'z', 'u', 'v', 'w', 'm', 'h', 'rho', 'p', 'au', 'av', 'aw', 'gid', 'pid', 'tag' 'uadv', 'vadv', 'wadv', 'rho_adv', 'au', 'av', 'aw','ax', 'ay', 'az', 'dii0', 'dii1', 'dii2', 'V', 'aii', 'dijpj0', 'dijpj1', 'dijpj2', 'p', 'p0', 'piter', 'compression' ] Parameters ---------- constants : dict Dictionary of constants Other Parameters ---------------- props : dict Additional keywords passed are set as the property arrays. See Also -------- get_particle_array """ iisph_props = ['uadv', 'vadv', 'wadv', 'rho_adv', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'dii0', 'dii1', 'dii2', 'V', 'dt_cfl', 'dt_force', 'aii', 'dijpj0', 'dijpj1', 'dijpj2', 'p', 'p0', 'piter', 'compression'] # Used to calculate the total compression first index is count and second # the compression. consts = {'tmp_comp': [0.0, 0.0]} if constants: consts.update(constants) pa = get_particle_array( constants=consts, additional_props=iisph_props, **props ) pa.set_output_arrays(['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'h', 'm', 'p', 'pid', 'au', 'av', 'aw', 'tag', 'gid', 'V']) return pa def get_particle_array_rigid_body(constants=None, **props): """Return a particle array for a rigid body motion. For multiple bodies, add a body_id property starting at index 0 with each index denoting the body to which the particle corresponds to. Parameters ---------- constants : dict Dictionary of constants Other Parameters ---------------- props : dict Additional keywords passed are set as the property arrays. 
See Also -------- get_particle_array """ extra_props = ['au', 'av', 'aw', 'V', 'fx', 'fy', 'fz', 'x0', 'y0', 'z0', 'tang_disp_x', 'tang_disp_y', 'tang_disp_z', 'tang_disp_x0', 'tang_disp_y0', 'tang_disp_z0', 'tang_velocity_x', 'tang_velocity_y', 'rad_s', 'tang_velocity_z', 'nx', 'ny', 'nz'] body_id = props.pop('body_id', None) nb = 1 if body_id is None else numpy.max(body_id) + 1 consts = {'total_mass': numpy.zeros(nb, dtype=float), 'num_body': numpy.asarray(nb, dtype=int), 'cm': numpy.zeros(3*nb, dtype=float), # The mi are also used to temporarily reduce mass (1), center of # mass (3) and the interia components (6), total force (3), total # torque (3). 'mi': numpy.zeros(16*nb, dtype=float), 'force': numpy.zeros(3*nb, dtype=float), 'torque': numpy.zeros(3*nb, dtype=float), # velocity, acceleration of CM. 'vc': numpy.zeros(3*nb, dtype=float), 'ac': numpy.zeros(3*nb, dtype=float), 'vc0': numpy.zeros(3*nb, dtype=float), # angular velocity, acceleration of body. 'omega': numpy.zeros(3*nb, dtype=float), 'omega0': numpy.zeros(3*nb, dtype=float), 'omega_dot': numpy.zeros(3*nb, dtype=float) } if constants: consts.update(constants) pa = get_particle_array(constants=consts, additional_props=extra_props, **props) pa.add_property('body_id', type='int', data=body_id) pa.set_output_arrays(['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'h', 'm', 'p', 'pid', 'au', 'av', 'aw', 'tag', 'gid', 'V', 'fx', 'fy', 'fz', 'body_id']) return pa def get_particle_array_tvf_fluid(constants=None, **props): """Return a particle array for the TVF formulation for a fluid. Parameters ---------- constants : dict Dictionary of constants Other Parameters ---------------- props : dict Additional keywords passed are set as the property arrays. 
See Also -------- get_particle_array """ tv_props = ['uhat', 'vhat', 'what', 'auhat', 'avhat', 'awhat', 'vmag2', 'V'] pa = get_particle_array( constants=constants, additional_props=tv_props, **props ) pa.set_output_arrays(['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'p', 'h', 'm', 'au', 'av', 'aw', 'V', 'vmag2', 'pid', 'gid', 'tag']) return pa def get_particle_array_tvf_solid(constants=None, **props): """Return a particle array for the TVF formulation for a solid. Parameters ---------- constants : dict Dictionary of constants Other Parameters ---------------- props : dict Additional keywords passed are set as the property arrays. See Also -------- get_particle_array """ tv_props = ['u0', 'v0', 'w0', 'V', 'wij', 'ax', 'ay', 'az', 'uf', 'vf', 'wf', 'ug', 'vg', 'wg'] pa = get_particle_array( constants=constants, additional_props=tv_props, **props ) pa.set_output_arrays( ['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'p', 'h', 'm', 'V', 'pid', 'gid', 'tag'] ) return pa def get_particle_array_gasd(constants=None, **props): """Return a particle array for a Gas Dynamics problem. Parameters ---------- constants : dict Dictionary of constants Other Parameters ---------------- props : dict Additional keywords passed are set as the property arrays. See Also -------- get_particle_array """ required_props = [ 'x', 'y', 'z', 'u', 'v', 'w', 'rho', 'h', 'm', 'cs', 'p', 'e', 'au', 'av', 'aw', 'arho', 'ae', 'am', 'ah', 'x0', 'y0', 'z0', 'u0', 'v0', 'w0', 'rho0', 'e0', 'h0', 'div', 'dt_cfl', 'grhox', 'grhoy', 'grhoz', 'dwdh', 'omega', 'converged', 'alpha1', 'alpha10', 'aalpha1', 'alpha2', 'alpha20', 'aalpha2', 'del2e' ] pa = get_particle_array( constants=constants, additional_props=required_props, **props ) # set the intial smoothing length h0 to the particle smoothing # length. 
This can result in an annoying error in the density # iterations which require the h0 array pa.h0[:] = pa.h[:] pa.set_output_arrays(['x', 'y', 'u', 'v', 'rho', 'm', 'h', 'cs', 'p', 'e', 'au', 'av', 'ae', 'pid', 'gid', 'tag', 'dwdh', 'alpha1', 'alpha2']) return pa def get_particles_info(particles): """Return the array information for a list of particles. Returns ------- An OrderedDict containing the property information for a list of particles. This dict can be used for example to set-up dummy/empty particle arrays. """ info = OrderedDict() for parray in particles: prop_info = {} for prop_name, prop in parray.properties.items(): prop_info[prop_name] = { 'name': prop_name, 'type': prop.get_c_type(), 'default': parray.default_values[prop_name], 'stride': parray.stride.get(prop_name, 1), 'data': None} const_info = {} if parray.gpu is not None: parray.gpu.pull(*list(parray.constants.keys())) for c_name, value in parray.constants.items(): const_info[c_name] = value.get_npy_array() info[parray.name] = dict( properties=prop_info, constants=const_info, output_property_arrays=parray.output_property_arrays ) return info def create_dummy_particles(info): """Returns a replica (empty) of a list of particles""" particles = [] for name, pa_data in info.items(): prop_dict = pa_data['properties'] constants = pa_data['constants'] pa = ParticleArray(name=name, constants=constants, **prop_dict) pa.set_output_arrays(pa_data['output_property_arrays']) particles.append(pa) return particles def is_overloaded_method(method): """Returns True if the given method is overloaded from any of its bases. 
""" method_name = method.__name__ klass = method.__self__.__class__ count = 0 prev = None for base in klass.mro(): if hasattr(base, method_name): method = getattr(base, method_name) if method != prev: prev = method count += 1 if count > 1: break return count > 1 pysph-master/pysph/base/z_order.h000066400000000000000000000043341356347341600174240ustar00rootroot00000000000000#ifndef Z_ORDER_H #define Z_ORDER_H #include #include #include #ifdef _WIN32 typedef unsigned int uint32_t; typedef unsigned long long uint64_t; #else #include #endif using namespace std; inline void find_cell_id(double x, double y, double z, double h, int &c_x, int &c_y, int &c_z) { c_x = floor(x/h); c_y = floor(y/h); c_z = floor(z/h); } inline uint64_t get_key(uint64_t i, uint64_t j, uint64_t k) { i = (i | (i << 32)) & 0x1f00000000ffff; i = (i | (i << 16)) & 0x1f0000ff0000ff; i = (i | (i << 8)) & 0x100f00f00f00f00f; i = (i | (i << 4)) & 0x10c30c30c30c30c3; i = (i | (i << 2)) & 0x1249249249249249; j = (j | (j << 32)) & 0x1f00000000ffff; j = (j | (j << 16)) & 0x1f0000ff0000ff; j = (j | (j << 8)) & 0x100f00f00f00f00f; j = (j | (j << 4)) & 0x10c30c30c30c30c3; j = (j | (j << 2)) & 0x1249249249249249; k = (k | (k << 32)) & 0x1f00000000ffff; k = (k | (k << 16)) & 0x1f0000ff0000ff; k = (k | (k << 8)) & 0x100f00f00f00f00f; k = (k | (k << 4)) & 0x10c30c30c30c30c3; k = (k | (k << 2)) & 0x1249249249249249; return (i | (j << 1) | (k << 2)); } class CompareSortWrapper { private: uint32_t* current_pids; uint64_t* current_keys; int length; public: CompareSortWrapper() { this->current_pids = NULL; this->current_keys = NULL; this->length = 0; } CompareSortWrapper(uint32_t* current_pids, uint64_t* current_keys, int length) { this->current_pids = current_pids; this->current_keys = current_keys; this->length = length; } struct CompareFunctionWrapper { CompareSortWrapper* data; CompareFunctionWrapper(CompareSortWrapper* data) { this->data = data; } inline bool operator()(const int &a, const int &b) { return 
this->data->current_keys[a] < this->data->current_keys[b]; } }; inline void compare_sort() { sort(this->current_pids, this->current_pids + this->length, CompareFunctionWrapper(this)); sort(this->current_keys, this->current_keys + this->length); } }; #endif pysph-master/pysph/base/z_order_gpu_nnps.pxd000066400000000000000000000021351356347341600216760ustar00rootroot00000000000000# cython: embedsignature=True from libcpp.map cimport map from libcpp.pair cimport pair from pysph.base.gpu_nnps_base cimport * ctypedef unsigned int u_int ctypedef map[u_int, pair[u_int, u_int]] key_to_idx_t cdef extern from "math.h": double log2(double) nogil cdef class ZOrderGPUNNPS(GPUNNPS): cdef NNPSParticleArrayWrapper src, dst # Current source and destination. cdef public list pids cdef public list pid_keys cdef public list cids cdef public list cid_to_idx cdef public list max_cid cdef public object dst_to_src cdef object overflow_cid_to_idx cdef object curr_cid cdef object max_cid_src cdef object allocator cdef object helper cdef object radix_sort cdef object make_vec cdef public bint sorted cdef bint dst_src cdef object z_order_nbrs cdef object z_order_nbr_lengths #cpdef get_spatially_ordered_indices(self, int pa_index) cpdef _bin(self, int pa_index) cpdef _refresh(self) cdef void find_neighbor_lengths(self, nbr_lengths) cdef void find_nearest_neighbors_gpu(self, nbrs, start_indices) pysph-master/pysph/base/z_order_gpu_nnps.pyx000066400000000000000000000242701356347341600217270ustar00rootroot00000000000000#cython: embedsignature=True # malloc and friends from libc.stdlib cimport malloc, free from libc.math cimport log from libcpp.vector cimport vector from libcpp.map cimport map from cython.operator cimport dereference as deref, preincrement as inc # Cython for compiler directives cimport cython import pyopencl as cl import pyopencl.array import pyopencl.algorithm from pyopencl.scan import GenericScanKernel from pyopencl.elementwise import ElementwiseKernel import numpy as np 
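The `get_key` routine in `z_order.h` above interleaves the bits of the three cell indices into a 64-bit Morton (Z-order) key, so that spatially nearby cells tend to sort next to each other. As a minimal pure-Python sketch of the same bit spreading (the mask constants are the ones used in `get_key`; the names `spread_bits`/`morton3` are illustrative helpers, not part of PySPH):

```python
def spread_bits(v):
    # Spread the low 21 bits of v so that input bit b lands at bit 3*b.
    # The magic masks below are the same constants used in get_key().
    v &= (1 << 21) - 1
    v = (v | (v << 32)) & 0x1F00000000FFFF
    v = (v | (v << 16)) & 0x1F0000FF0000FF
    v = (v | (v << 8)) & 0x100F00F00F00F00F
    v = (v | (v << 4)) & 0x10C30C30C30C30C3
    v = (v | (v << 2)) & 0x1249249249249249
    return v


def morton3(i, j, k):
    # 3D Z-order key: i's bits occupy positions 3b, j's 3b+1, k's 3b+2.
    return spread_bits(i) | (spread_bits(j) << 1) | (spread_bits(k) << 2)
```

Sorting particle ids by this key is what `CompareSortWrapper.compare_sort` does on the CPU and what `array.sort_by_keys` does on the GPU path.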
cimport numpy as np from mako.template import Template from pysph.base.gpu_nnps_helper import GPUNNPSHelper from compyle.array import Array import compyle.array as array from compyle.opencl import get_context, get_config, profile_kernel from compyle.api import Elementwise from pysph.base.z_order_gpu_nnps_kernels import (ZOrderNbrsKernel, ZOrderLengthKernel) from pysph.base.gpu_helper_kernels import get_elwise, get_scan from pysph.base.z_order_gpu_nnps_kernels import * IF UNAME_SYSNAME == "Windows": cdef inline double fmin(double x, double y) nogil: return x if x < y else y cdef inline double fmax(double x, double y) nogil: return x if x > y else y @cython.cdivision(True) cdef inline double log2(double n) nogil: return log(n)/log(2) cdef class ZOrderGPUNNPS(GPUNNPS): def __init__(self, int dim, list particles, double radius_scale=2.0, int ghost_layers=1, domain=None, bint fixed_h=False, bint cache=True, bint sort_gids=False, backend=None): GPUNNPS.__init__( self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids, backend ) self.radius_scale2 = radius_scale*radius_scale self.src_index = -1 self.dst_index = -1 self.sort_gids = sort_gids cdef NNPSParticleArrayWrapper pa_wrapper cdef int i, num_particles self.pids = [] self.pid_keys = [] self.cids = [] self.cid_to_idx = [] for i from 0<=iself.pa_wrappers[i] num_particles = pa_wrapper.get_number_of_particles() self.pids.append(Array(np.uint32, n=num_particles, backend=self.backend)) self.pid_keys.append(Array(np.uint64, n=num_particles, backend=self.backend)) self.cids.append(Array(np.uint32, n=num_particles, backend=self.backend)) self.cid_to_idx.append(Array(np.int32, backend=self.backend)) self.curr_cid = array.ones(1, dtype=np.uint32, backend=self.backend) self.max_cid_src = array.zeros(1, dtype=np.int32, backend=self.backend) self.dst_to_src = Array(np.uint32, backend=self.backend) self.overflow_cid_to_idx = Array(np.int32, backend=self.backend) self.z_order_nbrs = [None] * 2 
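The binning in `_bin` assigns dense cell ids with a prefix sum over key changes: after sorting particles by Z-order key, the `inp_fill_unique_cids`/`out_fill_unique_cids` scan sets `cids[i]` to the number of distinct keys seen before position `i`, and `1 + cids[-1]` is the number of occupied cells. A NumPy sketch of that scan (a host-side simplification, not the actual compyle kernel):

```python
import numpy as np


def fill_unique_cids(sorted_keys):
    # flags[i] = 1 wherever the key differs from the previous entry;
    # an inclusive prefix sum turns the flags into a dense cell id
    # per particle, exactly as the GPU scan kernel does.
    flags = np.zeros(len(sorted_keys), dtype=np.uint32)
    flags[1:] = sorted_keys[1:] != sorted_keys[:-1]
    return np.cumsum(flags, dtype=np.uint32)
```

For sorted keys `[3, 3, 5, 5, 9]` this yields cids `[0, 0, 1, 1, 2]`, and `1 + cids[-1] == 3` occupied cells, matching `num_cids = 1 + curr_cids[-1]` in `_bin`.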
self.z_order_nbr_lengths = [None] * 2 self.domain.update() self.update() def get_spatially_ordered_indices(self, int pa_index): def update_pids(): pids_new = array.arange(0, num_particles, 1, dtype=np.uint32, backend=self.backend) self.pids[pa_index].set_data(pids_new) cdef NNPSParticleArrayWrapper pa_wrapper = self.pa_wrappers[pa_index] num_particles = pa_wrapper.get_number_of_particles() self.sorted = True return self.pids[pa_index], update_pids cpdef _bin(self, int pa_index): self.sorted = False cdef NNPSParticleArrayWrapper pa_wrapper = self.pa_wrappers[pa_index] fill_pids_knl = get_elwise(fill_pids, self.backend) pa_gpu = pa_wrapper.pa.gpu fill_pids_knl(pa_gpu.x, pa_gpu.y, pa_gpu.z, self.cell_size, self.xmin[0], self.xmin[1], self.xmin[2], self.pid_keys[pa_index], self.pids[pa_index]) cdef double max_length = fmax(fmax((self.xmax[0] - self.xmin[0]), (self.xmax[1] - self.xmin[1])), (self.xmax[2] - self.xmin[2])) cdef int max_num_cells = ( ceil(max_length/self.hmin)) cdef int max_num_bits = 3*( ceil(log2(max_num_cells))) sorted_keys, sorted_indices = array.sort_by_keys( [self.pid_keys[pa_index], self.pids[pa_index]], key_bits=max_num_bits, backend=self.backend ) self.pids[pa_index].set_data(sorted_indices) self.pid_keys[pa_index].set_data(sorted_keys) self.curr_cid.fill(1) fill_unique_cids_knl = get_scan(inp_fill_unique_cids, out_fill_unique_cids, np.int32, self.backend) fill_unique_cids_knl(keys=self.pid_keys[pa_index], cids=self.cids[pa_index]) curr_cids = self.cids[pa_index] cdef unsigned int num_cids = 1 + curr_cids[-1] self.cid_to_idx[pa_index].resize(27 * num_cids) self.cid_to_idx[pa_index].fill(-1) self.max_cid[pa_index] = num_cids map_cid_to_idx_knl= get_elwise(map_cid_to_idx, self.backend) map_cid_to_idx_knl( pa_gpu.x, pa_gpu.y, pa_gpu.z, pa_wrapper.get_number_of_particles(), self.cell_size, self.xmin[0], self.xmin[1], self.xmin[2], self.pids[pa_index], self.pid_keys[pa_index], self.cids[pa_index], self.cid_to_idx[pa_index] ) cpdef _refresh(self): cdef 
NNPSParticleArrayWrapper pa_wrapper cdef int i, num_particles self.max_cid = [] self.sorted = False for i from 0<=iself.pa_wrappers[i] num_particles = pa_wrapper.get_number_of_particles() self.pids[i].resize(num_particles) self.pid_keys[i].resize(num_particles) self.cids[i].resize(num_particles) self.max_cid.append(0) cpdef set_context(self, int src_index, int dst_index): """Setup the context before asking for neighbors. The `dst_index` represents the particles for whom the neighbors are to be determined from the particle array with index `src_index`. Parameters ---------- src_index: int: the source index of the particle array. dst_index: int: the destination index of the particle array. """ GPUNNPS.set_context(self, src_index, dst_index) self.src = self.pa_wrappers[src_index] self.dst = self.pa_wrappers[dst_index] self.dst_src = src_index != dst_index cdef unsigned int overflow_size = 0 if self.dst_src: self.dst_to_src.resize(self.max_cid[dst_index]) map_dst_to_src_knl = get_elwise(map_dst_to_src, self.backend) self.max_cid_src.fill(self.max_cid[src_index]) map_dst_to_src_knl(self.dst_to_src, self.cids[dst_index], self.cid_to_idx[dst_index], self.pid_keys[dst_index], self.pid_keys[src_index], self.cids[src_index], self.src.get_number_of_particles(), self.max_cid_src) overflow_size = (self.max_cid_src.get()) - \ self.max_cid[src_index] self.overflow_cid_to_idx.resize(max(1, 27 * overflow_size)) self.overflow_cid_to_idx.fill(-1) fill_overflow_map_knl = get_elwise(fill_overflow_map, self.backend) dst_gpu = self.dst.pa.gpu fill_overflow_map_knl(self.dst_to_src, self.cid_to_idx[dst_index], dst_gpu.x, dst_gpu.y, dst_gpu.z, self.src.get_number_of_particles(), self.cell_size, self.xmin[0], self.xmin[1], self.xmin[2], self.pid_keys[src_index], self.pids[dst_index], self.overflow_cid_to_idx, np.array(self.max_cid[src_index], dtype=np.uint32)) cdef void find_neighbor_lengths(self, nbr_lengths): if not self.z_order_nbr_lengths[self.dst_src]: krnl_source = ZOrderLengthKernel( 
"z_order_nbr_lengths", dst_src=self.dst_src ) self.z_order_nbr_lengths[self.dst_src] = Elementwise( krnl_source.function, backend=self.backend ) knl = self.z_order_nbr_lengths[self.dst_src] dst_gpu = self.dst.pa.gpu src_gpu = self.src.pa.gpu knl(dst_gpu.x, dst_gpu.y, dst_gpu.z, dst_gpu.h, src_gpu.x, src_gpu.y, src_gpu.z, src_gpu.h, self.xmin[0], self.xmin[1], self.xmin[2], self.src.get_number_of_particles(), self.pid_keys[self.src_index], self.pids[self.dst_index], self.pids[self.src_index], self.max_cid[self.src_index], self.cids[self.dst_index], self.cid_to_idx[self.src_index], self.overflow_cid_to_idx, self.dst_to_src, self.radius_scale2, self.cell_size, nbr_lengths) cdef void find_nearest_neighbors_gpu(self, nbrs, start_indices): if not self.z_order_nbrs[self.dst_src]: krnl_source = ZOrderNbrsKernel( "z_order_nbrs", dst_src=self.dst_src ) self.z_order_nbrs[self.dst_src] = Elementwise( krnl_source.function, backend=self.backend ) knl = self.z_order_nbrs[self.dst_src] dst_gpu = self.dst.pa.gpu src_gpu = self.src.pa.gpu knl(dst_gpu.x, dst_gpu.y, dst_gpu.z, dst_gpu.h, src_gpu.x, src_gpu.y, src_gpu.z, src_gpu.h, self.xmin[0], self.xmin[1], self.xmin[2], self.src.get_number_of_particles(), self.pid_keys[self.src_index], self.pids[self.dst_index], self.pids[self.src_index], self.max_cid[self.src_index], self.cids[self.dst_index], self.cid_to_idx[self.src_index], self.overflow_cid_to_idx, self.dst_to_src, self.radius_scale2, self.cell_size, start_indices, nbrs) pysph-master/pysph/base/z_order_gpu_nnps_kernels.py000066400000000000000000000126201356347341600232560ustar00rootroot00000000000000from pysph.base.gpu_helper_kernels import * from compyle.api import declare from compyle.template import Template from compyle.low_level import atomic_inc @annotate def fill_pids(i, x, y, z, cell_size, xmin, ymin, zmin, keys, pids): c = declare('matrix(3, "int")') find_cell_id( x[i] - xmin, y[i] - ymin, z[i] - zmin, cell_size, c ) key = declare('ulong') key = interleave3(c[0], c[1], 
c[2]) keys[i] = key pids[i] = i @annotate def inp_fill_unique_cids(i, keys, cids): return 1 if i != 0 and keys[i] != keys[i - 1] else 0 @annotate def out_fill_unique_cids(i, item, cids): cids[i] = item @annotate def map_cid_to_idx(i, x, y, z, num_particles, cell_size, xmin, ymin, zmin, pids, keys, cids, cid_to_idx): cid = cids[i] if i != 0 and cids[i] == cids[i - 1]: return c = declare('matrix(3, "int")') pid = pids[i] find_cell_id( x[pid] - xmin, y[pid] - ymin, z[pid] - zmin, cell_size, c ) nbr_boxes = declare('matrix(27, "ulong")') nbr_boxes_length = neighbor_boxes(c[0], c[1], c[2], nbr_boxes) for j in range(nbr_boxes_length): key = nbr_boxes[j] idx = find_idx(keys, num_particles, key) cid_to_idx[27 * cid + j] = idx @annotate def map_dst_to_src(i, dst_to_src, cids_dst, cid_to_idx_dst, keys_dst, keys_src, cids_src, num_particles_src, max_cid_src): idx = cid_to_idx_dst[27 * i] key = keys_dst[idx] idx_src = find_idx(keys_src, num_particles_src, key) dst_to_src[i] = atomic_inc( max_cid_src[0]) if ( idx_src == - 1) else cids_src[idx_src] @annotate def fill_overflow_map(i, dst_to_src, cid_to_idx_dst, x, y, z, num_particles_src, cell_size, xmin, ymin, zmin, keys_src, pids_dst, overflow_cid_to_idx, max_cid_src): cid = dst_to_src[i] # i is the cid in dst if cid < max_cid_src: return idx = cid_to_idx_dst[27 * i] pid = pids_dst[idx] c = declare('matrix(3, "int")') find_cell_id( x[pid] - xmin, y[pid] - ymin, z[pid] - zmin, cell_size, c ) nbr_boxes = declare('matrix(27, "ulong")') nbr_boxes_length = neighbor_boxes(c[0], c[1], c[2], nbr_boxes) start_idx = cid - max_cid_src for j in range(nbr_boxes_length): key = nbr_boxes[j] idx = find_idx(keys_src, num_particles_src, key) overflow_cid_to_idx[27 * start_idx + j] = idx class ZOrderNNPSKernel(Template): def __init__(self, name, dst_src=False, z_order_length=False, z_order_nbrs=False): super(ZOrderNNPSKernel, self).__init__(name=name) self.z_order_length = z_order_length self.z_order_nbrs = z_order_nbrs assert self.z_order_nbrs 
!= self.z_order_length self.dst_src = dst_src def template(self, i, d_x, d_y, d_z, d_h, s_x, s_y, s_z, s_h, xmin, ymin, zmin, num_particles, keys, pids_dst, pids_src, max_cid_src, cids, cid_to_idx, overflow_cid_to_idx, dst_to_src, radius_scale2, cell_size): ''' q = declare('matrix(4)') qid = pids_dst[i] q[0] = d_x[qid] q[1] = d_y[qid] q[2] = d_z[qid] q[3] = d_h[qid] cid = cids[i] nbr_boxes = declare('GLOBAL_MEM int*') nbr_boxes = cid_to_idx h_i = radius_scale2*q[3]*q[3] % if obj.dst_src: cid = dst_to_src[cid] start_id_nbr_boxes = 27*cid if cid >= max_cid_src: start_id_nbr_boxes = 27*(cid - max_cid_src) nbr_boxes = overflow_cid_to_idx % else: start_id_nbr_boxes = 27*cid % endif % if obj.z_order_length: length = 0 % elif obj.z_order_nbrs: start_idx = start_indicies[qid] curr_idx = 0 % endif s = declare('matrix(4)') j = declare('int') for j in range(27): idx = nbr_boxes[start_id_nbr_boxes + j] if idx == -1: continue key = keys[idx] while (idx < num_particles and keys[idx] == key): pid = pids_src[idx] s[0] = s_x[pid] s[1] = s_y[pid] s[2] = s_z[pid] s[3] = s_h[pid] h_j = radius_scale2 * s[3] * s[3] % if obj.z_order_nbrs: dist = norm2(q[0] - s[0], q[1] - s[1], q[2] - s[2]) if dist < h_i or dist < h_j: nbrs[start_idx + curr_idx] = pid curr_idx += 1 % else: dist = norm2(q[0] - s[0], q[1] - s[1], q[2] - s[2]) if dist < h_i or dist < h_j: length += 1 %endif idx += 1 % if obj.z_order_length: nbr_lengths[qid] = length % endif ''' class ZOrderLengthKernel(ZOrderNNPSKernel): def __init__(self, name, dst_src): super(ZOrderLengthKernel, self).__init__( name, dst_src, z_order_length=True, ) def extra_args(self): return ['nbr_lengths'], {} class ZOrderNbrsKernel(ZOrderNNPSKernel): def __init__(self, name, dst_src): super(ZOrderNbrsKernel, self).__init__( name, dst_src, z_order_nbrs=True ) def extra_args(self): return ['start_indicies', 'nbrs'], {} pysph-master/pysph/base/z_order_nnps.pxd000066400000000000000000000037571356347341600210360ustar00rootroot00000000000000# cython: 
embedsignature=True from libcpp.map cimport map from libcpp.pair cimport pair from nnps_base cimport * cdef extern from "math.h": double log2(double) nogil cdef extern from "z_order.h": ctypedef unsigned long long uint64_t ctypedef unsigned int uint32_t inline uint64_t get_key(uint64_t i, uint64_t j, uint64_t k) nogil cdef cppclass CompareSortWrapper: CompareSortWrapper() nogil except + CompareSortWrapper(uint32_t* current_pids, uint64_t* current_keys, int length) nogil except + inline void compare_sort() nogil ctypedef map[uint64_t, pair[uint32_t, uint32_t]] key_to_idx_t cdef class ZOrderNNPS(NNPS): ############################################################################ # Data Attributes ############################################################################ cdef uint32_t** pids cdef uint32_t* current_pids cdef uint64_t** keys cdef key_to_idx_t** pid_indices cdef key_to_idx_t* current_indices cdef double radius_scale2 cdef NNPSParticleArrayWrapper dst, src ########################################################################## # Member functions ########################################################################## cdef inline int _neighbor_boxes(self, int i, int j, int k, int* x, int* y, int* z) nogil cpdef set_context(self, int src_index, int dst_index) cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil cpdef get_nearest_particles_no_cache(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs, bint prealloc) cpdef get_spatially_ordered_indices(self, int pa_index, LongArray indices) cdef void fill_array(self, NNPSParticleArrayWrapper pa_wrapper, int pa_index, UIntArray indices, uint32_t* current_pids, uint64_t* current_keys, key_to_idx_t* current_indices) cpdef _refresh(self) cpdef _bin(self, int pa_index, UIntArray indices) pysph-master/pysph/base/z_order_nnps.pyx000066400000000000000000000242061356347341600210530ustar00rootroot00000000000000#cython: embedsignature=True # malloc and friends from libc.stdlib 
cimport malloc, free from libcpp.vector cimport vector from libcpp.map cimport map from cython.operator cimport dereference as deref, preincrement as inc # Cython for compiler directives cimport cython import numpy as np cimport numpy as np cdef extern from "" namespace "std" nogil: void sort[Iter, Compare](Iter first, Iter last, Compare comp) void sort[Iter](Iter first, Iter last) ############################################################################# cdef class ZOrderNNPS(NNPS): """Find nearest neighbors using Z-Order space filling curve""" def __init__(self, int dim, list particles, double radius_scale = 2.0, int ghost_layers = 1, domain=None, bint fixed_h = False, bint cache = False, bint sort_gids = False): NNPS.__init__( self, dim, particles, radius_scale, ghost_layers, domain, cache, sort_gids ) self.radius_scale2 = radius_scale*radius_scale cdef NNPSParticleArrayWrapper pa_wrapper cdef int i, num_particles for i from 0<=i self.pa_wrappers[i] num_particles = pa_wrapper.get_number_of_particles() self.pids[i] = malloc(num_particles*sizeof(uint32_t)) self.pid_indices[i] = new key_to_idx_t() self.src_index = 0 self.dst_index = 0 self.sort_gids = sort_gids self.domain.update() self.update() def __cinit__(self, int dim, list particles, double radius_scale = 2.0, int ghost_layers = 1, domain=None, bint fixed_h = False, bint cache = False, bint sort_gids = False): cdef int narrays = len(particles) self.pids = malloc(narrays*sizeof(uint32_t*)) self.keys = malloc(narrays*sizeof(uint64_t*)) self.pid_indices = malloc(narrays*sizeof(key_to_idx_t*)) self.current_pids = NULL self.current_indices = NULL def __dealloc__(self): cdef int i for i from 0<=i self.pa_wrappers[dst_index] self.src = self.pa_wrappers[src_index] cdef void find_nearest_neighbors(self, size_t d_idx, UIntArray nbrs) nogil: """Low level, high-performance non-gil method to find neighbors. This requires that `set_context()` be called beforehand. 
This method does not reset the neighbors array before it appends the neighbors to it. """ cdef double* dst_x_ptr = self.dst.x.data cdef double* dst_y_ptr = self.dst.y.data cdef double* dst_z_ptr = self.dst.z.data cdef double* dst_h_ptr = self.dst.h.data cdef double* src_x_ptr = self.src.x.data cdef double* src_y_ptr = self.src.y.data cdef double* src_z_ptr = self.src.z.data cdef double* src_h_ptr = self.src.h.data cdef double x = dst_x_ptr[d_idx] cdef double y = dst_y_ptr[d_idx] cdef double z = dst_z_ptr[d_idx] cdef double h = dst_h_ptr[d_idx] cdef unsigned int* s_gid = self.src.gid.data cdef int orig_length = nbrs.length cdef int c_x, c_y, c_z cdef double* xmin = self.xmin.data cdef int i, j find_cell_id_raw( x - xmin[0], y - xmin[1], z - xmin[2], self.cell_size, &c_x, &c_y, &c_z ) cdef double xij2 = 0 cdef double hi2 = self.radius_scale2*h*h cdef double hj2 = 0 cdef map[uint64_t, pair[uint32_t, uint32_t]].iterator it cdef int x_boxes[27] cdef int y_boxes[27] cdef int z_boxes[27] cdef int num_boxes = self._neighbor_boxes(c_x, c_y, c_z, x_boxes, y_boxes, z_boxes) cdef pair[uint32_t, uint32_t] candidate cdef uint32_t n, idx for i from 0<=icurrent_pids[j]) cdef void fill_array(self, NNPSParticleArrayWrapper pa_wrapper, int pa_index, UIntArray indices, uint32_t* current_pids, uint64_t* current_keys, key_to_idx_t* current_indices): cdef double* x_ptr = pa_wrapper.x.data cdef double* y_ptr = pa_wrapper.y.data cdef double* z_ptr = pa_wrapper.z.data cdef double* xmin = self.xmin.data cdef int c_x, c_y, c_z cdef int i, n for i from 0<=i=0 and j+q>=0 and k+p>=0: x[length] = i+r y[length] = j+q z[length] = k+p length += 1 return length cpdef _refresh(self): cdef NNPSParticleArrayWrapper pa_wrapper cdef int i, num_particles cdef double* xmax = self.xmax.data cdef double* xmin = self.xmin.data for i from 0<=i self.pa_wrappers[i] num_particles = pa_wrapper.get_number_of_particles() self.pids[i] = malloc(num_particles*sizeof(uint32_t)) self.keys[i] = 
malloc(num_particles*sizeof(uint64_t)) self.pid_indices[i] = new key_to_idx_t() self.current_pids = self.pids[self.src_index] self.current_indices = self.pid_indices[self.src_index] @cython.cdivision(True) cpdef _bin(self, int pa_index, UIntArray indices): cdef NNPSParticleArrayWrapper pa_wrapper = self.pa_wrappers[pa_index] cdef int num_particles = pa_wrapper.get_number_of_particles() cdef uint32_t* current_pids = self.pids[pa_index] cdef uint64_t* current_keys = self.keys[pa_index] cdef key_to_idx_t* current_indices = self.pid_indices[pa_index] self.fill_array(pa_wrapper, pa_index, indices, current_pids, current_keys, current_indices) pysph-master/pysph/examples/000077500000000000000000000000001356347341600165075ustar00rootroot00000000000000pysph-master/pysph/examples/__init__.py000066400000000000000000000000001356347341600206060ustar00rootroot00000000000000pysph-master/pysph/examples/_db_geometry.py000066400000000000000000000313321356347341600215220ustar00rootroot00000000000000""" Helper functions to generate commonly used geometries. PySPH used an axis convention as follows: Y | | | | | | /Z | / | / | / | / | / |/_________________X """ import numpy from numpy import concatenate, where, array from pysph.base.utils import get_particle_array_wcsph, get_particle_array_iisph from cyarray.api import LongArray def create_2D_tank(x1, y1, x2, y2, dx): """ Generate an open rectangular tank. 
Parameters: ----------- x1,y1,x2,y2 : Coordinates defining the rectangle in 2D dx : The spacing to use """ yl = numpy.arange(y1, y2 + dx / 2, dx) xl = numpy.ones_like(yl) * x1 nl = len(xl) yr = numpy.arange(y1, y2 + dx / 2, dx) xr = numpy.ones_like(yr) * x2 nr = len(xr) xb = numpy.arange(x1 + dx, x2 - dx + dx / 2, dx) yb = numpy.ones_like(xb) * y1 nb = len(xb) x = numpy.concatenate([xl, xb, xr]) y = numpy.concatenate([yl, yb, yr]) return x, y def create_2D_filled_region(x1, y1, x2, y2, dx): x, y = numpy.mgrid[x1:x2 + dx / 2:dx, y1:y2 + dx / 2:dx] x = x.ravel() y = y.ravel() return x, y def create_obstacle(x1, x2, height, dx): eps = 1e-6 # left inside wall yli = numpy.arange(dx / 2.0, height + eps, dx) xli = numpy.ones_like(yli) * x1 # left outside wall ylo = numpy.arange(dx, height + dx / 2.0 + eps, dx) xlo = numpy.ones_like(ylo) * x1 - dx / 2.0 # # right inside wal # yri = numpy.arange( dx/2.0, height + eps, dx ) # xri = numpy.ones_like(yri) * x2 # # right outter wall # yro = numpy.arange( dx, height + dx/2.0 + eps, dx ) # xro = numpy.ones_like(yro) * x2 + dx/2.0 # concatenate the arrays x = numpy.concatenate((xli, xlo)) y = numpy.concatenate((yli, ylo)) return x, y class DamBreak2DGeometry(object): def __init__(self, container_width=4.0, container_height=3.0, fluid_column_width=1.0, fluid_column_height=2.0, dx=0.03, dy=0.03, nboundary_layers=4, ro=1000.0, co=1.0, with_obstacle=False, beta=1.0, nfluid_offset=2, hdx=1.5, iisph=False, wall_hex_pack=True): self.container_width = container_width self.container_height = container_height self.fluid_column_height = fluid_column_height self.fluid_column_width = fluid_column_width self.nboundary_layers = nboundary_layers self.nfluid_offset = nfluid_offset self.beta = beta self.hdx = hdx self.dx = dx self.dy = dy self.iisph = iisph self.wall_hex_pack = wall_hex_pack self.nsolid = 0 self.nfluid = 0 self.ro = ro self.co = co self.with_obstacle = with_obstacle def get_wall(self, nboundary_layers=4): container_width = 
self.container_width container_height = self.container_height dx, dy = self.dx / self.beta, self.dy / self.beta factor = 0.5 if self.wall_hex_pack else 1.0 _x = [] _y = [] for i in range(nboundary_layers): xb, yb = create_2D_tank( x1=-factor * i * dx, y1=-factor * i * dy, x2=container_width + factor * i * dx, y2=container_height, dx=dx) _x.append(xb) _y.append(yb) x = numpy.concatenate(_x) y = numpy.concatenate(_y) self.nsolid = len(x) return x, y def get_fluid(self, noffset=2): fluid_column_width = self.fluid_column_width fluid_column_height = self.fluid_column_height dx, dy = self.dx, self.dy factor = 0.5 _x = [] _y = [] for i in range(noffset): ii = i + 1 xf, yf = create_2D_filled_region( x1=dx - factor * i * dx, y1=dx - factor * i * dx, # x1=0.5*ii*dx, y1=0.5*ii*dx, x2=fluid_column_width + factor * i * dx, y2=fluid_column_height, dx=dx) _x.append(xf) _y.append(yf) x = numpy.concatenate(_x) y = numpy.concatenate(_y) self.nfluid = len(x) return x, y def create_particles(self, nboundary_layers=2, nfluid_offset=2, hdx=1.5, **kwargs): nfluid = self.nfluid xf, yf = self.get_fluid(nfluid_offset) if self.iisph: fluid = get_particle_array_iisph(name='fluid', x=xf, y=yf) else: fluid = get_particle_array_wcsph(name='fluid', x=xf, y=yf) fluid.gid[:] = list(range(fluid.get_number_of_particles())) np = nfluid xb, yb = self.get_wall(nboundary_layers) if self.iisph: boundary = get_particle_array_iisph(name='boundary', x=xb, y=yb) else: boundary = get_particle_array_wcsph(name='boundary', x=xb, y=yb) np += boundary.get_number_of_particles() dx, dy, ro = self.dx, self.dy, self.ro # smoothing length, mass and density fluid.h[:] = numpy.ones_like(xf) * hdx * dx if nfluid_offset == 2: fluid.m[:] = dx * dy * ro * 0.5 else: fluid.m[:] = dx * dy * ro fluid.rho[:] = ro if not self.iisph: fluid.rho0[:] = ro boundary.h[:] = numpy.ones_like(xb) * hdx * dx if nboundary_layers == 2: boundary.m[:] = dx * dy * ro * 0.5 else: boundary.m[:] = dx * dy * ro boundary.rho[:] = ro if not self.iisph: 
boundary.rho0[:] = ro # create the particles list particles = [fluid, boundary] if self.with_obstacle: xo, yo = create_obstacle(x1=2.5, x2=2.5 + dx, height=0.25, dx=dx) gido = numpy.array(list(range(xo.size)), dtype=numpy.uint32) obstacle = get_particle_array_wcsph(name='obstacle', x=xo, y=yo) obstacle.h[:] = numpy.ones_like(xo) * hdx * dx obstacle.m[:] = dx * dy * 0.5 * ro obstacle.rho[:] = ro if not self.iisph: obstacle.rho0[:] = ro # add the obstacle to the boundary particles boundary.append_parray(obstacle) np += obstacle.get_number_of_particles() # set the gid for the boundary particles boundary.gid[:] = list(range(boundary.get_number_of_particles())) # boundary particles can do with a reduced list of properties # to be saved to disk since they are fixed boundary.set_output_arrays( ['x', 'y', 'rho', 'm', 'h', 'p', 'tag', 'pid', 'gid']) if self.iisph: boundary.add_output_arrays(['V']) print("2D dam break with %d fluid, %d boundary particles" % ( fluid.get_number_of_particles(), boundary.get_number_of_particles())) return particles class DamBreak3DGeometry(object): def __init__( self, container_height=1.0, container_width=1.0, container_length=3.22, fluid_column_height=0.55, fluid_column_width=1.0, fluid_column_length=1.228, obstacle_center_x=2.5, obstacle_center_y=0, obstacle_length=0.16, obstacle_height=0.161, obstacle_width=0.4, nboundary_layers=5, with_obstacle=True, dx=0.02, hdx=1.2, rho0=1000.0): # save the geometry details self.container_width = container_width self.container_length = container_length self.container_height = container_height self.fluid_column_length = fluid_column_length self.fluid_column_width = fluid_column_width self.fluid_column_height = fluid_column_height self.obstacle_center_x = obstacle_center_x self.obstacle_center_y = obstacle_center_y self.obstacle_width = obstacle_width self.obstacle_length = obstacle_length self.obstacle_height = obstacle_height self.nboundary_layers = nboundary_layers self.dx = dx self.hdx = hdx self.rho0 = 
rho0 self.with_obstacle = with_obstacle def get_max_speed(self, g=9.81): return numpy.sqrt(2 * g * self.fluid_column_height) def create_particles(self, **kwargs): fluid_column_height = self.fluid_column_height fluid_column_width = self.fluid_column_width fluid_column_length = self.fluid_column_length container_height = self.container_height container_length = self.container_length container_width = self.container_width obstacle_height = self.obstacle_height obstacle_length = self.obstacle_length obstacle_width = self.obstacle_width obstacle_center_x = self.obstacle_center_x obstacle_center_y = self.obstacle_center_y nboundary_layers = self.nboundary_layers dx = self.dx # get the domain limits ghostlims = nboundary_layers * dx xmin, xmax = 0.0 - ghostlims, container_length + ghostlims zmin, zmax = 0.0 - ghostlims, container_height + ghostlims cw2 = 0.5 * container_width ymin, ymax = -cw2 - ghostlims, cw2 + ghostlims # create all particles eps = 0.1 * dx xx, yy, zz = numpy.mgrid[xmin:xmax + eps:dx, ymin:ymax + eps:dx, zmin:zmax + eps:dx] x = xx.ravel() y = yy.ravel() z = zz.ravel() # create a dummy particle array from which we'll sort pa = get_particle_array_wcsph(name='block', x=x, y=y, z=z) # get the individual arrays indices = [] findices = [] oindices = [] obw2 = 0.5 * obstacle_width obl2 = 0.5 * obstacle_length obh = obstacle_height ocx = obstacle_center_x ocy = obstacle_center_y for i in range(x.size): xi = x[i] yi = y[i] zi = z[i] # fluid if ((0 < xi <= fluid_column_length) and (-cw2 < yi < cw2) and (0 < zi <= fluid_column_height)): findices.append(i) # obstacle if ((ocx - obl2 <= xi <= ocx + obl2) and (ocy - obw2 <= yi <= ocy + obw2) and (0 < zi <= obh)): oindices.append(i) # extract the individual arrays fa = LongArray(len(findices)) fa.set_data(numpy.array(findices)) fluid = pa.extract_particles(fa) fluid.set_name('fluid') if self.with_obstacle: oa = LongArray(len(oindices)) oa.set_data(numpy.array(oindices)) obstacle = pa.extract_particles(oa) 
obstacle.set_name('obstacle') indices = concatenate((where(y <= -cw2)[0], where(y >= cw2)[0], where(x >= container_length)[0], where(x <= 0)[0], where(z <= 0)[0])) # remove duplicates indices = array(list(set(indices))) wa = LongArray(indices.size) wa.set_data(indices) boundary = pa.extract_particles(wa) boundary.set_name('boundary') # create the particles if self.with_obstacle: particles = [fluid, boundary, obstacle] else: particles = [fluid, boundary] # set up particle properties h0 = self.hdx * dx volume = dx**3 m0 = self.rho0 * volume for pa in particles: pa.m[:] = m0 pa.h[:] = h0 pa.rho[:] = self.rho0 nf = fluid.num_real_particles nb = boundary.num_real_particles if self.with_obstacle: no = obstacle.num_real_particles print( "3D dam break with %d fluid, %d boundary, %d obstacle particles" % (nf, nb, no)) else: print( "3D dam break with %d fluid, %d boundary particles" % (nf, nb)) # load balancing props for the arrays # fluid.set_lb_props(['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'h', 'm', 'gid', # 'x0', 'y0', 'z0', 'u0', 'v0', 'w0', 'rho0']) fluid.set_lb_props(list(fluid.properties.keys())) #boundary.set_lb_props(['x', 'y', 'z', 'rho', 'h', 'm', 'gid', 'rho0']) #obstacle.set_lb_props(['x', 'y', 'z', 'rho', 'h', 'm', 'gid', 'rho0']) boundary.set_lb_props(list(boundary.properties.keys())) # boundary and obstacle particles can do with a reduced list of properties # to be saved to disk since they are fixed boundary.set_output_arrays( ['x', 'y', 'z', 'rho', 'm', 'h', 'p', 'tag', 'pid', 'gid']) if self.with_obstacle: obstacle.set_lb_props(list(obstacle.properties.keys())) obstacle.set_output_arrays( ['x', 'y', 'z', 'rho', 'm', 'h', 'p', 'tag', 'pid', 'gid']) return particles pysph-master/pysph/examples/cavity.py000066400000000000000000000167641356347341600203760ustar00rootroot00000000000000"""Lid driven cavity using the Transport Velocity formulation. 
(10 minutes) """ import os # PySPH imports from pysph.base.utils import get_particle_array from pysph.solver.application import Application from pysph.sph.scheme import TVFScheme, SchemeChooser from pysph.sph.wc.edac import EDACScheme # numpy import numpy as np # domain and reference values L = 1.0 Umax = 1.0 c0 = 10 * Umax rho0 = 1.0 p0 = c0 * c0 * rho0 # Numerical setup hdx = 1.0 class LidDrivenCavity(Application): def add_user_options(self, group): group.add_argument( "--nx", action="store", type=int, dest="nx", default=50, help="Number of points along x direction." ) group.add_argument( "--re", action="store", type=float, dest="re", default=100, help="Reynolds number (defaults to 100)." ) self.n_avg = 5 group.add_argument( "--n-vel-avg", action="store", type=int, dest="n_avg", default=None, help="Average velocities over these many saved timesteps." ) def consume_user_options(self): nx = self.options.nx if self.options.n_avg is not None: self.n_avg = self.options.n_avg self.dx = L / nx self.re = self.options.re h0 = hdx * self.dx self.nu = Umax * L / self.re dt_cfl = 0.25 * h0 / (c0 + Umax) dt_viscous = 0.125 * h0**2 / self.nu dt_force = 1.0 self.tf = 10.0 self.dt = min(dt_cfl, dt_viscous, dt_force) def configure_scheme(self): h0 = hdx * self.dx if self.options.scheme == 'tvf': self.scheme.configure(h0=h0, nu=self.nu) elif self.options.scheme == 'edac': self.scheme.configure(h=h0, nu=self.nu) self.scheme.configure_solver(tf=self.tf, dt=self.dt, pfreq=500) def create_scheme(self): tvf = TVFScheme( ['fluid'], ['solid'], dim=2, rho0=rho0, c0=c0, nu=None, p0=p0, pb=p0, h0=hdx ) edac = EDACScheme( fluids=['fluid'], solids=['solid'], dim=2, c0=c0, rho0=rho0, nu=0.0, pb=p0, eps=0.0, h=0.0, ) s = SchemeChooser(default='tvf', tvf=tvf, edac=edac) return s def create_particles(self): dx = self.dx ghost_extent = 5 * dx # create all the particles _x = np.arange(-ghost_extent - dx / 2, L + ghost_extent + dx / 2, dx) x, y = np.meshgrid(_x, _x) x = x.ravel() y = y.ravel() # 
sort out the fluid and the solid indices = [] for i in range(x.size): if ((x[i] > 0.0) and (x[i] < L)): if ((y[i] > 0.0) and (y[i] < L)): indices.append(i) # create the arrays solid = get_particle_array(name='solid', x=x, y=y) # remove the fluid particles from the solid fluid = solid.extract_particles(indices) fluid.set_name('fluid') solid.remove_particles(indices) print("Lid driven cavity :: Re = %d, dt = %g" % (self.re, self.dt)) # add requisite properties to the arrays: self.scheme.setup_properties([fluid, solid]) # setup the particle properties volume = dx * dx # mass is set to get the reference density of rho0 fluid.m[:] = volume * rho0 solid.m[:] = volume * rho0 # Set a reference rho also, some schemes will overwrite this with a # summation density. fluid.rho[:] = rho0 solid.rho[:] = rho0 # volume is set as dx^2 fluid.V[:] = 1. / volume solid.V[:] = 1. / volume # smoothing lengths fluid.h[:] = hdx * dx solid.h[:] = hdx * dx # imposed horizontal velocity on the lid solid.u[:] = 0.0 solid.v[:] = 0.0 for i in range(solid.get_number_of_particles()): if solid.y[i] > L: solid.u[i] = Umax return [fluid, solid] def post_process(self, info_fname): try: import matplotlib matplotlib.use('Agg') from matplotlib import pyplot as plt except ImportError: print("Post processing requires matplotlib.") return if self.rank > 0: return info = self.read_info(info_fname) if len(self.output_files) == 0: return t, ke = self._plot_ke_history() x, ui, vi, ui_c, vi_c = self._plot_velocity() res = os.path.join(self.output_dir, "results.npz") np.savez(res, t=t, ke=ke, x=x, u=ui, v=vi, u_c=ui_c, v_c=vi_c) def _plot_ke_history(self): from pysph.tools.pprocess import get_ke_history from matplotlib import pyplot as plt t, ke = get_ke_history(self.output_files, 'fluid') plt.clf() plt.plot(t, ke) plt.xlabel('t') plt.ylabel('Kinetic energy') fig = os.path.join(self.output_dir, "ke_history.png") plt.savefig(fig, dpi=300) return t, ke def _plot_velocity(self): from pysph.tools.interpolator import 
Interpolator from pysph.solver.utils import load from pysph.examples.ghia_cavity_data import get_u_vs_y, get_v_vs_x # interpolated velocities _x = np.linspace(0, 1, 101) xx, yy = np.meshgrid(_x, _x) # take the last solution data fname = self.output_files[-1] data = load(fname) tf = data['solver_data']['t'] interp = Interpolator(list(data['arrays'].values()), x=xx, y=yy) ui = np.zeros_like(xx) vi = np.zeros_like(xx) # Average out the velocities over the last n_avg timesteps for fname in self.output_files[-self.n_avg:]: data = load(fname) tf = data['solver_data']['t'] interp.update_particle_arrays(list(data['arrays'].values())) _u = interp.interpolate('u') _v = interp.interpolate('v') _u.shape = 101, 101 _v.shape = 101, 101 ui += _u vi += _v ui /= self.n_avg vi /= self.n_avg # velocity magnitude self.vmag = vmag = np.sqrt(ui**2 + vi**2) import matplotlib.pyplot as plt f = plt.figure() plt.streamplot( xx, yy, ui, vi, density=(2, 2), # linewidth=5*vmag/vmag.max(), color=vmag ) plt.xlim(0, 1) plt.ylim(0, 1) plt.colorbar() plt.xlabel('$x$') plt.ylabel('$y$') plt.title('Streamlines at %s seconds' % tf) fig = os.path.join(self.output_dir, 'streamplot.png') plt.savefig(fig, dpi=300) f = plt.figure() ui_c = ui[:, 50] vi_c = vi[50] s1 = plt.subplot(211) s1.plot(ui_c, _x, label='Computed') y, data = get_u_vs_y() if self.re in data: s1.plot(data[self.re], y, 'o', fillstyle='none', label='Ghia et al.') s1.set_xlabel(r'$v_x$') s1.set_ylabel(r'$y$') s1.legend() s2 = plt.subplot(212) s2.plot(_x, vi_c, label='Computed') x, data = get_v_vs_x() if self.re in data: s2.plot(x, data[self.re], 'o', fillstyle='none', label='Ghia et al.') s2.set_xlabel(r'$x$') s2.set_ylabel(r'$v_y$') s2.legend() fig = os.path.join(self.output_dir, 'centerline.png') plt.savefig(fig, dpi=300) return _x, ui, vi, ui_c, vi_c if __name__ == '__main__': app = LidDrivenCavity() app.run() app.post_process(app.info_filename) 
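The `cavity.py` example above picks its time step in `consume_user_options` as the minimum of the CFL, viscous and body-force limits. The following standalone sketch mirrors that computation with the module's default constants (`compute_dt` is a hypothetical helper name, not part of PySPH):

```python
# Defaults from the cavity.py module above.
L = 1.0
Umax = 1.0
c0 = 10 * Umax
hdx = 1.0


def compute_dt(nx, re):
    """Smallest of the CFL, viscous and body-force time-step limits,
    mirroring LidDrivenCavity.consume_user_options above."""
    dx = L / nx
    h0 = hdx * dx
    nu = Umax * L / re
    dt_cfl = 0.25 * h0 / (c0 + Umax)
    dt_viscous = 0.125 * h0 ** 2 / nu
    dt_force = 1.0
    return min(dt_cfl, dt_viscous, dt_force)


# With nx=50 and Re=100 the CFL limit is the binding one.
dt = compute_dt(nx=50, re=100)
print(dt)
```

For the default resolution the CFL limit (0.25 * 0.02 / 11) is smaller than both the viscous limit (5e-3) and the force limit (1.0), so it sets the step.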
# file: pysph-master/pysph/examples/couette.py
"""Couette flow using the transport velocity formulation (30 seconds).
"""

import os
import numpy as np

# PySPH imports
from pysph.base.nnps import DomainManager
from pysph.base.utils import get_particle_array
from pysph.solver.utils import load
from pysph.solver.application import Application
from pysph.sph.scheme import TVFScheme

# domain and reference values
Re = 0.0125
d = 0.5; Ly = 2*d; Lx = 0.4*Ly
rho0 = 1.0; nu = 0.01

# upper wall velocity based on the Reynolds number and channel width
Vmax = nu*Re/(2*d)
print(Vmax)
c0 = 10*Vmax; p0 = c0*c0*rho0

# Numerical setup
dx = 0.05
ghost_extent = 5 * dx
hdx = 1.0

# adaptive time steps
h0 = hdx * dx
dt_cfl = 0.25 * h0/(c0 + Vmax)
dt_viscous = 0.125 * h0**2/nu
dt_force = 1.0

tf = 100.0
dt = min(dt_cfl, dt_viscous, dt_force)


class CouetteFlow(Application):
    def create_domain(self):
        return DomainManager(xmin=0, xmax=Lx, periodic_in_x=True)

    def create_particles(self):
        _x = np.arange(dx/2, Lx, dx)

        # create the fluid particles
        _y = np.arange(dx/2, Ly, dx)

        x, y = np.meshgrid(_x, _y); fx = x.ravel(); fy = y.ravel()

        # create the channel particles at the top
        _y = np.arange(Ly+dx/2, Ly+dx/2+ghost_extent, dx)
        x, y = np.meshgrid(_x, _y); tx = x.ravel(); ty = y.ravel()

        # create the channel particles at the bottom
        _y = np.arange(-dx/2, -dx/2-ghost_extent, -dx)
        x, y = np.meshgrid(_x, _y); bx = x.ravel(); by = y.ravel()

        # concatenate the top and bottom arrays
        cx = np.concatenate((tx, bx))
        cy = np.concatenate((ty, by))

        # create the arrays
        channel = get_particle_array(
            name='channel', x=cx, y=cy, rho=rho0*np.ones_like(cx)
        )
        fluid = get_particle_array(
            name='fluid', x=fx, y=fy, rho=rho0*np.ones_like(fx)
        )

        print("Couette flow :: Re = %g, nfluid = %d, nchannel=%d, dt = %g"%(
            Re, fluid.get_number_of_particles(),
            channel.get_number_of_particles(), dt))

        self.scheme.setup_properties([fluid, channel])

        # setup the particle properties
        volume = dx * dx

        # mass is set to get the reference density of rho0
        fluid.m[:] = volume * rho0
        channel.m[:] = volume * rho0

        # volume is set as dx^2
        fluid.V[:] = 1./volume
        channel.V[:] = 1./volume

        # smoothing lengths
        fluid.h[:] = hdx * dx
        channel.h[:] = hdx * dx

        # channel velocity on upper portion
        indices = np.where(channel.y > d)[0]
        channel.u[indices] = Vmax

        # return the particle list
        return [fluid, channel]

    def create_scheme(self):
        s = TVFScheme(
            ['fluid'], ['channel'], dim=2, rho0=rho0, c0=c0, nu=nu,
            p0=p0, pb=p0, h0=dx*hdx
        )
        s.configure_solver(tf=tf, dt=dt)
        return s

    def post_process(self, info_fname):
        info = self.read_info(info_fname)
        if len(self.output_files) == 0:
            return

        import matplotlib
        matplotlib.use('Agg')
        y_ex, u_ex, y, u = self._plot_u_vs_y()
        t, ke = self._plot_ke_history()
        res = os.path.join(self.output_dir, "results.npz")
        np.savez(res, t=t, ke=ke, y_ex=y_ex, u_ex=u_ex, y=y, u=u)

    def _plot_ke_history(self):
        from pysph.tools.pprocess import get_ke_history
        from matplotlib import pyplot as plt
        t, ke = get_ke_history(self.output_files, 'fluid')
        plt.clf()
        plt.plot(t, ke)
        plt.xlabel('t'); plt.ylabel('Kinetic energy')
        fig = os.path.join(self.output_dir, "ke_history.png")
        plt.savefig(fig, dpi=300)
        return t, ke

    def _plot_u_vs_y(self):
        files = self.output_files

        # take the last solution data
        fname = files[-1]
        data = load(fname)
        tf = data['solver_data']['t']
        fluid = data['arrays']['fluid']
        yp = fluid.y.copy()
        up = fluid.u.copy()

        # exact linear profile for the u-velocity
        ye = np.linspace(0, 1, 101)
        ue = Vmax*ye/Ly

        from matplotlib import pyplot as plt
        plt.clf()
        plt.plot(ye, ue, label="exact")
        plt.plot(yp, up, 'ko', fillstyle='none', label="computed")
        plt.xlabel('y'); plt.ylabel('u')
        plt.legend()
        plt.title('Velocity profile at %s'%tf)
        fig = os.path.join(self.output_dir, "comparison.png")
        plt.savefig(fig, dpi=300)
        return ye, ue, yp, up


if __name__ == '__main__':
    app = CouetteFlow()
    app.run()
    app.post_process(app.info_filename)
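The post-processing in `couette.py` compares the computed velocities against the steady planar Couette profile, which is linear: u(y) = Vmax * y / Ly. A minimal standalone check of that formula, using the module's default constants (`exact_couette_u` is a hypothetical helper name, not part of PySPH):

```python
# Defaults from the couette.py module above.
Re = 0.0125
d = 0.5
Ly = 2 * d
nu = 0.01
Vmax = nu * Re / (2 * d)  # upper-wall speed derived from the Reynolds number


def exact_couette_u(y):
    """Steady planar Couette profile: linear from 0 at y=0 to Vmax at y=Ly."""
    return Vmax * y / Ly


# The profile is linear, so the midpoint velocity is half the wall speed.
print(exact_couette_u(0.0))         # 0.0
print(exact_couette_u(Ly) == Vmax)  # True
```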
# file: pysph-master/pysph/examples/cube.py
"""A very simple example to help benchmark PySPH. (2 minutes)

The example creates a cube shaped block of water falling in free-space under
the influence of gravity while solving the incompressible, inviscid flow
equations. Only 5 time steps are solved but with a million particles. It is
easy to change the number of particles by simply passing the command line
argument --np to a desired number::

    $ pysph run cube --np 2e6

To check the performance of PySPH using OpenMP one could try the following::

    $ pysph run cube --disable-output
    $ pysph run cube --disable-output --openmp

"""

import numpy

from pysph.base.kernels import CubicSpline
from pysph.base.utils import get_particle_array_wcsph
from pysph.solver.application import Application
from pysph.sph.scheme import WCSPHScheme

rho0 = 1000.0


class Cube(Application):
    def add_user_options(self, group):
        group.add_argument(
            "--np", action="store", type=float, dest="np", default=int(1e5),
            help="Number of particles in the cube (1e5 by default)."
        )

    def consume_user_options(self):
        self.hdx = 1.5
        self.dx = 1.0/pow(self.options.np, 1.0/3.0)

    def configure_scheme(self):
        self.scheme.configure(h0=self.hdx*self.dx, hdx=self.hdx)
        kernel = CubicSpline(dim=3)
        dt = 1e-4
        tf = 5e-4
        self.scheme.configure_solver(kernel=kernel, tf=tf, dt=dt)

    def create_scheme(self):
        co = 10.0
        s = WCSPHScheme(
            ['fluid'], [], dim=3, rho0=rho0, c0=co, h0=0.1, hdx=1.5,
            gz=-9.81, gamma=7.0, alpha=0.5, beta=0.0
        )
        return s

    def create_particles(self):
        dx = self.dx
        hdx = self.hdx
        xmin, xmax = 0.0, 1.0
        ymin, ymax = 0.0, 1.0
        zmin, zmax = 0.0, 1.0
        x, y, z = numpy.mgrid[xmin:xmax:dx, ymin:ymax:dx, zmin:zmax:dx]
        x = x.ravel()
        y = y.ravel()
        z = z.ravel()

        # set up particle properties
        h0 = hdx * dx
        volume = dx**3
        m0 = rho0 * volume

        fluid = get_particle_array_wcsph(name='fluid', x=x, y=y, z=z)
        fluid.m[:] = m0
        fluid.h[:] = h0
        fluid.rho[:] = rho0

        #nnps = LinkedListNNPS(dim=3, particles=[fluid])
        #nnps.spatially_order_particles(0)

        print("Number of particles:", x.size)
        fluid.set_lb_props(list(fluid.properties.keys()))
        return [fluid]


if __name__ == '__main__':
    app = Cube()
    app.run()

# file: pysph-master/pysph/examples/dam_break_2d.py
"""Two-dimensional dam break over a dry bed. (30 minutes)

The case is described in "State of the art classical SPH for free surface
flows", Moncho Gomez-Gesteira, Benedict D Rogers, Robert A. Dalrymple and
Alex J. C. Crespo, Journal of Hydraulic Research, Vol 48, Extra Issue (2010),
pp 6-27.
DOI:10.1080/00221686.2010.9641242 """ import os import numpy as np from pysph.base.kernels import WendlandQuintic, QuinticSpline from pysph.base.utils import get_particle_array from pysph.solver.application import Application from pysph.sph.scheme import WCSPHScheme, SchemeChooser, AdamiHuAdamsScheme from pysph.sph.wc.edac import EDACScheme from pysph.sph.iisph import IISPHScheme from pysph.sph.equation import Group from pysph.sph.wc.kernel_correction import (GradientCorrectionPreStep, GradientCorrection, MixedKernelCorrectionPreStep) from pysph.sph.wc.crksph import CRKSPHPreStep, CRKSPH from pysph.sph.wc.gtvf import GTVFScheme from pysph.sph.isph.sisph import SISPHScheme from pysph.tools.geometry import get_2d_tank, get_2d_block fluid_column_height = 2.0 fluid_column_width = 1.0 container_height = 4.0 container_width = 4.0 nboundary_layers = 2 nu = 0.0 dx = 0.03 g = 9.81 ro = 1000.0 vref = np.sqrt(2 * 9.81 * fluid_column_height) co = 10.0 * vref gamma = 7.0 alpha = 0.1 beta = 0.0 B = co * co * ro / gamma p0 = 1000.0 hdx = 1.3 h = hdx * dx m = dx**2 * ro class DamBreak2D(Application): def add_user_options(self, group): corrections = ['', 'mixed-corr', 'grad-corr', 'kernel-corr', 'crksph'] group.add_argument( '--dx', action='store', type=float, dest='dx', default=dx, help='Particle spacing.' ) group.add_argument( '--hdx', action='store', type=float, dest='hdx', default=hdx, help='Specify the hdx factor where h = hdx * dx.' 
) group.add_argument( "--kernel-corr", action="store", type=str, dest='kernel_corr', default='', help="Type of Kernel Correction", choices=corrections ) group.add_argument( '--staggered-grid', action="store_true", dest='staggered_grid', default=False, help="Use a staggered grid for particles.", ) def consume_user_options(self): self.hdx = self.options.hdx self.dx = self.options.dx self.h = self.hdx * self.dx self.kernel_corr = self.options.kernel_corr print("Using h = %f" % self.h) def configure_scheme(self): tf = 2.5 kw = dict( tf=tf, output_at_times=[0.4, 0.6, 0.8, 1.0] ) if self.options.scheme == 'wcsph': dt = 0.125 * self.h / co self.scheme.configure(h0=self.h, hdx=self.hdx) kernel = WendlandQuintic(dim=2) from pysph.sph.integrator import PECIntegrator kw.update( dict( integrator_cls=PECIntegrator, kernel=kernel, adaptive_timestep=True, n_damp=50, fixed_h=False, dt=dt ) ) elif self.options.scheme == 'aha': self.scheme.configure(h0=self.h) kernel = QuinticSpline(dim=2) dt = 0.125 * self.h / co kw.update( dict( kernel=kernel, dt=dt ) ) print("dt = %f" % dt) elif self.options.scheme == 'edac': self.scheme.configure(h=self.h) kernel = QuinticSpline(dim=2) dt = 0.125 * self.h / co kw.update( dict( kernel=kernel, dt=dt ) ) print("dt = %f" % dt) elif self.options.scheme == 'iisph': kernel = QuinticSpline(dim=2) dt = 0.125 * 10 * self.h / co kw.update( dict( kernel=kernel, dt=dt, adaptive_timestep=True ) ) print("dt = %f" % dt) elif self.options.scheme == 'gtvf': scheme = self.scheme kernel = QuinticSpline(dim=2) dt = 0.125 * self.h / co kw.update(dict(kernel=kernel, dt=dt)) scheme.configure(pref=B*gamma, h0=self.h) print("dt = %f" % dt) elif self.options.scheme == 'sisph': dt = 0.125*self.h/vref kernel = QuinticSpline(dim=2) print("SISPH dt = %f" % dt) kw.update(dict(kernel=kernel)) self.scheme.configure_solver( dt=dt, tf=tf, adaptive_timestep=False, pfreq=10, ) self.scheme.configure_solver(**kw) def create_scheme(self): wcsph = WCSPHScheme( ['fluid'], ['boundary'], 
dim=2, rho0=ro, c0=co, h0=h, hdx=1.3, gy=-9.81, alpha=alpha, beta=beta, gamma=gamma, hg_correction=True, update_h=True ) aha = AdamiHuAdamsScheme( fluids=['fluid'], solids=['boundary'], dim=2, c0=co, nu=nu, rho0=ro, h0=h, p0=0.0, gy=-g, gamma=1.0, tdamp=0.0, alpha=alpha ) edac = EDACScheme( fluids=['fluid'], solids=['boundary'], dim=2, c0=co, nu=nu, rho0=ro, h=h, pb=0.0, gy=-g, eps=0.0, clamp_p=True ) iisph = IISPHScheme( fluids=['fluid'], solids=['boundary'], dim=2, nu=nu, rho0=ro, gy=-g ) gtvf = GTVFScheme( fluids=['fluid'], solids=['boundary'], dim=2, nu=nu, rho0=ro, gy=-g, h0=None, c0=co, pref=None ) sisph = SISPHScheme( fluids=['fluid'], solids=['boundary'], dim=2, nu=nu, c0=co, rho0=ro, alpha=0.05, gy=-g, pref=ro*co**2, internal_flow=False, hg_correction=True, gtvf=True, symmetric=True ) s = SchemeChooser(default='wcsph', wcsph=wcsph, aha=aha, edac=edac, iisph=iisph, gtvf=gtvf, sisph=sisph) return s def create_equations(self): eqns = self.scheme.get_equations() if self.options.scheme == 'iisph' or self.options.scheme == 'sisph': return eqns if self.options.scheme == 'gtvf': return eqns n = len(eqns) if self.kernel_corr == 'grad-corr': eqn1 = Group(equations=[ GradientCorrectionPreStep('fluid', ['fluid', 'boundary']) ], real=False) for i in range(n): eqn2 = GradientCorrection('fluid', ['fluid', 'boundary']) eqns[i].equations.insert(0, eqn2) eqns.insert(0, eqn1) elif self.kernel_corr == 'mixed-corr': eqn1 = Group(equations=[ MixedKernelCorrectionPreStep('fluid', ['fluid', 'boundary']) ], real=False) for i in range(n): eqn2 = GradientCorrection('fluid', ['fluid', 'boundary']) eqns[i].equations.insert(0, eqn2) eqns.insert(0, eqn1) elif self.kernel_corr == 'crksph': eqn1 = Group(equations=[ CRKSPHPreStep('fluid', ['fluid', 'boundary']), CRKSPHPreStep('boundary', ['fluid', 'boundary']) ], real=False) for i in range(n): eqn2 = CRKSPH('fluid', ['fluid', 'boundary']) eqn3 = CRKSPH('boundary', ['fluid', 'boundary']) eqns[i].equations.insert(0, eqn3) 
eqns[i].equations.insert(0, eqn2) eqns.insert(0, eqn1) return eqns def create_particles(self): if self.options.staggered_grid: nboundary_layers = 2 nfluid_offset = 2 wall_hex_pack = True else: nboundary_layers = 4 nfluid_offset = 1 wall_hex_pack = False xt, yt = get_2d_tank(dx=self.dx, length=container_width, height=container_height, base_center=[2, 0], num_layers=nboundary_layers) xf, yf = get_2d_block(dx=self.dx, length=fluid_column_width, height=fluid_column_height, center=[0.5, 1]) xf += self.dx yf += self.dx fluid = get_particle_array(name='fluid', x=xf, y=yf, h=h, m=m, rho=ro) boundary = get_particle_array(name='boundary', x=xt, y=yt, h=h, m=m, rho=ro) self.scheme.setup_properties([fluid, boundary]) if self.options.scheme == 'iisph': # the default position tends to cause the particles to be pushed # away from the wall, so displacing it by a tiny amount helps. fluid.x += self.dx / 4 # Adding extra properties for kernel correction corr = self.kernel_corr if corr == 'kernel-corr' or corr == 'mixed-corr': fluid.add_property('cwij') boundary.add_property('cwij') if corr == 'mixed-corr' or corr == 'grad-corr': fluid.add_property('m_mat', stride=9) boundary.add_property('m_mat', stride=9) elif corr == 'crksph': fluid.add_property('ai') boundary.add_property('ai') fluid.add_property('gradbi', stride=9) boundary.add_property('gradbi', stride=9) for prop in ['gradai', 'bi']: fluid.add_property(prop, stride=3) boundary.add_property(prop, stride=3) return [fluid, boundary] def post_process(self, info_fname): self.read_info(info_fname) if len(self.output_files) == 0: return from pysph.solver.utils import iter_output t, x_max = [], [] factor = np.sqrt(2.0 * 9.81 / fluid_column_width) files = self.output_files for sd, array in iter_output(files, 'fluid'): t.append(sd['t'] * factor) x = array.get('x') x_max.append(x.max()) t, x_max = list(map(np.asarray, (t, x_max))) fname = os.path.join(self.output_dir, 'results.npz') np.savez(fname, t=t, x_max=x_max) import matplotlib 
        matplotlib.use('Agg')
        from matplotlib import pyplot as plt
        from pysph.examples import db_exp_data as dbd

        plt.plot(t, x_max, label='Computed')
        te, xe = dbd.get_koshizuka_oka_data()
        plt.plot(te, xe, 'o', label='Koshizuka & Oka (1996)')
        plt.xlim(0, 0.7 * factor)
        plt.ylim(0, 4.5)
        plt.xlabel('$T$')
        plt.ylabel('$Z/L$')
        plt.legend(loc='upper left')
        plt.savefig(os.path.join(self.output_dir, 'x_vs_t.png'), dpi=300)
        plt.close()

    def customize_output(self):
        self._mayavi_config('''
        b = particle_arrays['fluid']
        b.scalar = 'vmag'
        ''')


if __name__ == '__main__':
    app = DamBreak2D()
    app.run()
    app.post_process(app.info_filename)

# file: pysph-master/pysph/examples/dam_break_3d.py
"""Three-dimensional dam break over a dry bed. (14 hours)

The case is described as a SPHERIC benchmark
https://wiki.manchester.ac.uk/spheric/index.php/Test2

By default the simulation runs for 6 seconds of simulation time.
"""

import numpy as np

from pysph.base.kernels import WendlandQuintic
from pysph.examples._db_geometry import DamBreak3DGeometry
from pysph.solver.application import Application
from pysph.sph.integrator import EPECIntegrator
from pysph.sph.scheme import WCSPHScheme

dim = 3

dt = 1e-5
tf = 6.0

# parameter to change the resolution
dx = 0.02
nboundary_layers = 1
hdx = 1.3
ro = 1000.0
h0 = dx * hdx
gamma = 7.0
alpha = 0.25
beta = 0.0
c0 = 10.0 * np.sqrt(2.0 * 9.81 * 0.55)


class DamBreak3D(Application):
    def add_user_options(self, group):
        group.add_argument(
            '--dx', action='store', type=float, dest='dx', default=dx,
            help='Particle spacing.'
        )
        group.add_argument(
            '--hdx', action='store', type=float, dest='hdx', default=hdx,
            help='Specify the hdx factor where h = hdx * dx.'
        )

    def consume_user_options(self):
        dx = self.options.dx
        self.dx = dx
        self.hdx = self.options.hdx
        self.geom = DamBreak3DGeometry(
            dx=dx, nboundary_layers=nboundary_layers, hdx=self.hdx, rho0=ro
        )
        self.co = 10.0 * self.geom.get_max_speed(g=9.81)

    def create_scheme(self):
        s = WCSPHScheme(
            ['fluid'], ['boundary', 'obstacle'], dim=dim, rho0=ro, c0=c0,
            h0=h0, hdx=hdx, gz=-9.81, alpha=alpha, beta=beta, gamma=gamma,
            hg_correction=True, tensile_correction=False
        )
        return s

    def configure_scheme(self):
        s = self.scheme
        hdx = self.hdx
        kernel = WendlandQuintic(dim=dim)
        h0 = self.dx * hdx
        s.configure(h0=h0, hdx=hdx)
        dt = 0.25*h0/(1.1 * self.co)
        s.configure_solver(
            kernel=kernel, integrator_cls=EPECIntegrator, tf=tf, dt=dt,
            adaptive_timestep=True, n_damp=50,
            output_at_times=[0.4, 0.6, 1.0]
        )

    def create_particles(self):
        return self.geom.create_particles()

    def customize_output(self):
        self._mayavi_config('''
        viewer.scalar = 'u'
        b = particle_arrays['boundary']
        b.plot.actor.mapper.scalar_visibility = False
        b.plot.actor.property.opacity = 0.1
        ''')


if __name__ == '__main__':
    app = DamBreak3D()
    app.run()


pysph-master/pysph/examples/db_exp_data.py
"""Experimental data for the dam break problem.

The data is extracted from:

Martin and Moyce 1952
"An Experimental Study of the Collapse of Liquid Columns on a Rigid
Horizontal Plane", J. C. Martin and W. J. Moyce,
Philosophical Transactions of the Royal Society of London Series A,
244, 312--324 (1952).

and

"Moving-Particle Semi-Implicit Method for Fragmentation of Incompressible
fluid", S. Koshizuka and Y. Oka, Nuclear Science and Engineering, 123,
421--434 (1996).

"""

import numpy as np
from io import StringIO

# This is the data for n^2=2 and a=1.125 from Figure 3.
mm_data_1 = u"""
0.849 1.245
1.212 1.443
1.602 1.884
2.283 2.689
2.950 3.728
3.598 4.528
3.905 4.999
4.592 5.841
4.961 6.271
5.316 6.717
"""

# This is the data for n^2=2 and a=2.25 from Figure 3.
mm_data_2 = u"""
0.832 1.217
1.219 1.474
1.997 2.292
2.547 2.995
3.345 4.134
4.034 4.944
4.418 5.881
5.091 6.980
5.685 7.945
6.306 8.966
6.822 9.986
7.439 10.963
8.031 11.977
8.633 13.005
9.237 13.970
"""

ko_data = u"""
0.0 1.000
0.381 1.111
0.769 1.252
1.153 1.505
1.537 1.892
1.935 2.241
2.323 2.615
2.719 3.003
3.096 3.624
"""

ko_mps_data = u"""
0.000 1.002
0.227 1.019
0.416 1.091
0.591 1.205
0.778 1.351
0.958 1.512
1.095 1.637
1.226 1.771
1.381 1.931
1.536 2.100
1.684 2.268
1.858 2.480
2.043 2.707
2.278 3.004
2.451 3.251
2.604 3.481
2.752 3.700
2.943 3.997
"""


def get_martin_moyce_1():
    """Returns t*sqrt(2*g/a), z/a for the case where a = 1.125 inches
    """
    # z/a vs t*np.sqrt(2*g/a)
    t, z = np.loadtxt(StringIO(mm_data_1), unpack=True)
    return t, z


def get_martin_moyce_2():
    """Returns t*sqrt(2*g/a), z/a for the case where a = 2.25 inches
    """
    # z/a vs t*np.sqrt(2*g/a)
    t, z = np.loadtxt(StringIO(mm_data_2), unpack=True)
    return t, z


def get_koshizuka_oka_data():
    # z/L vs t*np.sqrt(2*g/L)
    t, z = np.loadtxt(StringIO(ko_data), unpack=True)
    return t, z


def get_koshizuka_oka_mps_data():
    """These are computational results using the MPS scheme.
    """
    # z/L vs t*np.sqrt(2*g/L)
    t, z = np.loadtxt(StringIO(ko_mps_data), unpack=True)
    return t, z


pysph-master/pysph/examples/elliptical_drop.py
"""Evolution of a circular patch of incompressible fluid. (60 seconds)

See J. J. Monaghan "Simulating Free Surface Flows with SPH", JCP, 1994, 100,
pp 399 - 406

An initially circular patch of fluid is subjected to a velocity profile that
causes it to deform into an ellipse. Incompressibility causes the initially
circular patch to deform into an ellipse such that the area is conserved. An
analytical solution for the locus of the patch is available (exact_solution)

This is a standard test for the formulations for the incompressible SPH
equations.
""" from __future__ import print_function import os from numpy import ones_like, mgrid, sqrt import numpy as np # PySPH base and carray imports from pysph.base.utils import get_particle_array from pysph.base.kernels import Gaussian # PySPH solver and integrator from pysph.solver.application import Application from pysph.sph.integrator import EPECIntegrator from pysph.sph.scheme import WCSPHScheme, SchemeChooser from pysph.sph.iisph import IISPHScheme def _derivative(x, t): A, a = x Anew = A*A*(a**4 - 1)/(a**4 + 1) anew = -a*A return np.array((Anew, anew)) def _scipy_integrate(y0, tf, dt): from scipy.integrate import odeint result = odeint(_derivative, y0, [0.0, tf]) return result[-1] def _numpy_integrate(y0, tf, dt): t = 0.0 y = y0 while t <= tf: t += dt y += dt*_derivative(y, t) return y def exact_solution(tf=0.0075, dt=1e-6, n=101): """Exact solution for the locus of the circular patch. n is the number of points to find the result at. Returns the semi-minor axis, A, pressure, x, y. Where x, y are the points corresponding to the ellipse. """ import numpy y0 = np.array([100.0, 1.0]) try: from scipy.integrate import odeint except ImportError: Anew, anew = _numpy_integrate(y0, tf, dt) else: Anew, anew = _scipy_integrate(y0, tf, dt) dadt = _derivative([Anew, anew], tf)[0] po = 0.5*-anew**2 * (dadt - Anew**2) theta = numpy.linspace(0, 2*numpy.pi, n) return anew, Anew, po, anew*numpy.cos(theta), 1/anew*numpy.sin(theta) class EllipticalDrop(Application): def initialize(self): self.co = 1400.0 self.ro = 1.0 self.hdx = 1.3 self.dx = 0.025 self.alpha = 0.1 def add_user_options(self, group): group.add_argument( "--nx", action="store", type=int, dest="nx", default=40, help="Number of points along x direction. 
(default 40)" ) def consume_user_options(self): self.dx = 1.0/self.options.nx def create_scheme(self): wcsph = WCSPHScheme( ['fluid'], [], dim=2, rho0=self.ro, c0=self.co, h0=self.dx*self.hdx, hdx=self.hdx, gamma=7.0, alpha=0.1, beta=0.0 ) iisph = IISPHScheme( ['fluid'], [], dim=2, rho0=self.ro ) s = SchemeChooser(default='wcsph', wcsph=wcsph, iisph=iisph) return s def configure_scheme(self): scheme = self.scheme kernel = Gaussian(dim=2) tf = 0.0076 dt = 0.25*self.hdx*self.dx/(141 + self.co) if self.options.scheme == 'wcsph': scheme.configure(h0=self.hdx*self.dx) scheme.configure_solver( kernel=kernel, integrator_cls=EPECIntegrator, dt=dt, tf=tf, adaptive_timestep=True, cfl=0.3, n_damp=50, output_at_times=[0.0008, 0.0038] ) elif self.options.scheme == 'iisph': scheme.configure_solver( kernel=kernel, dt=dt, tf=tf, adaptive_timestep=True, output_at_times=[0.0008, 0.0038] ) def create_particles(self): """Create the circular patch of fluid.""" dx = self.dx hdx = self.hdx co = self.co ro = self.ro name = 'fluid' x, y = mgrid[-1.05:1.05+1e-4:dx, -1.05:1.05+1e-4:dx] x = x.ravel() y = y.ravel() m = ones_like(x)*dx*dx*ro h = ones_like(x)*hdx*dx rho = ones_like(x) * ro u = -100*x v = 100*y # remove particles outside the circle indices = [] for i in range(len(x)): if sqrt(x[i]*x[i] + y[i]*y[i]) - 1 > 1e-10: indices.append(i) pa = get_particle_array(x=x, y=y, m=m, rho=rho, h=h, u=u, v=v, name=name) pa.remove_particles(indices) print("Elliptical drop :: %d particles" % (pa.get_number_of_particles())) mu = ro*self.alpha*hdx*dx*co/8.0 print("Effective viscosity: rho*alpha*h*c/8 = %s" % mu) self.scheme.setup_properties([pa]) return [pa] def _make_final_plot(self): try: import matplotlib matplotlib.use('Agg') from matplotlib import pyplot as plt except ImportError: print("Post processing requires matplotlib.") return last_output = self.output_files[-1] from pysph.solver.utils import load data = load(last_output) pa = data['arrays']['fluid'] tf = data['solver_data']['t'] a, A, po, 
xe, ye = exact_solution(tf) print("At tf=%s" % tf) print("Semi-major axis length (exact, computed) = %s, %s" % (1.0/a, max(pa.y))) plt.plot(xe, ye) plt.scatter(pa.x, pa.y, marker='.') plt.ylim(-2, 2) plt.xlim(plt.ylim()) plt.title("Particles at %s secs" % tf) plt.xlabel('x') plt.ylabel('y') fig = os.path.join(self.output_dir, "comparison.png") plt.savefig(fig, dpi=300) print("Figure written to %s." % fig) def _compute_results(self): from pysph.solver.utils import iter_output from collections import defaultdict data = defaultdict(list) for sd, array in iter_output(self.output_files, 'fluid'): _t = sd['t'] data['t'].append(_t) m, u, v, x, y = array.get('m', 'u', 'v', 'x', 'y') vmag2 = u**2 + v**2 data['ke'].append(0.5*np.sum(m*vmag2)) data['xmax'].append(x.max()) data['ymax'].append(y.max()) a, A, po, _xe, _ye = exact_solution(_t, n=0) data['minor'].append(a) data['major'].append(1.0/a) for key in data: data[key] = np.asarray(data[key]) fname = os.path.join(self.output_dir, 'results.npz') np.savez(fname, **data) def post_process(self, info_file_or_dir): if self.rank > 0: return self.read_info(info_file_or_dir) if len(self.output_files) == 0: return self._compute_results() self._make_final_plot() if __name__ == '__main__': app = EllipticalDrop() app.run() app.post_process(app.info_filename) pysph-master/pysph/examples/elliptical_drop_no_scheme.py000066400000000000000000000064661356347341600242630ustar00rootroot00000000000000"""Evolution of a circular patch of incompressible fluid. (60 seconds) This shows how one can explicitly setup equations and the solver instead of using a scheme. 
""" from __future__ import print_function from numpy import ones_like, mgrid # PySPH base and carray imports from pysph.base.utils import get_particle_array_wcsph from pysph.base.kernels import Gaussian from pysph.solver.solver import Solver from pysph.sph.integrator import EPECIntegrator from pysph.sph.integrator_step import WCSPHStep from pysph.sph.equation import Group from pysph.sph.basic_equations import XSPHCorrection, ContinuityEquation from pysph.sph.wc.basic import TaitEOS, MomentumEquation from pysph.examples.elliptical_drop import EllipticalDrop as EDScheme class EllipticalDrop(EDScheme): def create_scheme(self): # Don't create a scheme as done in the parent example class. return None def create_particles(self): """Create the circular patch of fluid.""" dx = self.dx hdx = self.hdx ro = self.ro name = 'fluid' x, y = mgrid[-1.05:1.05+1e-4:dx, -1.05:1.05+1e-4:dx] # Get the particles inside the circle. condition = ~((x*x + y*y - 1.0) > 1e-10) x = x[condition].ravel() y = y[condition].ravel() m = ones_like(x)*dx*dx*ro h = ones_like(x)*hdx*dx rho = ones_like(x) * ro u = -100*x v = 100*y pa = get_particle_array_wcsph(x=x, y=y, m=m, rho=rho, h=h, u=u, v=v, name=name) print("Elliptical drop :: %d particles" % (pa.get_number_of_particles())) # add requisite variables needed for this formulation for name in ('arho', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0'): pa.add_property(name) # set the output property arrays pa.set_output_arrays(['x', 'y', 'u', 'v', 'rho', 'm', 'h', 'p', 'pid', 'tag', 'gid']) return [pa] def create_solver(self): print("Create our own solver.") kernel = Gaussian(dim=2) integrator = EPECIntegrator(fluid=WCSPHStep()) dt = 5e-6 tf = 0.0076 solver = Solver(kernel=kernel, dim=2, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=True, cfl=0.3, n_damp=50, output_at_times=[0.0008, 0.0038]) return solver def create_equations(self): print("Create our own equations.") equations = [ Group( equations=[ TaitEOS( 
                        dest='fluid', sources=None, rho0=self.ro,
                        c0=self.co, gamma=7.0
                    ),
                ],
                real=False
            ),
            Group(equations=[
                ContinuityEquation(dest='fluid', sources=['fluid']),
                MomentumEquation(
                    dest='fluid', sources=['fluid'], alpha=self.alpha,
                    beta=0.0, c0=self.co
                ),
                XSPHCorrection(dest='fluid', sources=['fluid']),
            ]),
        ]
        return equations


if __name__ == '__main__':
    app = EllipticalDrop()
    app.run()
    app.post_process(app.info_filename)


pysph-master/pysph/examples/elliptical_drop_simple.py
"""Evolution of a circular patch of incompressible fluid. (30 seconds)

This is the simplest implementation using existing schemes.

See J. J. Monaghan "Simulating Free Surface Flows with SPH", JCP, 1994, 100,
pp 399 - 406
"""
from __future__ import print_function

from numpy import ones_like, mgrid, sqrt

from pysph.base.utils import get_particle_array
from pysph.solver.application import Application
from pysph.sph.scheme import WCSPHScheme


class EllipticalDrop(Application):
    def initialize(self):
        self.co = 1400.0
        self.ro = 1.0
        self.hdx = 1.3
        self.dx = 0.025
        self.alpha = 0.1

    def create_scheme(self):
        s = WCSPHScheme(
            ['fluid'], [], dim=2, rho0=self.ro, c0=self.co,
            h0=self.dx*self.hdx, hdx=self.hdx, gamma=7.0, alpha=0.1, beta=0.0
        )
        dt = 5e-6
        tf = 0.0076
        s.configure_solver(dt=dt, tf=tf)
        return s

    def create_particles(self):
        """Create the circular patch of fluid."""
        dx = self.dx
        hdx = self.hdx
        ro = self.ro
        name = 'fluid'
        x, y = mgrid[-1.05:1.05+1e-4:dx, -1.05:1.05+1e-4:dx]
        x = x.ravel()
        y = y.ravel()

        m = ones_like(x)*dx*dx*ro
        h = ones_like(x)*hdx*dx
        rho = ones_like(x) * ro
        u = -100*x
        v = 100*y

        # remove particles outside the circle
        indices = []
        for i in range(len(x)):
            if sqrt(x[i]*x[i] + y[i]*y[i]) - 1 > 1e-10:
                indices.append(i)

        pa = get_particle_array(x=x, y=y, m=m, rho=rho, h=h, u=u, v=v,
                                name=name)
        pa.remove_particles(indices)

        print("Elliptical drop :: %d particles"
              % (pa.get_number_of_particles()))

        self.scheme.setup_properties([pa])
        return [pa]


if __name__ == '__main__':
    app = EllipticalDrop()
    app.run()


pysph-master/pysph/examples/flow_past_cylinder_2d.py
"""
Flow past cylinder
"""
import numpy as np
import os

from pysph.base.kernels import QuinticSpline
from pysph.sph.equation import Equation
from pysph.base.utils import get_particle_array
from pysph.solver.application import Application
from pysph.tools import geometry as G
from pysph.sph.wc.edac import EDACScheme
from pysph.sph.bc.inlet_outlet_manager import (
    InletInfo, OutletInfo)

# Fluid mechanical/numerical parameters
rho = 1000
umax = 1.0
c0 = 10 * umax
p0 = rho * c0 * c0


class SolidWallNoSlipBCReverse(Equation):
    def __init__(self, dest, sources, nu):
        self.nu = nu
        super(SolidWallNoSlipBCReverse, self).__init__(dest, sources)

    def initialize(self, d_idx, d_auf, d_avf, d_awf):
        d_auf[d_idx] = 0.0
        d_avf[d_idx] = 0.0
        d_awf[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_m, d_rho, s_rho, d_V, s_V, d_ug, d_vg,
             d_wg, d_auf, d_avf, d_awf, s_u, s_v, s_w, DWIJ, R2IJ, EPS,
             XIJ):
        # averaged shear viscosity Eq. (6).
        etai = self.nu * d_rho[d_idx]
        etaj = self.nu * s_rho[s_idx]
        etaij = 2 * (etai * etaj)/(etai + etaj)

        # particle volumes; d_V inverse volume.
        Vi = 1./d_V[d_idx]
        Vj = 1./s_V[s_idx]
        Vi2 = Vi * Vi
        Vj2 = Vj * Vj

        # scalar part of the kernel gradient
        Fij = XIJ[0]*DWIJ[0] + XIJ[1]*DWIJ[1] + XIJ[2]*DWIJ[2]

        # viscous contribution (third term) from Eq. (8), with VIJ
        # defined appropriately using the ghost values
        tmp = 1./d_m[d_idx] * (Vi2 + Vj2) * (etaij * Fij/(R2IJ + EPS))
        d_auf[d_idx] += tmp * (d_ug[d_idx] - s_u[s_idx])
        d_avf[d_idx] += tmp * (d_vg[d_idx] - s_v[s_idx])
        d_awf[d_idx] += tmp * (d_wg[d_idx] - s_w[s_idx])


class ResetInletVelocity(Equation):
    def __init__(self, dest, sources, U, V, W):
        self.U = U
        self.V = V
        self.W = W
        super(ResetInletVelocity, self).__init__(dest, sources)

    def loop(self, d_idx, d_u, d_v, d_w, d_x, d_y, d_z, d_xn, d_yn, d_zn,
             d_uref):
        if d_idx == 0:
            d_uref[0] = self.U
        d_u[d_idx] = self.U
        d_v[d_idx] = self.V
        d_w[d_idx] = self.W


class WindTunnel(Application):
    def initialize(self):
        # Geometric parameters
        self.Lt = 30.0  # length of tunnel
        self.Wt = 15.0  # half width of tunnel
        self.dc = 1.2  # diameter of cylinder
        self.cxy = 10., 0.0  # center of cylinder
        self.nl = 10  # Number of layers for wall/inlet/outlet
        self.io_method = 'donothing'

    def add_user_options(self, group):
        group.add_argument(
            "--re", action="store", type=float, dest="re", default=200,
            help="Reynolds number."
        )
        group.add_argument(
            "--hdx", action="store", type=float, dest="hdx", default=1.2,
            help="Ratio h/dx."
        )
        group.add_argument(
            "--nx", action="store", type=int, dest="nx", default=12,
            help="Number of points in 1D of the cylinder."
        )
        group.add_argument(
            "--lt", action="store", type=float, dest="Lt", default=30,
            help="Length of the WindTunnel."
        )
        group.add_argument(
            "--wt", action="store", type=float, dest="Wt", default=15,
            help="Half width of the WindTunnel."
        )
        group.add_argument(
            "--dc", action="store", type=float, dest="dc", default=1.2,
            help="Diameter of the cylinder."
        )
        group.add_argument(
            "--io-method", action="store", type=str, dest="io_method",
            default='donothing', help="'donothing', 'mirror',"
            "or 'characteristic', 'mod_donothing', hybrid."
        )

    def consume_user_options(self):
        self.dc = dc = self.options.dc
        self.Lt = self.options.Lt/2 * dc
        self.Wt = self.options.Wt/2 * dc
        self.io_method = self.options.io_method
        nx = self.options.nx
        re = self.options.re

        self.nu = nu = umax * self.dc / re
        self.cxy = 5.*self.dc, 0
        self.dx = dx = self.dc / nx
        self.volume = dx * dx
        hdx = self.options.hdx

        self.nl = (int)(6.0*hdx)

        self.h = h = hdx * self.dx
        dt_cfl = 0.25 * h / (c0 + umax)
        dt_viscous = 0.125 * h**2 / nu

        self.dt = min(dt_cfl, dt_viscous)
        self.tf = 100.0

    def _create_fluid(self):
        dx = self.dx
        h0 = self.h
        x, y = np.mgrid[dx / 2:self.Lt:dx, -self.Wt + dx/2:self.Wt:dx]
        x, y = (np.ravel(t) for t in (x, y))
        one = np.ones_like(x)
        volume = dx * dx * one
        m = volume * rho
        fluid = get_particle_array(
            name='fluid', m=m, x=x, y=y, h=h0, V=1.0 / volume, u=umax,
            p=0.0, rho=rho)
        return fluid

    def _create_solid(self):
        dx = self.dx
        h0 = self.h
        x = [0.0]
        y = [0.0]
        r = dx
        nt = 0
        while r - self.dc / 2 < 0.00001:
            nnew = int(np.pi*r**2/dx**2 + 0.5)
            tomake = nnew-nt
            theta = np.linspace(0., 2.*np.pi, tomake + 1)
            for t in theta[:-1]:
                x.append(r*np.cos(t))
                y.append(r*np.sin(t))
            nt = nnew
            r = r + dx
        x = np.array(x)
        y = np.array(y)
        x, y = (t.ravel() for t in (x, y))
        x += self.cxy[0]
        volume = dx*dx
        solid = get_particle_array(
            name='solid', x=x, y=y, m=volume*rho, rho=rho, h=h0,
            V=1.0/volume)
        return solid

    def _create_wall(self):
        dx = self.dx
        h0 = self.h
        x0, y0 = np.mgrid[
            dx/2: self.Lt+self.nl*dx+self.nl*dx: dx, dx/2: self.nl*dx: dx]
        x0 -= self.nl*dx
        y0 -= self.nl*dx+self.Wt
        x0 = np.ravel(x0)
        y0 = np.ravel(y0)

        x1 = np.copy(x0)
        y1 = np.copy(y0)
        y1 += self.nl*dx+2*self.Wt
        x1 = np.ravel(x1)
        y1 = np.ravel(y1)

        x0 = np.concatenate((x0, x1))
        y0 = np.concatenate((y0, y1))
        volume = dx*dx
        wall = get_particle_array(
            name='wall', x=x0, y=y0, m=volume*rho, rho=rho, h=h0,
            V=1.0/volume)
        return wall

    def _set_wall_normal(self, pa):
        props = ['xn', 'yn', 'zn']
        for p in props:
            pa.add_property(p)

        y = pa.y
        cond = y > 0.0
        pa.yn[cond] = 1.0

        cond = y < 0.0
        pa.yn[cond] = -1.0

    def _create_outlet(self):
        dx = self.dx
        h0 = self.h
        x, y = np.mgrid[dx/2:self.nl * dx:dx, -self.Wt + dx/2:self.Wt:dx]
        x, y = (np.ravel(t) for t in (x, y))
        x += self.Lt
        one = np.ones_like(x)
        volume = dx * dx * one
        m = volume * rho
        outlet = get_particle_array(
            name='outlet', x=x, y=y, m=m, h=h0, V=1.0/volume, u=umax,
            p=0.0, rho=one * rho, uhat=umax)
        return outlet

    def _create_inlet(self):
        dx = self.dx
        h0 = self.h
        x, y = np.mgrid[dx / 2:self.nl*dx:dx, -self.Wt + dx/2:self.Wt:dx]
        x, y = (np.ravel(t) for t in (x, y))
        x = x - self.nl * dx
        one = np.ones_like(x)
        volume = one * dx * dx
        inlet = get_particle_array(
            name='inlet', x=x, y=y, m=volume * rho, h=h0, u=umax, rho=rho,
            V=1.0 / volume, p=0.0)
        return inlet

    def create_particles(self):
        dx = self.dx
        fluid = self._create_fluid()
        solid = self._create_solid()
        G.remove_overlap_particles(fluid, solid, dx, dim=2)
        outlet = self._create_outlet()
        inlet = self._create_inlet()
        wall = self._create_wall()

        ghost_inlet = self.iom.create_ghost(inlet, inlet=True)
        ghost_outlet = self.iom.create_ghost(outlet, inlet=False)

        particles = [fluid, inlet, outlet, solid, wall]
        if ghost_inlet:
            particles.append(ghost_inlet)
        if ghost_outlet:
            particles.append(ghost_outlet)

        self.scheme.setup_properties(particles)
        self._set_wall_normal(wall)

        if self.io_method == 'hybrid':
            fluid.uag[:] = 1.0
            fluid.uta[:] = 1.0
            outlet.uta[:] = 1.0

        return particles

    def create_scheme(self):
        h = nu = None
        s = EDACScheme(
            ['fluid'], ['solid'], dim=2, rho0=rho, c0=c0, h=h, pb=p0,
            nu=nu, inlet_outlet_manager=None, inviscid_solids=['wall']
        )
        return s

    def configure_scheme(self):
        scheme = self.scheme
        self.iom = self._create_inlet_outlet_manager()
        scheme.inlet_outlet_manager = self.iom
        pfreq = 100
        kernel = QuinticSpline(dim=2)
        self.iom.update_dx(self.dx)
        scheme.configure(h=self.h, nu=self.nu)

        scheme.configure_solver(kernel=kernel, tf=self.tf, dt=self.dt,
                                pfreq=pfreq, n_damp=0)

    def _get_io_info(self):
        inleteqns = [ResetInletVelocity('ghost_inlet', [], U=-umax, V=0.0,
                                        W=0.0),
                     ResetInletVelocity('inlet', [], U=umax, V=0.0,
                                        W=0.0)]
        i_update_cls = None
        i_has_ghost = True
        o_update_cls = None
        o_has_ghost = True
        manager = None
        props_to_copy = ['x0', 'y0', 'z0', 'uhat', 'vhat', 'what', 'x', 'y',
                        'z', 'u', 'v', 'w', 'm', 'h', 'rho', 'p', 'ioid']

        if self.io_method == 'donothing':
            from pysph.sph.bc.donothing.inlet import Inlet
            from pysph.sph.bc.donothing.outlet import Outlet
            from pysph.sph.bc.donothing.simple_inlet_outlet import (
                SimpleInletOutlet)
            o_has_ghost = False
            i_update_cls = Inlet
            o_update_cls = Outlet
            manager = SimpleInletOutlet
        elif self.io_method == 'mirror':
            from pysph.sph.bc.mirror.inlet import Inlet
            from pysph.sph.bc.mirror.outlet import Outlet
            from pysph.sph.bc.mirror.simple_inlet_outlet import (
                SimpleInletOutlet)
            i_update_cls = Inlet
            o_update_cls = Outlet
            manager = SimpleInletOutlet
        elif self.io_method == 'hybrid':
            from pysph.sph.bc.hybrid.inlet import Inlet
            from pysph.sph.bc.hybrid.outlet import Outlet
            from pysph.sph.bc.hybrid.simple_inlet_outlet import (
                SimpleInletOutlet)
            i_update_cls = Inlet
            o_update_cls = Outlet
            o_has_ghost = False
            manager = SimpleInletOutlet
            props_to_copy += ['uta', 'pta', 'u0', 'v0', 'w0', 'p0']
        if self.io_method == 'mod_donothing':
            from pysph.sph.bc.mod_donothing.inlet import Inlet
            from pysph.sph.bc.mod_donothing.outlet import Outlet
            from pysph.sph.bc.mod_donothing.simple_inlet_outlet import (
                SimpleInletOutlet)
            o_has_ghost = False
            i_update_cls = Inlet
            o_update_cls = Outlet
            manager = SimpleInletOutlet
        if self.io_method == 'characteristic':
            from pysph.sph.bc.characteristic.inlet import Inlet
            from pysph.sph.bc.characteristic.outlet import Outlet
            from pysph.sph.bc.characteristic.simple_inlet_outlet import (
                SimpleInletOutlet)
            o_has_ghost = False
            i_update_cls = Inlet
            o_update_cls = Outlet
            manager = SimpleInletOutlet

        inlet_info = InletInfo(
            pa_name='inlet', normal=[-1.0, 0.0, 0.0],
            refpoint=[0.0, 0.0, 0.0], equations=inleteqns,
            has_ghost=i_has_ghost, update_cls=i_update_cls, umax=umax
        )

        outlet_info = OutletInfo(
            pa_name='outlet', normal=[1.0, 0.0, 0.0],
            refpoint=[self.Lt, 0.0, 0.0], has_ghost=o_has_ghost,
            update_cls=o_update_cls, equations=None,
            props_to_copy=props_to_copy
        )

        return inlet_info, outlet_info, manager

    def _create_inlet_outlet_manager(self):
        inlet_info, outlet_info, manager = self._get_io_info()
        iom = manager(
            fluid_arrays=['fluid'], inletinfo=[inlet_info],
            outletinfo=[outlet_info]
        )
        return iom

    def create_inlet_outlet(self, particle_arrays):
        iom = self.iom
        io = iom.get_inlet_outlet(particle_arrays)
        return io

    def post_process(self, info_fname):
        self.read_info(info_fname)
        if len(self.output_files) == 0:
            return
        t, cd, cl = self._plot_force_vs_t()
        res = os.path.join(self.output_dir, 'results.npz')
        np.savez(res, t=t, cd=cd, cl=cl)

    def _plot_force_vs_t(self):
        from pysph.solver.utils import iter_output, load
        from pysph.tools.sph_evaluator import SPHEvaluator
        from pysph.sph.equation import Group
        from pysph.base.kernels import QuinticSpline
        from pysph.sph.wc.transport_velocity import (
            MomentumEquationPressureGradient, SummationDensity,
            SetWallVelocity
        )

        data = load(self.output_files[0])
        solid = data['arrays']['solid']
        fluid = data['arrays']['fluid']

        prop = ['awhat', 'auhat', 'avhat', 'wg', 'vg', 'ug', 'V', 'uf',
                'vf', 'wf', 'wij', 'vmag', 'pavg', 'nnbr', 'auf', 'avf',
                'awf']
        for p in prop:
            solid.add_property(p)
            fluid.add_property(p)

        # We find the force of the solid on the fluid and the opposite of
        # that is the force on the solid. Note that the assumption is that
        # the solid is far from the inlet and outlet so those are ignored.
        print(self.nu, p0, self.dc, rho)
        equations = [
            Group(
                equations=[
                    SummationDensity(dest='fluid',
                                     sources=['fluid', 'solid']),
                    SummationDensity(dest='solid',
                                     sources=['fluid', 'solid']),
                    SetWallVelocity(dest='solid', sources=['fluid']),
                ], real=False),
            Group(
                equations=[
                    # Pressure gradient terms
                    MomentumEquationPressureGradient(
                        dest='solid', sources=['fluid'], pb=p0),
                    SolidWallNoSlipBCReverse(
                        dest='solid', sources=['fluid'], nu=self.nu),
                ], real=True),
        ]
        sph_eval = SPHEvaluator(
            arrays=[solid, fluid], equations=equations, dim=2,
            kernel=QuinticSpline(dim=2)
        )
        t, cd, cl = [], [], []
        import gc
        print(self.dc, self.dx, self.nu)
        print('fxf', 'fxp', 'fyf', 'fyp', 'cd', 'cl', 't')
        for sd, arrays in iter_output(self.output_files[:]):
            fluid = arrays['fluid']
            solid = arrays['solid']
            for p in prop:
                solid.add_property(p)
                fluid.add_property(p)
            t.append(sd['t'])
            sph_eval.update_particle_arrays([solid, fluid])
            sph_eval.evaluate()
            fxp = sum(solid.m*solid.au)
            fyp = sum(solid.m*solid.av)
            fxf = sum(solid.m*solid.auf)
            fyf = sum(solid.m*solid.avf)
            fx = fxf + fxp
            fy = fyf + fyp
            cd.append(fx/(0.5 * rho * umax**2 * self.dc))
            cl.append(fy/(0.5 * rho * umax**2 * self.dc))
            print(fxf, fxp, fyf, fyp, cd[-1], cl[-1], t[-1])
            gc.collect()
        t, cd, cl = list(map(np.asarray, (t, cd, cl)))

        # Now plot the results.
        import matplotlib
        matplotlib.use('Agg')
        from matplotlib import pyplot as plt
        plt.figure()
        plt.plot(t, cd, label=r'$C_d$')
        plt.plot(t, cl, label=r'$C_l$')
        plt.xlabel(r'$t$')
        plt.ylabel('cd/cl')
        plt.legend()
        plt.grid()
        fig = os.path.join(self.output_dir, "force_vs_t.png")
        plt.savefig(fig, dpi=300)
        plt.close()
        return t, cd, cl

    def customize_output(self):
        if self.io_method == 'hybrid':
            self._mayavi_config('''
            viewer.scalar = 'u'
            ''')
        elif self.io_method == 'mirror':
            self._mayavi_config('''
            viewer.scalar = 'u'
            parr = ['ghost_outlet', 'ghost_inlet']
            for particle in parr:
                b = particle_arrays[particle]
                b.visible = False
            ''')
        else:
            self._mayavi_config('''
            viewer.scalar = 'u'
            parr = ['ghost_inlet']
            for particle in parr:
                b = particle_arrays[particle]
                b.visible = False
            ''')

    def post_step(self, solver):
        freq = 500
        if solver.count % freq == 0:
            self.nnps.update()
            for i, pa in enumerate(self.particles):
                if pa.name == 'fluid':
                    self.nnps.spatially_order_particles(i)
            self.nnps.update()


if __name__ == '__main__':
    app = WindTunnel()
    app.run()
    app.post_process(app.info_filename)


pysph-master/pysph/examples/gas_dynamics/__init__.py

pysph-master/pysph/examples/gas_dynamics/accuracy_test_2d.py
"""Acoustic wave diffusion in 2-d (2 mins)

Two dimensional constant pressure accuracy test; the particles should
simply advect in a periodic domain.
"""

# NumPy and standard library imports
import numpy

from pysph.base.nnps import DomainManager
from pysph.base.utils import get_particle_array as gpa
from pysph.solver.application import Application
from pysph.sph.scheme import GasDScheme, ADKEScheme, GSPHScheme, SchemeChooser
from pysph.sph.wc.crksph import CRKSPHScheme

# PySPH tools
from pysph.tools import uniform_distribution as ud
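As an aside, the initial density field and the L1 error norm used by the `AccuracyTest2D` example below can be sketched with plain NumPy, without PySPH. This is a hedged, standalone illustration: the coarse 8x8 grid here is a hypothetical stand-in for the example's particle distribution, not what the example itself uses.

```python
import numpy as np

# Hypothetical coarse grid standing in for the example's particles.
nx = 8
dx = 1.0 / nx
x, y = np.mgrid[dx / 2:1:dx, dx / 2:1:dx]
x, y = x.ravel(), y.ravel()

# Initial density field used by AccuracyTest2D: rho = 1 + 0.2*sin(pi*(x + y))
rho0 = 1 + 0.2 * np.sin(np.pi * (x + y))

# After one time unit of pure advection with (u, v) = (1, -1) in a unit
# periodic box, every particle returns to an equivalent position, so the
# exact density is unchanged and the L1 error of a perfect scheme is zero.
rho_exact = 1 + 0.2 * np.sin(np.pi * ((x - 1.0) % 1.0 + (y + 1.0) % 1.0))
l1 = np.sum(np.abs(rho0 - rho_exact)) / rho0.size
print(l1)
```

The example's `post_process` method computes the same norm against the stored particle positions rather than this analytic shortcut.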
# Numerical constants
dim = 2
gamma = 1.4
gamma1 = gamma - 1.0

# solution parameters
dt = 5e-3
tf = 1.

# domain size
xmin = 0.
xmax = 1.
ymin = 0.
ymax = 1.

# scheme constants
alpha1 = 1.0
alpha2 = 0.1
beta = 2.0
kernel_factor = 1.5


class AccuracyTest2D(Application):
    def initialize(self):
        self.xmin = xmin
        self.xmax = xmax
        self.ymin = ymin
        self.ymax = ymax
        self.ny = 128
        self.nx = self.ny
        self.dx = (self.xmax - self.xmin) / (self.nx)
        self.hdx = 2.
        self.p = 1.
        self.u = 1
        self.v = -1
        self.c_0 = 1.18
        self.cfl = 0.1

    def add_user_options(self, group):
        group.add_argument(
            "--nparticles", action="store", type=int, dest="nprt",
            default=256, help="Number of particles in domain"
        )

    def consume_user_options(self):
        self.nx = self.options.nprt
        self.ny = self.nx
        self.dx = (self.xmax - self.xmin) / (self.nx)
        self.dt = self.cfl * self.dx / self.c_0

    def create_domain(self):
        return DomainManager(
            xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax,
            periodic_in_x=True, periodic_in_y=True)

    def create_particles(self):
        global dx
        data = ud.uniform_distribution_cubic2D(
            self.dx, xmin, xmax, ymin, ymax
        )
        x = data[0].ravel()
        y = data[1].ravel()
        dx = data[2]
        volume = dx * dx

        rho = 1 + 0.2 * numpy.sin(
            numpy.pi * (x + y)
        )
        p = numpy.ones_like(x) * self.p

        # const h and mass
        h = numpy.ones_like(x) * self.hdx * dx
        m = numpy.ones_like(x) * volume * rho

        # u = 1
        u = numpy.ones_like(x) * self.u

        # v = -1
        v = numpy.ones_like(x) * self.v

        # thermal energy from the ideal gas EOS
        e = p/(gamma1*rho)

        fluid = gpa(name='fluid', x=x, y=y, rho=rho, p=p, e=e, h=h, m=m,
                    h0=h.copy(), u=u, v=v)
        self.scheme.setup_properties([fluid])

        print("2D Accuracy Test with %d particles"
              % (fluid.get_number_of_particles()))

        return [fluid, ]

    def create_scheme(self):
        self.tf = tf
        adke = ADKEScheme(
            fluids=['fluid'], solids=[], dim=dim, gamma=gamma, alpha=0,
            beta=0, k=1.5, eps=0., g1=0., g2=0., has_ghosts=True)

        mpm = GasDScheme(
            fluids=['fluid'], solids=[], dim=dim, gamma=gamma,
            kernel_factor=kernel_factor, alpha1=0, alpha2=0, beta=beta,
            has_ghosts=True
        )

        crksph = CRKSPHScheme(
            fluids=['fluid'], dim=dim, rho0=0, c0=0, nu=0, h0=0, p0=0,
            gamma=gamma, cl=2, has_ghosts=True
        )

        gsph = GSPHScheme(
            fluids=['fluid'], solids=[], dim=dim, gamma=gamma,
            kernel_factor=1.,
            g1=0., g2=0., rsolver=7, interpolation=1, monotonicity=1,
            interface_zero=True, hybrid=False, blend_alpha=5.0,
            niter=40, tol=1e-6, has_ghosts=True
        )

        s = SchemeChooser(
            default='gsph', adke=adke, mpm=mpm, gsph=gsph, crksph=crksph
        )
        return s

    def configure_scheme(self):
        s = self.scheme
        if self.options.scheme == 'mpm':
            s.configure(kernel_factor=kernel_factor)
            s.configure_solver(dt=self.dt, tf=self.tf,
                               adaptive_timestep=True, pfreq=50)
        elif self.options.scheme == 'adke':
            s.configure_solver(dt=self.dt, tf=self.tf,
                               adaptive_timestep=False, pfreq=50)
        elif self.options.scheme == 'gsph':
            s.configure_solver(dt=self.dt, tf=self.tf,
                               adaptive_timestep=False, pfreq=50)
        elif self.options.scheme == 'crksph':
            s.configure_solver(dt=self.dt, tf=self.tf,
                               adaptive_timestep=False, pfreq=50)

    def post_process(self):
        from pysph.solver.utils import load
        if len(self.output_files) < 1:
            return
        outfile = self.output_files[-1]
        data = load(outfile)
        pa = data['arrays']['fluid']
        x_c = pa.x
        y_c = pa.y
        rho_c = pa.rho
        rho_e = 1 + 0.2 * numpy.sin(
            numpy.pi * (x_c + y_c)
        )
        num_particles = rho_c.size
        l1_norm = numpy.sum(
            numpy.abs(rho_c - rho_e)
        ) / num_particles

        print(l1_norm)


if __name__ == '__main__':
    app = AccuracyTest2D()
    app.run()
    app.post_process()


pysph-master/pysph/examples/gas_dynamics/acoustic_wave.py
r"""Diffusion of an acoustic wave in 1-d (5 minutes)

Propagation of an acoustic wave; the particles have properties set
according to the following distribution

.. math::

    \rho = \rho_0 + \Delta\rho \sin(kx)

    p = p_0 + c_0^2 \Delta\rho \sin(kx)

    u = c_0 \rho_0^{-1} \Delta\rho \sin(kx)

with :math:`\Delta\rho = 1e-6` and :math:`k = 2\pi/\lambda` where
:math:`\lambda` is the domain length, and

.. math::

    \rho_0 = \gamma = 1.4, \quad p_0 = 1.0
"""

# standard library and numpy imports
import numpy

# pysph imports
from pysph.base.utils import get_particle_array as gpa
from pysph.base.nnps import DomainManager
from pysph.solver.application import Application
from pysph.sph.scheme import \
    GSPHScheme, ADKEScheme, GasDScheme, SchemeChooser
from pysph.sph.wc.crksph import CRKSPHScheme


class AcousticWave(Application):
    def initialize(self):
        self.xmin = 0.
        self.xmax = 1.
        self.gamma = 1.4
        self.rho_0 = self.gamma
        self.p_0 = 1.
        self.c_0 = 1.
        self.delta_rho = 1e-6
        self.n_particles = 8
        self.domain_length = self.xmax - self.xmin
        self.k = -2 * numpy.pi / self.domain_length
        self.cfl = 0.1
        self.hdx = 1.0
        self.dt = 1e-3
        self.tf = 5
        self.dim = 1

    def create_domain(self):
        return DomainManager(
            xmin=0, xmax=1, periodic_in_x=True
        )

    def add_user_options(self, group):
        group.add_argument(
            "--nparticles", action="store", type=int, dest="nprt",
            default=256, help="Number of particles in domain"
        )

    def consume_user_options(self):
        self.n_particles = self.options.nprt
        self.dx = self.domain_length / (self.n_particles)
        self.dt = self.cfl * self.dx / self.c_0

    def create_particles(self):
        x = numpy.arange(
            self.xmin + self.dx*0.5, self.xmax, self.dx
        )
        rho = self.rho_0 + self.delta_rho *\
            numpy.sin(self.k * x)
        p = self.p_0 + self.c_0**2 *\
            self.delta_rho * numpy.sin(self.k * x)
        u = self.c_0 * self.delta_rho * numpy.sin(self.k * x) /\
            self.rho_0
        cs = numpy.sqrt(
            self.gamma * p / rho
        )
        h = numpy.ones_like(x) * self.dx * self.hdx
        m = numpy.ones_like(x) * self.dx * rho
        e = p / ((self.gamma - 1) * rho)
        fluid = gpa(
            name='fluid', x=x, p=p, rho=rho, u=u, h=h, m=m, e=e, cs=cs,
            h0=h.copy()
        )
        self.scheme.setup_properties([fluid])

        return [fluid, ]

    def create_scheme(self):
        gsph = GSPHScheme(
            fluids=['fluid'], solids=[], dim=self.dim, gamma=self.gamma,
            kernel_factor=1.0,
            g1=0., g2=0., rsolver=7, interpolation=1, monotonicity=1,
            interface_zero=True, hybrid=False, blend_alpha=5.0,
            niter=40, tol=1e-6, has_ghosts=True
) mpm = GasDScheme( fluids=['fluid'], solids=[], dim=self.dim, gamma=self.gamma, kernel_factor=1.2, alpha1=0, alpha2=0, beta=2.0, update_alpha1=False, update_alpha2=False, has_ghosts=True ) crksph = CRKSPHScheme( fluids=['fluid'], dim=self.dim, rho0=0, c0=0, nu=0, h0=0, p0=0, gamma=self.gamma, cl=2, has_ghosts=True ) adke = ADKEScheme( fluids=['fluid'], solids=[], dim=self.dim, gamma=self.gamma, alpha=0, beta=0.0, k=1.5, eps=0.0, g1=0.0, g2=0.0, has_ghosts=True) s = SchemeChooser( default='gsph', gsph=gsph, mpm=mpm, crksph=crksph, adke=adke ) return s def configure_scheme(self): s = self.scheme if self.options.scheme == 'gsph': s.configure_solver( dt=self.dt, tf=self.tf, adaptive_timestep=True, pfreq=50 ) if self.options.scheme == 'mpm': s.configure(kernel_factor=1.2) s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) if self.options.scheme == 'crksph': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) if self.options.scheme == 'adke': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) def post_process(self): from pysph.solver.utils import load if len(self.output_files) < 1: return outfile = self.output_files[-1] data = load(outfile) pa = data['arrays']['fluid'] x_c = pa.x u = self.c_0 * self.delta_rho * numpy.sin(self.k * x_c) /\ self.rho_0 u_c = pa.u l_inf = numpy.max( numpy.abs(u_c - u) ) l_1 = (numpy.sum( numpy.abs(u_c - u) ) / self.n_particles) print("L_inf norm of velocity for the problem: %s" % (l_inf)) print("L_1 norm of velocity for the problem: %s" % (l_1)) rho = self.rho_0 + self.delta_rho *\ numpy.sin(self.k * x_c) rho_c = pa.rho l1 = numpy.sum( numpy.abs(rho - rho_c) ) l1 = l1 / self.n_particles print("l_1 norm of density for the problem: %s" % (l1)) if __name__ == "__main__": app = AcousticWave() app.run() app.post_process() pysph-master/pysph/examples/gas_dynamics/blastwave.py000066400000000000000000000050731356347341600235170ustar00rootroot00000000000000"""Simulate a 
1D blast wave problem (30 seconds). """ import os import numpy from pysph.examples.gas_dynamics.shocktube_setup import ShockTubeSetup from pysph.sph.scheme import ADKEScheme, GasDScheme, GSPHScheme, SchemeChooser # Numerical constants dim = 1 gamma = 1.4 gamma1 = gamma - 1.0 # solution parameters dt = 1e-6 tf = 0.0075 # domain size and discretization parameters xmin = -0.5 xmax = 0.5 class Blastwave(ShockTubeSetup): def initialize(self): self.xmin = -0.5 self.xmax = 0.5 self.x0 = 0.0 self.rhol = 1.0 self.rhor = 1.0 self.pl = 1000.0 self.pr = 0.01 self.ul = 0.0 self.ur = 0.0 def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.5, help="Ratio h/dx." ) group.add_argument( "--nl", action="store", type=float, dest="nl", default=200, help="Number of particles in left region" ) def consume_user_options(self): self.nl = self.options.nl self.hdx = self.options.hdx ratio = self.rhor/self.rhol self.nr = ratio*self.nl self.dxl = 0.5/self.nl self.dxr = 0.5/self.nr self.h0 = self.hdx * self.dxr self.hdx = self.hdx def create_particles(self): return self.generate_particles(xmin=-0.5, xmax=0.5, dxl=self.dxl, dxr=self.dxr, m=self.dxl, pl=self.pl, pr=self.pr, h0=self.h0, bx=0.03, gamma1=gamma1) def create_scheme(self): self.dt = dt self.tf = tf adke = ADKEScheme( fluids=['fluid'], solids=['boundary'], dim=dim, gamma=gamma, alpha=1, beta=1, k=1.0, eps=0.5, g1=0.2, g2=0.4) # Fix mpm scheme first mpm = GasDScheme( fluids=['fluid'], solids=['boundary'], dim=dim, gamma=gamma, kernel_factor=1.2, alpha1=1.0, alpha2=0.1, beta=2.0, update_alpha1=True, update_alpha2=True ) gsph = GSPHScheme( fluids=['fluid'], solids=['boundary'], dim=dim, gamma=gamma, kernel_factor=1.0, g1=0.2, g2=0.4, rsolver=2, interpolation=1, monotonicity=1, interface_zero=True, hybrid=False, blend_alpha=2.0, niter=20, tol=1e-6 ) s = SchemeChooser(default='adke', adke=adke, gsph=gsph) return s if __name__ == '__main__': app = Blastwave() app.run() 
app.post_process() pysph-master/pysph/examples/gas_dynamics/cheng_shu_1d.py000066400000000000000000000051341356347341600240540ustar00rootroot00000000000000r"""Cheng and Shu's 1d acoustic wave propagation in 1d (1 min) particles have properties according to the following distribuion .. math:: \rho = \rho_0 + \Delta\rho sin(kx) p = 1.0 u = 1 + 0.1sin(kx) with :math:`\Delta\rho = 1` and :math:`k = 2\pi/\lambda` where \lambda is the domain length. .. math:: \rho_0 = 2, \gamma = 1.4 and p_0 = 1.0 """ # standard library and numpy imports import numpy # pysph imports from pysph.base.utils import get_particle_array as gpa from pysph.base.nnps import DomainManager from pysph.solver.application import Application from pysph.sph.scheme import GSPHScheme, SchemeChooser class ChengShu(Application): def initialize(self): self.xmin = 0. self.xmax = 1. self.gamma = 1.4 self.p_0 = 1. self.c_0 = 1. self.delta_rho = 1 self.n_particles = 1000 self.domain_length = self.xmax - self.xmin self.dx = self.domain_length / (self.n_particles - 1) self.k = 2 * numpy.pi / self.domain_length self.hdx = 2. 
        self.dt = 1e-4
        self.tf = 1.0
        self.dim = 1

    def create_domain(self):
        return DomainManager(
            xmin=self.xmin, xmax=self.xmax, periodic_in_x=True
        )

    def create_particles(self):
        x = numpy.linspace(
            self.xmin, self.xmax, self.n_particles
        )
        rho = 2 + numpy.sin(2 * numpy.pi * x)*self.delta_rho
        p = numpy.ones_like(x)
        u = 1 + 0.1 * numpy.sin(2 * numpy.pi * x)
        cs = numpy.sqrt(
            self.gamma * p / rho
        )
        h = numpy.ones_like(x) * self.dx * self.hdx
        m = numpy.ones_like(x) * self.dx * rho
        e = p / ((self.gamma - 1) * rho)
        fluid = gpa(
            name='fluid', x=x, p=p, rho=rho, u=u, h=h, m=m, e=e, cs=cs
        )
        self.scheme.setup_properties([fluid])
        return [fluid, ]

    def create_scheme(self):
        gsph = GSPHScheme(
            fluids=['fluid'], solids=[], dim=self.dim,
            gamma=self.gamma, kernel_factor=1.,
            g1=0., g2=0., rsolver=3, interpolation=1, monotonicity=1,
            interface_zero=True, hybrid=False, blend_alpha=5.0,
            niter=200, tol=1e-6
        )
        s = SchemeChooser(
            default='gsph', gsph=gsph
        )
        return s

    def configure_scheme(self):
        s = self.scheme
        if self.options.scheme == 'gsph':
            s.configure_solver(
                dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=1000
            )


if __name__ == "__main__":
    app = ChengShu()
    app.run()
pysph-master/pysph/examples/gas_dynamics/hydrostatic_box.py000066400000000000000000000072571356347341600247360ustar00rootroot00000000000000"""Simulate the hydrostatic box problem (30 minutes)

A high density square region inside a low density square medium, in
pressure equilibrium; the solution should not evolve in time.
"""
from pysph.sph.wc.crksph import CRKSPHScheme
from pysph.sph.scheme import (
    ADKEScheme, SchemeChooser, GasDScheme, GSPHScheme
)
from pysph.base.utils import get_particle_array as gpa
from pysph.base.nnps import DomainManager
from pysph.solver.application import Application
from pysph.tools import uniform_distribution as ud
import numpy


class HydrostaticBox(Application):
    def initialize(self):
        self.xmin = 0.0
        self.xmax = 1.0
        self.ymin = 0.0
        self.ymax = 1.0
        self.gamma = 1.5
        self.p = 1
        self.rho0 = 1
        self.rhoi = 4
        self.nx = 50
        self.ny = self.nx
        self.dx = (self.xmax - self.xmin) / self.nx
        self.hdx = 1.5
        self.dt = 1e-3
        self.tf = 10

    def create_particles(self):
        data = ud.uniform_distribution_cubic2D(
            self.dx, self.xmin, self.xmax, self.ymin, self.ymax
        )
        x = data[0]
        y = data[1]
        box_indices = numpy.where(
            (x > 0.25) & (x < 0.75) & (y > 0.25) & (y < 0.75)
        )[0]
        rho = numpy.ones_like(x) * self.rho0
        rho[box_indices] = self.rhoi
        e = self.p / ((self.gamma - 1) * rho)
        m = self.dx * self.dx * rho
        h = self.hdx * self.dx
        fluid = gpa(
            name='fluid', x=x, y=y, p=self.p, rho=rho, e=e, u=0., v=0.,
            h=self.hdx*self.dx, m=m, h0=h
        )
        self.scheme.setup_properties([fluid])
        return [fluid]

    def create_domain(self):
        return DomainManager(
            xmin=self.xmin, xmax=self.xmax, ymin=self.ymin, ymax=self.ymax,
            periodic_in_x=True, periodic_in_y=True
        )

    def create_scheme(self):
        gsph = GSPHScheme(
            fluids=['fluid'], solids=[], dim=2,
            gamma=self.gamma, kernel_factor=1.0,
            g1=0., g2=0., rsolver=7, interpolation=1, monotonicity=1,
            interface_zero=True, hybrid=False, blend_alpha=5.0,
            niter=40, tol=1e-6, has_ghosts=True
        )
        mpm = GasDScheme(
            fluids=['fluid'], solids=[], dim=2, gamma=self.gamma,
            kernel_factor=1.2, alpha1=0, alpha2=0,
            beta=2.0, update_alpha1=False, update_alpha2=False,
            has_ghosts=True
        )
        crk = CRKSPHScheme(
            fluids=['fluid'], dim=2, rho0=0, c0=0,
            nu=0, h0=0, p0=0, gamma=self.gamma, cl=2, has_ghosts=True
        )
        adke = ADKEScheme(
            fluids=['fluid'], solids=[], dim=2, gamma=self.gamma,
            alpha=0.1, beta=0.1, k=1.5, eps=0., g1=0.1, g2=0.1,
            has_ghosts=True)
        s = SchemeChooser(
            default='crksph', crksph=crk, adke=adke, mpm=mpm, gsph=gsph
        )
        return s

    def configure_scheme(self):
        s = self.scheme
        if self.options.scheme == 'gsph':
            s.configure_solver(
                dt=self.dt, tf=self.tf, adaptive_timestep=True, pfreq=50
            )
        elif self.options.scheme == 'mpm':
            s.configure(kernel_factor=1.2)
            s.configure_solver(dt=self.dt, tf=self.tf,
                               adaptive_timestep=True, pfreq=50)
        elif self.options.scheme == 'crksph':
            s.configure_solver(
                dt=self.dt, tf=self.tf,
                adaptive_timestep=False, pfreq=50
            )
        elif self.options.scheme == 'adke':
            s.configure_solver(dt=self.dt, tf=self.tf,
                               adaptive_timestep=False, pfreq=50)


if __name__ == "__main__":
    app = HydrostaticBox()
    app.run()
pysph-master/pysph/examples/gas_dynamics/kelvin_helmholtz_instability.py000066400000000000000000000111331356347341600275120ustar00rootroot00000000000000"""The Kelvin-Helmholtz instability test (1.5 hours)
"""

# NumPy and standard library imports
import numpy

# PySPH base and carray imports
from pysph.base.utils import get_particle_array as gpa
from pysph.solver.application import Application
from pysph.sph.scheme import GasDScheme, SchemeChooser, ADKEScheme, GSPHScheme
from pysph.sph.wc.crksph import CRKSPHScheme
from pysph.base.nnps import DomainManager
from pysph.tools import uniform_distribution as ud

# problem constants
dim = 2
gamma = 5.0/3.0

xmin = ymin = 0.0
xmax = ymax = 1.0

rhoi_1 = 1
rhoi_2 = 2
rhoi_m = 0.5 * (rhoi_1 - rhoi_2)

v_i1 = 0.5
v_i2 = -0.5
v_im = 0.5 * (v_i1 - v_i2)

delta = 0.025
dely = 0.01
wavelen = 0.5

dt = 1e-3
tf = 2


class KHInstability(Application):
    def initialize(self):
        self.nx = 200
        self.dx = (xmax - xmin) / self.nx
        self.dy = self.dx
        self.hdx = 1.5

    def create_particles(self):
        data = ud.uniform_distribution_cubic2D(
            self.dx, xmin, xmax, ymin, ymax
        )
        x = data[0].ravel()
        y = data[1].ravel()

        y1 = numpy.where(
            (y >= 0) & (y < 0.25)
        )[0]
        y2 = numpy.where(
            (y >= 0.25) & (y < 0.5)
        )[0]
        y3 = numpy.where(
            (y >= 0.5) & (y < 0.75)
        )[0]
        y4 = numpy.where(
            (y >= 0.75) & (y < 1.0)
        )[0]

        rho1 = rhoi_1 - rhoi_m * numpy.exp((y[y1] - 0.25)/delta)
        rho2 = rhoi_2 + rhoi_m * numpy.exp((0.25 - y[y2])/delta)
        rho3 = rhoi_2 + rhoi_m * numpy.exp((y[y3] - 0.75)/delta)
        rho4 = rhoi_1 - rhoi_m * numpy.exp((0.75 - y[y4])/delta)

        u1 = v_i1 - v_im * numpy.exp((y[y1] - 0.25)/delta)
        u2 = v_i2 + v_im * numpy.exp((0.25 - y[y2])/delta)
        u3 = v_i2 + v_im * numpy.exp((y[y3] - 0.75)/delta)
        u4 = v_i1 - v_im * numpy.exp((0.75 - y[y4])/delta)

        v = dely * numpy.sin(
            2 * numpy.pi * x / wavelen
        )

        p = 2.5

        rho = numpy.concatenate((
            rho1, rho2, rho3, rho4
        ))
        u = numpy.concatenate((
            u1, u2, u3, u4
        ))
        v = numpy.concatenate((
            v[y1], v[y2], v[y3], v[y4]
        ))
        x = numpy.concatenate((
            x[y1], x[y2], x[y3], x[y4]
        ))
        y = numpy.concatenate((
            y[y1], y[y2], y[y3], y[y4]
        ))

        e = p / ((gamma - 1) * rho)
        m = self.dx * self.dx * rho
        h = self.dx * self.hdx

        fluid = gpa(
            name='fluid', x=x, y=y, u=u, v=v, rho=rho, p=p, e=e, m=m,
            h=h, h0=h
        )
        self.scheme.setup_properties([fluid])
        return [fluid]

    def create_domain(self):
        return DomainManager(
            xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax,
            periodic_in_x=True, periodic_in_y=True
        )

    def create_scheme(self):
        crk = CRKSPHScheme(
            fluids=['fluid'], dim=2, rho0=0, c0=0,
            nu=0, h0=0, p0=0, gamma=gamma, cl=2, has_ghosts=True
        )
        adke = ADKEScheme(
            fluids=['fluid'], solids=[], dim=dim, gamma=gamma,
            alpha=0.1, beta=0.1, k=1.2, eps=0.1, g1=0.1, g2=0.2,
            has_ghosts=True)
        mpm = GasDScheme(
            fluids=['fluid'], solids=[], dim=dim, gamma=gamma,
            kernel_factor=1.2, alpha1=1.0, alpha2=0.1,
            beta=2.0, update_alpha1=True, update_alpha2=True,
            has_ghosts=True
        )
        gsph = GSPHScheme(
            fluids=['fluid'], solids=[], dim=dim, gamma=gamma,
            kernel_factor=1.5,
            g1=0.2, g2=0.4, rsolver=2, interpolation=1, monotonicity=2,
            interface_zero=True, hybrid=False, blend_alpha=2.0,
            niter=40, tol=1e-6, has_ghosts=True
        )
        s = SchemeChooser(
            default='crksph', crksph=crk, gsph=gsph, adke=adke, mpm=mpm
        )
        return s

    def configure_scheme(self):
        s = self.scheme
        if self.options.scheme == 'crksph':
            s.configure_solver(
                dt=dt, tf=tf, adaptive_timestep=False, pfreq=50
            )
        elif self.options.scheme == 'mpm':
            s.configure(kernel_factor=1.2)
            s.configure_solver(dt=dt, tf=tf,
                               adaptive_timestep=True, pfreq=50)
        elif self.options.scheme == 'adke':
            s.configure_solver(dt=dt, tf=tf,
                               adaptive_timestep=False, pfreq=50)
        elif self.options.scheme == 'gsph':
            s.configure_solver(dt=dt, tf=tf,
                               adaptive_timestep=False, pfreq=50)


if __name__ == "__main__":
    app = KHInstability()
    app.run()
pysph-master/pysph/examples/gas_dynamics/ndspmhd-sedov-initial-conditions.npz [binary NumPy .npz archive: ndspmhd Sedov initial conditions; unprintable binary contents omitted]
ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? 
ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? 
ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? 
ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? 
ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? 
ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? 
ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿Q޿Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? 
ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\??\(\߿RQ?RQ?Q޿p= ף?p= ף?Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\?RQRQ?\(\߿RQ?RQ?Q޿p= ף?p= ף?Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\?RQRQ?RQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQRQ?p= ף?RQ\(\߿RQ?RQ?Q޿p= ף?p= ף?Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? 
ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\?RQRQ?\(\߿RQ?RQ?Q޿p= ף?p= ף?Gz޿p= ףpݿܿ)\(ܿQۿzGڿ= ףp=ڿٿ(\ؿRQؿGz׿ ףp= ׿ffffffֿ(\տQտzGzԿףp= ӿ333333ӿ(\ҿQѿHzGѿp= ףппQοp= ףpͿ(\(̿zGʿɿRQȿ ףp= ǿ(\ſzGzĿ233333ÿQp= ףQ(\(ףp= xGzQ뱿QyGzQqGzGzT~Gz?Gz?Q?Gz??Q?Q?Gz?ףp= ??(\(?Q?p= ף?Q?433333?|Gz?(\? ףp= ?TQ??zG?(\(?p= ףp?Q??p= ף?HzG?Q?(\?433333?أp= ?|Gz? Q?(\?ffffff? ףp= ?Gz?RQ?(\??> ףp=?zG?Q?*\(??r= ףp?Gz?Q?\(\?RQRQ?PKPDPRPRx.npyNUMPYF{'descr': ' ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> ףp=?> 
ףp=?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?zG?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?*\(?????????????????????????????????????????????????????????????????????????????????????????????????????r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= ףp?r= 
ףp?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Gz?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?Q?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?\(\?????????????????????????????????????????????????????????????????????????????????????????????????????RQ?\(\߿RQ?RQ?\(\߿RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?RQ?\(\߿RQ?RQ?\(\߿RQ?p= ף?Q޿p= ף?p= ף?Q޿p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?p= ף?Q޿p= ף?p= ף?Q޿p= ף?Gz޿Gz޿Gz޿Gz޿p= ףpݿp= ףpݿp= ףpݿp= ףpݿܿܿܿܿ)\(ܿ)\(ܿ)\(ܿ)\(ܿQۿQۿQۿQۿzGڿzGڿzGڿzGڿ= ףp=ڿ= ףp=ڿ= ףp=ڿ= ףp=ڿٿٿٿٿ(\ؿ(\ؿ(\ؿ(\ؿRQؿRQؿRQؿRQؿGz׿Gz׿Gz׿Gz׿ 
pysph-master/pysph/examples/gas_dynamics/noh.py
"""Example for the Noh's cylindrical implosion test.
(10 minutes)
"""

# NumPy and standard library imports
import numpy

# PySPH base and carray imports
from pysph.base.utils import get_particle_array as gpa
from pysph.solver.application import Application
from pysph.sph.scheme import GasDScheme, SchemeChooser, ADKEScheme, GSPHScheme
from pysph.sph.wc.crksph import CRKSPHScheme
from pysph.base.nnps import DomainManager

# problem constants
dim = 2
gamma = 5.0/3.0
gamma1 = gamma - 1.0

# scheme constants
alpha1 = 1.0
alpha2 = 0.1
beta = 2.0
kernel_factor = 1.5

# numerical constants
dt = 1e-3
tf = 0.6

# domain and particle spacings
xmin = ymin = -1.0
xmax = ymax = 1.0

nx = ny = 100
dx = (xmax - xmin)/nx
dxb2 = 0.5 * dx

# initial values
h0 = kernel_factor*dx
rho0 = 1.0
m0 = dx*dx * rho0
vr = -1.0


class NohImplosion(Application):
    def create_particles(self):
        x, y = numpy.mgrid[xmin:xmax:dx, ymin:ymax:dx]

        # positions
        x = x.ravel()
        y = y.ravel()

        rho = numpy.ones_like(x) * rho0
        m = numpy.ones_like(x) * m0
        h = numpy.ones_like(x) * h0

        u = numpy.ones_like(x)
        v = numpy.ones_like(x)

        sin, cos, arctan = numpy.sin, numpy.cos, numpy.arctan2
        for i in range(x.size):
            theta = arctan(y[i], x[i])
            u[i] = vr*cos(theta)
            v[i] = vr*sin(theta)

        fluid = gpa(
            name='fluid', x=x, y=y, m=m, rho=rho, h=h, u=u, v=v,
            p=1e-12, e=2.5e-11, h0=h.copy()
        )
        self.scheme.setup_properties([fluid])

        print("Noh's problem with %d particles "
              % (fluid.get_number_of_particles()))

        return [fluid, ]

    def create_domain(self):
        return DomainManager(
            xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax,
            mirror_in_x=True, mirror_in_y=True
        )

    def create_scheme(self):
        mpm = GasDScheme(
            fluids=['fluid'], solids=[], dim=dim, gamma=gamma,
            kernel_factor=kernel_factor, alpha1=alpha1, alpha2=alpha2,
            beta=beta, adaptive_h_scheme="mpm",
            update_alpha1=True, update_alpha2=True,
            has_ghosts=True
        )

        crksph = CRKSPHScheme(
            fluids=['fluid'], dim=2, rho0=0, c0=0, nu=0, h0=0, p0=0,
            gamma=gamma, cl=2, has_ghosts=True
        )

        gsph = GSPHScheme(
            fluids=['fluid'], solids=[], dim=dim, gamma=gamma,
            kernel_factor=1.5,
            g1=0.25, g2=0.5, rsolver=7, interpolation=1, monotonicity=2,
            interface_zero=True, hybrid=False, blend_alpha=2.0,
            niter=40, tol=1e-6, has_ghosts=True
        )

        adke = ADKEScheme(
            fluids=['fluid'], solids=[], dim=dim, gamma=gamma,
            alpha=1, beta=1, k=1.0, eps=0.8, g1=0.5, g2=0.5,
            has_ghosts=True)

        s = SchemeChooser(
            default='crksph', crksph=crksph, mpm=mpm, adke=adke, gsph=gsph
        )
        s.configure_solver(dt=dt, tf=tf, adaptive_timestep=False)
        return s

    def configure_scheme(self):
        s = self.scheme
        if self.options.scheme == 'mpm':
            s.configure(kernel_factor=1.2)
            s.configure_solver(
                dt=dt, tf=tf, adaptive_timestep=True, pfreq=50
            )
        elif self.options.scheme == 'crksph':
            s.configure_solver(
                dt=dt, tf=tf, adaptive_timestep=False, pfreq=50
            )
        elif self.options.scheme == 'gsph':
            s.configure_solver(
                dt=dt, tf=tf, adaptive_timestep=False, pfreq=50
            )
        elif self.options.scheme == 'adke':
            s.configure_solver(
                dt=dt, tf=tf, adaptive_timestep=False, pfreq=50
            )

    def post_process(self):
        try:
            import matplotlib
            matplotlib.use('Agg')
            from matplotlib import pyplot
        except ImportError:
            print("Post processing requires matplotlib.")
            return

        if self.rank > 0 or len(self.output_files) == 0:
            return

        import os
        from pysph.solver.utils import load

        outfile = self.output_files[-1]
        data = load(outfile)
        pa = data['arrays']['fluid']

        x = pa.x
        y = pa.y
        rho = pa.rho
        p = pa.p

        r = numpy.sqrt(x**2 + y**2)

        # exact solutions
        vs = 1.0/3.0  # shock radial velocity
        rs = vs * tf  # position of shock

        ri = numpy.linspace(0, rs, 10)
        ro = numpy.linspace(rs, xmax, 100)
        re = numpy.concatenate((ri, ro))

        rho_e1 = numpy.ones_like(ri) * ((gamma + 1) / (gamma - 1))**dim
        rho_e2 = rho0 * (1 + tf / ro)**(dim - 1)
        rho_e = numpy.concatenate((rho_e1, rho_e2))

        p_e1 = vs * rho_e1
        p_e2 = numpy.zeros_like(ro)
        p_e = numpy.concatenate((p_e1, p_e2))

        pyplot.scatter(r, p, s=1)
        pyplot.xlabel('r')
        pyplot.ylabel('P')
        pyplot.plot(re, p_e, color='r', lw=1)
        pyplot.legend(
            ['exact', self.options.scheme]
        )
        fname = os.path.join(self.output_dir, 'pressure.png')
        pyplot.savefig(fname, dpi=300)
        pyplot.close('all')

        pyplot.scatter(r, rho, s=1)
        pyplot.xlabel('r')
        pyplot.ylabel(r'$\rho$')
        pyplot.plot(re, rho_e, color='r', lw=1)
        pyplot.legend(
            ['exact', self.options.scheme]
        )
        fname = os.path.join(self.output_dir, 'density.png')
        pyplot.savefig(fname, dpi=300)
        pyplot.close('all')


if __name__ == '__main__':
    app = NohImplosion()
    app.run()
    app.post_process()
pysph-master/pysph/examples/gas_dynamics/riemann_2d.py
"""2-d Riemann problem (3 hours)

four shock waves' interaction at the center of the domain
"""

# numpy and standard imports
import numpy

# pysph imports
from pysph.base.utils import get_particle_array as gpa
from pysph.base.nnps import DomainManager
from pysph.solver.application import Application
from pysph.sph.scheme import GSPHScheme, SchemeChooser, ADKEScheme, GasDScheme
from pysph.sph.wc.crksph import CRKSPHScheme
from pysph.examples.gas_dynamics.riemann_2d_config import R2DConfig

# current case from all the possible unique cases
case = 3

# config for the current case
config = R2DConfig(case)
gamma = 1.4
gamma1 = gamma - 1
kernel_factor = 1.5
dt = 1e-4
dim = 2


class Riemann2D(Application):
    def initialize(self):
        # square domain
        self.dt = dt
        self.tf = config.endtime

    def add_user_options(self, group):
        group.add_argument(
            "--dscheme", choices=["constant_mass", "constant_volume"],
            dest="dscheme", default="constant_volume",
            help="spatial discretization scheme, one of constant_mass "
                 "or constant_volume"
        )
        group.add_argument(
            "--nparticles", action="store", type=int, dest="nparticles",
            default=200
        )

    def consume_user_options(self):
        self.nx = self.options.nparticles
        self.ny = self.nx
        self.dx = (config.xmax - config.xmin) / self.nx

        # discretization function
        if self.options.dscheme == "constant_volume":
            self.dfunction = self.create_particles_constant_volume
        elif self.options.dscheme == "constant_mass":
            self.dfunction = self.create_particles_constant_mass

    def create_particles_constant_volume(self):
        dx = self.dx
        dx2 = dx * 0.5
        vol = dx * dx

        xmin = config.xmin
        ymin = config.ymin
        xmax = config.xmax
        ymax = config.ymax
        xmid = config.xmid
        ymid = config.ymid

        rho1, u1, v1, p1 = config.rho1, config.u1, config.v1, config.p1
        rho2, u2, v2, p2 = config.rho2, config.u2, config.v2, config.p2
        rho3, u3, v3, p3 = config.rho3, config.u3, config.v3, config.p3
        rho4, u4, v4, p4 = config.rho4, config.u4, config.v4, config.p4

        x, y = numpy.mgrid[xmin+dx2:xmax:dx, ymin+dx2:ymax:dx]
        x = x.ravel()
        y = y.ravel()

        u = numpy.zeros_like(x)
        v = numpy.zeros_like(x)

        # density and mass
        rho = numpy.ones_like(x)
        p = numpy.ones_like(x)

        for i in range(x.size):
            if x[i] <= xmid:
                if y[i] <= ymid:
                    # w3
                    rho[i] = rho3
                    p[i] = p3
                    u[i] = u3
                    v[i] = v3
                else:
                    # w2
                    rho[i] = rho2
                    p[i] = p2
                    u[i] = u2
                    v[i] = v2
            else:
                if y[i] <= ymid:
                    # w4
                    rho[i] = rho4
                    p[i] = p4
                    u[i] = u4
                    v[i] = v4
                else:
                    # w1
                    rho[i] = rho1
                    p[i] = p1
                    u[i] = u1
                    v[i] = v1

        # thermal energy
        e = p/(gamma1 * rho)

        # mass
        m = vol * rho

        # smoothing length
        h = kernel_factor * (m/rho)**(1./dim)

        # create the particle array
        pa = gpa(name='fluid', x=x, y=y, m=m, rho=rho, h=h,
                 u=u, v=v, p=p, e=e, h0=h.copy())

        return pa

    def create_particles_constant_mass(self):
        dx = self.dx

        xmin = config.xmin
        ymin = config.ymin
        xmax = config.xmax
        ymax = config.ymax
        xmid = config.xmid
        ymid = config.ymid

        rho_max = config.rho_max
        nb4 = self.nx/4
        dx0 = (xmax - xmid)/nb4
        vol0 = dx0 * dx0
        m0 = rho_max * vol0

        # first quadrant
        vol1 = config.rho_max/config.rho1 * vol0
        dx = numpy.sqrt(vol1)
        dxb2 = 0.5 * dx

        x1, y1 = numpy.mgrid[xmid+dxb2:xmax:dx, ymid+dxb2:ymax:dx]
        x1 = x1.ravel()
        y1 = y1.ravel()

        u1 = numpy.ones_like(x1) * config.u1
        v1 = numpy.zeros_like(x1) * config.v1
        rho1 = numpy.ones_like(x1) * config.rho1
        p1 = numpy.ones_like(x1) * config.p1
        m1 = numpy.ones_like(x1) * m0
        h1 = numpy.ones_like(x1) * kernel_factor * (m1/rho1)**(0.5)

        # second quadrant
        vol2 = config.rho_max/config.rho2 * vol0
        dx = numpy.sqrt(vol2)
        dxb2 = 0.5 * dx

        x2, y2 = numpy.mgrid[xmid-dxb2:xmin:-dx, ymid+dxb2:ymax:dx]
        x2 = x2.ravel()
        y2 = y2.ravel()

        u2 = numpy.ones_like(x2) * config.u2
        v2 = numpy.ones_like(x2) * config.v2
        rho2 = numpy.ones_like(x2) * config.rho2
        p2 = numpy.ones_like(x2) * config.p2
        m2 = numpy.ones_like(x2) * m0
        h2 = numpy.ones_like(x2) * kernel_factor * (m2/rho2)**(0.5)

        # third quadrant
        vol3 = config.rho_max/config.rho3 * vol0
        dx = numpy.sqrt(vol3)
        dxb2 = 0.5 * dx

        x3, y3 = numpy.mgrid[xmid-dxb2:xmin:-dx, ymid-dxb2:ymin:-dx]
        x3 = x3.ravel()
        y3 = y3.ravel()

        u3 = numpy.ones_like(x3) * config.u3
        v3 = numpy.ones_like(x3) * config.v3
        rho3 = numpy.ones_like(x3) * config.rho3
        p3 = numpy.ones_like(x3) * config.p3
        m3 = numpy.ones_like(x3) * m0
        h3 = numpy.ones_like(x3) * kernel_factor * (m3/rho3)**(0.5)

        # fourth quadrant
        vol4 = config.rho_max/config.rho4 * vol0
        dx = numpy.sqrt(vol4)
        dxb2 = 0.5 * dx

        x4, y4 = numpy.mgrid[xmid+dxb2:xmax:dx, ymid-dxb2:ymin:-dx]
        x4 = x4.ravel()
        y4 = y4.ravel()

        u4 = numpy.ones_like(x4) * config.u4
        v4 = numpy.ones_like(x4) * config.v4
        rho4 = numpy.ones_like(x4) * config.rho4
        p4 = numpy.ones_like(x4) * config.p4
        m4 = numpy.ones_like(x4) * m0
        h4 = numpy.ones_like(x4) * kernel_factor * (m4/rho4)**(0.5)

        # concatenate the arrays
        x = numpy.concatenate([x1, x2, x3, x4])
        y = numpy.concatenate([y1, y2, y3, y4])
        p = numpy.concatenate([p1, p2, p3, p4])
        u = numpy.concatenate([u1, u2, u3, u4])
        v = numpy.concatenate([v1, v2, v3, v4])
        h = numpy.concatenate([h1, h2, h3, h4])
        m = numpy.concatenate([m1, m2, m3, m4])
        rho = numpy.concatenate([rho1, rho2, rho3, rho4])

        # derived variables
        e = p/((gamma-1.0) * rho)

        # create the particle array
        pa = gpa(
            name='fluid', x=x, y=y, m=m, rho=rho, h=h, u=u, v=v,
            p=p, e=e, h0=h.copy()
        )

        return pa

    def create_particles(self):
        fluid = self.dfunction()
        self.scheme.setup_properties([fluid])
        return [fluid]

    def create_domain(self):
        return DomainManager(
            xmin=config.xmin, xmax=config.xmax,
            ymin=config.ymin, ymax=config.ymax,
            mirror_in_x=True, mirror_in_y=True
        )

    def create_scheme(self):
gsph = GSPHScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=1.5, g1=0.25, g2=0.5, rsolver=2, interpolation=1, monotonicity=1, interface_zero=True, hybrid=False, blend_alpha=2.0, niter=40, tol=1e-6, has_ghosts=True ) adke = ADKEScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, alpha=1, beta=1.0, k=1.0, eps=0.5, g1=0.2, g2=0.4, has_ghosts=True) crksph = CRKSPHScheme( fluids=['fluid'], dim=dim, rho0=0, c0=0, nu=0, h0=0, p0=0, gamma=gamma, cl=2, has_ghosts=True ) mpm = GasDScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=1.2, alpha1=1.0, alpha2=0.1, beta=2.0, update_alpha1=True, update_alpha2=True, has_ghosts=True ) s = SchemeChooser( default='gsph', gsph=gsph, adke=adke, crksph=crksph, mpm=mpm ) return s def configure_scheme(self): s = self.scheme if self.options.scheme == 'gsph': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) elif self.options.scheme == 'adke': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) elif self.options.scheme == 'crksph': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) elif self.options.scheme == 'mpm': s.configure(kernel_factor=kernel_factor) s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) def post_process(self): if len(self.output_files) < 1 or self.rank > 0: return try: import matplotlib matplotlib.use('Agg') from matplotlib import pyplot except ImportError: print("Post processing requires matplotlib.") return from pysph.solver.utils import load import os outfile = self.output_files[-1] data = load(outfile) pa = data['arrays']['fluid'] x = pa.x y = pa.y pyplot.scatter(x, y, s=1) pyplot.xlim((0.1, 0.6)) pyplot.ylim((0.1, 0.6)) fig = os.path.join(self.output_dir, "positions.png") pyplot.savefig(fig, dpi=300) pyplot.close('all') if __name__ == "__main__": app = Riemann2D() app.run() app.post_process() 
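The constant-volume discretization in create_particles_constant_volume above assigns one of four constant states to each particle based on its quadrant relative to (xmid, ymid), then derives the thermal energy from the ideal-gas EOS, the mass from the fixed cell volume, and the smoothing length from (m/rho)**(1/dim). A minimal pure-Python sketch of that per-particle logic, using the case-3 (four shocks) states from riemann_2d_config; the helper name particle_state is ours, not part of PySPH:

```python
from math import sqrt

GAMMA = 1.4

# Case-3 quadrant states: (rho, p, u, v), numbered as in riemann_2d_config.
STATES = {
    1: (1.5, 1.5, 0.0, 0.0),          # x > xmid, y > ymid
    2: (0.5323, 0.3, 1.206, 0.0),     # x <= xmid, y > ymid
    3: (0.138, 0.029, 1.206, 1.206),  # x <= xmid, y <= ymid
    4: (0.5323, 0.3, 0.0, 1.206),     # x > xmid, y <= ymid
}


def particle_state(x, y, xmid, ymid, vol, kernel_factor=1.5):
    """Return (rho, p, u, v, e, m, h) for a particle at (x, y)."""
    if x <= xmid:
        quad = 3 if y <= ymid else 2
    else:
        quad = 4 if y <= ymid else 1
    rho, p, u, v = STATES[quad]
    e = p / ((GAMMA - 1.0) * rho)      # thermal energy from the ideal-gas EOS
    m = vol * rho                      # constant volume, so mass varies
    h = kernel_factor * sqrt(m / rho)  # (m/rho)**(1/dim) with dim == 2
    return rho, p, u, v, e, m, h
```

For a particle in the lower-left quadrant this picks state 3; the smoothing length reduces to kernel_factor * sqrt(vol), so it is uniform for the constant-volume scheme even though the mass is not.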
pysph-master/pysph/examples/gas_dynamics/riemann_2d_config.py000066400000000000000000000107151356347341600250710ustar00rootroot00000000000000"""Different configurations for the 2D Riemann problems. The different configurations and expected solutions to these problems are defined in 'Solution of Two Dimensional Riemann Problems for Gas Dynamics without Riemann Problem Solvers' by Alexander Kurganov and Eitan Tadmor General notations for the different configurations are (S) for shock waves, (R) for Rarefactions and (J) for contact/slip lines Code from https://bitbucket.org/kunalp/sph2d/ path: src/examples/r2d_config.py """ class R2DConfig(object): def __init__(self, config=3): self.config = config self.xmin = -0.25 self.xmax = 1.15 self.ymin = -0.25 self.ymax = 1.15 self.zmin = 0 self.zmax = 0 self.endtime = 0.25 if config == 12: self.setup_config12() elif config == 2: self.setup_config2() elif config == 3: self.setup_config3() elif config == 4: self.setup_config4() elif config == 5: self.setup_config5() elif config == 6: self.setup_config6() elif config == 8: self.setup_config8() self.xmid = 0.5 * (self.xmin + self.xmax) self.ymid = 0.5 * (self.ymin + self.ymax) self.rho_max = max(self.rho1, self.rho2, self.rho3, self.rho4) self.rho_min = min(self.rho1, self.rho2, self.rho3, self.rho4) def setup_config3(self): """Four Shocks""" self.endtime = 0.3 self.p1 = 1.5 self.rho1 = 1.5 self.u1 = 0.0 self.v1 = 0.0 self.p2 = 0.3 self.rho2 = 0.5323 self.u2 = 1.206 self.v2 = 0.0 self.p3 = 0.029 self.rho3 = 0.138 self.u3 = 1.206 self.v3 = 1.206 self.p4 = 0.3 self.rho4 = 0.5323 self.u4 = 0.0 self.v4 = 1.206 def setup_config2(self): """Four Rarefactions""" self.endtime = 0.2 self.p1 = 1.0 self.rho1 = 1.0 self.u1 = 0.0 self.v1 = 0.0 self.p2 = 0.4 self.rho2 = 0.5197 self.u2 = -0.7259 self.v2 = 0.0 self.p3 = 1.0 self.rho3 = 1.0 self.u3 = -0.7259 self.v3 = -0.7259 self.p4 = 0.4 self.rho4 = 0.5197 self.u4 = 0.0 self.v4 = -0.7259 def setup_config4(self): self.endtime = 0.25 self.p1 = 1.1 
self.rho1 = 1.1 self.u1 = 0.0 self.v1 = 0.0 self.p2 = 0.35 self.rho2 = 0.5065 self.u2 = 0.8939 self.v2 = 0.0 self.p3 = 1.1 self.rho3 = 1.1 self.u3 = 0.8939 self.v3 = 0.8939 self.p4 = 0.35 self.rho4 = 0.5065 self.u4 = 0.0 self.v4 = 0.8939 def setup_config5(self): self.endtime = 0.23 self.p1 = 1 self.rho1 = 1 self.u1 = -0.75 self.v1 = -0.5 self.p2 = 1.0 self.rho2 = 2.0 self.u2 = -0.75 self.v2 = 0.5 self.p3 = 1 self.rho3 = 1 self.u3 = 0.75 self.v3 = 0.5 self.p4 = 1.0 self.rho4 = 3.0 self.u4 = 0.75 self.v4 = -0.5 def setup_config6(self): self.endtime = 0.3 self.p1 = 1 self.rho1 = 1 self.u1 = 0.75 self.v1 = -0.5 self.p2 = 1.0 self.rho2 = 2.0 self.u2 = 0.75 self.v2 = 0.5 self.p3 = 1 self.rho3 = 1 self.u3 = -0.75 self.v3 = 0.5 self.p4 = 1.0 self.rho4 = 3.0 self.u4 = -0.75 self.v4 = -0.5 def setup_config8(self): self.endtime = 0.25 self.p1 = 0.4 self.rho1 = 0.5197 self.u1 = 0.1 self.v1 = 0.1 self.p2 = 1 self.rho2 = 1.0 self.u2 = -0.6259 self.v2 = 0.1 self.p3 = 1 self.rho3 = 0.8 self.u3 = 0.1 self.v3 = 0.1 self.p4 = 1.0 self.rho4 = 1.0 self.u4 = 0.1 self.v4 = -0.6259 def setup_config12(self): self.endtime = 0.25 self.p1 = 0.4 self.rho1 = 0.5313 self.u1 = 0.0 self.v1 = 0.0 self.p2 = 1 self.rho2 = 1.0 self.u2 = 0.7276 self.v2 = 0.0 self.p3 = 1 self.rho3 = 0.8 self.u3 = 0.0 self.v3 = 0.0 self.p4 = 1.0 self.rho4 = 1.0 self.u4 = 0.0 self.v4 = 0.7276 if __name__ == "__main__": config = R2DConfig() pysph-master/pysph/examples/gas_dynamics/riemann_solver.py000066400000000000000000000236151356347341600245540ustar00rootroot00000000000000""" Exact solution to Riemann problems. 
""" import numpy from math import sqrt def set_gamma(g): global gamma, gp1_2g, gm1_2g, gm1_gp1, gm1_2, gm1, gp1 gamma = g gm1_2g = (gamma - 1.0) / (2.0 * gamma) gp1_2g = (gamma + 1.0) / (2.0 * gamma) gm1_gp1 = (gamma - 1.0) / (gamma + 1.0) gm1_2 = (gamma - 1.0) / 2.0 gm1 = gamma - 1.0 gp1 = gamma + 1.0 def solve(x_min=-0.5, x_max=0.5, x_0=0.0, t=0.1, p_l=1.0, p_r=0.1, rho_l=1.0, rho_r=0.125, u_l=0.0, u_r=0.0, N=101): r""" Parameters ---------- x_min : float the leftmost point of domain x_max : float the rightmost point of domain x_0 : float the position of the diaphgram t : float total time of simulation p_l, u_l, rho_l : float pressure, velocity, density in the left region p_r, u_r, rho_r : float pressure, velocity, density in the right region N : int number of points under study The default arguments mentioned correspond to the Sod shock tube case. Notes ----- The function returns the exact solution in the order of density, velocity, pressure, energy and x-coordinates of the points under study. References ---------- .. E.F. Toro, Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer (2009), Chapter 4, pp. 
115-138 """ c_l = sqrt(gamma * p_l / rho_l) c_r = sqrt(gamma * p_r / rho_r) try: import scipy print("Using fsolve to solve the non-linear equation") p_star, u_star = star_pu_fsolve(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r) except ImportError: print("Using Newton-Raphson method to solve the non-linear equation") p_star, u_star = star_pu_newton_raphson(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r) # check if the discontinuity is inside the domain msg = "discontinuity not in domain" assert x_0 >= x_min and x_0 <= x_max, msg # transform domain according to initial discontinuity x_min = x_min - x_0 x_max = x_max - x_0 print('p_star=' + str(p_star)) print('u_star=' + str(u_star)) x = numpy.linspace(x_min, x_max, N) density = [] pressure = [] velocity = [] energy = [] for i in range(0, N): s = x[i] / t rho, u, p = complete_solution(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r, p_star, u_star, s) density.append(rho) velocity.append(u) pressure.append(p) energy.append(p / (gm1 * rho)) # transform the domain back to original coordinates x = x + x_0 return tuple(map(numpy.asarray, [density, velocity, pressure, energy, x])) def _flux_fsolve(pressure, rho1, c1, p1): if pressure <= p1: # Rarefaction return lambda p: (2 / gm1) * c1 * ((p / p1)**gm1_2g - 1.0) else: # Shock return lambda p: ( (p - p1) * sqrt(((2 / gp1) / rho1) / ((gm1_gp1 * p1) + p)) ) def star_pu_fsolve(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r): p_min = min(p_l, p_r) p_max = max(p_l, p_r) f_min = _flux_fsolve(p_min, rho_l, c_l, p_l)(p_min) + \ _flux_fsolve(p_min, rho_r, c_r, p_r)(p_min) + u_r - u_l f_max = _flux_fsolve(p_max, rho_l, c_l, p_l)(p_max) + \ _flux_fsolve(p_max, rho_r, c_r, p_r)(p_max) + u_r - u_l if (f_min > 0 and f_max > 0): p_guess = 0.5 * (0 + p_min) p_star, u_star = _star_pu(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r, p_guess) elif(f_min <= 0 and f_max >= 0): p_guess = (p_l + p_r) * 0.5 p_star, u_star = _star_pu(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r, p_guess) else: p_guess = 2 * p_max p_star, 
u_star = _star_pu(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r, p_guess) return p_star, u_star def _star_pu(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r, p_guess): """Computes the pressure and velocity in the star region using fsolve from scipy module""" fl = _flux_fsolve(p_guess, rho_l, c_l, p_l) fr = _flux_fsolve(p_guess, rho_r, c_r, p_r) f = lambda p: fl(p) + fr(p) + u_r - u_l from scipy.optimize import fsolve p_star = fsolve(f, 0.0) u_star = ( 0.5 * (u_l + u_r + _flux_fsolve(p_star, rho_r, c_r, p_r)(p_star) - _flux_fsolve(p_star, rho_l, c_l, p_l)(p_star)) ) return p_star, u_star def _flux_newton(pressure, rho1, c1, p1): if pressure <= p1: # Rarefaction flux = (2 / gm1) * c1 * ((pressure / p1)**gm1_2g - 1.0) flux_derivative = (1.0 / (rho1 * c1)) * \ (pressure / p1)**(-gp1_2g) return flux, flux_derivative else: # Shock flux = ( (pressure - p1) * sqrt(((2 / gp1) / rho1) / ((gm1_gp1 * p1) + pressure)) ) flux_derivative = ( (1.0 - 0.5 * (pressure - p1) / ((gm1_gp1 * p1) + pressure)) * sqrt(((2 / gp1) / rho1) / ((gm1_gp1 * p1) + pressure)) ) return flux, flux_derivative def star_pu_newton_raphson(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r): tol_pre = 1.0e-06 nr_iter = 20 p_start = _compute_guess_p(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r) p_old = p_start u_diff = u_r - u_l for i in range(nr_iter): fL, fLd = _flux_newton(p_old, rho_l, c_l, p_l) fR, fRd = _flux_newton(p_old, rho_r, c_r, p_r) p = p_old - (fL + fR + u_diff) / (fLd + fRd) change = 2.0 * abs((p - p_old) / (p + p_old)) if change <= tol_pre: break if p < 0.0: p = tol_pre p_old = p u = 0.5 * (u_l + u_r + fR - fL) return p, u def _compute_guess_p(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r): """ Computes the initial guess for pressure. References ---------- E.F. Toro, Riemann Solvers and Numerical Methods for Fluid Dynamics Springer (2009), Chapter 9, pp. 
297-306 """ quser = 2.0 p_linearized = 0.5 * (p_l + p_r) + 0.5 * (u_l - u_r) * \ 0.25 * (rho_l + rho_r) * (c_l + c_r) p_linearized = max(0.0, p_linearized) p_min = min(p_l, p_r) p_max = max(p_l, p_r) qmax = p_max / p_min if( qmax <= quser and (p_min <= p_linearized and p_linearized <= p_max) ): """A Primitive Variable Riemann Solver (PMRS)""" return p_linearized else: """A Two-Rarefaction Riemann Solver (TRRS)""" if p_linearized < p_min: p_lr = (p_l / p_r)**gm1_2g u_linearized = (p_lr * u_l / c_l + u_r / c_r + (2 / gm1) * (p_lr - 1.0)) / (p_lr / c_l + 1.0 / c_r) return ( 0.5 * (p_l * (1.0 + gm1_2 * (u_l - u_linearized) / c_l)**(1.0 / gm1_2g) + p_r * (1.0 + gm1_2 * (u_linearized - u_r) / c_r) ** (1.0 / gm1_2g)) ) else: """A Two-Shock Riemann Solver (TSRS)""" gL = sqrt(((2 / gp1) / rho_l) / (gm1_gp1 * p_l + p_linearized)) gR = sqrt(((2 / gp1) / rho_r) / (gm1_gp1 * p_r + p_linearized)) return (gL * p_l + gR * p_r - (u_r - u_l)) / (gL + gR) def complete_solution(rho_l, u_l, p_l, c_l, rho_r, u_r, p_r, c_r, p_star, u_star, s): if s <= u_star: rho, u, p = left_contact(rho_l, u_l, p_l, c_l, p_star, u_star, s) else: rho, u, p = right_contact(rho_r, u_r, p_r, c_r, p_star, u_star, s) return rho, u, p def left_contact(rho_l, u_l, p_l, c_l, p_star, u_star, s): if p_star <= p_l: rho, u, p = left_rarefaction(rho_l, u_l, p_l, c_l, p_star, u_star, s) else: rho, u, p = left_shock(rho_l, u_l, p_l, c_l, p_star, u_star, s) return rho, u, p def left_rarefaction(rho_l, u_l, p_l, c_l, p_star, u_star, s): s_head = u_l - c_l s_tail = u_star - c_l * (p_star / p_l)**gm1_2g if s <= s_head: rho, u, p = rho_l, u_l, p_l elif (s > s_head and s <= s_tail): u = (2 / gp1) * (c_l + gm1_2 * u_l + s) c = (2 / gp1) * (c_l + gm1_2 * (u_l - s)) rho = rho_l * (c / c_l)**(2 / gm1) p = p_l * (c / c_l)**(1.0 / gm1_2g) else: rho = rho_l * (p_star / p_l)**(1.0 / gamma) u = u_star p = p_star return rho, u, p def left_shock(rho_l, u_l, p_l, c_l, p_star, u_star, s): sL = u_l - c_l * sqrt(gp1_2g * (p_star / p_l) + 
gm1_2g) if s <= sL: rho, u, p = rho_l, u_l, p_l else: rho_1 = rho_l * ((p_star / p_l) + gm1_gp1) / \ ((p_star / p_l) * gm1_gp1 + 1.0) rho, u, p = rho_1, u_star, p_star return rho, u, p def right_contact(rho_r, u_r, p_r, c_r, p_star, u_star, s): if p_star > p_r: rho, u, p = right_shock(rho_r, u_r, p_r, c_r, p_star, u_star, s) else: rho, u, p = right_rarefaction(rho_r, u_r, p_r, c_r, p_star, u_star, s) return rho, u, p def right_rarefaction(rho_r, u_r, p_r, c_r, p_star, u_star, s): s_head = u_r + c_r s_tail = u_star + c_r * (p_star / p_r)**gm1_2g if s >= s_head: rho, u, p = rho_r, u_r, p_r elif (s < s_head and s > s_tail): u = (2 / gp1) * (-c_r + gm1_2 * u_r + s) c = (2 / gp1) * (c_r - gm1_2 * (u_r - s)) rho = rho_r * (c / c_r)**(2 / gm1) p = p_r * (c / c_r)**(1.0 / gm1_2g) else: rho = rho_r * (p_star / p_r)**(1.0 / gamma) u = u_star p = p_star return rho, u, p def right_shock(rho_r, u_r, p_r, c_r, p_star, u_star, s): sR = u_r + c_r * sqrt(gp1_2g * (p_star / p_r) + gm1_2g) if s >= sR: rho, u, p = rho_r, u_r, p_r else: rho_1 = rho_r * ((p_star / p_r) + gm1_gp1) / \ ((p_star / p_r) * gm1_gp1 + 1.0) rho, u, p = rho_1, u_star, p_star return rho, u, p if __name__ == '__main__': set_gamma(1.4) solve() pysph-master/pysph/examples/gas_dynamics/robert.py000066400000000000000000000052431356347341600230230ustar00rootroot00000000000000"""Simulate the Robert's problem (1D) (40 seconds). 
""" from pysph.examples.gas_dynamics.shocktube_setup import ShockTubeSetup from pysph.sph.scheme import ADKEScheme, GasDScheme, GSPHScheme, SchemeChooser import numpy # Numerical constants dim = 1 gamma = 1.4 gamma1 = gamma - 1.0 # solution parameters dt = 1e-4 tf = 0.1 # domain size and discretization parameters xmin = -0.5 xmax = 0.5 class Robert(ShockTubeSetup): def initialize(self): self.xmin = -0.5 self.xmax = 0.5 self.x0 = 0.0 self.rhol = 3.86 self.rhor = 1.0 self.pl = 10.33 self.pr = 1.0 self.ul = -0.39 self.ur = -3.02 def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=2.0, help="Ratio h/dx." ) group.add_argument( "--nl", action="store", type=float, dest="nl", default=1930, help="Number of particles in left region" ) def consume_user_options(self): self.nl = self.options.nl self.hdx = self.options.hdx ratio = self.rhor/self.rhol self.xb_ratio = 2 self.nr = ratio*self.nl self.dxl = 0.5/self.nl self.dxr = 0.5/self.nr self.h0 = self.hdx * self.dxr self.hdx = self.hdx def create_particles(self): return self.generate_particles(xmin=self.xmin*self.xb_ratio, xmax=self.xmax*self.xb_ratio, dxl=self.dxl, dxr=self.dxr, m=self.dxr, pl=self.pl, pr=self.pr, h0=self.h0, bx=0.03, gamma1=gamma1, ul=self.ul, ur=self.ur) def create_scheme(self): self.dt = dt self.tf = tf adke = ADKEScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, alpha=1, beta=2.0, k=1.0, eps=0.5, g1=0.5, g2=1.0) mpm = GasDScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=1.2, alpha1=1.0, alpha2=0.1, beta=2.0, update_alpha1=True, update_alpha2=True ) gsph = GSPHScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=2.0, g1=0.2, g2=0.4, rsolver=2, interpolation=1, monotonicity=2, interface_zero=True, hybrid=False, blend_alpha=2.0, niter=40, tol=1e-6 ) s = SchemeChooser(default='adke', adke=adke, mpm=mpm, gsph=gsph) return s if __name__ == '__main__': app = Robert() app.run() app.post_process() 
pysph-master/pysph/examples/gas_dynamics/sedov.py000066400000000000000000000036501356347341600226460ustar00rootroot00000000000000"""Sedov point explosion problem. (7 minutes) Particles are distributed on concentric circles about the origin with increasing number of particles with increasing radius. A unit charge is distributed about the center which gives the initial pressure disturbance. """ # NumPy and standard library imports import os.path import numpy # PySPH base and carray imports from pysph.base.utils import get_particle_array as gpa from pysph.solver.application import Application from pysph.sph.scheme import GasDScheme # Numerical constants dim = 2 gamma = 5.0/3.0 gamma1 = gamma - 1.0 # solution parameters dt = 1e-4 tf = 0.1 # scheme constants alpha1 = 10.0 alpha2 = 1.0 beta = 2.0 kernel_factor = 1.2 class SedovPointExplosion(Application): def create_particles(self): fpath = os.path.join( os.path.dirname(__file__), 'ndspmhd-sedov-initial-conditions.npz' ) data = numpy.load(fpath) x = data['x'] y = data['y'] rho = data['rho'] p = data['p'] e = data['e'] h = data['h'] m = data['m'] fluid = gpa(name='fluid', x=x, y=y, rho=rho, p=p, e=e, h=h, m=m) self.scheme.setup_properties([fluid]) # set the initial smoothing length proportional to the particle # volume fluid.h[:] = kernel_factor * (fluid.m/fluid.rho)**(1./dim) print("Sedov's point explosion with %d particles" % (fluid.get_number_of_particles())) return [fluid,] def create_scheme(self): s = GasDScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=kernel_factor, alpha1=alpha1, alpha2=alpha2, beta=beta, adaptive_h_scheme="gsph", update_alpha1=True, update_alpha2=True ) s.configure_solver(dt=dt, tf=tf, adaptive_timestep=False, pfreq=25) return s if __name__ == '__main__': app = SedovPointExplosion() app.run() pysph-master/pysph/examples/gas_dynamics/shocktube.py000066400000000000000000000172351356347341600235210ustar00rootroot00000000000000"""Two-dimensional Shocktube problem. 
(10 mins) The density is assumed to be uniform and the shocktube problem is defined by the pressure jump. The pressure jump of 10^5 (pl = 1000.0, pr = 0.01) corresponds to the Woodward and Colella strong shock or blastwave problem. """ # NumPy and standard library imports import numpy from pysph.base.nnps import DomainManager from pysph.base.utils import get_particle_array as gpa from pysph.solver.application import Application from pysph.sph.scheme import GasDScheme, ADKEScheme, GSPHScheme, SchemeChooser from pysph.sph.wc.crksph import CRKSPHScheme # PySPH tools from pysph.tools import uniform_distribution as ud # Numerical constants dim = 2 gamma = 1.4 gamma1 = gamma - 1.0 # solution parameters dt = 7.5e-6 tf = 0.005 # domain size xmin = 0. xmax = 1 dx = 0.002 ny = 50 ymin = 0 ymax = ny * dx x0 = 0.5 # initial discontinuity # scheme constants alpha1 = 1.0 alpha2 = 1.0 beta = 2.0 kernel_factor = 1.5 h0 = kernel_factor * dx class ShockTube2D(Application): def initialize(self): self.xmin = xmin self.xmax = xmax self.ymin = ymin self.ymax = ymax self.dx = dx self.hdx = 1.7 self.x0 = x0 self.ny = ny self.pl = 1000 self.pr = 0.01 self.rhol = 1.0 self.rhor = 1.0 self.ul = 0. self.ur = 0.
def create_domain(self): return DomainManager( xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, periodic_in_x=True, periodic_in_y=True) def create_particles(self): global dx data = ud.uniform_distribution_cubic2D(dx, xmin, xmax, ymin, ymax) x = data[0] y = data[1] dx = data[2] dy = data[3] # volume estimate volume = dx * dy # indices on either side of the initial discontinuity right_indices = numpy.where(x > x0)[0] # density is uniform rho = numpy.ones_like(x) * self.rhol rho[right_indices] = self.rhor # pl = 1000.0, pr = 0.01 p = numpy.ones_like(x) * self.pl p[right_indices] = self.pr # const h and mass h = numpy.ones_like(x) * self.hdx * self.dx m = numpy.ones_like(x) * volume * rho # ul = ur = 0 u = numpy.ones_like(x) * self.ul u[right_indices] = self.ur # vl = vr = 0 v = numpy.ones_like(x) * self.vl v[right_indices] = self.vr # thermal energy from the ideal gas EOS e = p/(gamma1*rho) fluid = gpa(name='fluid', x=x, y=y, rho=rho, p=p, e=e, h=h, m=m, h0=h.copy(), u=u, v=v) self.scheme.setup_properties([fluid]) print("2D Shocktube with %d particles" % (fluid.get_number_of_particles())) return [fluid, ] def create_scheme(self): self.dt = dt self.tf = tf adke = ADKEScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, alpha=1, beta=1, k=1.0, eps=0.8, g1=0.5, g2=0.5, has_ghosts=True) mpm = GasDScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=kernel_factor, alpha1=alpha1, alpha2=alpha2, beta=beta, max_density_iterations=1000, density_iteration_tolerance=1e-4, has_ghosts=True ) gsph = GSPHScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=1.5, g1=0.25, g2=0.5, rsolver=2, interpolation=1, monotonicity=2, interface_zero=True, hybrid=False, blend_alpha=2.0, niter=40, tol=1e-6, has_ghosts=True ) crksph = CRKSPHScheme( fluids=['fluid'], dim=dim, rho0=0, c0=0, nu=0, h0=0, p0=0, gamma=gamma, cl=2, has_ghosts=True ) s = SchemeChooser( default='adke', adke=adke, mpm=mpm, gsph=gsph, crksph=crksph ) return s def configure_scheme(self): s
= self.scheme if self.options.scheme == 'mpm': s.configure(kernel_factor=kernel_factor) s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=True, pfreq=50) elif self.options.scheme == 'adke': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) elif self.options.scheme == 'gsph': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) elif self.options.scheme == 'crksph': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) def post_process(self): try: import matplotlib matplotlib.use('Agg') from matplotlib import pyplot except ImportError: print("Post processing requires matplotlib.") return if self.rank > 0 or len(self.output_files) == 0: return import os from pysph.solver.utils import load from pysph.examples.gas_dynamics import riemann_solver outfile = self.output_files[-1] data = load(outfile) pa = data['arrays']['fluid'] try: gamma = self.options.gamma or 1.4 except AttributeError: gamma = 1.4 print(gamma) riemann_solver.set_gamma(gamma) rho_e, u_e, p_e, e_e, x_e = riemann_solver.solve( x_min=0, x_max=1, x_0=0.5, t=self.tf, p_l=self.pl, p_r=self.pr, rho_l=self.rhol, rho_r=self.rhor, u_l=self.ul, u_r=self.ur, N=101 ) x = pa.x u = pa.u e = pa.e p = pa.p rho = pa.rho cs = pa.cs pyplot.scatter( x, rho, label='pysph (' + str(self.options.scheme) + ')', s=1, color='k' ) pyplot.plot(x_e, rho_e, label='exact') pyplot.xlim((0.2, 0.8)) pyplot.xlabel('x') pyplot.ylabel('rho') pyplot.legend() fig = os.path.join(self.output_dir, "density.png") pyplot.savefig(fig, dpi=300) pyplot.clf() pyplot.scatter( x, e, label='pysph (' + str(self.options.scheme) + ')', s=1, color='k' ) pyplot.plot(x_e, e_e, label='exact') pyplot.xlim((0.2, 0.8)) pyplot.xlabel('x') pyplot.ylabel('e') pyplot.legend() fig = os.path.join(self.output_dir, "energy.png") pyplot.savefig(fig, dpi=300) pyplot.clf() pyplot.scatter( x, rho * u, label='pysph (' + str(self.options.scheme) + ')', s=1, color='k' ) pyplot.plot(x_e, rho_e * 
u_e, label='exact') pyplot.xlim((0.2, 0.8)) pyplot.xlabel('x') pyplot.ylabel('M') pyplot.legend() fig = os.path.join(self.output_dir, "Machno.png") pyplot.savefig(fig, dpi=300) pyplot.clf() pyplot.scatter( x, p, label='pysph (' + str(self.options.scheme) + ')', s=1, color='k' ) pyplot.plot(x_e, p_e, label='exact') pyplot.xlim((0.2, 0.8)) pyplot.xlabel('x') pyplot.ylabel('p') pyplot.legend() fig = os.path.join(self.output_dir, "pressure.png") pyplot.savefig(fig, dpi=300) pyplot.clf() fname = os.path.join(self.output_dir, 'results.npz') numpy.savez(fname, x=x, u=u, e=e, cs=cs, rho=rho, p=p) fname = os.path.join(self.output_dir, 'exact.npz') numpy.savez(fname, x=x_e, u=u_e, e=e_e, rho=rho_e, p=p_e) if __name__ == '__main__': app = ShockTube2D() app.run() app.post_process() pysph-master/pysph/examples/gas_dynamics/shocktube_setup.py000066400000000000000000000127751356347341600247450ustar00rootroot00000000000000""" This is a setup example that will be used other gas_dynamics problem like SodShockTube, BlastWave """ import os import numpy from math import sqrt from pysph.base.utils import get_particle_array as gpa from pysph.solver.application import Application from pysph.examples.gas_dynamics import riemann_solver class ShockTubeSetup(Application): def generate_particles(self, xmin, xmax, dxl, dxr, m, pl, pr, h0, bx, gamma1, ul=0, ur=0, constants={}): xt1 = numpy.arange(xmin - bx + 0.5 * dxl, 0, dxl) xt2 = numpy.arange(0.5 * dxr, xmax + bx, dxr) xt = numpy.concatenate([xt1, xt2]) leftb_indices = numpy.where(xt <= xmin)[0] left_indices = numpy.where((xt > xmin) & (xt < 0))[0] right_indices = numpy.where((xt >= 0) & (xt < xmax))[0] rightb_indices = numpy.where(xt >= xmax)[0] x1 = xt[left_indices] x2 = xt[right_indices] b1 = xt[leftb_indices] b2 = xt[rightb_indices] x = numpy.concatenate([x1, x2]) b = numpy.concatenate([b1, b2]) right_indices = numpy.where(x > 0.0)[0] rho = numpy.ones_like(x) * m / dxl rho[right_indices] = m / dxr p = numpy.ones_like(x) * pl 
p[right_indices] = pr u = numpy.ones_like(x) * ul u[right_indices] = ur h = numpy.ones_like(x) * h0 m = numpy.ones_like(x) * m e = p / (gamma1 * rho) wij = numpy.ones_like(x) bwij = numpy.ones_like(b) brho = numpy.ones_like(b) bp = numpy.ones_like(b) be = bp / (gamma1 * brho) bm = numpy.ones_like(b) * dxl bh = numpy.ones_like(b) * 4 * h0 bhtmp = numpy.ones_like(b) fluid = gpa( constants=constants, name='fluid', x=x, rho=rho, p=p, e=e, h=h, m=m, u=u, wij=wij, h0=h.copy() ) boundary = gpa( constants=constants, name='boundary', x=b, rho=brho, p=bp, e=be, h=bh, m=bm, wij=bwij, h0=bh.copy(), htmp=bhtmp ) self.scheme.setup_properties([fluid, boundary]) print("1D Shocktube with %d particles" % (fluid.get_number_of_particles())) return [fluid, boundary] def post_process(self): try: import matplotlib matplotlib.use('Agg') from matplotlib import pyplot as plt except ImportError: print("Post processing requires matplotlib.") return if self.rank > 0 or len(self.output_files) == 0: return last_output = self.output_files[-1] from pysph.solver.utils import load data = load(last_output) pa = data['arrays']['fluid'] gamma = self.options.gamma if self.options.gamma else 1.4 riemann_solver.set_gamma(gamma) rho_e, u_e, p_e, e_e, x_e = riemann_solver.solve( x_min=self.xmin, x_max=self.xmax, x_0=self.x0, t=self.tf, p_l=self.pl, p_r=self.pr, rho_l=self.rhol, rho_r=self.rhor, u_l=self.ul, u_r=self.ur, N=101 ) x = pa.x rho = pa.rho e = pa.e cs = pa.cs u = pa.u p = pa.p h = pa.h plt.plot(x, rho, label='pysph (' + str(self.options.scheme) + ')') plt.plot(x_e, rho_e, label='exact') plt.xlabel('x') plt.ylabel('rho') plt.legend() fig = os.path.join(self.output_dir, "density.png") plt.savefig(fig, dpi=300) plt.clf() plt.plot(x, e, label='pysph (' + str(self.options.scheme) + ')') plt.plot(x_e, e_e, label='exact') plt.xlabel('x') plt.ylabel('e') plt.legend() fig = os.path.join(self.output_dir, "energy.png") plt.savefig(fig, dpi=300) plt.clf() plt.plot(x, rho * u, label='pysph (' + 
str(self.options.scheme) + ')') plt.plot(x_e, rho_e * u_e, label='exact') plt.xlabel('x') plt.ylabel('M') plt.legend() fig = os.path.join(self.output_dir, "Machno.png") plt.savefig(fig, dpi=300) plt.clf() plt.plot(x, p, label='pysph (' + str(self.options.scheme) + ')') plt.plot(x_e, p_e, label='exact') plt.xlabel('x') plt.ylabel('p') plt.legend() fig = os.path.join(self.output_dir, "pressure.png") plt.savefig(fig, dpi=300) plt.clf() fname = os.path.join(self.output_dir, 'results.npz') numpy.savez(fname, x=x, u=u, e=e, cs=cs, rho=rho, p=p, h=h) fname = os.path.join(self.output_dir, 'exact.npz') numpy.savez(fname, x=x_e, u=u_e, e=e_e, p=p_e, rho=rho_e) def configure_scheme(self): s = self.scheme dxl = 0.5/self.nl ratio = self.rhor/self.rhol nr = ratio*self.nl dxr = 0.5/self.nr h0 = self.hdx * self.dxr kernel_factor = self.options.hdx if self.options.scheme == 'mpm': s.configure(kernel_factor=kernel_factor) s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=True, pfreq=50) elif self.options.scheme == 'adke': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) elif self.options.scheme == 'gsph': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) elif self.options.scheme == 'crk': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=1) pysph-master/pysph/examples/gas_dynamics/sjogreen.py000066400000000000000000000051271356347341600233430ustar00rootroot00000000000000"""Simulate the Sjogreen problem in 1D (10 seconds). 
""" from pysph.examples.gas_dynamics.shocktube_setup import ShockTubeSetup from pysph.sph.scheme import ADKEScheme, GasDScheme, GSPHScheme, SchemeChooser import numpy # Numerical constants dim = 1 gamma = 1.4 gamma1 = gamma - 1.0 # solution parameters dt = 1e-4 tf = 0.1 # solution parameters dt = 1e-4 tf = 0.1 # domain size and discretization parameters xmin = -0.5 xmax = 0.5 class SjoGreen(ShockTubeSetup): def initialize(self): self.xmin = -0.5 self.xmax = 0.5 self.x0 = 0.0 self.rhol = 1.0 self.rhor = 1.0 self.pl = 0.4 self.pr = 0.4 self.ul = -2.0 self.ur = 2.0 def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=2.5, help="Ratio h/dx." ) group.add_argument( "--nl", action="store", type=float, dest="nl", default=200, help="Number of particles in left region" ) def consume_user_options(self): self.nl = self.options.nl self.hdx = self.options.hdx ratio = self.rhor/self.rhol self.nr = ratio*self.nl self.dxl = 0.5/self.nl self.dxr = 0.5/self.nr self.h0 = self.hdx * self.dxr self.hdx = self.hdx def create_particles(self): lng = numpy.zeros(1, dtype=float) consts = {'lng': lng} return self.generate_particles( xmin=self.xmin, xmax=self.xmax, dxl=self.dxl, dxr=self.dxr, m=self.dxl, pl=self.pl, pr=self.pr, h0=self.h0, bx=0.03, gamma1=gamma1, ul=self.ul, ur=self.ur, constants=consts ) def create_scheme(self): self.dt = dt self.tf = tf adke = ADKEScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, alpha=0, beta=0.0, k=1.0, eps=1.0, g1=0.0, g2=0.0) mpm = GasDScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=1.5, alpha1=0, alpha2=0, beta=2.0, update_alpha1=True, update_alpha2=True ) gsph = GSPHScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=1.5, g1=0.2, g2=0.4, rsolver=2, interpolation=1, monotonicity=2, interface_zero=True, hybrid=False, blend_alpha=2.0, niter=40, tol=1e-6 ) s = SchemeChooser(default='adke', adke=adke, mpm=mpm, gsph=gsph) return s if __name__ == 
'__main__': app = SjoGreen() app.run() app.post_process() pysph-master/pysph/examples/gas_dynamics/sod_shocktube.py000066400000000000000000000057411356347341600243650ustar00rootroot00000000000000"""Simulate the classical Sod Shocktube problem in 1D (5 seconds). """ from pysph.examples.gas_dynamics.shocktube_setup import ShockTubeSetup from pysph.sph.scheme import ADKEScheme, GasDScheme, GSPHScheme, SchemeChooser from pysph.sph.wc.crksph import CRKSPHScheme from pysph.base.nnps import DomainManager import numpy # Numerical constants dim = 1 gamma = 1.4 gamma1 = gamma - 1.0 # solution parameters dt = 1e-4 tf = 0.15 class SodShockTube(ShockTubeSetup): def initialize(self): self.xmin = -0.5 self.xmax = 0.5 self.x0 = 0.0 self.rhol = 1.0 self.rhor = 0.125 self.pl = 1.0 self.pr = 0.1 self.ul = 0.0 self.ur = 0.0 def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.2, help="Ratio h/dx." ) group.add_argument( "--nl", action="store", type=float, dest="nl", default=640, help="Number of particles in left region" ) def consume_user_options(self): self.nl = self.options.nl self.hdx = self.options.hdx ratio = self.rhor/self.rhol self.nr = self.nl*ratio self.dxl = 0.5/self.nl self.dxr = 0.5/self.nr self.ml = self.dxl * self.rhol self.h0 = self.hdx * self.dxr self.hdx = self.hdx def create_particles(self): return self.generate_particles(xmin=self.xmin, xmax=self.xmax, dxl=self.dxl, dxr=self.dxr, m=self.ml, pl=self.pl, pr=self.pr, h0=self.h0, bx=0.03, gamma1=gamma1, ul=self.ul, ur=self.ur) def create_domain(self): return DomainManager( xmin=self.xmin, xmax=self.xmax, mirror_in_x=True ) def create_scheme(self): self.dt = dt self.tf = tf adke = ADKEScheme( fluids=['fluid'], solids=['boundary'], dim=dim, gamma=gamma, alpha=1, beta=1.0, k=0.3, eps=0.5, g1=0.2, g2=0.4) mpm = GasDScheme( fluids=['fluid'], solids=['boundary'], dim=dim, gamma=gamma, kernel_factor=1.2, alpha1=1.0, alpha2=0.1, beta=2.0, update_alpha1=True, 
update_alpha2=True ) gsph = GSPHScheme( fluids=['fluid'], solids=['boundary'], dim=dim, gamma=gamma, kernel_factor=1.0, g1=0.2, g2=0.4, rsolver=2, interpolation=1, monotonicity=1, interface_zero=True, hybrid=False, blend_alpha=2.0, niter=20, tol=1e-6 ) crk = CRKSPHScheme( fluids=['fluid'], dim=dim, rho0=0, c0=0, nu=0, h0=0, p0=0, gamma=gamma, cl=3 ) s = SchemeChooser( default='adke', adke=adke, mpm=mpm, gsph=gsph, crk=crk ) return s if __name__ == '__main__': app = SodShockTube() app.run() app.post_process() pysph-master/pysph/examples/gas_dynamics/wallshock.py000066400000000000000000000053221356347341600235130ustar00rootroot00000000000000"""Wall-shock problem in 1D (40 seconds). """ from pysph.examples.gas_dynamics.shocktube_setup import ShockTubeSetup from pysph.sph.scheme import ADKEScheme, GasDScheme, GSPHScheme, SchemeChooser # Numerical constants dim = 1 gamma = 1.4 gamma1 = gamma - 1.0 # solution parameters dt = 1e-6 tf = 0.4 # domain size and discretization parameters xmin = -0.2 xmax = 0.2 class WallShock(ShockTubeSetup): def initialize(self): self.xmin = xmin self.xmax = xmax self.x0 = 0.0 self.rhol = 1.0 self.rhor = 1.0 self.pl = 4e-7 self.pr = 4e-7 self.ul = 1.0 self.ur = -1.0 def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.5, help="Ratio h/dx." 
) group.add_argument( "--nl", action="store", type=float, dest="nl", default=500, help="Number of particles in left region" ) def consume_user_options(self): self.nl = self.options.nl self.hdx = self.options.hdx ratio = self.rhor/self.rhol self.nr = ratio*self.nl self.xb_ratio = 5 self.dxl = (self.x0 - self.xmin) / self.nl self.dxr = (self.xmax - self.x0) / self.nr self.h0 = self.hdx * self.dxr self.hdx = self.hdx def create_particles(self): return self.generate_particles(xmin=self.xmin*self.xb_ratio, xmax=self.xmax*self.xb_ratio, dxl=self.dxl, dxr=self.dxr, m=self.dxl, pl=self.pl, pr=self.pr, h0=self.h0, bx=0.02, gamma1=gamma1, ul=self.ul, ur=self.ur) def create_scheme(self): self.dt = dt self.tf = tf adke = ADKEScheme( fluids=['fluid'], solids=['boundary'], dim=dim, gamma=gamma, alpha=1, beta=1, k=0.7, eps=0.5, g1=0.5, g2=1.0) mpm = GasDScheme( fluids=['fluid'], solids=['boundary'], dim=dim, gamma=gamma, kernel_factor=1.2, alpha1=1.0, alpha2=0.1, beta=2.0, update_alpha1=True, update_alpha2=True ) gsph = GSPHScheme( fluids=['fluid'], solids=['boundary'], dim=dim, gamma=gamma, kernel_factor=1.0, g1=0.2, g2=0.4, rsolver=2, interpolation=1, monotonicity=2, interface_zero=True, hybrid=False, blend_alpha=2.0, niter=40, tol=1e-6 ) s = SchemeChooser(default='adke', adke=adke, mpm=mpm, gsph=gsph) return s if __name__ == '__main__': app = WallShock() app.run() app.post_process() pysph-master/pysph/examples/gas_dynamics/wc_blastwave.py000066400000000000000000000130211356347341600242000ustar00rootroot00000000000000"""Woodward Collela blastwave (2 minutes) Two discontinuities moving towards each other and the results after they interact """ from pysph.sph.scheme import ( GSPHScheme, SchemeChooser, ADKEScheme, GasDScheme ) from pysph.sph.wc.crksph import CRKSPHScheme from pysph.base.utils import get_particle_array as gpa from pysph.base.nnps import DomainManager from pysph.solver.application import Application import numpy # Numerical constants dim = 1 gamma = 1.4 gamma1 = 
gamma - 1.0 # solution parameters dt = 5e-6 tf = 0.038 class WCBlastwave(Application): def initialize(self): self.xmin = 0.0 self.xmax = 1.0 self.domain_length = self.xmax - self.xmin self.rho = 1.0 self.p1 = 1000 self.p2 = 0.01 self.p3 = 100 self.u = 0.0 self.gamma = gamma self.hdx = 1.5 self.n_particles = 1000 def consume_user_options(self): pass def create_particles(self): self.dx = self.domain_length / self.n_particles x = numpy.arange( self.xmin + self.dx*0.5, self.xmax, self.dx ) p = numpy.ones_like(x) * self.p2 left_indices = numpy.where(x < 0.1)[0] right_indices = numpy.where(x > 0.9)[0] p[left_indices] = self.p1 p[right_indices] = self.p3 h = self.hdx*self.dx m = self.dx * self.rho e = p / ((self.gamma - 1) * self.rho) cs = numpy.sqrt(self.gamma * p / self.rho) fluid = gpa( name='fluid', x=x, rho=self.rho, p=p, h=h, m=m, e=e, cs=cs, h0=h, u=0 ) self.scheme.setup_properties([fluid]) return [fluid] def create_domain(self): return DomainManager( xmin=self.xmin, xmax=self.xmax, mirror_in_x=True ) def create_scheme(self): self.dt = dt self.tf = tf adke = ADKEScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, alpha=1, beta=1.0, k=1.0, eps=0.8, g1=0.2, g2=0.4, has_ghosts=True) mpm = GasDScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=1.2, alpha1=1.0, alpha2=0.1, beta=2.0, update_alpha1=True, update_alpha2=True, has_ghosts=True ) gsph = GSPHScheme( fluids=['fluid'], solids=[], dim=dim, gamma=gamma, kernel_factor=1.0, g1=0.2, g2=0.4, rsolver=2, interpolation=1, monotonicity=1, interface_zero=True, hybrid=False, blend_alpha=2.0, niter=20, tol=1e-6, has_ghosts=True ) crk = CRKSPHScheme( fluids=['fluid'], dim=dim, rho0=0, c0=0, nu=0, h0=0, p0=0, gamma=gamma, cl=4, cq=1, eta_crit=0.2, has_ghosts=True ) s = SchemeChooser( default='gsph', gsph=gsph, adke=adke, mpm=mpm, crksph=crk ) return s def configure_scheme(self): s = self.scheme if self.options.scheme == 'mpm': s.configure(kernel_factor=1.2) s.configure_solver(dt=self.dt, tf=self.tf, 
adaptive_timestep=True, pfreq=50) elif self.options.scheme == 'adke': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) elif self.options.scheme == 'gsph': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=50) elif self.options.scheme == 'crksph': s.configure_solver(dt=self.dt, tf=self.tf, adaptive_timestep=False, pfreq=20) def post_process(self): if len(self.output_files) < 1 or self.rank > 0: return try: import matplotlib matplotlib.use('Agg') from matplotlib import pyplot except ImportError: print("Post processing requires matplotlib.") return from pysph.solver.utils import load import os plot_exact = False plot_legends = ['pysph(%s)' % (self.options.scheme)] try: import h5py plot_exact = True plot_legends.append('exact') except ImportError: print("h5py not found, exact data will not be plotted") fname = os.path.join( os.path.dirname(__file__), 'wc_exact.hdf5' ) props = ["rho", "u", "p", "e"] props_h5 = ["'" + pr + "'" for pr in props] if plot_exact: h5file = h5py.File(fname) dataset = h5file['data_0'] outfile = self.output_files[-1] data = load(outfile) pa = data['arrays']['fluid'] x = pa.x u = pa.u e = pa.e p = pa.p rho = pa.rho prop_vals = [rho, u, p, e] for _i, prop in enumerate(props): pyplot.plot(x, prop_vals[_i]) if plot_exact: pyplot.scatter( dataset.get(props_h5[_i])['data_0'].get("'x'") ['data_0'][:], dataset.get(props_h5[_i])['data_0'].get("'data'") ['data_0'][:], c='k', s=4 ) pyplot.xlabel('x') pyplot.ylabel(props[_i]) pyplot.legend(plot_legends) fig = os.path.join(self.output_dir, props[_i] + ".png") pyplot.savefig(fig, dpi=300) pyplot.close('all') if __name__ == '__main__': app = WCBlastwave() app.run() app.post_process() pysph-master/pysph/examples/gas_dynamics/wc_exact.hdf5000066400000000000000000001173201356347341600235210ustar00rootroot00000000000000HDF  О` PTREEHEAPXdata_0H @CLASS @VERSION 8typehickle HPYTHON_VERSION0TREEHEAPX('rho''u''p''e'0 @type(!GCOLhickle3.4.33.7.3SNODpTREE 
8typendarray8SNODxpysph-master/pysph/examples/ghia_cavity_data.py000066400000000000000000000071071356347341600223460ustar00rootroot00000000000000"""This module provides a few convenient functions for the Lid Driven Cavity solutions in the paper: "High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method", U Ghia, K.N Ghia, C.T Shin, JCP, Volume 48, Issue 3, December 1982, Pages 387-411. """ import numpy as np from io import StringIO RE = [100, 400, 1000, 3200, 5000, 7500, 10000] # u velocity along vertical line through center (Table I) table1 = u""" 1.0000 1.0000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.9766 0.84123 0.75837 0.65928 0.53236 0.48223 0.47244 0.47221 0.9688 0.78871 0.68439 0.57492 0.48296 0.46120 0.47048 0.47783 0.9609 0.73722 0.61756 0.51117 0.46547 0.45992 0.47323 0.48070 0.9531 0.68717 0.55892 0.46604 0.46101 0.46036 0.47167 0.47804 0.8516 0.23151 0.29093 0.33304 0.34682 0.33556 0.34228 0.34635 0.7344 0.00332 0.16256 0.18719 0.19791 0.20087 0.2059 0.20673 0.6172 -0.13641 0.02135 0.05702 0.07156 0.08183 0.08342 0.08344 0.5000 -0.20581 -0.11477 -0.06080 -0.04272 -0.03039 -0.03800 0.03111 0.4531 -0.21090 -0.17119 -0.10648 -0.86636 -0.07404 -0.07503 -0.07540 0.2813 -0.15662 -0.32726 -0.27805 -0.24427 -0.22855 -0.23176 -0.23186 0.1719 -0.10150 -0.24299 -0.38289 -0.34323 -0.33050 -0.32393 -0.32709 0.1016 -0.06434 -0.14612 -0.29730 -0.41933 -0.40435 -0.38324 -0.38000 0.0703 -0.04775 -0.10338 -0.22220 -0.37827 -0.43643 -0.43025 -0.41657 0.0625 -0.04192 -0.09266 -0.20196 -0.35344 -0.42901 -0.43590 -0.42537 0.0547 -0.03717 -0.08186 -0.18109 -0.32407 -0.41165 -0.43154 -0.42735 0.0000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 """ # v velocity along horizontal line through center (Table II) table2 = u""" 1.0000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.9688 -0.05906 -0.12146 -0.21388 -0.39017 -0.49774 -0.53858 -0.54302 0.9609 -0.07391 -0.15663 -0.27669 -0.47425 -0.55069 -0.55216 
-0.52987 0.9531 -0.08864 -0.19254 -0.33714 -0.52357 -0.55408 -0.52347 -0.49099 0.9453 -0.10313 -0.22847 -0.39188 -0.54053 -0.52876 -0.48590 -0.45863 0.9063 -0.16914 -0.23827 -0.51550 -0.44307 -0.41442 -0.41050 -0.41496 0.8594 -0.22445 -0.44993 -0.42665 -0.37401 -0.36214 -0.36213 -0.36737 0.8047 -0.24533 -0.38598 -0.31966 -0.31184 -0.30018 -0.30448 -0.30719 0.5000 0.05454 0.05188 0.02526 0.00999 0.00945 0.00824 0.00831 0.2344 0.17527 0.30174 0.32235 0.28188 0.27280 0.27348 0.27224 0.2266 0.17507 0.30203 0.33075 0.29030 0.28066 0.28117 0.28003 0.1563 0.16077 0.28124 0.37095 0.37119 0.35368 0.35060 0.35070 0.0938 0.12317 0.22965 0.32627 0.42768 0.42951 0.41824 0.41487 0.0781 0.10890 0.20920 0.30353 0.41906 0.43648 0.43564 0.43124 0.0703 0.10091 0.19713 0.29012 0.40917 0.43329 0.44030 0.43733 0.0625 0.09233 0.18360 0.27485 0.39560 0.42447 0.43979 0.43983 0.0000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 """ def _get_data(table): data = np.loadtxt(StringIO(table)) y = data[:,0] result = {} for i, r in enumerate(RE): result[r] = data[:,i+1] return y, result def get_u_vs_y(): """Return the data from table 1, this returns an array y and a dictionary, with the keys as the available Reynolds numbers. """ return _get_data(table1) def get_v_vs_x(): """Return the data from table 2, this returns an array x and a dictionary, with the keys as the available Reynolds numbers. """ return _get_data(table2) pysph-master/pysph/examples/hydrostatic_tank.py000066400000000000000000000337171356347341600224460ustar00rootroot00000000000000"""Hydrostatic tank example. (2 minutes) This example is from (Section 6.0) of Adami et. al. JCP 231, 7057-7075. This is a good problem to test the implementation of the wall boundary condition. Physically, a column of fluid is left in an open tank and allowed to settle to equilibrium. Upon settling, a linear pressure field (p = rho*g*h) should be established according to elementary fluid mechanics. 
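As a quick numeric check of the hydrostatic relation stated above — a standalone sketch (not part of the example file) that reuses the reference values this example defines below (rho0 = 1000, gy = -1, H = 0.9):

```python
# Standalone sanity check of the equilibrium relation p = rho * g * h,
# using the reference values defined further down in this example.
rho0 = 1000.0   # reference density
gy = -1.0       # gravitational acceleration (acts downward)
H = 0.9         # height of the settled fluid column

def hydrostatic_pressure(depth, rho=rho0, g=abs(gy)):
    """Pressure at a given depth below the free surface."""
    return rho * g * depth

# Pressure at the tank bottom; this equals the pmax used to
# normalize the plots in post_process below.
p_bottom = hydrostatic_pressure(H)
print(p_bottom)  # 900.0
```

A converged simulation should reproduce this linear profile, with p/pmax falling from 1 at the bottom to 0 at the free surface.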
Different boundary formulations can be used to check for this behaviour: - Adami et al. "A generalized wall boundary condition for smoothed particle hydrodynamics", 2012, JCP, 231, pp 7057--7075 (REF1) - Monaghan and Kajtar, "SPH particle boundary forces for arbitrary boundaries", 2009, 180, pp 1811--1820 (REF2) - Gesteria et al. "State-of-the-art of classical SPH for free-surface flows", 2010, JHR, pp 6--27 (REF3) Of these, the first and third are ghost particle methods while the second is the classical Monaghan style, repulsive particle approach. For the fluid dynamics, we use the multi-phase formulation presented in REF1. """ import os.path import numpy as np # PyZoltan imports from cyarray.api import LongArray # PySPH imports from pysph.base.utils import get_particle_array_wcsph as gpa from pysph.base.kernels import Gaussian, WendlandQuintic, CubicSpline, QuinticSpline from pysph.solver.solver import Solver from pysph.solver.application import Application from pysph.sph.integrator import PECIntegrator from pysph.sph.integrator_step import WCSPHStep # the eqations from pysph.sph.equation import Group # Equations for REF1 from pysph.sph.wc.transport_velocity import VolumeFromMassDensity,\ ContinuityEquation,\ MomentumEquationPressureGradient, \ MomentumEquationArtificialViscosity,\ SolidWallPressureBC # Monaghan type repulsive boundary forces used in REF(2) from pysph.sph.boundary_equations import MonaghanBoundaryForce,\ MonaghanKajtarBoundaryForce # Equations for the standard WCSPH formulation and dynamic boundary # conditions defined in REF3 from pysph.sph.wc.basic import TaitEOS, TaitEOSHGCorrection, MomentumEquation from pysph.sph.basic_equations import XSPHCorrection, \ MonaghanArtificialViscosity # domain and reference values Lx = 2.0 Ly = 1.0 H = 0.9 gy = -1.0 Vmax = np.sqrt(abs(gy) * H) c0 = 10 * Vmax rho0 = 1000.0 p0 = c0 * c0 * rho0 gamma = 1.0 # Reynolds number and kinematic viscosity Re = 100 nu = Vmax * Ly / Re # Numerical setup nx = 100 dx = Lx / nx 
ghost_extent = 5.5 * dx hdx = 1.2 # adaptive time steps h0 = hdx * dx dt_cfl = 0.25 * h0 / (c0 + Vmax) dt_viscous = 0.125 * h0**2 / nu dt_force = 0.25 * np.sqrt(h0 / abs(gy)) tdamp = 1.0 tf = 2.0 dt = 0.75 * min(dt_cfl, dt_viscous, dt_force) output_at_times = np.arange(0.25, 2.1, 0.25) def damping_factor(t, tdamp): if t < tdamp: return 0.5 * (np.sin((-0.5 + t / tdamp) * np.pi) + 1.0) else: return 1.0 class HydrostaticTank(Application): def add_user_options(self, group): group.add_argument( '--bc-type', action='store', type=int, dest='bc_type', default=1, help="Specify the implementation type one of (1, 2, 3)" ) def create_particles(self): # create all the particles _x = np.arange(-ghost_extent, Lx + ghost_extent, dx) _y = np.arange(-ghost_extent, Ly, dx) x, y = np.meshgrid(_x, _y) x = x.ravel() y = y.ravel() # sort out the fluid and the solid indices = [] for i in range(x.size): if ((x[i] > 0.0) and (x[i] < Lx)): if ((y[i] > 0.0) and (y[i] < H)): indices.append(i) # create the arrays solid = gpa(name='solid', x=x, y=y) # remove the fluid particles from the solid fluid = solid.extract_particles(indices) fluid.set_name('fluid') solid.remove_particles(indices) # remove the lid to generate an open tank indices = [] for i in range(solid.get_number_of_particles()): if solid.y[i] > 0.9: if (0 < solid.x[i] < Lx): indices.append(i) solid.remove_particles(indices) print("Hydrostatic tank :: nfluid = %d, nsolid=%d, dt = %g" % ( fluid.get_number_of_particles(), solid.get_number_of_particles(), dt)) ###### ADD PARTICLE PROPS FOR MULTI-PHASE SPH ###### # particle volume fluid.add_property('V') solid.add_property('V') # kernel sum term for boundary particles solid.add_property('wij') # advection velocities and accelerations for name in ('auhat', 'avhat', 'awhat'): fluid.add_property(name) ##### INITIALIZE PARTICLE PROPS ##### fluid.rho[:] = rho0 solid.rho[:] = rho0 fluid.rho0[:] = rho0 solid.rho0[:] = rho0 # mass is set to get the reference density of rho0 volume = dx * dx # 
volume is set as dx^2 fluid.V[:] = 1. / volume solid.V[:] = 1. / volume fluid.m[:] = volume * rho0 solid.m[:] = volume * rho0 # smoothing lengths fluid.h[:] = hdx * dx solid.h[:] = hdx * dx # return the particle list return [fluid, solid] def create_solver(self): # Create the kernel #kernel = Gaussian(dim=2) kernel = QuinticSpline(dim=2) integrator = PECIntegrator(fluid=WCSPHStep()) # Create a solver. solver = Solver(kernel=kernel, dim=2, integrator=integrator, tf=tf, dt=dt, output_at_times=output_at_times) return solver def create_equations(self): # Formulation for REF1 equations1 = [ # For the multi-phase formulation, we require an estimate of the # particle volume. This can be either defined from the particle # number density or simply as the ratio of mass to density. Group(equations=[ VolumeFromMassDensity(dest='fluid', sources=None) ], ), # Equation of state is typically the Tait EOS with a suitable # exponent gamma Group(equations=[ TaitEOS( dest='fluid', sources=None, rho0=rho0, c0=c0, gamma=gamma), ], ), # The boundary conditions are imposed by extrapolating the fluid # pressure, taking into consideration the boundary acceleration Group(equations=[ SolidWallPressureBC(dest='solid', sources=['fluid'], b=1.0, gy=gy, rho0=rho0, p0=p0), ], ), # Main acceleration block Group(equations=[ # Continuity equation ContinuityEquation( dest='fluid', sources=[ 'fluid', 'solid']), # Pressure gradient with acceleration damping. MomentumEquationPressureGradient( dest='fluid', sources=['fluid', 'solid'], pb=0.0, gy=gy, tdamp=tdamp), # artificial viscosity for stability MomentumEquationArtificialViscosity( dest='fluid', sources=['fluid', 'solid'], alpha=0.24, c0=c0), # Position step with XSPH XSPHCorrection(dest='fluid', sources=['fluid'], eps=0.0) ]), ] # Formulation for REF2. Note that for this formulation to work, the # boundary particles need to have a spacing different from the fluid # particles (usually determined by a factor beta).
In the current # implementation, the value is taken as 1.0 which will mostly be # ineffective. equations2 = [ # For the multi-phase formulation, we require an estimate of the # particle volume. This can be either defined from the particle # number density or simply as the ratio of mass to density. Group(equations=[ VolumeFromMassDensity(dest='fluid', sources=None) ], ), # Equation of state is typically the Tait EOS with a suitable # exponent gamma Group(equations=[ TaitEOS( dest='fluid', sources=None, rho0=rho0, c0=c0, gamma=gamma), ], ), # Main acceleration block Group(equations=[ # The boundary conditions are imposed as a force or # accelerations on the fluid particles. Note that the # no-penetration condition is to be satisfied with this # equation. The subsequent equations therefore do not have # solid as the source. Note the difference between the # ghost-fluid formulations. K should be 0.01*co**2 # according to REF2. We take it much smaller here on # account of the multiple layers of boundary particles MonaghanKajtarBoundaryForce(dest='fluid', sources=['solid'], K=0.02, beta=1.0, h=hdx * dx), # Continuity equation ContinuityEquation(dest='fluid', sources=['fluid', ]), # Pressure gradient with acceleration damping. MomentumEquationPressureGradient( dest='fluid', sources=['fluid'], pb=0.0, gy=gy, tdamp=tdamp), # artificial viscosity for stability MomentumEquationArtificialViscosity( dest='fluid', sources=['fluid'], alpha=0.25, c0=c0), # Position step with XSPH XSPHCorrection(dest='fluid', sources=['fluid'], eps=0.0) ]), ] # Formulation for REF3 equations3 = [ # For the multi-phase formulation, we require an estimate of the # particle volume. This can be either defined from the particle # number density or simply as the ratio of mass to density. Group(equations=[ VolumeFromMassDensity(dest='fluid', sources=None) ], ), # Equation of state is typically the Tait EOS with a suitable # exponent gamma. 
The solid phase is treated just as a fluid and # the pressure and density operations are updated for this as well. Group(equations=[ TaitEOS( dest='fluid', sources=None, rho0=rho0, c0=c0, gamma=gamma), TaitEOS( dest='solid', sources=None, rho0=rho0, c0=c0, gamma=gamma), ], ), # Main acceleration block. The boundary conditions are imposed by # performing the continuity equation and gradient of pressure # calculation on the solid phase, taking contributions from the # fluid phase Group(equations=[ # Continuity equation ContinuityEquation( dest='fluid', sources=[ 'fluid', 'solid']), ContinuityEquation(dest='solid', sources=['fluid']), # Pressure gradient with acceleration damping. MomentumEquationPressureGradient( dest='fluid', sources=['fluid', 'solid'], pb=0.0, gy=gy, tdamp=tdamp), # artificial viscosity for stability MomentumEquationArtificialViscosity( dest='fluid', sources=['fluid', 'solid'], alpha=0.25, c0=c0), # Position step with XSPH XSPHCorrection(dest='fluid', sources=['fluid'], eps=0.5) ]), ] if self.options.bc_type == 1: return equations1 elif self.options.bc_type == 2: return equations2 elif self.options.bc_type == 3: return equations3 def post_process(self, info_fname): self.read_info(info_fname) if len(self.output_files) == 0: return from pysph.tools.interpolator import Interpolator from pysph.solver.utils import iter_output files = self.output_files y = np.linspace(0, 0.9, 20) x = np.ones_like(y) interp = None t, p, p_ex = [], [], [] for sd, arrays in iter_output(files): fluid, solid = arrays['fluid'], arrays['solid'] if interp is None: interp = Interpolator([fluid, solid], x=x, y=y) else: interp.update_particle_arrays([fluid, solid]) t.append(sd['t']) p.append(interp.interpolate('p')) g = 1.0 * damping_factor(t[-1], tdamp) p_ex.append(abs(rho0 * H * g)) t, p, p_ex = list(map(np.asarray, (t, p, p_ex))) res = os.path.join(self.output_dir, 'results.npz') np.savez(res, t=t, p=p, p_ex=p_ex, y=y) import matplotlib matplotlib.use('Agg') pmax = abs(0.9 * rho0
* gy) from matplotlib import pyplot as plt plt.plot(t, p[:, 0] / pmax, 'o-') plt.xlabel(r'$t$') plt.ylabel(r'$p$') fig = os.path.join(self.output_dir, 'p_bottom.png') plt.savefig(fig, dpi=300) plt.clf() output_at = np.arange(0.25, 2.1, 0.25) count = 0 for i in range(len(t)): if abs(t[i] - output_at[count]) < 1e-8: plt.plot(y, p[i] / pmax, 'o', label='t=%.2f' % t[i]) plt.plot(y, p_ex[i] * (H - y) / (H * pmax), 'k-') count += 1 plt.xlabel('$y$') plt.ylabel('$p$') plt.legend() fig = os.path.join(self.output_dir, 'p_vs_y.png') plt.savefig(fig, dpi=300) if __name__ == '__main__': app = HydrostaticTank() app.run() app.post_process(app.info_filename) pysph-master/pysph/examples/lattice_cylinders.py000066400000000000000000000116131356347341600225640ustar00rootroot00000000000000"""Incompressible flow past a periodic lattice of cylinders. (30 minutes) """ import os # numpy import numpy as np # PySPH imports from pysph.base.nnps import DomainManager from pysph.base.utils import get_particle_array from pysph.solver.application import Application from pysph.sph.scheme import TVFScheme # domain and reference values L = 0.1 Umax = 5e-5 c0 = 10 * Umax rho0 = 1000.0 p0 = c0 * c0 * rho0 a = 0.02 H = L fx = 1.5e-7 # Reynolds number and kinematic viscosity Re = 1.0 nu = a * Umax / Re # Numerical setup nx = 100 dx = L / nx ghost_extent = 5 * 1.5 * dx hdx = 1.0 # adaptive time steps h0 = hdx * dx dt_cfl = 0.25 * h0 / (c0 + Umax) dt_viscous = 0.125 * h0**2 / nu dt_force = 0.25 * np.sqrt(h0 / abs(fx)) tf = 1000.0 dt = min(dt_cfl, dt_viscous, dt_force) class LatticeCylinders(Application): def create_domain(self): # domain for periodicity domain = DomainManager( xmin=0, xmax=L, ymin=0, ymax=H, periodic_in_x=True, periodic_in_y=True ) return domain def create_particles(self): # create all the particles _x = np.arange(dx / 2, L, dx) _y = np.arange(dx / 2, H, dx) x, y = np.meshgrid(_x, _y) x = x.ravel() y = y.ravel() # sort out the fluid and the solid indices = [] cx = 0.5 * L cy = 0.5 * H for 
i in range(x.size): xi = x[i] yi = y[i] if (np.sqrt((xi - cx)**2 + (yi - cy)**2) > a): # if ( (yi > 0) and (yi < H) ): indices.append(i) # create the arrays solid = get_particle_array(name='solid', x=x, y=y) # remove the fluid particles from the solid fluid = solid.extract_particles(indices) fluid.set_name('fluid') solid.remove_particles(indices) print("Periodic cylinders :: Re = %g, nfluid = %d, nsolid=%d, dt = %g" % ( Re, fluid.get_number_of_particles(), solid.get_number_of_particles(), dt)) self.scheme.setup_properties([fluid, solid]) # setup the particle properties volume = dx * dx # mass is set to get the reference density of rho0 fluid.m[:] = volume * rho0 solid.m[:] = volume * rho0 solid.rho[:] = rho0 # reference pressures and densities fluid.rho[:] = rho0 # volume is set as dx^2 fluid.V[:] = 1. / volume solid.V[:] = 1. / volume # smoothing lengths fluid.h[:] = hdx * dx solid.h[:] = hdx * dx # return the particle list return [fluid, solid] def create_scheme(self): s = TVFScheme( ['fluid'], ['solid'], dim=2, rho0=rho0, c0=c0, nu=nu, p0=p0, pb=p0, h0=dx * hdx, gx=fx ) s.configure_solver(tf=tf, dt=dt) return s def post_process(self, info_fname): info = self.read_info(info_fname) if len(self.output_files) == 0 or self.rank > 0: return y, ui_lby2, ui_l, xx, yy, vmag = self._plot_velocity() res = os.path.join(self.output_dir, "results.npz") np.savez(res, y=y, ui_l=ui_l, ui_lby2=ui_lby2, xx=xx, yy=yy, vmag=vmag) def _plot_velocity(self): from pysph.tools.interpolator import Interpolator from pysph.solver.utils import load # Find the u profile for comparison. 
        y = np.linspace(0.0, H, 100)
        x = np.ones_like(y) * L / 2
        fname = self.output_files[-1]
        data = load(fname)
        dm = self.create_domain()
        interp = Interpolator(list(data['arrays'].values()), x=x, y=y,
                              domain_manager=dm)
        ui_lby2 = interp.interpolate('u')
        x = np.ones_like(y) * L
        interp.set_interpolation_points(x=x, y=y)
        ui_l = interp.interpolate('u')

        import matplotlib
        matplotlib.use('Agg')
        from matplotlib import pyplot as plt

        y /= H
        y -= 0.5
        plt.figure()
        plt.plot(y, ui_lby2, 'k-', label='x=L/2')
        # use a dashed line so the two profiles are distinguishable
        plt.plot(y, ui_l, 'k--', label='x=L')
        plt.xlabel('y/H')
        plt.ylabel('u')
        plt.legend()
        fig = os.path.join(self.output_dir, 'u_profile.png')
        plt.savefig(fig, dpi=300)
        plt.close()

        # Plot the contours of vmag.
        xx, yy = np.mgrid[0:L:100j, 0:H:100j]
        interp.set_interpolation_points(x=xx, y=yy)
        u = interp.interpolate('u')
        v = interp.interpolate('v')
        xx /= L
        yy /= H
        vmag = np.sqrt(u * u + v * v)
        plt.figure()
        plt.contourf(xx, yy, vmag)
        plt.xlabel('x/L')
        plt.ylabel('y/H')
        plt.colorbar()
        fig = os.path.join(self.output_dir, 'vmag_contour.png')
        plt.savefig(fig, dpi=300)
        plt.close()
        return y, ui_lby2, ui_l, xx, yy, vmag


if __name__ == '__main__':
    app = LatticeCylinders()
    app.run()
    app.post_process(app.info_filename)


pysph-master/pysph/examples/periodic_cylinders.py

"""Incompressible flow past a periodic array of cylinders. (42 hours)

See Ellero and Adams, International Journal for Numerical Methods in
Engineering, 2011, vol 86, pp 1027-1040 for the detailed parameters for this
problem and also Adami, Hu and Adams, JCP, 2013, vol 241, pp 292-307.

In particular, we note that we set c0 from Ellero and Adams, as using the
value from Adami et al. will cause the solution to blow up. If one sets
c0=10*Umax and pb=300*p0, that will cause particles to void at the rear of
the cylinder.

"""
import os
import numpy as np

# PySPH imports
from pysph.base.nnps import DomainManager
from pysph.base.utils import get_particle_array
from pysph.solver.application import Application
from pysph.sph.scheme import TVFScheme

# domain and reference values
L = 0.12
Umax = 1.2e-4
a = 0.02
H = 4*a
fx = 2.5e-4
# c0 is set from Ellero and Adams.
# Note that setting this to 0.1*np.sqrt(a*fx) as per Adami, Hu and Adams is
# incorrect and will actually cause a blow up of the solution.
c0 = 0.02
rho0 = 1000.0
p0 = c0*c0*rho0
pb = p0

# Reynolds number and kinematic viscosity
nu = 0.1/rho0
Re = a*Umax/nu

# Numerical setup
nx = 144
dx = L/nx
ghost_extent = 5 * 1.5 * dx
hdx = 1.2

# adaptive time steps
h0 = hdx * dx
dt_cfl = 0.25 * h0/(c0 + Umax)
dt_viscous = 0.125 * h0**2/nu
dt_force = 0.25 * np.sqrt(h0/abs(fx))

T = a/Umax
tf = 2.5*T
dt = min(dt_cfl, dt_viscous, dt_force)


class PeriodicCylinders(Application):
    def create_domain(self):
        # domain for periodicity
        domain = DomainManager(xmin=0, xmax=L, periodic_in_x=True)
        return domain

    def create_particles(self):
        # create all the particles
        _x = np.arange(dx/2, L, dx)
        _y = np.arange(-ghost_extent, H+ghost_extent, dx)
        x, y = np.meshgrid(_x, _y)
        x = x.ravel()
        y = y.ravel()

        # sort out the fluid and the solid
        indices = []
        cx = 0.5 * L
        cy = 0.5 * H
        for i in range(x.size):
            xi = x[i]
            yi = y[i]
            if (np.sqrt((xi-cx)**2 + (yi-cy)**2) > a):
                if ((yi > 0) and (yi < H)):
                    indices.append(i)

        # create the arrays
        solid = get_particle_array(name='solid', x=x, y=y)

        # remove the fluid particles from the solid
        fluid = solid.extract_particles(indices)
        fluid.set_name('fluid')
        solid.remove_particles(indices)

        print("Periodic cylinders :: Re = %g, nfluid = %d, nsolid=%d, dt = %g"
              % (Re, fluid.get_number_of_particles(),
                 solid.get_number_of_particles(), dt))
        print("tf = %f" % tf)

        # add requisite properties to the arrays:
        self.scheme.setup_properties([fluid, solid])

        # setup the particle properties
        volume = dx * dx

        # mass is set to get the reference density of rho0
        fluid.m[:] = volume * rho0
        solid.m[:] = volume * rho0

        # initial particle density
        fluid.rho[:] = rho0
        solid.rho[:] = rho0

        # volume is set as dx^2. V is the number density form of the
        # particle volume and will be computed in the equations for the
        # fluid phase. The initial values are used for the solid phase
        fluid.V[:] = 1./volume
        solid.V[:] = 1./volume

        # particle smoothing lengths
        fluid.h[:] = hdx * dx
        solid.h[:] = hdx * dx

        # return the particle list
        return [fluid, solid]

    def create_scheme(self):
        s = TVFScheme(
            ['fluid'], ['solid'], dim=2, rho0=rho0, c0=c0, nu=nu,
            p0=p0, pb=p0, h0=dx*hdx, gx=fx
        )
        s.configure_solver(tf=tf, dt=dt, n_damp=100, pfreq=500)
        return s

    def post_process(self, info_fname):
        self.read_info(info_fname)
        if len(self.output_files) == 0:
            return
        t, cd = self._plot_cd_vs_t()
        res = os.path.join(self.output_dir, 'results.npz')
        np.savez(res, t=t, cd=cd)

    def _plot_cd_vs_t(self):
        from pysph.solver.utils import iter_output, load
        from pysph.tools.sph_evaluator import SPHEvaluator
        from pysph.sph.equation import Group
        from pysph.base.kernels import QuinticSpline
        from pysph.sph.wc.transport_velocity import (
            SetWallVelocity, MomentumEquationPressureGradient,
            SolidWallNoSlipBC, SolidWallPressureBC, VolumeSummation
        )
        data = load(self.output_files[0])
        solid = data['arrays']['solid']
        fluid = data['arrays']['fluid']
        x, y = solid.x.copy(), solid.y.copy()
        cx = 0.5 * L
        cy = 0.5 * H
        inside = np.sqrt((x-cx)**2 + (y-cy)**2) <= a
        dest = solid.extract_particles(inside.nonzero()[0])

        # We use the same equations for this as the simulation, except that we
        # do not include the acceleration terms as these are externally
        # imposed. The goal of these is to find the force of the fluid on the
        # cylinder, thus, gx=0.0 is used in the following.
        equations = [
            Group(
                equations=[
                    VolumeSummation(
                        dest='fluid', sources=['fluid', 'solid']
                    ),
                    VolumeSummation(
                        dest='solid', sources=['fluid', 'solid']
                    ),
                ], real=False),
            Group(
                equations=[
                    SetWallVelocity(dest='solid', sources=['fluid']),
                ], real=False),
            Group(
                equations=[
                    SolidWallPressureBC(dest='solid', sources=['fluid'],
                                        gx=0.0, b=1.0, rho0=rho0, p0=p0),
                ], real=False),
            Group(
                equations=[
                    # Pressure gradient terms
                    MomentumEquationPressureGradient(
                        dest='fluid', sources=['solid'], gx=0.0, pb=pb),
                    SolidWallNoSlipBC(
                        dest='fluid', sources=['solid'], nu=nu),
                ], real=True),
        ]
        sph_eval = SPHEvaluator(
            arrays=[dest, fluid], equations=equations, dim=2,
            kernel=QuinticSpline(dim=2)
        )

        t, cd = [], []
        for sd, fluid in iter_output(self.output_files, 'fluid'):
            fluid.remove_property('vmag2')
            t.append(sd['t'])
            sph_eval.update_particle_arrays([dest, fluid])
            sph_eval.evaluate()
            Fx = np.sum(-fluid.au*fluid.m)
            cd.append(Fx/(nu*rho0*Umax))

        t, cd = list(map(np.asarray, (t, cd)))

        # Now plot the results.
        import matplotlib
        matplotlib.use('Agg')
        from matplotlib import pyplot as plt
        plt.figure()
        plt.plot(t, cd)
        plt.xlabel('$t$')
        plt.ylabel(r'$C_D$')
        fig = os.path.join(self.output_dir, "cd_vs_t.png")
        plt.savefig(fig, dpi=300)
        plt.close()
        return t, cd


if __name__ == '__main__':
    app = PeriodicCylinders()
    app.run()
    app.post_process(app.info_filename)


pysph-master/pysph/examples/poiseuille.py

"""Poiseuille flow using the transport velocity formulation (5 minutes).

"""
import os

# numpy
import numpy as np

# PySPH imports
from pysph.base.nnps import DomainManager
from pysph.base.utils import get_particle_array
from pysph.solver.application import Application
from pysph.solver.utils import load
from pysph.sph.scheme import TVFScheme

# Numerical setup
dx = 1.0/60.0
ghost_extent = 5 * dx
hdx = 1.0

# adaptive time steps
h0 = hdx * dx


class PoiseuilleFlow(Application):
    def initialize(self):
        self.d = 0.5
        self.Ly = 2*self.d
        self.Lx = 0.4*self.Ly
        self.rho0 = 1.0
        self.nu = 0.01

    def add_user_options(self, group):
        group.add_argument(
            "--re", action="store", type=float, dest="re", default=0.0125,
            help="Reynolds number of flow."
        )
        group.add_argument(
            "--remesh", action="store", type=float, dest="remesh", default=0,
            help="Remeshing frequency (setting it to zero disables it)."
        )

    def consume_user_options(self):
        self.re = self.options.re
        self.Vmax = self.nu*self.re/(2*self.d)
        self.c0 = 10*self.Vmax
        self.p0 = self.c0**2*self.rho0

        # The body force is adjusted to give the required Reynolds number
        # based on the steady state maximum velocity Vmax:
        # Vmax = fx/(2*nu)*(d^2) at the centerline
        self.fx = self.Vmax * 2*self.nu/(self.d**2)

        # Setup default parameters.
        dt_cfl = 0.25 * h0/(self.c0 + self.Vmax)
        dt_viscous = 0.125 * h0**2/self.nu
        dt_force = 0.25 * np.sqrt(h0/self.fx)
        self.dt = min(dt_cfl, dt_viscous, dt_force)

    def configure_scheme(self):
        tf = 100.0
        scheme = self.scheme
        scheme.configure(c0=self.c0, p0=self.p0, pb=self.p0, gx=self.fx)
        scheme.configure_solver(tf=tf, dt=self.dt, pfreq=1000)
        print("dt = %g" % self.dt)

    def create_scheme(self):
        s = TVFScheme(
            ['fluid'], ['channel'], dim=2, rho0=self.rho0, c0=None,
            nu=self.nu, p0=None, pb=None, h0=h0, gx=None
        )
        return s

    def create_domain(self):
        return DomainManager(xmin=0, xmax=self.Lx, periodic_in_x=True)

    def create_particles(self):
        Lx = self.Lx
        Ly = self.Ly
        _x = np.arange(dx/2, Lx, dx)

        # create the fluid particles
        _y = np.arange(dx/2, Ly, dx)
        x, y = np.meshgrid(_x, _y)
        fx = x.ravel()
        fy = y.ravel()

        # create the channel particles at the top
        _y = np.arange(Ly+dx/2, Ly+dx/2+ghost_extent, dx)
        x, y = np.meshgrid(_x, _y)
        tx = x.ravel()
        ty = y.ravel()

        # create the channel particles at the bottom
        _y = np.arange(-dx/2, -dx/2-ghost_extent, -dx)
        x, y = np.meshgrid(_x, _y)
        bx = x.ravel()
        by = y.ravel()

        # concatenate the top and bottom arrays
        cx = np.concatenate((tx, bx))
        cy = np.concatenate((ty, by))

        # create the arrays
        channel = get_particle_array(name='channel', x=cx, y=cy)
        fluid = get_particle_array(name='fluid', x=fx, y=fy)

        print("Poiseuille flow :: Re = %g, nfluid = %d, nchannel=%d" % (
            self.re, fluid.get_number_of_particles(),
            channel.get_number_of_particles()))

        # add requisite properties to the arrays:
        self.scheme.setup_properties([fluid, channel])

        # setup the particle properties
        volume = dx * dx

        # mass is set to get the reference density of rho0
        fluid.m[:] = volume * self.rho0
        channel.m[:] = volume * self.rho0

        # Set the default rho.
        fluid.rho[:] = self.rho0
        channel.rho[:] = self.rho0

        # volume is set as dx^2
        fluid.V[:] = 1./volume
        channel.V[:] = 1./volume

        # smoothing lengths
        fluid.h[:] = hdx * dx
        channel.h[:] = hdx * dx

        # return the particle list
        return [fluid, channel]

    def create_tools(self):
        tools = []
        if self.options.remesh > 0:
            from pysph.solver.tools import SimpleRemesher
            remesher = SimpleRemesher(
                self, 'fluid', props=['u', 'v', 'uhat', 'vhat'],
                freq=self.options.remesh
            )
            tools.append(remesher)
        return tools

    def post_process(self, info_fname):
        info = self.read_info(info_fname)
        if len(self.output_files) == 0:
            return

        import matplotlib
        matplotlib.use('Agg')

        y_ex, u_ex, y, u = self._plot_u_vs_y()
        t, ke = self._plot_ke_history()
        res = os.path.join(self.output_dir, "results.npz")
        np.savez(res, t=t, ke=ke, y=y, u=u, y_ex=y_ex, u_ex=u_ex)

    def _plot_ke_history(self):
        from pysph.tools.pprocess import get_ke_history
        from matplotlib import pyplot as plt
        t, ke = get_ke_history(self.output_files, 'fluid')
        plt.clf()
        plt.plot(t, ke)
        plt.xlabel('t')
        plt.ylabel('Kinetic energy')
        fig = os.path.join(self.output_dir, "ke_history.png")
        plt.savefig(fig, dpi=300)
        return t, ke

    def _plot_u_vs_y(self):
        files = self.output_files

        # take the last solution data
        fname = files[-1]
        data = load(fname)
        tf = data['solver_data']['t']
        fluid = data['arrays']['fluid']
        u = fluid.u.copy()
        y = fluid.y.copy()

        # exact parabolic profile for the u-velocity
        d = self.d
        fx = self.fx
        nu = self.nu
        ye = np.arange(-d, d+1e-3, 0.01)
        ue = -0.5 * fx/nu * (ye**2 - d*d)
        ye += d

        from matplotlib import pyplot as plt
        plt.clf()
        plt.plot(ye, ue, label="exact")
        plt.plot(y, u, 'ko', fillstyle='none', label="computed")
        plt.xlabel('y')
        plt.ylabel('u')
        plt.legend()
        plt.title('Velocity profile at %s' % tf)
        fig = os.path.join(self.output_dir, "comparison.png")
        plt.savefig(fig, dpi=300)
        return ye, ue, y, u


if __name__ == '__main__':
    app = PoiseuilleFlow()
    app.run()
    app.post_process(app.info_filename)
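The examples in this collection all pick the time step as the minimum of three stability limits: a CFL condition on the sound speed, a viscous diffusion condition and a body-force condition. A minimal standalone sketch of that selection follows; the helper name `stable_dt` is ours, but the three expressions are exactly the ones used in the examples above (with the Poiseuille defaults `re=0.0125`, `d=0.5`, `nu=0.01`, `dx=1/60`, `hdx=1` plugged in for illustration):

```python
import math


def stable_dt(h0, c0, vmax, nu, fx):
    """Return the smallest of the three stability time steps.

    h0: smoothing length, c0: artificial sound speed, vmax: reference
    maximum velocity, nu: kinematic viscosity, fx: body force magnitude.
    """
    dt_cfl = 0.25 * h0 / (c0 + vmax)          # acoustic CFL condition
    dt_viscous = 0.125 * h0**2 / nu           # viscous diffusion limit
    dt_force = 0.25 * math.sqrt(h0 / abs(fx))  # body-force limit
    return min(dt_cfl, dt_viscous, dt_force)


# Poiseuille-like parameters from the example above.
d, nu, re = 0.5, 0.01, 0.0125
vmax = nu * re / (2 * d)
c0 = 10 * vmax
fx = vmax * 2 * nu / d**2
h0 = 1.0 / 60.0
print(stable_dt(h0, c0, vmax, nu, fx))
```

For this low Reynolds number the viscous limit `0.125 * h0**2 / nu` is by far the most restrictive, which is why the Poiseuille example runs with such a small time step relative to its final time `tf = 100`.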
pysph-master/pysph/examples/rayleigh_taylor.py

"""Rayleigh-Taylor instability problem. (16 hours)
"""

# numpy
import numpy as np

# PySPH imports
from pysph.base.utils import get_particle_array
from pysph.solver.solver import Solver
from pysph.solver.application import Application
from pysph.sph.scheme import TVFScheme

# domain and reference values
gy = -1.0
Lx = 1.0
Ly = 2.0
Re = 420
Vmax = np.sqrt(0.5 * Ly * abs(gy))
nu = Vmax * Ly / Re

# density for the two phases
rho1 = 1.8
rho2 = 1.0

# speed of sound and reference pressure
Fr = 0.01
c0 = Vmax / Fr
p1 = c0**2 * rho1
p2 = c0**2 * rho2

# Numerical setup
nx = 50
dx = Lx / nx
ghost_extent = 5 * dx
hdx = 1.2

# adaptive time steps
h0 = hdx * dx
dt_cfl = 0.25 * h0 / (c0 + Vmax)
dt_viscous = 0.125 * h0**2 / nu
dt_force = 0.25 * np.sqrt(h0 / abs(gy))

tf = 25
dt = 0.5 * min(dt_cfl, dt_viscous, dt_force)


class RayleighTaylor(Application):
    def create_particles(self):
        # create all the particles
        _x = np.arange(-ghost_extent - dx / 2, Lx + ghost_extent + dx / 2, dx)
        _y = np.arange(-ghost_extent - dx / 2, Ly + ghost_extent + dx / 2, dx)
        x, y = np.meshgrid(_x, _y)
        x = x.ravel()
        y = y.ravel()

        # sort out the fluid and the solid
        indices = []
        for i in range(x.size):
            if ((x[i] > 0.0) and (x[i] < Lx)):
                if ((y[i] > 0.0) and (y[i] < Ly)):
                    indices.append(i)

        # create the arrays
        solid = get_particle_array(name='solid', x=x, y=y)

        # remove the fluid particles from the solid
        fluid = solid.extract_particles(indices)
        fluid.set_name('fluid')
        solid.remove_particles(indices)

        # sort out the two fluid phases
        indices = []
        for i in range(fluid.get_number_of_particles()):
            if fluid.y[i] > 1 - 0.15 * np.sin(2 * np.pi * fluid.x[i]):
                indices.append(i)

        fluid1 = fluid.extract_particles(indices)
        fluid1.set_name('fluid1')
        fluid2 = fluid
        fluid2.set_name('fluid2')
        fluid2.remove_particles(indices)

        fluid1.rho[:] = rho1
        fluid2.rho[:] = rho2

        print("Rayleigh Taylor Instability problem :: Re = %d, nfluid = %d, "
              "nsolid=%d, dt = %g" % (
                  Re, fluid1.get_number_of_particles() +
                  fluid2.get_number_of_particles(),
                  solid.get_number_of_particles(), dt))

        # add requisite properties to the arrays:
        self.scheme.setup_properties([fluid1, fluid2, solid])

        # setup the particle properties
        volume = dx * dx

        # mass is set to get the reference density of each phase
        fluid1.m[:] = volume * rho1
        fluid2.m[:] = volume * rho2

        # volume is set as dx^2
        fluid1.V[:] = 1. / volume
        fluid2.V[:] = 1. / volume
        solid.V[:] = 1. / volume

        # smoothing lengths
        fluid1.h[:] = hdx * dx
        fluid2.h[:] = hdx * dx
        solid.h[:] = hdx * dx

        # return the arrays
        return [fluid1, fluid2, solid]

    def create_scheme(self):
        s = TVFScheme(
            ['fluid1', 'fluid2'], ['solid'], dim=2, rho0=rho1, c0=c0,
            nu=nu, p0=p1, pb=p1, h0=dx * hdx, gy=gy
        )
        s.configure_solver(tf=tf, dt=dt, pfreq=500)
        return s

    def create_equations(self):
        # This is an ugly hack to support different densities for fluids.
        # What we should really do is set rho0 as a fluid constant and rewrite
        # the equations to use that, then once the fluid properties are
        # defined, this will just work.
        equations = super(RayleighTaylor, self).create_equations()
        from pysph.sph.equation import Group

        def process_term(x):
            if hasattr(x, 'rho0'):
                if x.dest == 'fluid1' or x.sources == ['fluid1']:
                    x.rho0 = rho1
                elif x.dest == 'fluid2' or x.sources == ['fluid2']:
                    x.rho0 = rho2
            if hasattr(x, 'p0'):
                if x.dest == 'fluid1':
                    x.p0 = p1
                elif x.dest == 'fluid2':
                    x.p0 = p2

        for eq in equations:
            if isinstance(eq, Group):
                for e in eq.equations:
                    process_term(e)
            else:
                process_term(eq)
        return equations


if __name__ == '__main__':
    app = RayleighTaylor()
    app.run()


pysph-master/pysph/examples/rigid_body/README.rst

This directory contains a bunch of examples that demonstrate rigid body
motion and interaction, both with other rigid bodies and with rigid-fluid
coupling.

The demos here are only proofs of concept. They need work to make sure that
the physics and the equations are correct and produce the right numbers. In
particular,

- the rigid block in tank does not work without the pressure rigid body
  equation, which is incorrect.

- the formulation and parameters used for the rigid body collision have not
  been tested for energy conservation and correctness in all cases. The
  choice of parameters is currently ad hoc.

- the rigid-fluid coupling should also be looked at a bit more carefully,
  with proper comparisons to well-known results. Right now, it looks pretty
  and is a reasonable start.


pysph-master/pysph/examples/rigid_body/__init__.py


pysph-master/pysph/examples/rigid_body/bouncing_cube.py

"""A cube bouncing inside a box. (5 seconds)

This is used to test the rigid body equations.
"""

import numpy as np

from pysph.base.kernels import CubicSpline
from pysph.base.utils import get_particle_array_rigid_body
from pysph.sph.equation import Group
from pysph.sph.integrator import EPECIntegrator
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.rigid_body import (BodyForce, RigidBodyCollision,
                                  RigidBodyMoments, RigidBodyMotion,
                                  RK2StepRigidBody)

dim = 3
dt = 5e-3
tf = 5.0
gz = -9.81

hdx = 1.0
dx = dy = 0.02
rho0 = 10.0


class BouncingCube(Application):
    def create_particles(self):
        nx, ny, nz = 10, 10, 10
        dx = 1.0 / (nx - 1)
        x, y, z = np.mgrid[0:1:nx * 1j, 0:1:ny * 1j, 0:1:nz * 1j]
        x = x.flat
        y = y.flat
        z = (z - 1).flat
        m = np.ones_like(x) * dx * dx * rho0
        h = np.ones_like(x) * hdx * dx

        # radius of each sphere constituting the cube
        rad_s = np.ones_like(x) * dx

        body = get_particle_array_rigid_body(name='body', x=x, y=y, z=z,
                                             h=h, m=m, rad_s=rad_s)
        body.vc[0] = -5.0
        body.vc[2] = -5.0

        # Create the tank.
        nx, ny, nz = 40, 40, 40
        dx = 1.0 / (nx - 1)
        xmin, xmax, ymin, ymax, zmin, zmax = -2, 2, -2, 2, -2, 2
        x, y, z = np.mgrid[xmin:xmax:nx * 1j, ymin:ymax:ny * 1j,
                           zmin:zmax:nz * 1j]
        interior = ((x < 1.8) & (x > -1.8)) & ((y < 1.8) & (y > -1.8)) & (
            (z > -1.8) & (z <= 2))
        tank = np.logical_not(interior)
        x = x[tank].flat
        y = y[tank].flat
        z = z[tank].flat
        m = np.ones_like(x) * dx * dx * rho0
        h = np.ones_like(x) * hdx * dx

        # radius of each sphere constituting the tank
        rad_s = np.ones_like(x) * dx

        tank = get_particle_array_rigid_body(name='tank', x=x, y=y, z=z,
                                             h=h, m=m, rad_s=rad_s)
        tank.total_mass[0] = np.sum(m)

        return [body, tank]

    def create_solver(self):
        kernel = CubicSpline(dim=dim)
        integrator = EPECIntegrator(body=RK2StepRigidBody())
        solver = Solver(kernel=kernel, dim=dim, integrator=integrator,
                        dt=dt, tf=tf, adaptive_timestep=False)
        solver.set_print_freq(10)
        return solver

    def create_equations(self):
        equations = [
            Group(equations=[
                BodyForce(dest='body', sources=None, gz=gz),
                RigidBodyCollision(dest='body', sources=['tank'],
                                   kn=1e4, en=1)
            ]),
            Group(equations=[RigidBodyMoments(dest='body', sources=None)]),
            Group(equations=[RigidBodyMotion(dest='body', sources=None)]),
        ]
        return equations


if __name__ == '__main__':
    app = BouncingCube()
    app.run()


pysph-master/pysph/examples/rigid_body/bouncing_cubes.py

"""Four cubes bouncing inside a box. (10 seconds)

This is used to test the rigid body equations and the support for multiple
bodies.
"""

import numpy as np

from pysph.base.kernels import CubicSpline
from pysph.base.utils import get_particle_array_rigid_body
from pysph.sph.equation import Group
from pysph.sph.integrator import EPECIntegrator
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.rigid_body import (BodyForce, RigidBodyCollision,
                                  RigidBodyMoments, RigidBodyMotion,
                                  RK2StepRigidBody)

dim = 3
dt = 5e-3
tf = 5.0
gz = -9.81

hdx = 1.0
rho0 = 10.0


def make_cube(lx, ly, lz, dx):
    """Return points x, y, z for a cube centered at origin with given
    lengths.
    """
    # Convert to floats to be safe with integer division.
    lx, ly, lz = list(map(float, (lx, ly, lz)))
    x, y, z = np.mgrid[-lx / 2:lx / 2 + dx:dx, -ly / 2:ly / 2 + dx:dx,
                       -lz / 2:lz / 2 + dx:dx]
    x = x.ravel()
    y = y.ravel()
    z = z.ravel()
    return x, y, z


class BouncingCubes(Application):
    def create_particles(self):
        dx = 1.0 / 9.0
        _x, _y, _z = make_cube(0.5, 0.5, 0.5, dx)
        _z += 1.0
        _id = np.ones(_x.shape, dtype=int)

        x, y, z, body_id = [], [], [], []
        disp = [(0.4, 0, 0), (-0.4, 0, 0), (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
        for i, d in enumerate(disp):
            x.append(_x + d[0])
            y.append(_y + d[1])
            z.append(_z + d[2])
            body_id.append(_id * i)

        x = np.concatenate(x)
        y = np.concatenate(y)
        z = np.concatenate(z)
        body_id = np.concatenate(body_id)

        m = np.ones_like(x) * dx * dx * rho0
        h = np.ones_like(x) * hdx * dx

        # radius of each sphere constituting the cubes
        rad_s = np.ones_like(x) * dx

        body = get_particle_array_rigid_body(name='body', x=x, y=y, z=z,
                                             h=h, m=m, body_id=body_id,
                                             rad_s=rad_s)

        body.vc[0] = 5.0
        body.vc[2] = -5.0
        body.vc[6] = -5.0
        body.vc[7] = -5.0
        body.vc[10] = 5.0

        # Create the tank.
        nx, ny, nz = 40, 40, 40
        dx = 1.0 / (nx - 1)
        xmin, xmax, ymin, ymax, zmin, zmax = -2, 2, -2, 2, -2, 2
        x, y, z = np.mgrid[xmin:xmax:nx * 1j, ymin:ymax:ny * 1j,
                           zmin:zmax:nz * 1j]
        interior = ((x < 1.8) & (x > -1.8)) & ((y < 1.8) & (y > -1.8)) & (
            (z > -1.8) & (z <= 2))
        tank = np.logical_not(interior)
        x = x[tank].flat
        y = y[tank].flat
        z = z[tank].flat
        m = np.ones_like(x) * dx * dx * rho0
        h = np.ones_like(x) * hdx * dx

        # radius of each sphere constituting the tank
        rad_s = np.ones_like(x) * dx

        tank = get_particle_array_rigid_body(name='tank', x=x, y=y, z=z,
                                             h=h, m=m, rad_s=rad_s)
        tank.total_mass[0] = np.sum(m)

        return [body, tank]

    def create_solver(self):
        kernel = CubicSpline(dim=dim)
        integrator = EPECIntegrator(body=RK2StepRigidBody())
        solver = Solver(kernel=kernel, dim=dim, integrator=integrator,
                        dt=dt, tf=tf, adaptive_timestep=False)
        solver.set_print_freq(10)
        return solver

    def create_equations(self):
        equations = [
            Group(equations=[
                BodyForce(dest='body', sources=None, gz=gz),
                RigidBodyCollision(dest='body', sources=['tank', 'body'],
                                   kn=1e4, en=0.8)
            ]),
            Group(equations=[RigidBodyMoments(dest='body', sources=None)]),
            Group(equations=[RigidBodyMotion(dest='body', sources=None)]),
        ]
        return equations


if __name__ == '__main__':
    app = BouncingCubes()
    app.run()


pysph-master/pysph/examples/rigid_body/cubes_colliding_in_tank.py

"""A solid cube of density 2120 falling in water and hitting another cube of
density 500.

This checks the basic SPH equations by dropping bodies inside the vessel.
"""
from __future__ import print_function
import numpy as np

# PySPH base and carray imports
from pysph.base.utils import (get_particle_array_wcsph,
                              get_particle_array_rigid_body)
from pysph.base.kernels import CubicSpline

from pysph.solver.solver import Solver
from pysph.sph.integrator import EPECIntegrator
from pysph.sph.integrator_step import WCSPHStep

from pysph.sph.equation import Group
from pysph.sph.basic_equations import (XSPHCorrection, ContinuityEquation,
                                       SummationDensity)
from pysph.sph.wc.basic import TaitEOSHGCorrection, MomentumEquation
from pysph.solver.application import Application
from pysph.sph.rigid_body import (
    BodyForce, RigidBodyCollision, LiuFluidForce, RigidBodyMoments,
    RigidBodyMotion, RK2StepRigidBody,
)


def create_boundary():
    dx = 2
    # bottom particles in tank
    xb = np.arange(-2 * dx, 140 + 2 * dx, dx)
    yb = np.arange(-2 * dx, 0, dx)
    xb, yb = np.meshgrid(xb, yb)
    xb = xb.ravel()
    yb = yb.ravel()

    # left wall
    xl = np.arange(-2 * dx, 0, dx)
    yl = np.arange(0, 150, dx)
    xl, yl = np.meshgrid(xl, yl)
    xl = xl.ravel()
    yl = yl.ravel()

    # right wall
    xr = np.arange(140, 140 + 2 * dx, dx)
    yr = np.arange(0, 150, dx)
    xr, yr = np.meshgrid(xr, yr)
    xr = xr.ravel()
    yr = yr.ravel()

    x = np.concatenate([xl, xb, xr])
    y = np.concatenate([yl, yb, yr])
    return x * 1e-3, y * 1e-3


def create_fluid():
    dx = 2
    xf = np.arange(0, 140, dx)
    yf = np.arange(0, 130, dx)
    xf, yf = np.meshgrid(xf, yf)
    xf = xf.ravel()
    yf = yf.ravel()

    # carve out the slot where the cube sits at the free surface
    p = (xf > 59) & (xf < 81) & (yf > 119)
    xf = xf[~p]
    yf = yf[~p]
    return xf * 1e-3, yf * 1e-3


def create_cube(dx=1):
    x = np.arange(60, 80, dx)
    y = np.arange(121, 141, dx)
    x, y = np.meshgrid(x, y)
    x = x.ravel()
    y = y.ravel()
    return x * 1e-3, y * 1e-3


def get_density(y):
    c_0 = 2 * np.sqrt(2 * 9.81 * 130 * 1e-3)
    rho_0 = 1000
    height_water_clmn = 130 * 1e-3
    gamma = 7.

    _tmp = gamma / (rho_0 * c_0**2)

    rho = np.zeros_like(y)
    for i in range(len(rho)):
        p_i = rho_0 * 9.81 * (height_water_clmn - y[i])
        rho[i] = rho_0 * (1 + p_i * _tmp)**(1. / gamma)
    return rho


def geometry():
    # please run this function to see how the geometry looks
    import matplotlib.pyplot as plt
    x_tank, y_tank = create_boundary()
    x_fluid, y_fluid = create_fluid()
    x_cube, y_cube = create_cube()
    x_wood, y_wood = create_cube()
    y_wood = y_wood + 0.04

    plt.scatter(x_fluid, y_fluid)
    plt.scatter(x_tank, y_tank)
    plt.scatter(x_cube, y_cube)
    plt.scatter(x_wood, y_wood)
    plt.axes().set_aspect('equal', 'datalim')
    print("done")
    plt.show()


class RigidFluidCoupling(Application):
    # here the wood has density 2120 and falls from some height on to the
    # low density cube
    def initialize(self):
        self.dx = 2 * 1e-3
        self.hdx = 1.2
        self.ro = 1000
        self.solid_rho = 500
        self.wood_rho = 2120
        self.m = 1000 * self.dx * self.dx
        self.co = 2 * np.sqrt(2 * 9.81 * 130 * 1e-3)
        self.alpha = 0.1

    def create_particles(self):
        """Create the tank, the fluid and the two cubes."""
        # xf, yf = create_fluid_with_solid_cube()
        xf, yf = create_fluid()
        rho = get_density(yf)
        m = rho[:] * self.dx * self.dx
        rho = np.ones_like(xf) * self.ro
        h = np.ones_like(xf) * self.hdx * self.dx
        fluid = get_particle_array_wcsph(x=xf, y=yf, h=h, m=m, rho=rho,
                                         name="fluid")

        xt, yt = create_boundary()
        m = np.ones_like(xt) * 1000 * self.dx * self.dx
        rho = np.ones_like(xt) * 1000
        rad_s = np.ones_like(xt) * 2 / 2. * 1e-3
        h = np.ones_like(xt) * self.hdx * self.dx
        tank = get_particle_array_wcsph(x=xt, y=yt, h=h, m=m, rho=rho,
                                        rad_s=rad_s, name="tank")

        dx = 1
        xc, yc = create_cube(1)
        m = np.ones_like(xc) * self.solid_rho * dx * 1e-3 * dx * 1e-3
        rho = np.ones_like(xc) * self.solid_rho
        h = np.ones_like(xc) * self.hdx * self.dx
        rad_s = np.ones_like(xc) * dx / 2. * 1e-3
        # add cs property to run the simulation
        cs = np.zeros_like(xc)
        cube = get_particle_array_rigid_body(x=xc, y=yc, h=h, m=m, rho=rho,
                                             rad_s=rad_s, cs=cs, name="cube")

        dx = 1
        xc, yc = create_cube(1)
        yc = yc + 0.04
        xc = xc + 0.02
        m = np.ones_like(xc) * self.wood_rho * dx * 1e-3 * dx * 1e-3
        rho = np.ones_like(xc) * self.wood_rho
        h = np.ones_like(xc) * self.hdx * self.dx
        rad_s = np.ones_like(xc) * dx / 2. * 1e-3
        # add cs property to run the simulation
        cs = np.zeros_like(xc)
        wood = get_particle_array_rigid_body(x=xc, y=yc, h=h, m=m, rho=rho,
                                             rad_s=rad_s, cs=cs, name="wood")
        return [fluid, tank, cube, wood]

    def create_solver(self):
        kernel = CubicSpline(dim=2)

        integrator = EPECIntegrator(fluid=WCSPHStep(), tank=WCSPHStep(),
                                    cube=RK2StepRigidBody(),
                                    wood=RK2StepRigidBody())

        dt = 0.125 * self.dx * self.hdx / (self.co * 1.1) / 2.
        print("DT: %s" % dt)
        tf = 1.5
        solver = Solver(
            kernel=kernel, dim=2, integrator=integrator, dt=dt, tf=tf,
            adaptive_timestep=False,
        )
        return solver

    def create_equations(self):
        equations = [
            Group(
                equations=[
                    BodyForce(dest='cube', sources=None, gy=-9.81),
                    BodyForce(dest='wood', sources=None, gy=-9.81),
                    SummationDensity(dest='cube', sources=['fluid', 'cube']),
                    SummationDensity(dest='wood', sources=['fluid', 'wood'])
                ], real=False),
            Group(equations=[
                TaitEOSHGCorrection(dest='wood', sources=None,
                                    rho0=self.wood_rho, c0=self.co,
                                    gamma=7.0),
                TaitEOSHGCorrection(dest='cube', sources=None,
                                    rho0=self.solid_rho, c0=self.co,
                                    gamma=7.0),
                TaitEOSHGCorrection(dest='fluid', sources=None,
                                    rho0=self.ro, c0=self.co, gamma=7.0),
                TaitEOSHGCorrection(dest='tank', sources=None,
                                    rho0=self.ro, c0=self.co, gamma=7.0),
            ], real=False),
            Group(equations=[
                ContinuityEquation(
                    dest='fluid',
                    sources=['fluid', 'tank', 'cube', 'wood'],
                ),
                ContinuityEquation(
                    dest='tank',
                    sources=['fluid', 'tank', 'cube', 'wood'],
                ),
                MomentumEquation(dest='fluid', sources=['fluid', 'tank'],
                                 alpha=self.alpha, beta=0.0, c0=self.co,
                                 gy=-9.81),
                LiuFluidForce(
                    dest='fluid',
                    sources=['cube'],
                ),
                LiuFluidForce(
                    dest='fluid',
                    sources=['wood'],
                ),
                XSPHCorrection(dest='fluid', sources=['fluid', 'tank']),
            ]),
            Group(equations=[
                RigidBodyCollision(dest='cube', sources=['tank', 'wood'],
                                   kn=1e6)
            ]),
            Group(equations=[RigidBodyMoments(dest='cube', sources=None)]),
            Group(equations=[RigidBodyMotion(dest='cube', sources=None)]),
            Group(equations=[
                RigidBodyCollision(dest='wood', sources=['tank', 'cube'],
                                   kn=1e6)
            ]),
            Group(equations=[RigidBodyMoments(dest='wood', sources=None)]),
            Group(equations=[RigidBodyMotion(dest='wood', sources=None)]),
        ]
        return equations


if __name__ == '__main__':
    app = RigidFluidCoupling()
    app.run()


pysph-master/pysph/examples/rigid_body/dam_break3D_sph.py

"""3D dam break over a block of text spelling "SPH". (8 hours)

This example demonstrates how to set up a rigid body motion and couple it to
the fluid motion.
"""

from os.path import dirname, join

import numpy as np

from pysph.examples._db_geometry import DamBreak3DGeometry
from pysph.base.kernels import WendlandQuintic
from pysph.base.utils import get_particle_array_rigid_body

from pysph.sph.equation import Group
from pysph.sph.basic_equations import ContinuityEquation, XSPHCorrection
from pysph.sph.wc.basic import TaitEOS, TaitEOSHGCorrection, MomentumEquation

from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator import EPECIntegrator
from pysph.sph.integrator_step import WCSPHStep
from pysph.tools.gmsh import vtk_file_to_points

from pysph.sph.rigid_body import (
    BodyForce, NumberDensity, RigidBodyForceGPUGems, RigidBodyMoments,
    RigidBodyMotion, RK2StepRigidBody, PressureRigidBody
)

dim = 3

dt = 1e-5
tf = 2.0

# parameter to change the resolution
dx = 0.02
nboundary_layers = 3
hdx = 1.2
rho0 = 1000.0


class DamBreak3DSPH(Application):
    def initialize(self):
        self.geom = DamBreak3DGeometry(
            dx=dx, nboundary_layers=nboundary_layers, hdx=hdx, rho0=rho0,
            with_obstacle=False)

    def create_particles(self):
        fluid, boundary = self.geom.create_particles()
        fpath = join(dirname(__file__), 'sph.vtk.gz')
        x, y, z = vtk_file_to_points(fpath, cell_centers=False)
        y -= 0.15
        z += 0.05
        m = np.ones_like(x)*fluid.m[0]
        h = np.ones_like(x)*fluid.h[0]
        rho = np.ones_like(x)*fluid.rho[0]
        obstacle = get_particle_array_rigid_body(
            name='obstacle', x=x, y=y, z=z, m=m, h=h, rho=rho, rho0=rho
        )
        obstacle.total_mass[0] = np.sum(m)
        obstacle.add_property('cs')
        obstacle.add_property('arho')
        obstacle.set_lb_props(list(obstacle.properties.keys()))
        obstacle.set_output_arrays(
            ['x', 'y', 'z', 'u', 'v', 'w', 'fx', 'fy', 'fz',
             'rho', 'm', 'h', 'p', 'tag', 'pid', 'gid']
        )

        boundary.add_property('V')
        boundary.add_property('fx')
        boundary.add_property('fy')
        boundary.add_property('fz')

        return [fluid, boundary, obstacle]

    def create_solver(self):
        kernel = WendlandQuintic(dim=dim)
        integrator = EPECIntegrator(fluid=WCSPHStep(),
                                    obstacle=RK2StepRigidBody(),
                                    boundary=WCSPHStep())
        solver = Solver(kernel=kernel, dim=dim, integrator=integrator,
                        tf=tf, dt=dt, adaptive_timestep=True, n_damp=0)
        return solver

    def create_equations(self):
        co = 10.0 * self.geom.get_max_speed(g=9.81)

        gamma = 7.0
        alpha = 0.5
        beta = 0.0

        equations = [
            Group(equations=[
                BodyForce(dest='obstacle', sources=None, gz=-9.81),
                NumberDensity(dest='obstacle', sources=['obstacle']),
                NumberDensity(dest='boundary', sources=['boundary']),
            ], ),

            # Equation of state
            Group(equations=[
                TaitEOS(
                    dest='fluid', sources=None, rho0=rho0, c0=co,
                    gamma=gamma
                ),
                TaitEOSHGCorrection(
                    dest='boundary', sources=None, rho0=rho0, c0=co,
                    gamma=gamma
                ),
                TaitEOSHGCorrection(
                    dest='obstacle', sources=None, rho0=rho0, c0=co,
                    gamma=gamma
                ),
            ], real=False),

            # Continuity, momentum and xsph equations
            Group(equations=[
                ContinuityEquation(
                    dest='fluid',
                    sources=['fluid', 'boundary', 'obstacle']
                ),
                ContinuityEquation(dest='boundary', sources=['fluid']),
                ContinuityEquation(dest='obstacle', sources=['fluid']),

                MomentumEquation(dest='fluid',
                                 sources=['fluid', 'boundary'],
                                 alpha=alpha, beta=beta, gz=-9.81, c0=co,
                                 tensile_correction=True),

                PressureRigidBody(
                    dest='fluid', sources=['obstacle'], rho0=rho0
                ),

                XSPHCorrection(dest='fluid', sources=['fluid']),

                RigidBodyForceGPUGems(
                    dest='obstacle', sources=['boundary'], k=1.0, d=2.0,
                    eta=0.1, kt=0.1
                ),
            ]),
            Group(equations=[RigidBodyMoments(dest='obstacle',
                                              sources=None)]),
            Group(equations=[RigidBodyMotion(dest='obstacle',
                                             sources=None)]),
        ]
        return equations


if __name__ == '__main__':
    app = DamBreak3DSPH()
    app.run()


pysph-master/pysph/examples/rigid_body/simple.py

"""Very simple rigid body motion. (5 seconds)

This is used to test the rigid body equations.
"""

import numpy as np

from pysph.base.kernels import CubicSpline
from pysph.base.utils import get_particle_array_rigid_body
from pysph.sph.equation import Group
from pysph.sph.integrator import EPECIntegrator
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.rigid_body import (RigidBodyMoments, RigidBodyMotion,
                                  RK2StepRigidBody)

dim = 3

dt = 1e-3
tf = 2.5

hdx = 1.0
rho0 = 10.0


class SimpleRigidMotion(Application):
    def create_particles(self):
        nx, ny, nz = 10, 10, 10
        dx = 1.0/(nx-1)
        x, y, z = np.mgrid[0:1:nx*1j, 0:1:ny*1j, 0:1:nz*1j]
        x = x.flat
        y = y.flat
        z = z.flat
        m = np.ones_like(x)*dx*dx*rho0
        h = np.ones_like(x)*hdx*dx
        body = get_particle_array_rigid_body(
            name='body', x=x, y=y, z=z, h=h, m=m,
        )
        body.omega[0] = 5.0
        body.omega[1] = 5.0
        body.vc[0] = 1.0
        body.vc[1] = 1.0
        return [body]

    def create_solver(self):
        kernel = CubicSpline(dim=dim)
        integrator = EPECIntegrator(body=RK2StepRigidBody())
        solver = Solver(kernel=kernel, dim=dim, integrator=integrator,
                        dt=dt, tf=tf, adaptive_timestep=False)
        solver.set_print_freq(10)
        return solver

    def create_equations(self):
        equations = [
            Group(equations=[RigidBodyMoments(dest='body', sources=None)]),
            Group(equations=[RigidBodyMotion(dest='body', sources=None)]),
        ]
        return equations


if __name__ == '__main__':
    app = SimpleRigidMotion()
    app.run()


pysph-master/pysph/examples/rigid_body/solid_body_floating_in_tank.py

"""A sphere of density 500 falling into a hydrostatic tank. (15 minutes)

This checks the basic SPH equations by dropping a ball inside the vessel.
"""
from __future__ import print_function
import numpy as np

# PySPH base and carray imports
from pysph.base.utils import (get_particle_array_wcsph,
                              get_particle_array_rigid_body)
from pysph.base.kernels import CubicSpline

from pysph.solver.solver import Solver
from pysph.sph.integrator import EPECIntegrator
from pysph.sph.integrator_step import WCSPHStep

from pysph.sph.equation import Group
from pysph.sph.basic_equations import (XSPHCorrection, ContinuityEquation,
                                       SummationDensity)
from pysph.sph.wc.basic import TaitEOSHGCorrection, MomentumEquation
from pysph.solver.application import Application
from pysph.sph.rigid_body import (BodyForce, RigidBodyCollision,
                                  LiuFluidForce, RigidBodyMoments,
                                  RigidBodyMotion, RK2StepRigidBody)


def create_boundary():
    dx = 2
    # bottom particles in tank
    xb = np.arange(-2 * dx, 100 + 2 * dx, dx)
    yb = np.arange(-2 * dx, 0, dx)
    xb, yb = np.meshgrid(xb, yb)
    xb = xb.ravel()
    yb = yb.ravel()

    # left wall
    xl = np.arange(-2 * dx, 0, dx)
    yl = np.arange(0, 250, dx)
    xl, yl = np.meshgrid(xl, yl)
    xl = xl.ravel()
    yl = yl.ravel()

    # right wall
    xr = np.arange(100, 100 + 2 * dx, dx)
    yr = np.arange(0, 250, dx)
    xr, yr = np.meshgrid(xr, yr)
    xr = xr.ravel()
    yr = yr.ravel()

    x = np.concatenate([xl, xb, xr])
    y = np.concatenate([yl, yb, yr])
    return x * 1e-3, y * 1e-3


def create_fluid():
    dx = 2
    xf = np.arange(0, 100, dx)
    yf = np.arange(0, 150, dx)
    xf, yf = np.meshgrid(xf, yf)
    xf = xf.ravel()
    yf = yf.ravel()
    return xf * 1e-3, yf * 1e-3


def create_sphere(dx=1):
    x = np.arange(0, 100, dx)
    y = np.arange(151, 251, dx)
    x, y = np.meshgrid(x, y)
    x = x.ravel()
    y = y.ravel()
    p = ((x - 50)**2 + (y - 200)**2) < 20**2
    x = x[p]
    y = y[p]
    # lower the sphere a little
    y = y - 20
    return x * 1e-3, y * 1e-3


def get_density(y):
    height = 150
    c_0 = 2 * np.sqrt(2 * 9.81 * height * 1e-3)
    rho_0 = 1000
    height_water_clmn = height * 1e-3
    gamma = 7.

    _tmp = gamma / (rho_0 * c_0**2)

    rho = np.zeros_like(y)
    for i in range(len(rho)):
        p_i = rho_0 * 9.81 * (height_water_clmn - y[i])
        rho[i] = rho_0 * (1 + p_i * _tmp)**(1. / gamma)
    return rho


def geometry():
    # please run this function to see how the geometry looks
    import matplotlib.pyplot as plt
    x_tank, y_tank = create_boundary()
    x_fluid, y_fluid = create_fluid()
    x_cube, y_cube = create_sphere()

    plt.scatter(x_fluid, y_fluid)
    plt.scatter(x_tank, y_tank)
    plt.scatter(x_cube, y_cube)
    plt.axes().set_aspect('equal', 'datalim')
    print("done")
    plt.show()


class RigidFluidCoupling(Application):
    def initialize(self):
        self.dx = 2 * 1e-3
        self.hdx = 1.2
        self.ro = 1000
        self.solid_rho = 500
        self.m = 1000 * self.dx * self.dx
        self.co = 2 * np.sqrt(2 * 9.81 * 150 * 1e-3)
        self.alpha = 0.1

    def create_particles(self):
        """Create the tank, the fluid and the sphere."""
        xf, yf = create_fluid()
        rho = get_density(yf)
        m = rho[:] * self.dx * self.dx
        rho = np.ones_like(xf) * self.ro
        h = np.ones_like(xf) * self.hdx * self.dx
        fluid = get_particle_array_wcsph(x=xf, y=yf, h=h, m=m, rho=rho,
                                         name="fluid")

        xt, yt = create_boundary()
        m = np.ones_like(xt) * 1000 * self.dx * self.dx
        rho = np.ones_like(xt) * 1000
        rad_s = np.ones_like(xt) * 2 / 2. * 1e-3
        h = np.ones_like(xt) * self.hdx * self.dx
        tank = get_particle_array_wcsph(x=xt, y=yt, h=h, m=m, rho=rho,
                                        rad_s=rad_s, name="tank")

        dx = 1
        xc, yc = create_sphere(1)
        m = np.ones_like(xc) * self.solid_rho * dx * 1e-3 * dx * 1e-3
        rho = np.ones_like(xc) * self.solid_rho
        h = np.ones_like(xc) * self.hdx * self.dx
        rad_s = np.ones_like(xc) * dx / 2.
* 1e-3 # add cs property to run the simulation cs = np.zeros_like(xc) cube = get_particle_array_rigid_body(x=xc, y=yc, h=h, m=m, rho=rho, rad_s=rad_s, cs=cs, name="cube") return [fluid, tank, cube] def create_solver(self): kernel = CubicSpline(dim=2) integrator = EPECIntegrator(fluid=WCSPHStep(), tank=WCSPHStep(), cube=RK2StepRigidBody()) dt = 0.125 * self.dx * self.hdx / (self.co * 1.1) / 2. print("DT: %s" % dt) tf = .5 solver = Solver( kernel=kernel, dim=2, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=False, ) return solver def create_equations(self): equations = [ Group( equations=[ BodyForce(dest='cube', sources=None, gy=-9.81), SummationDensity(dest='cube', sources=['fluid', 'cube']) ], real=False), Group(equations=[ TaitEOSHGCorrection(dest='cube', sources=None, rho0=self.solid_rho, c0=self.co, gamma=7.0), TaitEOSHGCorrection(dest='fluid', sources=None, rho0=self.ro, c0=self.co, gamma=7.0), TaitEOSHGCorrection(dest='tank', sources=None, rho0=self.ro, c0=self.co, gamma=7.0), ], real=False), Group(equations=[ ContinuityEquation( dest='fluid', sources=['fluid', 'tank', 'cube'], ), ContinuityEquation( dest='tank', sources=['fluid', 'tank', 'cube'], ), MomentumEquation(dest='fluid', sources=[ 'fluid', 'tank', 'cube' ], alpha=self.alpha, beta=0.0, c0=self.co, gy=-9.81), LiuFluidForce( dest='fluid', sources=['cube'], ), XSPHCorrection(dest='fluid', sources=['fluid', 'tank']), ]), Group(equations=[ RigidBodyCollision(dest='cube', sources=['tank'], kn=1e5) ]), Group(equations=[RigidBodyMoments(dest='cube', sources=None)]), Group(equations=[RigidBodyMotion(dest='cube', sources=None)]), ] return equations if __name__ == '__main__': app = RigidFluidCoupling() app.run() pysph-master/pysph/examples/rigid_body/sph.vtk.gz000066400000000000000000004741061356347341600225750ustar00rootroot00000000000000fUsph.vtkɮ˲)N!w,$ $X Q@R7pϵ2ڗuy^7?oo??w_??o?7w_/?ZoMeW+/˿G?_^ɯ㜗{9}<_ϼ_뿕CKy3//#>[}qR414Jꭏ^J_gxϡZ}5/Q?lpz<5p|j\?[<~/+[e< pSߊ?_?G@=-jXZWbÆ=RiWDޯe/0nZ)j~_u6R* 
j[ 6/Mj~mSA zOA*SȏF*р9~@K{CB< ߩp@8}aT(&2u!+vZKqLJiQjzϔ T+P`N?]lIfʧ@IJH81F,ק;Ћ}qXfBy\J،iᅠ oLxXuL,T Θ%sus2c\blՄpJ Ӯԩ9tDAPpx:"y)d^ !+6gy6s5땇S.Jf%n=ȸD)Ԏ P4.6_ >,Ԝ0\N5\]&s @=W{OៀoZS-r+kE'I`ҟYʯ?~kpP%}b^Tfj@" PѝU 1=dD-5Nj_X4U8p:Hs;NtkynL ˗~`4*:N8ՊQu*N{cJEycGs<ڙ8InRkic0Y:xq|~OY[dmOVO^wT_.c)EDxfȾ 7=$Nн.)tc-$Cmq''ʩ'|/MFSp8Hr6bRY>r*:(ZS-+\`{¼G(ovBQ,U\tO:tVkdMlAj*Eɤk )UQq9đaݥVcBfS|"yֳ'žɛ;"CXgs#J}<⤇'AI|_'-6_2>KΥ|nIhnDT 0ָߵtmw52f.7ʁ ^{ʹ8Һ}K{Bw*$QY #AG&Wz^KrU+Y䯵/%No\(I=bY(v.WIOߴ@1mx^\-}Wƫ+W_QmnT^uO 8 mT42qR]HlNa1Ҋ1I҉ҒSO7·ěwk4 W!|X${KQL 1߆Zv*c×&>j|a,E7 !%I7! aUռEζT(CC_\r>f=C@m -Vga9]d:43yV.K[A4g#(#UAw}l/i֠crD/SuSXW5Q{uS𮰢"Yuђbshx2+)0jjK d'9'եRMvz:lAEȊb풉߾ 9 EDdc^G gBNY=yYWMxO´f]*S7b`)>DZ^l{\NvNtH32=vW:3}Iti_:/ 7T];Z"4>rhQᕵ˟.E >qt~‹9dr;-]K̓&|dA8,pD=X%jy"Oú+>[p&rpTj"^C^$-^}bqrhd rX 4[f*N9o4WY4I;69k,|4}Og;͙eRS=`CBiIyC3SQgx3] }'̭1r.ߧ <%jyϊlRj^uS`7GIi ? gs Qm粓1{eAP'_}RuTlWhFtۧ0.a.nVC3Am GִyRq9(BdVeɽsfuf߭CQqF5zuSS~(F~Fڂy:!v*aҤnCd/*狔YDqgjjW|h7_e}hVjl |qz6^C}C ^M=_ 30Ya- Me{*shῢ[."a@xKSYKGN#J{pO{2Ɖ&r4 ϓ"|(xQ O-ھdH/V[6t p=8e^rP[JYen _~H9[<. 
jaP,@aNFhǫ{NdwpB +̓ 5i'ȗNukvx%]T5X6l3KK&e6B1ٯh>Ƕ0SΊtwdFޫZMȔjy'{17԰&Ȍ`}u0r?j$QL%hǂS-,8@ Lo[=MqIH!妮K=4w.ƚ ϡ~|>|!fHE !&$Y^;sW*ʧXRpAxFhPbQt+/yWkKf&(O^˛ jUSӓw齨#kN ̠b[(gâI% $wd x/yġAV?nLdXؾ񧿣Vp|y'W%D!&eNL`d*h;;[7&\7ѡX!*pyAN;2 o6?\/anri\ǧ^a^oEppÿ*\_e)**S^3o祤&dդQcJ[z|ߔ>thV+ g_ WWQ텢ksd-O;ށO`ޯfֹ=Ej?ZNc4-.p{qL c72~4Vn4&xéd)k2*BgʉqI* lT~ȗl)aḾVy[ȃqX*Ws[U ӦttIOd;E~ƱVQXֱ?lm?Fco5/wq1rwAAt 9aB/QNHdt(80Mŗ'Lv˸OʴCK:mׯ#%=CzqQ2ڑ!:B9|ㄺl/aUjhYfVrRS|e|, +'tůtw?rh$\ fːKJZTJg"A)S!?V8PkXL3gZ%&v"!ؠ_l갯D|D\P$ǽx%N +eUY1(ٞA\tnևq"jN@5rrUO5}v˷_R>Y~Y) [* Mȏr7F>w#:wUϭih]ilaꑕۏW#mݲ Pz\A,#H0$!BýVpRoZ7@)Sz1 i[')y xIO1o56o$|xg}KhW1Nm:T/bxX:.k|;r&T<))>5KLJ!Ci@C~P\ hl 0soiE(Hafl]up1e7ܤ7֫!aoCǮvlD<\e_vxtuXŪ ND"c 3V'tzW7')% \d1fJyX8!Na/u -$4m~fcit3\Ո'Hd`wn,yA6ZÜꕞrO>AtgXQ̍eDpY ?vrHf)I*~ @+W=ǿC]vz$X$& x/uKyIUdiL7yLg~5 S@ ]K`Vv+he&󼡸T!>Oݔ;`EhN--!peB(JGQR+Ne2)oo|\9h;ԯkM"kYTMr9YW!T7mgFӫ4B~>Gɹ,n *aZ~ff]34 @L8Ȍv y°5El~Jܮߞ$Ie5TX+11siB~s ˀK#p_YZ0]`A -}d+ ]D e0[]~8G*ʍ؉r˭SkԽwSBq, .2{E=c GWQz"ƊMŚZϻ9~n3f ds轼fuޓ+%Xs׷_BЂ#v(m,Jk"`+ǟ,wƟ^fkļL3/8yVs2u>#|*, SjQZ9p綛{eF![޴J4iQZmp`_@p:jjT251s.^"݂4$yS9Ns!UEQDDP\OYEqZLJwL9zHo~#Jژ]X ׽:uBDe CW+ WKQ8sF"萃-[Y2`U~9ViDǖ+br,^Z j&اys(uW)Nʟj:k%x 0жA\)[8?_Qœm.CFJ"[x1B(!#z ` {\?रbw{(jʌ[KVu*Q~_zHH²l&-*ErIN/MoD58v J9q#|N9{ ;o+t#ey3r$Gl31!)a M; EV=jK͏ڏA> 8v}9hyسԡ3l3&((eI!A ϕ 4@+o"RĪǬCI4ZZG<*Ws:2<(s˼/-3ډ HR 9i54j>ldC4+/fyaF؋'MQu'5*F!+ vZ-^WE3*89cN-t$$ʫ"[;m2+k)vNSav !p9@5sȱMD10/k Wݝ%=Ԅӹ=*XMxu^^A"Qb^]Q'W20i*F6VSGL#@8)K0t8bP^a5yUrK:TJP#9۝pe+[[_OR(9y{78C]W kW6ba坩7t!kd}V .1&f;qw^8te Eaԯh6Dv ٍjTI d;cR}SGQ~w҅u84#D 0l $О.ԡK4dN@8mql ;<y>V0Kd)E>E>\:낷k5W{pNứ.o8"ĽBEkTfU4=1-.8Bߜ8Wxv+X$EK6Ld87) ^=T]dR_@ݕn[!`<-{z8%{\ WRQO}nV{)ǟ̞5gOJۊN~.TS IL)OXV} O\W@l{"% sL lIU C-_|oeI+A G D#^U5hWu͆щ/`w4fc o|Wo` ټC@1z9VO<-Wr^YT櫊2Tb>nv:|S,[O8p?y;W)/0 `"%B;KiBq`VrX1zzh o+l( LWP.]ڝ]Ny/_~.M O,8g\#QATS\u!o>(/cF-vEYFe2`‴vcP/72̈́KQ=_]"eYtÞ}Csj2 L8M- ˪8!IAvzGg۸~A^9Xy7Z@F+1FʑdQlN)a}<ոFpOv埉qDO_Zi1CW2uœ k*L!?݄8me'h~btpud!& t AHhu~},?8L7cFQ G/_:طn$'$A)b0+~P*vO7[ 1߸i ٗY<2'u4TnYsGgwbutYZL/+2?"Y2d`M+{.ٹ T4==r|+̎lx{>(Ub=H?-qȍ[]$~V4iΜMVL_:L 렐{\' 
7Va}>#02BwQ+=Z<PE@O %pWSm8١ԙPvMn.佥< TA-&LnX@]6`ӑV=/{{;kƒlf"|bZW,* G\~ 1G!^'M,r*n3uz,ǝO7ސekn^Ƥ=҇!-Ozqqko.^ǑځӦJ*}y`}xE/O<9X#%f?n : 3@auQD& d a5ȰdŻ}!3q/V"eDٳign4~*bC*rLK +yH~WSD쀶n8.&Fݸ"+٢*%1yN=^VzI뫬u{J? b>K`a`ЃF ^$y-mq S񹂚@->GӐ2Q\=4壝?&ǁR '/F7tT\$ÙPNy ;3:L4EҸ%WDjt<#ߕhm OC+uHz}հDaN9=ǖ>{lcp[I_VvFWFՉ' ?Bzy+zU߳죱5-<bGw*@NՇpEr^~AGUY D rZJ /-8.9&Yn?3J r:Ӣ.D3f^bOn8Оt\5d{%tdZAS3 /{]0|aQ-IW`gA<=4%q+fVօ\l@{硙[lSPL0<20J.NX}iLM4c+۶:7<q$5mO-LO}(艟u$~Pt4LZo삀vJe_:@NH"4ݚLJ"s٢ry!9ە$83Q]NmoVbcxnvQ\NCvUV5=Y>W޽{IWQ`}Tԥs Z5o;ɭb‹ڿۺ^k^n Zm\\ѿ c+@e0>ǯL9eha8 !MD4JbwDP ,mG㛘}~0D%* VⷂU E8]ؖ1h܎LV2j2l~"1^Zy; !i6kIzFfH\u5$aTRV~=ܰ#?rYU8HURDu ź^JkKx)8A9{/y3+iĔy#ꙮv':_b뚒`rpޮrĻFv %;4s|2J(b4;C oa c{ x7L܀^]nKc%9k Ɉz+s Pw5Esp2P3d<#YW)htob191Q$f h)ހϼOE HmE+F(2zizdw4Ror" 6>P_^<qw6E:^')#nejKzhAԞa j:zT9W v')CENqS&HEL/B>\9#2ݹwfLjUa#td͓a~xgãW lFX_#컄YH@^Ͼ7L. ԄX hP$W`g%by& R͋a-jz[ˌKeVC֧l Wa:oG:wڔF%Ə~}|>Ϲgf^Уͷ's>Į^8 w{ Wobtj>U\6-Hk][=lF .Gt ƛKg$sz0|~XY!V Zz\)*thBH<S2"dk JK-ta Js '>^\%yEj~0|^qj1-XuTcf:1}g@kǵ]wsl$!maTT9.UowEI8 +9ИuBID_ aL^S=Z tgliP$MYyDT*Mo2ziFOouw-XɈ T=h>.CG68x.<2R=$i^4ou޲1Φ}*pzI*"Hd$ C4`IJRG3;GB<-'`QĥQfn.,q>x|u1k|@s\pk+j/8?!O G""bM¤?lP/yC우"2@k9)|~>.]' B-$'Fhxct+@`1bcg~pY]^)ypz"sod. WҬh9,ȡ:lHH Xd_! E ftZˀ9d%]‰|~`KgI>woM5N1?o21\%قfj(M2%`lW6+3 (ߌ+aH*b8:wY! 
!vCc>I7(n6Z!z| *ۻΎђ]s9xZ l B_!4bU& ``<_zUxa??I4A,Ym`MEDʲ/%ֱ@WnZU]Ӆxy vpR鞳F4$;JL~7`ɯٰDü|%5 I>T03./7iѲ}eԀISjLu360B7*1_0ZTqڰG|3CCSQWfMoGI/Е$R_M"߰4_&Os_VIV(3ʓPqGCaĕx͜fW _T$U iw||Q7Rs@;t1z)Qtj>޻ |[$ ?'<#'E0NIپĿ9<5?o+тI]mt+'d{;G `8 gߙ];;\m}B ~'f5f%\wy.5?ߜiIqCvz4J$ D+MK,wO,u<47 D='ϤEV۔< mɑgMS_XLw(r+Y:դƹf/cl:r4gC ;@=' >;i˥ƉĽ$׆X1@ދ' MG)tz)f:ubmCf'^'UQ i!Fk\,CLQĿ7.62ъؓc*#Dll4zd(b UZ[*#nu#@;\ L+Q9GIic&j^t]{eҎ,Zt93$OFPqA^K:RW:&"sQĜjfZO^ӯSѰ8"ɏAȆ*-0)ֹO ۡ;Lz# 9?Ⱥ9ǕbJftbI}0Ϸo'z}CU%CJ$e ?F|hǽ&V>ɉ#s9[ߠƇq*Mk,Qe>dq79K]a/ڨ4Xm>N[7*%19 }9/ ,zkABWJF/޽@HvMֈ*|y.?ۓz< dt*^*{w@{8;1K^M޻ÉTfwj '/@HvƦĔr9+IKY\/EŽَ$}ymS4 vV8ehI È)[K aiA/0T]8)taA݋'wWzwDcp `c?ah׊%y]&Xsp< %r.77fCwhU(OJv4_~eWl[֝` 6q44 $ eذjN dZ%өk:A87(ޛ 7ٿe s jCh yghM 'd_pЩ4wqȵK?me,7 p_c3k2/<6vq L:x5X_d4U=}[h+^z)̥y5.z 4rn{Zpw2aCsCf >Xs%-gt ÆrOd [DkM^2%Pĉ..sg3O\TE0rOPy͈#":sl9V?iV*䦦^ty\W1ѵ4g.%ᵓld-j$8^ d:Ng=}cFdc'+D6Q( MIS|~hT87L$dyRWKDʺ Dv?- 5؃'4oqD(ΫuwREYJpV8Qd|0tXRH yzч#{(m#<Ͼ$g/A *nK'-f2qۃ6ԅof3چ3d(@oLDڙJBFӊE2DHD;Y+#(ю9[3R߶ ς*7@rjcڶ*xH(бoL]=qvJ}+nHs}Z7J:3g󪃋-TW ZG:j3XG!}L7K!t1us $@)֋KU4f ''/,}C:[ZF?qp~q މOd:LiauMĔ0 r`T~epvN@߁lmmpnp,Ya6UOTҹ{Sx(%\[Av>Hf\%X9*d+']m3{l/42K,$"a'R$/`j[QС:lauD}(=\c:gӯʢMZ7GqY/Myu[RI&;"BvZ͉txt|U:ouw#aԭBh1^t:pl7{ȓ1e>PIb*!4iSN7\^w n>fҧ0rSVWbW(֪-DS}~Pc /}|F!~2H[+Q<3n|g?iGG'L@q.VcsWU$#!isZ+[j4_DAHE6J:h|ƛP s9P->rP(󔨯vd$%_W1bs+GCKf&bRkQ/^$b̨zMKI@&] +c^Xk.8.NO\F(Dx20SѢdå&6WCFJ~^vK:( vwtHb18CRp|>:}p;-kgo9hq?l^6̣~ dBzE֐wB+c/"9We`S3pg9GIXl|TF\ټ?ڪw_kϒ(f!(]/O5Q̗35$-ܭ~NtSxoRńD7 [C?Z OL8H\B,"ᱩXLsf',Zi `&;5@8 Pz[*N GWv"Zx'K˳h2^2m(8Y#fK.HVʿf8Cya+j8a%YH:Qc(ѐ_if#\5_7잠Sk7Qg^dt'+dܻ%?gߝP$ti6菿=2nUq$;]*KӼ?j û5$9Lg=I__P R~{+}Jp}9$˲~VKȘ#ek %03u*~}Ɍoz9IskDH mMU6*LŬT>ft?pqa%Sұ'-Y*Y}<_iG][4+rjd:gx"Mw猡H[ i^l\Xa.C{c"^Inr1n#3-;ttP0Zw,KK(qR?ٳeFȇf"60E¼HѩUTk8%yQ}Y,a 49YUR[+Waȍ扨)6D$T\k -)؊vpeiv'zS֬44-O/hΈ,>5&mZe (O< ȴgNu[3A {l!i$9/隧 ,FLxMbf0[ͩ7ٶjs<?Fs%> w0(iNcFQd 2ho azIF*:ָ"͆pӯ$mg˶ P&fPz1iKiXWbYbr8[eQ(& GM|J%wVV2$ytb3LI})ɏ"II))#.p`\N(=^<,nM]`,0뽞s/.6Tީcq¶1hHhSL5M|zፅ][ʋWhz/^f˲j#ki[[ YQbwy?e'^o%E-Ydf]INWK-x&dS s8  61D;ئ4JLۙÀ^3uD̀,ܵe[Ѯ\/S5>6|pRAP!쉖 
Yf嬬,+M*~ZF%-,{!nL.P,<&MW85R&/I @f4qz`}yUkwƐy(Fy0gCՔE ȫp=׊_g %7rsf=/pK&̏DhSGnLOd?=ǨgEyb9&`4t01 5`a=҅_͹8AWxBRZ6*^*"PPIo `~!X)-,Lq/M[9q~hn "[ҧW&.? ҅WYsē%bޘ#ODzJˮ l&S<OiA~8L`8>]^1gHs1vYf ~Z(U?(甎(ыe}jSPԉ耘ݠ>` a97T5*b&b)ٳb~WTB5:.f3' #_in uoW':0-0 ԼDwJ L\ر,wf qಐT)<+qE銔Ū|v"/hKk:Gb;tr"B +Dꅧ 09~isӆnIVƗ՝=c1vk^ FnP)V h‚M`=ÅNӓݒpdD( 2NcaVr[R`YFeo5>qmnbw i. LڸD]0e9_@DAG80 QPD<s9Yl7j 2Kp|yW+f0+ E\vP i|L79+ fJ>\p'J\:6S$Ӈ1o2L%sa ]\86`iF'n{]yHZ^btP,ى?\dM į]bIctRb&<WפQ 743;lG5+ TZ!o] ܎7,1FC֯ʝ)~N0<^)![b>6v׵1󉨥RJelz;R}ᖀcasz[>9tU3TxiU䉵rDw݅S;;h] _*5),߲ADM(M& {J9O4Q.8wk_bViSeox;Pe!:9^8I J=H5ܫS73jV.'r[^(rLE0kEbl@=?IQAf}+zgN537%$W`{vT|#i;9e\_Y)QUA\~hN 'Q(dz]Xy#Z>haj"Xt*/Sg.._ " kڇ|ˁ0V-;c.ArվvkS^0=xг*U1L∵,`,iQϹV̪{BUb#!>Cqϸ+ X߈`9E벝ep*ptptb;Xt)C[UٽGBd$0_l(urӲ`,S2 R 'cBk!X9G :h)ș(u (jeέB#I@ZzȰ|9MA"IK#6ޔDHIXy~x 6D p<ˡH@Dׄ Д&QlM5K'~^yʚ5=nnn#ĸu=|yXd I6@Wug]-޹b+اp [™)79F hPokEȗ7;#s,f&l=2 >0YXb!@>Cc8:1W^l5-Ywg8F;I^|t(IV~Œt)&LqIq6d:aX&"D6ЎW,Xqהc&?*l} EC<Nݦ N稚,'TZh>D&߂̦x}8ጣ$@!\쓄Ay1S h5&Djl7䓧[ .07=1~],ϰ]+r(Q `8g0J܇9|FǍ(꾫TeV^{XRZf&VӐ\M/{@yWb#_.c0/C'7*K &k(tE oǥ}qD ILdx8/TJx !0M/;2j!꘸AhNg{Pj@8]H-fWcESio#lY2^%~e Z4c7ytݨY yQÂeZW4z؊1DY#Ò6>*.f mIPrm('\+cq%Y.@F޷F&z!hNz`Y@JQ(9Gn% T$1ts`%,;kj%p4Loޝ) Ny'ݛE=tx4JPcJFٝn/1[KVq.AJj@)mՑ)m|M>&F&+7؉gmsxޡ3Fمv TQR>b3I6G$k'ln֠rd\ %2O$3GW|$l *Vçmz])|r鯡9)f Ry90pkA5)_ƔdX"7XW+#m:i5(6Ew2o.ifhla%*g|ikKCGr\}4 ÝBwc1L@f+d@8 (I󤒓t.;v`4bӡ3f=]{(2;هGZ 64:ʛXE][lrZ5Z,)t^`#*=:txzݗ53Oc 5qn3\P0KLZ1>?Bs23˧ C,9:'(jSB1MJ.D{j07)zK|y4#ZHՔmZ ĘuLȬu'TwqY.9m*6obQ}z"qg n268(SgK:ߔJC%YJh]f7k0p/\?ksLW-gOYkVCښpZ\޻oj'S[⟊enUw{*u^zL7l92 yO\x,}y0֢JEkW Sa<"/^>l^l糀a!x~ҧt)]:C5Et26wۘ&xϸ{i}[>V%;/ع?-HT} j߮Z|HuR]4Wڶ3ۇTN,2wY6rrѾVnP((7.4I{GRp:3nɊG~OWg B)23ԯB(Xt"uc5=ʓ"g( KWrv^+E>r8{pwEϥyp;/_  w}t4-,;ŪpBZ GRy\5/2QkN\8u`8P ɓQ0bLm+?ӊSP;$=5.GV\"pR@ 4fײ)g+\ ʺQl }+KD NJk[ޯR£o~") xݸ܀7?sT1/[!0+OX{+ת)A}!`2 +.M&ș'us2{&3y+ ?3L=R t:n#!a-rw! 
wׄǁ%Ϝ&x,=@2}z;0לfLIggmybf.%^%^7sy7#gFb6Dy7&2DCM㫧]Lx+V^ ^c}4@]G)'A,74Tm%|=sx6QsE#fDKlIl'؉`M˾Q'&+Z擛4i,IJ"ӏf3 ^tVPs|" C$$R1 M.iJ$GߐG(f=: vhU9"}0U\5T [̖fvwwsmU= ]A?YlNy3G[F@a&.M'g^.Onp*^{I£HHdOӎگ:_[%?r/fˎDs%n/o+8BK/?DOo7 >5ة\S^{Y = |4=8MI4o! bϷILUq(AkxRYNH+\kl,+)q ա=kZ\ZG(/Q0xf:%>A>;-Ą/h'}w=y&$/}ͳL{Qͤ +LqB049׵nBdy9Aǭ u"<@iČS;}hi<ӍO(F۝3"I S!Еrx pҝfO},.nQ2:#D,̬L'+7:',Sdv't'A^z S68@1N)%6c*;M^&Aaw@8i@.@ċLpQPrc[8%BwDra~ VB Z t$ *69?)b_ʆ^Xm_/&E3|9NڇpVp \GvP#`fm[7^lͼ-߯U7FmZDjOft 3:)D`.Á !~W!egG~} Nh`mGJ SYP'M3L)OEQ#zuOet R6f& ~]xyy'MXۤ΢2Rpֈ(W4RQvSo {di}b<.N;$IP[DO,8f^`gsڏ>rBK*YjqYjBklnPVآb$ܦ ; QbY'{N/@M"Xa`%cu3ݸ7C9 fD00ruKuA9x 2Y dc g8ַH)iѹ:8Y̡-ZW:B *gL>^"qsZbc^ )Fqk)5GodjY(y*+Qt6UfT.޹=d)Z'^zw㞋 m̓-;+99D}s$]G[5OMtz!cFŜ TۼzՖS[ZU6Zy3ZIZQ84Χ3 _a7XNA6'@P7"oL#&i,@y1 idd g-0UN"`n ;<-{Wng7<ݎ!{eb)Zl?4=xCg9'a{Dod A6LoZaV.;LSC"J,$ "Xr2Uo5~$kmb95Y$2gdY?,0GƯ^WQiG@s{aYfa[v? Z/w#!}fmeM>9rh1dM0 71(\EP6qH ,8ؤ|A.(i=sw{XRvP;q~(tY'ySGOUGNbHWdrCn*`̣{t Nw<(r$la{͘|&7 !|SɎmAp `<[B6J36"e<_$0<u͵/R$ ^B qN@Y$B֋oQ}E*M*Pz p' g(~, @xk E&0'U4()_֞+4P臁wq,yߕ2.%6 r2$"+ :͕0!dn%f(oacҚʫLk. g~/p֚iEĉAIeSԿzԸ@AˇrY &|Y lj?AtgQb~RXC~ّbgC^b8bƸ̑'x'q.2<筙Tg,j5wq8bq^1W[9qh0DNzv-t(7rQrSX-2Cy _huV9uʸ1љE`$ ŶQp*F^Zn20IP?Wi0iO1 /(MF]w(2WCI9/ .}]!˖T0Q7cqUz$ZcQud\+Mb}Z%+$,1웿9/,ܰeugBMt!N2o9Cp-OkԽGƥf3D e{3IҭDaג ɬJ:$4|'U>kF>@49vk@uOLTɓgZ%g8F1^#'.n,2͹H~9fȳ^=(dm [X S\ԟȿ@x~H}.6$]?!N?\a-:Q,1e}'b_P\&9R_X4H`zЁip1bWߗ%wx~ 9hqC}/ɠ%C bЌ!or\ش~#n-XK|Hbhٟ/?`9gu]ywzψȲ߁Oe'౩pޗ.c YGP9Et> Bp~zn=XyĹ˿ A\p> B;^k9yhSxa)zBaU En-qem~J OA?m^<_ og?-ztpkWC Ì`ɒHۄ MߔG_{P۾v,$]Pcpݶq6Ub谊B }ONC<.DPկ[a˱R`~b7)aJ8IRE1>_ t7(3>U\ ԯi7x%v9+_W/e\CqӹKK=3N'xʛ_oE ~s;ea)B2{2sTIv'hXrJr6r'8y"c`1~x`86G:n3ay.5nAvێqׇjġ[A,j}@׀pW Q3<Խbۗ$0)&OOVfxT*vvoE?r.ZdI]4 !0h|{X- ^Evܥ^® eUwEb$ǤfWϫ>1b\\PDS$./[68R88y:j],%bWn6ݗ(ΟvU(x; KQjǎ:50ʒ *lCcgkruQ cdj,cy/1P\Qve \c5.լDDE~NѼ,yd[>U+%(wd*Y\jo]7Թ0Ց.Fى"bkce Zsٍ;j} PY>[\\|ÊtnsC۶vřpIAf닟vtyJF_JUk$y2A Tu%CEnk MXb>wc;p yFnXMg6i4]g P.q&t_čndԗJBU{-gKQ3p_~m.w@է!|R} oj-%kmM 
T5tmÂ~ƙw::aŦE*9c-|3idHkմ?xOv>X6>47W$]+72,O&ЍU6rQieZߌ?7<)i==AeL4Bbz#5QqNXc9WÄBtUЁ5qSK,qP^W{@uvP+ۺ/P&h~A0P₵snۨE%/?Uhz)?|VJ,brwH,%Vlş'{JÊJf}p݀>u6fy.5 ξFjʱ98tpS8@ѳ:~U >'x^-}:<3mU6">C߫H^OlڏT%`261T]{^䥅`ŎEpq$sph6\4ۻT$>gxCh9ʧ9{('`O&;HonWZ_8PT/E|ϯz:0Q8¼pR>@嚫V-6~ iβ$6$X&H\uNi@1ku\\[SR 5 FW }0JxM'"`09)p>FM<9 X%zKOtį1pQh-'M4U8 \(ӟH2x_N Lt(f!i7ao^Attzc;Bt,Uxu4<ÄJ^M{8T潐~e~MXF4XGlAN0qw*XXZJH§=hy_`٠ {Ӣ@^CӾ`GAt/qJ|+}ckKUܬ)}>w3C͋v߰;tcl'&sA:GmD6}56p, .-R8ۼ=ndxkPAoLJ(c%~ ] (*]k_hod$@4bJI7EXX-p ! 6f5G1@7u1K&n (ؙ#e性ϩ-Ծ>)٪^ #h"* -M%p~bypl8,9UxO/q_zk`}אO>5)rqypֳ&AlXx@SR꿂~gFaKښ;#Y|8{, JVMlyZ$9m&f1y/udV)y]SYAIu3p2<svMsֽXޮa{}d!#+*>x6WhO.pG v26c6c2FcL|NޏJd7K ۇN" 5C @p\!UIUi>~ &N[_mϘWfmzUT;1ϸ!.5uYG=(ЬG|~B4vۈo 4 f$XйNʺ,>tdp0(N%vˇu kE[OPvbkohDi&7?;ˎa<㯡ql tJ(/ίqdK@8[ϱFz,U`hycBpPs>[[5x@Ҥ$ u.q؊u}eKK}2Fݶ"kԷ̊0xҪ0@k+=$[I6SCǮsl[ 9 lGr ,d 6ޢU`Ri ai`t+u}۷6H*% |oeQ\x|[?|SLZgX}V%Q8ZUۼGYF>q'S[Ōjkyl0 nhȅ"aP亦Q (A^ô?v'2!q絽 8yd~:Qi%wY$bo@wn(FyzI.3h(TSz-/ IN, a+5gB(Iv4Cg|x|z `,r5lGM$*_2<Ϻ\YdM s{X%3/[qܡI2!;W2dq\Ԁu?:*uԗ  e$n~)fHd q`r<&*|Ê ga$7}]Bbp1Lx YTTndBf+8]ReT.kNYtFVrI)BD, v.5sK W?oGH >gIu*a:s,d Ht]xV0e./&Fh׃ӖN=jc5;΂{V<8:G\@\H|zVfx o.BçۅDF -¹sOelHC5hʰn:+\JPō .1%ejLbZ׷]H{s2ƣcW!{+͟Æc{ m)gge$G>1-f"ew G'}M}?'KqbpLϊr, z\Qc$ ;mX>|cr'NӕX/6k"ЙÎNխUi<H?"٣Ěk`\*ups̕1s_snWq>l3&:\9g7pqY@!e:h"D|`ͨ-r`\:SO@uAB{AŇ&$QDW(8LJ /da֟Ypi'shN΍MnC[T!k"֒hKjb>yP3&7A0]l{7B`n޻[Xt /[5ZU1=aWskka:5l!xt7ktJe`|ӛaa!6e4I{f1J^BzGRx-Y%q#lw^7.h-/~^V\݄8#2BUSyK=> ɰ9;%MBjj2[:45 7xΚ6KnKDM~entzJ2>{/OSK@nQD՟v˛~ y IB#jڐi,{m&MgRWwI WQOizOA/'ce5 BTT/A4 C4^)\=$Q"$| ˃LR ;z>Y2X6K2ĞT? aqܓҔZ-=W<5.1ګ3t5 ̚gAZZ5pB:1=Pc5ETU}Nl&B~KZPv۸j~ 3ho ;t$>P6+,C<|[k8WQ;yXGL&WzHds ])KX4l:`;@cdV&Nsޡ%QɊ͡x}l& ^Te JavËE .YP]^sZjP[t%iue ,{7EX $s%9hVL5Wj8: \ZDeo_E""ePYݐ35ѻEFI{ ?:}·ҊBm] w?"s6dWP(X&6&)矼R# z!c IIY d|P MF ps1Ik燽u\ Pc$F[C a]D@ # ntU."+N "]7M|= ^gEͼJ,jg{U cBn2~vm_UhRP%R]BLp7^Q"{Q]bb`(:jHv`X*['F5{6fH")Ww6ۙ|Xdehb$aϊ;R4e4~ߎ-_gk:)Umnso7I6!d^;$ S|ڗAzGՎ`Μ%1 dJu]mtBex7ݏoRV93Z;~aAGe8h%Wh3.xvb ,+yH~6pcrE\TF)U*`0SW-U{ႂ?2X, :/aN%{J6=HP5,r%#qy;uQ.-t%߯Ì+ 2!>bs}*IW9YT%a7M {х%? 
Gfcƻ%6]֭2Y<֓)hVnJpp/э'sD>kꂛbٜ@-p!Ugtg3.jroUQAVz!y5Fȕe0E[&Oc4i3b1lsapJ#I]Yp( .Q`iҾ_By,eL b_?OƍxI5Ε!,$[^s숦nu/g17}Ss5r4M;ܖtv/ x40B)U MS t;X_e0(WӱGGYɩ]S5?'R !1N+M.r Eݲ0S9\-P0!)M@* $CG!ʹ{b0W̧}G].fk3CzD{8͟  xQ5ԟK+#PE8V"0RCу8$Mԯz \404+J+(`o8;MM#9Y?tNApwZdV]>}QX%w5,RGOpC͗1*ʈ?R**Ϯ؟CIn˓AR^<cu\8եʁ;XnG4);3kIr7{SZ щ z,j}5.(q5\ͮ8u\ vip"Xg ܏a`.$#Ox@䶛UfޫѪX,ipJMoGnB?+gD)Z'/N S >V$IRsEAZF;ުm} +OX3DVG\GA[ K$Z^{Io_6<:U3kB#C*% jĕhW%HlHH۹/+ZwLZ}&$ple+J4co m7.| NɮOp_gO]5T[i>rq"^7Ylɽ!E ZDӷxW d&kd:I%!a!tA6|`϶n.^^}R`(z, "U @&Tx!53甥l_+ cbo\>̼|ҿ`~ӚӘ"ON-ªDžf{ aI! %8o]VǍwЂ ;y98${鵉 5 ;yoނT`% Zvb^쁙pPdiJrS)izb;k%\^lA'83K@*gAZnYKc+M?;z ;68leg|8Crt m+8hY]0NxA&a, d6ܠ#6r9l@L%8y?9 <*7aXI"US^֨N,D lLV3~9Zlf3Xt7_^mZ1P5KIrnxԃ]sȇ\(TBD\\ y]t1N85[~LpA p&@`vv/#ZT&jK4 A$nR_–t%Wbꔑs24O]lƌLF|f_ƕdýpӸCoIeLjC=uȞlKk"(&2OtL@٠N#D#C@>O#y fİn#~ ZɎ=5==YyDy: -`o6ZMy7wS_K߂40Y eD [ti ϐ]E{n9S*8\{/ifkQ{g5;04|BCI`Hq#7sZBtY3Z0狘x4[XKa}GMK>)a4Vql5'^0e )e #-i"g|՜ޢ@x`w I^Sr>Ny-\]h!GŨ%H{ɜZ2N`^ϕ fz7W%{o S@k0 &TF*E<^lB6f eNYhn84;"xb{%" ƧRlusy)(w񻫫|x [ CbZHxU̻ݷf CWmu*'TQHkIf#fl$V64•כ[vkNUqm/,i.}+G\^ъA$oRL{ hNۥlkҤFbV6[`-]qOC Ʃױo&?}XިMrE lN02{0]|,b_ ox듛RGan?ϘJAb{p'N{P=OI9+澍? 
+wi /#\S̞hZf W6-;_0KM]bWد'6շ=HNa΃|-Z-7DAcf ]KN`qiqpM9$zVir2b |zq^n[ulVXǥS#4 }  eO+<~CꌹxV bYlɜ;f JJH ח[$/kԖ&9&<?#[;` \>>eYx|a^LUp һk9>qD-ΞCD9UT{Z¾Y Yɟ8&f#⯴),δr8'qIHc{p^,tդ$ڀHxzbKWK* ƁP[)^X~ 21vSٛ@9FK’{nv f](4ݞb%ߟ*SӖ"2Bm]LuGS 9jf7wLsҽ(,]Sr!9CbELM\Tżi~Jڛʖaɇ~%X]\X}HҴWs.ix9FNwf>@,# ')7zx{UE[2炬&""tGOdNw6gi=z;E˲C]FڜBENj]Je1`uopdz>4fH-poԔpi<*tX7Xx!Ў$MlUy\ŚrƠ6s ߐIh_$EuCrp`|>y @-bkTG7IH])'8CLs=(ɌCKl/™3y*Ka7?@bI0~AVR FX5L6A:  2dC^`ށc-!ʤwsXdC_7dԀ6.8js^D%9ٙb낚o4EA ۹GJӹ۠1Z@ya5jpvlEF|+I'aD]/IVe31ft <#Ȁ\[ϭxTWT879H,|T>US\Ufma4qCs0Šqz<&dkqJyh:*X\]{lN m")_bB]1cD76F6N'us{ol $TL200mz{w{D䪶@!uZHij%tHiy'4@v%n_e# Bذ2y%:{GDzRK<SGd6X}dLuE_<=2@eP6y"J|Lt֕H[ɨM#ڸ=80_iu4/COm"=\"Qΰ1[;WQ}:c7qMgDӁ6I\D3I|.smBZצKLҡ/9B""|} "}'3p͚WD`glݣ=F6ϝ5cș& FZӻ{yOGO'N\l SEWàm-΍r#' TZ{hW,\IK5gӐ@39< 5Vl f=i 7NUGUiNA3p6G`RwoHvO">ἌE)xA鄶tIO ~^w5JFMsHL3z3c]T~8g)\=k4gWz4iN+.bf+O Bޥ}%VqI[j9dR㸋i%/4Ȇ{7/6 #u GS%`%tKKk!Q>42,I.U"r[{ɳX"!s mR7\˹=(;nYtމ5+Hٯ;wkUɐlO: kgV{ fv'+r({uLTt:y 9 X) \!HL+M7VLTVn9 1A`T N־䉭xh4)X*:Rwօ:?|ח+F:9:Zbp_8/bԓ;NTC4vWonΚ<0vWaJv~RB Ͽ1WȈ=lK]~2OxNQa؜G?;; $[%[Cx}l,&8#,9,u꫷a(٠Ѣ xT زwGTR,S>P~ ̠::cenxW1h&M9 Y(^!R+ejl6 hRF#Zx q^eVK!`EH߅KN -=*.g] g%Ge*7~vliP;.޶f $a Gk_Tae)(H ftGyQ-ɾcI u иBJI;E iT<>}O*EIZ:RE:OFͰ&U—[0SMQn6:wE#vAZM7\ϑPdT\# wX" n`ou4t`iA8`>g ~IdBȞ#.]JYy4OI(Y%uD*Jm̤_g|7ei gII*F/ɎF 0+z'N04Bw^d -wb2 N:‘Dk;ļέWYJvL=K-'D$}Q+\_i]ے,` e&70#1`݉܀_a; KXd8pv-e~ږ+yaMohS=WJ Gp7*5W;&wx,H瘑>W++K$̑ALZF2-VQbAcq3AEr˶4l Њ;A7($2޺, m2kemK uB ,h" j( {+]gWbqkCk&ạqv=r!tf8arV3CCRs)rQ.dBn SvKW*x=MvOדUth(J'Fs=~ƥFTj f;_`Vڡu+ C>{2s(e,W2((E'~H)z5H y4{TĨqV +i42gb9ogHmu١" &,<d~# s}y?8+!.w͎6~&PqׯZϩ%hoEf֔o||)ct_9+dƃH8=84C~ژ~^µ4‰qa9lK۰^!w_!/' rsDQՌSLi$LMIPN`nJQܡOmM}\qNzd"r 6 ]M6`}G2ʵ`42U+XYJ2n4&XpbpS }WK |L.#o;ԠPB~sF0X")|S.1#>^L[Kb-b#ָa煎\Nu!lP-' }pwrì^<CDM`8*|[\7^c( ʊFF,1V:XXK1D K{Xr$k)RV)APO*b+e!rLM$h9QK;XT.t ;JSng{wu}{ɿ?jco(.;YnYREdեu62O,RPWX6p܏k }=E1qg+ :r[ hh Adp*T:}d13IH/A6]McSs+v{ C#zØlWi5#Zwd2*9xu2aGsuHg VMCƪQr>jRP@qs?a4s=9UG [asdnY{Zb#6]k5B T#}ne Ane^{/M}fRG?r8B]!aL72ӈXϐ!"@Z]4bwn;Ŷi"(~.=-+etDǀ/AsKH>o򨰶w4A7Q;>r*IeE,' J @C^<+eG 
"m5FB)bLr_cc;$Ky~~#JB%qqLnB pbOPy-Ż]ߖ%=cqcLvyoA(g`#̲e# ><#@-u"NP7J!/+|(AY0O+^UO2p jYd5I!6^(oQT12AFsDs'Єp-Fd,K)$*'HDKSqA5(ybgÓO,25c7u܋EˬI,'vm^>JJ2>E M5xʷnufF&Ο>|& nAEIy* ^DhT.*$+:?ЂXˈ)F(^btżvp;-U'rEgsxT:50w#_s/ROu6HbOp:))G.+Ŭq˩~z8BUZOKpypMYiFSؖϵ͆T5z鎂p_ޡ0(r90o ;z& I8>_L H&QJ7D >ȱTD0}F4jJĵHC9W0ڬkXNzѡ\@%Z`_lLFfYA(r&EG!@#zDLY=Funiu.H}zgB$|/zh1K$o'R_oIR|a,/ {qVF}uGlۂ(2D?.kdZPFjT(y5/Gb0w`Y8lRt+lI#)}ISsy 1@[wDQdW \vI?p$ȿ0c!Gx'VpJ3mJ)!U `oָ 0PJ*$BX^#DᔜZL2 m CuEWpAzpd!A?1$Mof8(ȫFYuLY&rD;BK께Y=)ܦȭ%J i9WJ5n)/>i;^O_uadFi|m*2O'rHS4S.{%qC,jn%5aú!pyPQQoZ]Ek"~ijGWtd W հ+\WWr-5N h J_p}4$ܥ9h]D,nؙZ `"av%S1[jE㈮009Ec0$/՛(fEgkՃ4`Вd]64!:Jwa{Fmqh1 ٓoqgG gQ9n i0HCue؅5=)H -,􂎑۲L3K|mjh$ l`m_N!9spn[1G%2j:3 ZNU/n[wn52n11;-ޢ@` O2ur:B.EzJi}K,_&˲Y0Wj9LO\18Y0Mf B8*䷄9`oIX՜7 ǘ5m4aC bQ` 2rGFX6oif_) gr*99PL|DA!+:01 +;:,TlQ nxs0W9a=1GU/f)VuI6ME]|~Qı. uK^K5"7sJa58|dK8 &Qc1αw &N1R x4L+3`УL3wSLjY` C# -7s`^oZ>,=DZx *j̝[8qz01rCɈDνC\HF ~%/w^Z3 Cj3rEuU\dJ9(aXuIr[5iPeihğ6l%ZΔ}b6F#S=(~A:?k&S3q 7;E"2mj +>ECn(u p [pTn9/4d&|yZQtEG]Ĕ&1y%Y'2W0aV]BEoAXGoӝ}~w_YeDp!p,[ mSw2WK4~·~Ɨo xNZB95p;qd+WwS7"qCV~9z NK( 4;gc*~|&PcbDGkoݧ ^[`vQvpd׿^% %qw|̾Rn?9"pp|R*hPp _L˼9`v] ]cd"KU˾b;xR۴1btu"K2Л,&eb!\1r-S1ļ/-P8(3K!8]芆p8EEA9fxX{{w!0mJ+11v^cD~)˳??:`+\mt ]# 1I gNOyjk[j,U)Ml?}~w6foS+s"K*Nu*kxRn)`] ,j8cCގ*KO\UŖfye$Ћ2Ke8dM31n%k>T[=&A<n{EJC/+ r"IT+/&It$v\ V+ oO\"+,Yp+w><-0ZհgI>SVQpFBtڤS˞ՃO'—L[bܜ/@B`q';#5:-MoYH>^կ.|%WF .'̰D}Pğ{970ހu?* da0Wąӈ_.t=$~K1 8Q?eq91iKr6W%7>7w].H "` ȅAg{mJ=/ih]}ģwڠrߢH2b%ٵJ6_`3x4񏦌/41c " p\o쓵AqѶF"';rX7/9Ҷh,?%&9Ϫ!C벓wC|Am 1n(k:r݆xW`X٥{TVIrj uUb9w6,/U̫w<*4ʨeZzkPAscb4v{2S0r~aa܂|@.m;`@Xʓؙ!4 ͩ5T<k4IVR=gq磹[ *ϓD3*'KTt7댲..?aJJ*È,F:mwB5aAsj΋{ qhs|T;S i&^u'mM6i)ί=CD~OWIb.1H{lǴsL<Ӥ6KosRONKΣ?qםs;{_7iڠ_K"qB!3հxEJ(^fZ^ 4qRI67|kHp&ן+yqY 42-_wc@H7lvndzh*{J kQKU-rmTzK:4;hRC@gMW[v*H`v-oT {íSWr޶ Δ Q^CN{tS3PO~RZcluY!LhᝍyH'j}JvN!ZA3̴Ӯ;Zjә `| QeDpo=8k)#glǫ p{a~a8_arňlι: XjqMe4ƀBG8w_Zhߋt^-AE\2q=9E_^aqoK<@r ~}%l3{)unN@=T<}8BiΌP?= A~:Ǝ/x_gz=2PVT3/" N;ʔ> 0pkb#b ./:D;ʌ>1VJIp 4R/>Tc r=G1x_~[Op;~ʣ@_wJc`^`&7?T@eXu4}ѹfk_;݀PٝlKq/9w=EFwڡ\wn O 
pysph-master/pysph/examples/rigid_body/sphere_in_vessel_akinci.py

"""A sphere of density 500 falling into a hydrostatic tank (15 minutes)

Check basic equations of SPH to throw a ball inside the vessel
"""
from __future__ import print_function
import numpy as np

# PySPH base and carray imports
from pysph.base.utils import (get_particle_array_wcsph,
                              get_particle_array_rigid_body)
from pysph.base.kernels import CubicSpline

from pysph.solver.solver import Solver
from pysph.sph.integrator import EPECIntegrator
from pysph.sph.integrator_step import WCSPHStep

from pysph.sph.equation import Group
from pysph.sph.basic_equations import (XSPHCorrection, SummationDensity)
from pysph.sph.wc.basic import TaitEOSHGCorrection, MomentumEquation
from pysph.solver.application import Application
from pysph.sph.rigid_body import (
    BodyForce, SummationDensityBoundary, RigidBodyCollision,
    RigidBodyMoments, RigidBodyMotion, AkinciRigidFluidCoupling,
    RK2StepRigidBody)


def create_boundary():
    dx = 2
    # bottom particles in tank
    xb = np.arange(-2 * dx, 100 + 2 * dx, dx)
    yb = np.arange(-2 * dx, 0, dx)
    xb, yb = np.meshgrid(xb, yb)
    xb = xb.ravel()
    yb = yb.ravel()

    xl = np.arange(-2 * dx, 0, dx)
    yl = np.arange(0, 250, dx)
    xl, yl = np.meshgrid(xl, yl)
    xl = xl.ravel()
    yl = yl.ravel()

    xr = np.arange(100, 100 + 2 * dx, dx)
    yr = np.arange(0, 250, dx)
    xr, yr = np.meshgrid(xr, yr)
    xr = xr.ravel()
    yr = yr.ravel()

    x = np.concatenate([xl, xb, xr])
    y = np.concatenate([yl, yb, yr])

    return x * 1e-3, y * 1e-3


def create_fluid():
    dx = 2
    xf = np.arange(0, 100, dx)
    yf = np.arange(0, 150, dx)
    xf, yf = np.meshgrid(xf, yf)
    xf = xf.ravel()
    yf = yf.ravel()
    return xf * 1e-3, yf * 1e-3


def create_sphere(dx=1):
    x = np.arange(0, 100, dx)
    y = np.arange(151, 251, dx)
    x, y = np.meshgrid(x, y)
    x = x.ravel()
    y = y.ravel()

    p = ((x - 50)**2 + (y - 200)**2) < 20**2
    x = x[p]
    y = y[p]

    # lower sphere a little
    y = y - 20
    return x * 1e-3, y * 1e-3


def get_density(y):
    height = 150
    c_0 = 2 * np.sqrt(2 * 9.81 * height * 1e-3)
    rho_0 = 1000
    height_water_clmn = height * 1e-3
    gamma = 7.
    _tmp = gamma / (rho_0 * c_0**2)

    rho = np.zeros_like(y)
    for i in range(len(rho)):
        p_i = rho_0 * 9.81 * (height_water_clmn - y[i])
        rho[i] = rho_0 * (1 + p_i * _tmp)**(1. / gamma)
    return rho


def geometry():
    import matplotlib.pyplot as plt
    # please run this function to know how
    # geometry looks like
    x_tank, y_tank = create_boundary()
    x_fluid, y_fluid = create_fluid()
    x_cube, y_cube = create_sphere()

    plt.scatter(x_fluid, y_fluid)
    plt.scatter(x_tank, y_tank)
    plt.scatter(x_cube, y_cube)
    plt.axes().set_aspect('equal', 'datalim')
    print("done")
    plt.show()


class RigidFluidCoupling(Application):
    def initialize(self):
        self.dx = 2 * 1e-3
        self.hdx = 1.2
        self.ro = 1000
        self.solid_rho = 500
        self.m = 1000 * self.dx * self.dx
        self.co = 2 * np.sqrt(2 * 9.81 * 150 * 1e-3)
        self.alpha = 0.1

    def create_particles(self):
        """Create the circular patch of fluid."""
        xf, yf = create_fluid()
        m = self.ro * self.dx * self.dx
        rho = self.ro
        h = self.hdx * self.dx
        fluid = get_particle_array_wcsph(x=xf, y=yf, h=h, m=m, rho=rho,
                                         name="fluid")

        dx = 2
        xt, yt = create_boundary()
        m = 1000 * self.dx * self.dx
        rho = 1000
        rad_s = 2 / 2. * 1e-3
        h = self.hdx * self.dx
        V = dx * dx * 1e-6
        tank = get_particle_array_wcsph(x=xt, y=yt, h=h, m=m, rho=rho,
                                        rad_s=rad_s, V=V, name="tank")
        for name in ['fx', 'fy', 'fz']:
            tank.add_property(name)

        dx = 1
        xc, yc = create_sphere(1)
        m = self.solid_rho * dx * 1e-3 * dx * 1e-3
        rho = self.solid_rho
        h = self.hdx * self.dx
        rad_s = dx / 2. * 1e-3
        V = dx * dx * 1e-6
        cs = 0.0
        cube = get_particle_array_rigid_body(x=xc, y=yc, h=h, m=m, rho=rho,
                                             rad_s=rad_s, V=V, cs=cs,
                                             name="cube")
        return [fluid, tank, cube]

    def create_solver(self):
        kernel = CubicSpline(dim=2)

        integrator = EPECIntegrator(fluid=WCSPHStep(), tank=WCSPHStep(),
                                    cube=RK2StepRigidBody())

        dt = 0.125 * self.dx * self.hdx / (self.co * 1.1) / 2.
        # dt = 1e-4
        print("DT: %s" % dt)
        tf = 0.5
        solver = Solver(
            kernel=kernel,
            dim=2,
            integrator=integrator,
            dt=dt,
            tf=tf,
            adaptive_timestep=False,
        )
        return solver

    def create_equations(self):
        equations = [
            Group(equations=[
                BodyForce(dest='cube', sources=None, gy=-9.81),
            ], real=False),
            Group(equations=[
                SummationDensity(
                    dest='fluid',
                    sources=['fluid'],
                ),
                SummationDensityBoundary(
                    dest='fluid', sources=['tank', 'cube'], fluid_rho=1000.0)
            ]),

            # Tait equation of state
            Group(equations=[
                TaitEOSHGCorrection(dest='fluid', sources=None, rho0=self.ro,
                                    c0=self.co, gamma=7.0),
            ], real=False),
            Group(equations=[
                MomentumEquation(dest='fluid', sources=['fluid'],
                                 alpha=self.alpha, beta=0.0, c0=self.co,
                                 gy=-9.81),
                AkinciRigidFluidCoupling(dest='fluid',
                                         sources=['cube', 'tank']),
                XSPHCorrection(dest='fluid', sources=['fluid', 'tank']),
            ]),
            Group(equations=[
                RigidBodyCollision(dest='cube', sources=['tank'], kn=1e5)
            ]),
            Group(equations=[RigidBodyMoments(dest='cube', sources=None)]),
            Group(equations=[RigidBodyMotion(dest='cube', sources=None)]),
        ]
        return equations


if __name__ == '__main__':
    app = RigidFluidCoupling()
    app.run()

pysph-master/pysph/examples/rigid_body/ten_spheres_in_vessel_2d.py

"""Ten spheres of different density falling into a hydrostatic tank
implemented using Akinci in 2d.
(15 minutes)

Check basic equations of SPH to throw a ball inside the vessel

Geometry will be like

  ^ ||      ___       ___            ___       ___                      ||  |
    ||     /   \     /   \          /   \     /   \                     ||  |
    ||    |     |   |     |        |     |   |     |                    ||  |
    ||  ___\___/ ___\___/ ___   ___ \___/ ___\___/ ___                  ||  |
    || /   \    /   \    /   \ /   \     /   \    /   \                 ||  |
    || |    |  |     |  |     ||    |   |     |  |     |                ||  |
    || \___/    \___/    \___/  \___/    \___/    \___/                 ||  |
    ||__________________________________________________________________||  500mm
    ||                             ^                                    ||  |
    ||                             |                                    ||  |
    ||                             |                                    ||  |
    ||                             | 300 mm                             ||  |
    ||                             |                                    ||  |
    ||                             |                                    ||  |
    ||                             v                                    ||  |
    ||_____________________________v____________________________________||  |
    ||__________________________________________________________________||  v
    <--------------------------1000mm------------------------------------->
"""
from __future__ import print_function
import numpy as np

# PySPH base and carray imports
from pysph.base.utils import (get_particle_array_wcsph,
                              get_particle_array_rigid_body)
from pysph.base.kernels import CubicSpline

from pysph.solver.solver import Solver
from pysph.sph.integrator import EPECIntegrator
from pysph.sph.integrator_step import WCSPHStep

from pysph.sph.equation import Group
from pysph.sph.basic_equations import (XSPHCorrection, SummationDensity)
from pysph.sph.wc.basic import TaitEOSHGCorrection, MomentumEquation
from pysph.solver.application import Application
from pysph.sph.rigid_body import (
    BodyForce, RigidBodyCollision, SummationDensityBoundary,
    RigidBodyMoments, RigidBodyMotion, AkinciRigidFluidCoupling,
    RK2StepRigidBody)


def get_2d_dam(length=10, height=15, dx=0.1, layers=2):
    _x = np.arange(0, length, dx)
    _y = np.arange(0, height, dx)
    x, y = np.meshgrid(_x, _y)
    x, y = x.ravel(), y.ravel()

    # get particles inside the tank
    cond = ((x > (layers - 1) * dx)) & ((x < (x[-1] - (layers - 1) * dx)) &
                                        (y > (layers - 1) * dx))

    # exclude inside particles
    x, y = x[~cond], y[~cond]

    return x, y


def get_2d_block(length=10, height=15, dx=0.1):
    x = np.arange(0, length, dx)
    y = np.arange(0, height, dx)
    x, y = np.meshgrid(x, y)
    x, y = x.ravel(), y.ravel()

    return x, y


def get_fluid_and_dam_geometry(d_l, d_h, f_l, f_h, d_layers, d_dx, f_dx,
                               fluid_left_extreme=None):
    xd, yd = get_2d_dam(d_l, d_h, d_dx, d_layers)
    xf, yf = get_2d_block(f_l, f_h, f_dx)

    if fluid_left_extreme:
        x_trans, y_trans = fluid_left_extreme
        xf += x_trans
        yf += y_trans
    else:
        xf += 2 * d_dx
        yf += 2 * d_dx

    return xd, yd, xf, yf


def get_circle(centre=[0, 0], radius=1, dx=0.1):
    x = np.arange(0, radius * 2, dx)
    y = np.arange(0, radius * 2, dx)
    x, y = np.meshgrid(x, y)
    x, y = x.ravel(), y.ravel()

    cond = ((x - radius)**2 + (y - radius)**2) <= radius**2
    x, y = x[cond], y[cond]

    x_trans = centre[0] - radius
    y_trans = centre[1] - radius

    x = x + x_trans
    y = y + y_trans

    return x, y


def create_ten_circles(radius=20 * 1e-3, spacing=1 * 1e-3,
                       fluid_height=300 * 1e-3):
    x1, y1 = get_circle(
        centre=[100 * 1e-3, fluid_height + radius + 30 * 1e-3],
        radius=radius, dx=spacing)
    x2, y2 = x1 + 2 * radius, y1 + 3 * radius
    x3, y3 = x2 + 2 * radius, y1
    x4, y4 = x3 + 2 * radius, y2
    x5, y5 = x4 + 2 * radius, y3

    x_left = np.concatenate([x1, x2, x3, x4, x5])
    y_left = np.concatenate([y1, y2, y3, y4, y5])

    # x_middle, y_middle = x1 + 400 * 1e-3, y1 + 300 * 1e-3

    x_right = x_left + 500 * 1e-3
    y_right = y_left

    x = np.concatenate([x_left, x_right])
    y = np.concatenate([y_left, y_right])

    return x, y


def get_rho_of_each_sphere(xc, yc, radius=20 * 1e-3, spacing=1 * 1e-3):
    x1, y1 = get_circle(radius=radius, dx=spacing)
    pars = len(x1)
    rho = np.ones_like(xc)
    no_of_spheres = int(len(rho) / len(x1))
    for i in range(no_of_spheres):
        if i < 5:
            rho[i * pars:(i + 1) * pars] = 500
        if i >= 5:
            rho[i * pars:(i + 1) * pars] = 1500

    return rho


def get_body_id_of_each_sphere(xc, yc, radius=20 * 1e-3, spacing=1 * 1e-3):
    x1, y1 = get_circle(radius=radius, dx=spacing)
    pars = len(x1)
    body_id = np.ones_like(xc, dtype=int)
    no_of_spheres = int(len(body_id) / len(x1))
    for i in range(no_of_spheres):
        body_id[i * pars:(i + 1) * pars] = i

    return body_id


class RigidFluidCoupling(Application):
    def initialize(self):
        self.dam_length = 1000 * 1e-3
        self.dam_height = 500 * 1e-3
        self.dam_spacing = 2 * 1e-3
        self.dam_layers = 3

        self.fluid_length = (
            1000 * 1e-3 - 3 * self.dam_layers * self.dam_spacing)
        self.fluid_height = 300 * 1e-3
        self.fluid_spacing = 5 * 1e-3
        self.fluid_rho = 1000.

        self.sphere_radius = 30 * 1e-3
        self.sphere_spacing = 4 * 1e-3

        # simulation properties
        self.hdx = 1.2
        self.co = 2 * np.sqrt(2 * 9.81 * self.fluid_height)
        self.alpha = 0.1

    def create_particles(self):
        # get the geometry
        xt, yt, xf, yf = get_fluid_and_dam_geometry(
            self.dam_length, self.dam_height, self.fluid_length,
            self.fluid_height, self.dam_layers, self.dam_spacing,
            self.fluid_spacing,
            [3 * self.dam_spacing, 3 * self.dam_spacing])

        # create fluid particle array
        m = self.fluid_rho * self.fluid_spacing * self.fluid_spacing
        rho = self.fluid_rho
        h = self.hdx * self.fluid_spacing
        fluid = get_particle_array_wcsph(x=xf, y=yf, h=h, m=m, rho=rho,
                                         name="fluid")

        # create tank particle array
        m = self.fluid_rho * self.dam_spacing * self.dam_spacing
        rho = 1000
        rad_s = self.dam_spacing / 2.
        h = self.hdx * self.dam_spacing
        V = self.dam_spacing**2
        tank = get_particle_array_wcsph(x=xt, y=yt, h=h, m=m, rho=rho,
                                        rad_s=rad_s, V=V, name="tank")
        for name in ['fx', 'fy', 'fz']:
            tank.add_property(name)

        xc, yc = create_ten_circles(radius=self.sphere_radius,
                                    spacing=self.sphere_spacing,
                                    fluid_height=self.fluid_height)

        # get density of each sphere
        rho = get_rho_of_each_sphere(xc, yc, radius=self.sphere_radius,
                                     spacing=self.sphere_spacing)
        # get bodyid for each sphere
        body_id = get_body_id_of_each_sphere(xc, yc,
                                             radius=self.sphere_radius,
                                             spacing=self.sphere_spacing)

        m = rho * self.sphere_spacing**2
        h = self.hdx * self.sphere_spacing
        rad_s = self.sphere_spacing / 2.
V = self.sphere_spacing**2 cs = 0.0 cube = get_particle_array_rigid_body(x=xc, y=yc, h=h, m=m, rho=rho, rad_s=rad_s, V=V, cs=cs, body_id=body_id, name="cube") return [fluid, tank, cube] def create_solver(self): kernel = CubicSpline(dim=2) integrator = EPECIntegrator(fluid=WCSPHStep(), cube=RK2StepRigidBody(), tank=WCSPHStep()) dt = 1 * 1e-4 print("DT: %s" % dt) tf = 1 solver = Solver( kernel=kernel, dim=2, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=False, ) return solver def create_equations(self): equations = [ Group(equations=[ BodyForce(dest='cube', sources=None, gy=-9.81), ], real=False), Group(equations=[ SummationDensity( dest='fluid', sources=['fluid'], ), SummationDensityBoundary( dest='fluid', sources=['tank', 'cube'], fluid_rho=1000.0) ]), # Tait equation of state Group(equations=[ TaitEOSHGCorrection(dest='fluid', sources=None, rho0=self.fluid_rho, c0=self.co, gamma=7.0), ], real=False), Group(equations=[ MomentumEquation(dest='fluid', sources=['fluid'], alpha=self.alpha, beta=0.0, c0=self.co, gy=-9.81), AkinciRigidFluidCoupling(dest='fluid', sources=['cube', 'tank']), XSPHCorrection(dest='fluid', sources=['fluid', 'tank', 'cube']), ]), Group(equations=[ RigidBodyCollision(dest='cube', sources=['tank', 'cube'], kn=1e5) ]), Group(equations=[RigidBodyMoments(dest='cube', sources=None)]), Group(equations=[RigidBodyMotion(dest='cube', sources=None)]), ] return equations def geometry(self): import matplotlib.pyplot as plt # please run this function to know how # geometry looks like x_tank, y_tank, x_fluid, y_fluid = get_fluid_and_dam_geometry( self.dam_length, self.dam_height, self.fluid_length, self.fluid_height, self.dam_layers, self.dam_spacing, self.fluid_spacing, [3 * self.dam_spacing, 3 * self.dam_spacing]) x_cube, y_cube = create_ten_circles(radius=self.sphere_radius, spacing=self.sphere_spacing, fluid_height=self.fluid_height) plt.scatter(x_fluid, y_fluid) plt.scatter(x_tank, y_tank) plt.scatter(x_cube, y_cube) 
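`get_body_id_of_each_sphere` above relies on every rigid body contributing the same number of particles, so the body id array is just each body index repeated once per particle. A standalone sketch of that labelling (`assign_body_ids` is a hypothetical helper, not a PySPH API):

```python
import numpy as np

# Label particles by rigid body: body 0's particles first, then body 1's,
# and so on, with `particles_per_body` particles per body.
def assign_body_ids(n_bodies, particles_per_body):
    return np.repeat(np.arange(n_bodies, dtype=int), particles_per_body)
```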
plt.axes().set_aspect('equal', 'datalim') print("done") plt.show() if __name__ == '__main__': app = RigidFluidCoupling() app.run() # app.geometry() pysph-master/pysph/examples/rigid_body/three_cubes_in_vessel_3d.py000066400000000000000000000203531356347341600261240ustar00rootroot00000000000000"""Three cubes of different density falling into a static tank of water in 3d.(20 minutes) An example to illustrate 3d pysph framework """ from __future__ import print_function import numpy as np from pysph.base.utils import (get_particle_array_wcsph, get_particle_array_rigid_body) # PySPH base and carray imports from pysph.base.kernels import CubicSpline from pysph.solver.solver import Solver from pysph.sph.integrator import EPECIntegrator from pysph.sph.integrator_step import WCSPHStep from pysph.sph.equation import Group from pysph.sph.basic_equations import (XSPHCorrection, ContinuityEquation) from pysph.sph.wc.basic import TaitEOSHGCorrection, MomentumEquation from pysph.solver.application import Application from pysph.sph.rigid_body import ( BodyForce, RigidBodyCollision, RigidBodyMoments, RigidBodyMotion, AkinciRigidFluidCoupling, RK2StepRigidBody) def get_3d_dam(length=10, height=15, depth=10, dx=0.1, layers=2): _x = np.arange(0, length, dx) _y = np.arange(0, height, dx) _z = np.arange(0, depth, dx) x, y, z = np.meshgrid(_x, _y, _z) x, y, z = x.ravel(), y.ravel(), z.ravel() # get particles inside the tank tmp = layers - 1 cond_1 = (x > tmp * dx) & (x < _x[-1] - tmp * dx) & (y > tmp * dx) cond_2 = (z > tmp * dx) & (z < z[-1] - tmp * dx) cond = cond_1 & cond_2 # exclude inside particles x, y, z = x[~cond], y[~cond], z[~cond] return x, y, z def get_3d_block(length=10, height=15, depth=10, dx=0.1): x = np.arange(0, length, dx) y = np.arange(0, height, dx) z = np.arange(0, depth, dx) x, y, z = np.meshgrid(x, y, z) x, y, z = x.ravel(), y.ravel(), z.ravel() return x, y, z def get_fluid_and_dam_geometry_3d(d_l, d_h, d_d, f_l, f_h, f_d, d_layers, d_dx, f_dx, 
fluid_left_extreme=None, tank_outside=False): xd, yd, zd = get_3d_dam(d_l, d_h, d_d, d_dx, d_layers) xf, yf, zf = get_3d_block(f_l, f_h, f_d, f_dx) if fluid_left_extreme: x_trans, y_trans, z_trans = fluid_left_extreme xf += x_trans yf += y_trans zf += z_trans else: xf += 2 * d_dx yf += 2 * d_dx zf += 2 * d_dx return xd, yd, zd, xf, yf, zf def get_sphere(centre=[0, 0, 0], radius=1, dx=0.1): x = np.arange(0, radius * 2, dx) y = np.arange(0, radius * 2, dx) z = np.arange(0, radius * 2, dx) x, y, z = np.meshgrid(x, y, z) x, y, z = x.ravel(), y.ravel(), z.ravel() cond = ((x - radius)**2 + (y - radius)**2) + (z - radius)**2 <= radius**2 x, y, z = x[cond], y[cond], z[cond] x_trans = centre[0] - radius y_trans = centre[1] - radius z_trans = centre[2] - radius x = x + x_trans y = y + y_trans z = z + z_trans return x, y, z class RigidFluidCoupling(Application): def initialize(self): self._spacing = 4 self.spacing = self._spacing * 1e-3 self.dx = self.spacing self.hdx = 1.2 self.ro = 1000 self.solid_rho = 800 self.m = 1000 * self.dx * self.dx * self.dx self.co = 2 * np.sqrt(2 * 9.81 * 150 * 1e-3) self.alpha = 0.1 def create_particles(self): # get coordinates of tank and fluid tank_len = 150 tank_hei = 150 tank_dep = 150 layers = 2 flu_len = 150 - 2 * layers * self._spacing flu_hei = 52 flu_dep = 150 - 2 * layers * self._spacing xt, yt, zt, xf, yf, zf = get_fluid_and_dam_geometry_3d( d_l=tank_len, d_h=tank_hei, d_d=tank_dep, f_l=flu_len, f_h=flu_hei, f_d=flu_dep, d_layers=2, d_dx=self._spacing, f_dx=self._spacing) # scale it to mm xt, yt, zt, xf, yf, zf = (xt * 1e-3, yt * 1e-3, zt * 1e-3, xf * 1e-3, yf * 1e-3, zf * 1e-3) # get coordinates of cube xc, yc, zc = get_3d_block(20, 20, 20, self._spacing/2.) 
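`get_sphere` above builds a cubic lattice and keeps only the lattice points inside the sphere via a boolean mask. A minimal, self-contained version of that masking (parameter defaults are illustrative, not taken from the example):

```python
import numpy as np

# Build a cubic lattice spanning the sphere's bounding box, then keep only
# points whose distance from the centre (radius, radius, radius) is within
# the radius -- the same mask used by get_sphere above.
def sphere_points(radius=1.0, dx=0.25):
    s = np.arange(0, 2 * radius, dx)
    x, y, z = np.meshgrid(s, s, s)
    x, y, z = x.ravel(), y.ravel(), z.ravel()
    keep = (x - radius)**2 + (y - radius)**2 + (z - radius)**2 <= radius**2
    return x[keep], y[keep], z[keep]
```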
xc1, yc1, zc1 = ((xc + 60) * 1e-3, (yc + 120) * 1e-3, (zc + 70) * 1e-3) xc2, yc2, zc2 = ((xc + 4 * self._spacing) * 1e-3, (yc + 120) * 1e-3, (zc + 70) * 1e-3) xc3, yc3, zc3 = ((xc + 100) * 1e-3, (yc + 120) * 1e-3, (zc + 70) * 1e-3) xc, yc, zc = (np.concatenate((xc1, xc2, xc3)), np.concatenate( (yc1, yc2, yc3)), np.concatenate((zc1, zc2, zc3))) # Create particle array for fluid m = self.ro * self.spacing * self.spacing * self.spacing rho = self.ro h = self.hdx * self.spacing fluid = get_particle_array_wcsph(x=xf, y=yf, z=zf, h=h, m=m, rho=rho, name="fluid") # Create particle array for tank m = 1000 * self.spacing**3 rho = 1000 rad_s = self.spacing / 2. h = self.hdx * self.spacing V = self.spacing**3 tank = get_particle_array_wcsph(x=xt, y=yt, z=zt, h=h, m=m, rho=rho, rad_s=rad_s, V=V, name="tank") for name in ['fx', 'fy', 'fz']: tank.add_property(name) # Create particle array for cube h = self.hdx * self.spacing/2. # assign density of three spheres rho1 = np.ones_like(xc1) * 2000 rho2 = np.ones_like(xc1) * 800 rho3 = np.ones_like(xc1) * 500 rho = np.concatenate((rho1, rho2, rho3)) # assign body id's body1 = np.ones_like(xc1, dtype=int) * 0 body2 = np.ones_like(xc1, dtype=int) * 1 body3 = np.ones_like(xc1, dtype=int) * 2 body = np.concatenate((body1, body2, body3)) m = rho * (self.spacing/2.)**3 rad_s = self.spacing / 4. V = (self.spacing/2.)**3 cs = 0.0 cube = get_particle_array_rigid_body(x=xc, y=yc, z=zc, h=h, m=m, rho=rho, rad_s=rad_s, V=V, cs=cs, body_id=body, name="cube") print( fluid.get_number_of_particles(), tank.get_number_of_particles(), cube.get_number_of_particles(), ) return [fluid, tank, cube] def create_solver(self): kernel = CubicSpline(dim=3) integrator = EPECIntegrator(fluid=WCSPHStep(), tank=WCSPHStep(), cube=RK2StepRigidBody()) # dt = 0.125 * self.dx * self.hdx / (self.co * 1.1) / 2. 
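The commented-out time step in `create_solver` above is a CFL-style estimate, `dt = 0.125 * dx * hdx / (c0 * 1.1) / 2`. A sketch that just evaluates that formula (the helper name and the `cfl`/`safety` parameters are illustrative):

```python
import math

# CFL-style time step: a fraction of the time for sound (at c0, padded by
# 10%) to cross one smoothing length h = hdx * dx, halved for safety.
def cfl_time_step(dx, hdx, c0, cfl=0.125, safety=2.0):
    return cfl * dx * hdx / (c0 * 1.1) / safety
```

For this example's spacing and sound speed the estimate comes out below the fixed `dt = 1e-4` actually used.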
dt = 1e-4 print("DT: %s" % dt) tf = 0.6 solver = Solver( kernel=kernel, dim=3, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=False, ) return solver def create_equations(self): equations = [ Group(equations=[ BodyForce(dest='cube', sources=None, gy=-9.81), ], real=False), Group(equations=[ ContinuityEquation(dest='fluid', sources=['fluid', 'tank', 'cube']), ContinuityEquation(dest='tank', sources=['tank', 'fluid', 'cube']) ]), # Tait equation of state Group(equations=[ TaitEOSHGCorrection(dest='fluid', sources=None, rho0=self.ro, c0=self.co, gamma=7.0), TaitEOSHGCorrection(dest='tank', sources=None, rho0=self.ro, c0=self.co, gamma=7.0), ], real=False), Group(equations=[ MomentumEquation(dest='fluid', sources=['fluid', 'tank'], alpha=self.alpha, beta=0.0, c0=self.co, gy=-9.81), AkinciRigidFluidCoupling(dest='fluid', sources=['cube']), XSPHCorrection(dest='fluid', sources=['fluid', 'tank']), ]), Group(equations=[ RigidBodyCollision(dest='cube', sources=['tank', 'cube'], kn=1e5) ]), Group(equations=[RigidBodyMoments(dest='cube', sources=None)]), Group(equations=[RigidBodyMotion(dest='cube', sources=None)]), ] return equations if __name__ == '__main__': app = RigidFluidCoupling() app.run() pysph-master/pysph/examples/rigid_body/three_spheres_in_fluid.py000066400000000000000000000161331356347341600257110ustar00rootroot00000000000000"""Three sphere of different density falling into a hydrostatic tank implemented using Akinci in 2d. 
(15 minutes) Check basic equations of SPH to throw a ball inside the vessel """ from __future__ import print_function import numpy as np # PySPH base and carray imports from pysph.base.utils import (get_particle_array_wcsph, get_particle_array_rigid_body) from pysph.base.kernels import CubicSpline from pysph.solver.solver import Solver from pysph.sph.integrator import EPECIntegrator from pysph.sph.integrator_step import WCSPHStep from pysph.sph.equation import Group from pysph.sph.basic_equations import (XSPHCorrection, SummationDensity) from pysph.sph.wc.basic import TaitEOSHGCorrection, MomentumEquation from pysph.solver.application import Application from pysph.sph.rigid_body import ( BodyForce, SummationDensityBoundary, RigidBodyCollision, RigidBodyMoments, RigidBodyMotion, AkinciRigidFluidCoupling, RK2StepRigidBody) def create_boundary(): dx = 2 # bottom particles in tank xb = np.arange(-2 * dx, 500 + 2 * dx, dx) yb = np.arange(-2 * dx, 0, dx) xb, yb = np.meshgrid(xb, yb) xb = xb.ravel() yb = yb.ravel() xl = np.arange(-2 * dx, 0, dx) yl = np.arange(0, 250, dx) xl, yl = np.meshgrid(xl, yl) xl = xl.ravel() yl = yl.ravel() xr = np.arange(500, 500 + 2 * dx, dx) yr = np.arange(0, 250, dx) xr, yr = np.meshgrid(xr, yr) xr = xr.ravel() yr = yr.ravel() x = np.concatenate([xl, xb, xr]) y = np.concatenate([yl, yb, yr]) return x * 1e-3, y * 1e-3 def create_fluid(): dx = 2 xf = np.arange(0, 500, dx) yf = np.arange(0, 150, dx) xf, yf = np.meshgrid(xf, yf) xf = xf.ravel() yf = yf.ravel() return xf * 1e-3, yf * 1e-3 def create_sphere(dx=1): x = np.arange(0, 100, dx) y = np.arange(151, 251, dx) x, y = np.meshgrid(x, y) x = x.ravel() y = y.ravel() p = ((x - 50)**2 + (y - 200)**2) < 20**2 x = x[p] y = y[p] return x * 1e-3, y * 1e-3 def create_three_spheres(): x_cube1, y_cube1 = create_sphere() x_cube2, y_cube2 = x_cube1 + 200 * 1e-3, y_cube1 x_cube3, y_cube3 = x_cube2 + 200 * 1e-3, y_cube1 x_cube = np.concatenate([x_cube1, x_cube2, x_cube3]) y_cube = np.concatenate([y_cube1, 
y_cube2, y_cube3]) return x_cube, y_cube def properties_of_three_spheres(): x_cube, y_cube = create_sphere() b_id = np.array([]) b_id1 = np.ones_like(x_cube, dtype=int) * 0 rho_1 = np.ones_like(x_cube, dtype=int) * 2000 b_id2 = np.ones_like(x_cube, dtype=int) * 1 rho_2 = np.ones_like(x_cube, dtype=int) * 1000 b_id3 = np.ones_like(x_cube, dtype=int) * 2 rho_3 = np.ones_like(x_cube, dtype=int) * 500 b_id = np.concatenate([b_id1, b_id2, b_id3]) rho = np.concatenate([rho_1, rho_2, rho_3]) return b_id, rho def get_density(y): height = 150 c_0 = 2 * np.sqrt(2 * 9.81 * height * 1e-3) rho_0 = 1000 height_water_clmn = height * 1e-3 gamma = 7. _tmp = gamma / (rho_0 * c_0**2) rho = np.zeros_like(y) for i in range(len(rho)): p_i = rho_0 * 9.81 * (height_water_clmn - y[i]) rho[i] = rho_0 * (1 + p_i * _tmp)**(1. / gamma) return rho def geometry(): import matplotlib.pyplot as plt # please run this function to know how # geometry looks like x_tank, y_tank = create_boundary() x_fluid, y_fluid = create_fluid() x_cube, y_cube = create_three_spheres() plt.scatter(x_fluid, y_fluid) plt.scatter(x_tank, y_tank) plt.scatter(x_cube, y_cube) plt.axes().set_aspect('equal', 'datalim') print("done") plt.show() class RigidFluidCoupling(Application): def initialize(self): self.dx = 2 * 1e-3 self.hdx = 1.2 self.ro = 1000 self.solid_rho = 500 self.m = 1000 * self.dx * self.dx self.co = 2 * np.sqrt(2 * 9.81 * 150 * 1e-3) self.alpha = 0.1 def create_particles(self): """Create the circular patch of fluid.""" xf, yf = create_fluid() m = self.ro * self.dx * self.dx rho = self.ro h = self.hdx * self.dx fluid = get_particle_array_wcsph(x=xf, y=yf, h=h, m=m, rho=rho, name="fluid") dx = 2 xt, yt = create_boundary() m = 1000 * self.dx * self.dx rho = 1000 rad_s = 2 / 2. 
* 1e-3 h = self.hdx * self.dx V = dx * dx * 1e-6 tank = get_particle_array_wcsph(x=xt, y=yt, h=h, m=m, rho=rho, rad_s=rad_s, V=V, name="tank") for name in ['fx', 'fy', 'fz']: tank.add_property(name) dx = 1 xc, yc = create_three_spheres() b_id, rho = properties_of_three_spheres() m = rho * dx * 1e-3 * dx * 1e-3 h = self.hdx * self.dx rad_s = dx / 2. * 1e-3 V = dx * dx * 1e-6 cs = 0.0 cube = get_particle_array_rigid_body(x=xc, y=yc, h=h, m=m, rho=rho, rad_s=rad_s, V=V, cs=cs, body_id=b_id, name="cube") return [fluid, tank, cube] def create_solver(self): kernel = CubicSpline(dim=2) integrator = EPECIntegrator(fluid=WCSPHStep(), cube=RK2StepRigidBody()) dt = 0.125 * self.dx * self.hdx / (self.co * 1.1) / 2. print("DT: %s" % dt) tf = 2.8 solver = Solver( kernel=kernel, dim=2, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=False, ) return solver def create_equations(self): equations = [ Group(equations=[ BodyForce(dest='cube', sources=None, gy=-9.81), ], real=False), Group(equations=[ SummationDensity( dest='fluid', sources=['fluid'], ), SummationDensityBoundary( dest='fluid', sources=['tank', 'cube'], fluid_rho=1000.0) ]), # Tait equation of state Group(equations=[ TaitEOSHGCorrection(dest='fluid', sources=None, rho0=self.ro, c0=self.co, gamma=7.0), ], real=False), Group(equations=[ MomentumEquation(dest='fluid', sources=['fluid'], alpha=self.alpha, beta=0.0, c0=self.co, gy=-9.81), AkinciRigidFluidCoupling(dest='fluid', sources=['cube', 'tank']), XSPHCorrection(dest='fluid', sources=['fluid', 'tank']), ]), Group(equations=[ RigidBodyCollision(dest='cube', sources=['tank', 'cube'], kn=1e5) ]), Group(equations=[RigidBodyMoments(dest='cube', sources=None)]), Group(equations=[RigidBodyMotion(dest='cube', sources=None)]), ] return equations if __name__ == '__main__': app = RigidFluidCoupling() app.run() pysph-master/pysph/examples/run.py000066400000000000000000000140351356347341600176700ustar00rootroot00000000000000"""List and run PySPH examples. 
One can optionally supply the name of the example and any additional arguments. """ from __future__ import print_function import argparse import ast import os import sys HERE = os.path.dirname(__file__) def _exec_file(filename): ns = {'__name__': '__main__', '__file__': filename} if sys.version_info[0] > 2: co = compile(open(filename, 'rb').read(), filename, 'exec') exec(co, ns) else: execfile(filename, ns) def _extract_full_doc(filename): p = ast.parse(open(filename, 'rb').read()) return ast.get_docstring(p) def _extract_short_doc(dirname, fname): return open(os.path.join(dirname, fname)).readline()[3:].strip() def _get_module(fname): start = fname parts = ['pysph.examples'] while os.path.dirname(start) != '': dirname, start = os.path.split(start) parts.append(dirname) return '.'.join(parts + [start[:-3]]) def example_info(module, filename): print("Information for example: %s" % module) print(_extract_full_doc(filename)) def get_all_examples(): basedir = HERE examples = [] _ignore = [['run.py'], ['ghia_cavity_data.py'], ['db_exp_data.py'], ['tests', 'test_examples.py'], ['tests', 'test_riemann_solver.py'], ['gas_dynamics', 'shocktube_setup.py'], ['gas_dynamics', 'riemann_2d_config.py'], ['sphysics', 'beach_geometry.py'], ['sphysics', 'periodic_rigidbody.py']] ignore = [os.path.abspath(os.path.join(basedir, *pth)) for pth in _ignore] for dirpath, dirs, files in os.walk(basedir): rel_dir = os.path.relpath(dirpath, basedir) if rel_dir == '.': rel_dir = '' py_files = [x for x in files if x.endswith('.py') and not x.startswith('_')] data = [] for f in py_files: path = os.path.join(rel_dir, f) full_path = os.path.join(basedir, path) if os.path.abspath(full_path) in ignore: continue module = _get_module(path) doc = _extract_short_doc(dirpath, f) data.append((module, doc)) examples.extend(data) return examples def get_input(prompt): if sys.version_info[0] > 2: return input(prompt) else: return raw_input(prompt) def get_path(module): """Return the path to the module 
filename given the module. """ x = module[len('pysph.examples.'):].split('.') x[-1] = x[-1] + '.py' return os.path.join(HERE, *x) def guess_correct_module(example): """Given some form of the example name guess and return a reasonable module. Examples -------- >>> guess_correct_module('elliptical_drop') 'pysph.examples.elliptical_drop' >>> guess_correct_module('pysph.examples.elliptical_drop') 'pysph.examples.elliptical_drop' >>> guess_correct_module('solid_mech.rings') 'pysph.examples.solid_mech.rings' >>> guess_correct_module('solid_mech/rings.py') 'pysph.examples.solid_mech.rings' >>> guess_correct_module('solid_mech/rings') 'pysph.examples.solid_mech.rings' """ if example.endswith('.py'): example = example[:-3] example = example.replace('/', '.') if not example.startswith('pysph.examples.'): module = 'pysph.examples.' + example else: module = example return module def cat_example(module): filename = get_path(module) print("# File: %s" % filename) print(open(filename).read()) def list_examples(examples): for idx, (module, doc) in enumerate(examples): print("%d. %s" % (idx + 1, module[len('pysph.examples.'):])) print(" %s" % doc) def run_command(module, args): print("Running example %s.\n" % module) filename = get_path(module) if '-h' not in args and '--help' not in args: example_info(module, filename) # FIXME: This is ugly but we want the user to be able to run # mpirun -np 4 pysph run elliptical_drop # This necessitates that we do not use subprocess. The cleaner alternative # is to expect each user to write a main function which accepts args that # we can call. For now we just clobber sys.argv. 
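The module-name normalisation performed by `guess_correct_module` above can be exercised standalone; this copy of its logic lets the doctest cases be checked without importing pysph:

```python
# Standalone copy of the normalisation in guess_correct_module: strip a
# trailing .py, turn path separators into dots, and prefix the examples
# package if it is missing.
def to_example_module(example):
    if example.endswith('.py'):
        example = example[:-3]
    example = example.replace('/', '.')
    if not example.startswith('pysph.examples.'):
        example = 'pysph.examples.' + example
    return example
```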
sys.argv = [filename] + args _exec_file(filename) def main(argv=None): if argv is None: argv = sys.argv[1:] examples = get_all_examples() parser = argparse.ArgumentParser( prog="run", description=__doc__, add_help=False ) parser.add_argument( "-h", "--help", action="store_true", default=False, dest="help", help="show this help message and exit" ) parser.add_argument( "-l", "--list", action="store_true", default=False, dest="list", help="List examples" ) parser.add_argument( "--cat", action="store_true", default=False, dest="cat", help="Show/cat the example code on stdout" ) parser.add_argument( "args", type=str, nargs="?", help='''optional example name (for example both cavity or pysph.examples.cavity will work) and arguments to the example.''' ) if len(argv) > 0 and argv[0] in ['-h', '--help']: parser.print_help() sys.exit() options, extra = parser.parse_known_args(argv) if options.list: return list_examples(examples) if options.cat: module = guess_correct_module(options.args) return cat_example(module) if len(argv) > 0: module = guess_correct_module(argv[0]) run_command(module, argv[1:]) else: list_examples(examples) try: ans = int(get_input("Enter example number you wish to run: ")) except ValueError: ans = 0 if ans < 1 or ans > len(examples): print("Invalid example number, exiting!") sys.exit() args = str(get_input( "Enter additional arguments (leave blank to skip): " )) module, doc = examples[ans - 1] print("-" * 80) run_command(module, args.split()) if __name__ == '__main__': main() pysph-master/pysph/examples/solid_mech/000077500000000000000000000000001356347341600206155ustar00rootroot00000000000000pysph-master/pysph/examples/solid_mech/__init__.py000066400000000000000000000000001356347341600227140ustar00rootroot00000000000000pysph-master/pysph/examples/solid_mech/impact.py000066400000000000000000000244371356347341600224560ustar00rootroot00000000000000"""High-velocity impact of an Steel projectile on an Aluminium plate""" import numpy # SPH equations from 
pysph.sph.equation import Group from pysph.sph.basic_equations import IsothermalEOS, ContinuityEquation, MonaghanArtificialViscosity,\ XSPHCorrection, VelocityGradient2D from pysph.sph.solid_mech.basic import MomentumEquationWithStress, HookesDeviatoricStressRate,\ MonaghanArtificialStress, EnergyEquationWithStress from pysph.sph.solid_mech.hvi import VonMisesPlasticity2D, MieGruneisenEOS, StiffenedGasEOS from pysph.sph.gas_dynamics.basic import UpdateSmoothingLengthFromVolume from pysph.base.utils import get_particle_array from pysph.base.kernels import Gaussian, CubicSpline, WendlandQuintic from pysph.solver.application import Application from pysph.solver.solver import Solver from pysph.sph.integrator import PECIntegrator, EPECIntegrator from pysph.sph.integrator_step import SolidMechStep def add_properties(pa, *props): for prop in props: pa.add_property(name=prop) # Parameters dx = dy = 0.0001 # m hdx = 1.3 h = hdx*dx r = 0.005 ###################################################################### # Material properties: Table (1) of "A Free Lagrange Augmented Godunov Method # for the Simulation of Elastic-Plastic Solids", B. P. Howell and G. J. 
Ball,
# JCP (2002)

# ALUMINIUM
ro1 = 2785.0        # reference density
C1 = 5328.0         # reference sound-speed
S1 = 1.338          # particle-shock velocity slope
gamma1 = 2.0        # Gruneisen gamma/parameter
G1 = 2.76e7         # shear modulus (kPa)
Yo1 = 0.3e6         # yield stress
E1 = ro1*C1*C1      # Young's modulus

# STEEL
ro2 = 7900.0        # reference density
C2 = 4600.0         # reference sound-speed
S2 = 1.490          # particle-shock velocity slope
gamma2 = 2.17       # Gruneisen gamma/parameter
G2 = 8.530e7        # shear modulus
Yo2 = 0.979e6       # yield stress
E2 = ro2*C2*C2      # Young's modulus

# general
v_s = 3100.0                # projectile velocity 3.1 km/s
cs1 = numpy.sqrt(E1/ro1)    # speed of sound in aluminium
cs2 = numpy.sqrt(E2/ro2)    # speed of sound in steel

######################################################################
# SPH constants and parameters

# Monaghan-type artificial viscosity
avisc_alpha = 1.0; avisc_beta = 1.5; avisc_eta = 0.1

# XSPH epsilon
xsph_eps = 0.5

# SAV1 artificial viscosity coefficients
alpha1 = 1.0
beta1 = 1.5
eta = 0.1  # the pi_ab equation uses eta**2, so the final value is 0.01 (as required)

# SAV2
alpha2 = 2.5
beta2 = 2.5
eta = 0.1  # the pi_ab equation uses eta**2, so the final value is 0.01 (as required)

# XSPH
eps = 0.5

######################################################################
# Particle creation routines

def get_projectile_particles():
    x, y = numpy.mgrid[-r:r:dx, -r:r:dx]
    x = x.ravel()
    y = y.ravel()

    d = (x*x + y*y)
    keep = numpy.flatnonzero(d <= r*r)
    x = x[keep]
    y = y[keep]

    x = x - (r + 2*dx)
    print('%d Projectile particles' % len(x))

    hf = numpy.ones_like(x) * h
    mf = numpy.ones_like(x) * dx * dy * ro2
    rhof = numpy.ones_like(x) * ro2
    csf = numpy.ones_like(x) * cs2
    z = numpy.zeros_like(x)
    u = numpy.ones_like(x) * v_s

    pa = projectile = get_particle_array(
        name="projectile", x=x, y=y, h=hf, m=mf, rho=rhof, cs=csf, u=u)

    # add requisite properties
    # sound speed etc.
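Note that the "Young's modulus" defined in the table above is really a bulk modulus, `E = rho0 * c0**2`, so the derived sound speeds `cs1 = sqrt(E1/ro1)` and `cs2 = sqrt(E2/ro2)` simply recover the reference sound speeds. A quick check (trailing-underscore names avoid shadowing the module-level constants):

```python
import math

# Aluminium values from the material table above.
ro1_, C1_ = 2785.0, 5328.0
E1_ = ro1_ * C1_ * C1_          # "Young's modulus" as defined here
cs1_ = math.sqrt(E1_ / ro1_)    # derived sound speed equals C1 exactly
```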
add_properties(pa, 'e') # velocity gradient properties add_properties(pa, 'v00', 'v01', 'v02', 'v10', 'v11', 'v12', 'v20', 'v21', 'v22') # artificial stress properties add_properties(pa, 'r00', 'r01', 'r02', 'r11', 'r12', 'r22') # deviatoric stress components add_properties(pa, 's00', 's01', 's02', 's11', 's12', 's22') # deviatoric stress accelerations add_properties(pa, 'as00', 'as01', 'as02', 'as11', 'as12', 'as22') # deviatoric stress initial values add_properties(pa, 's000', 's010', 's020', 's110', 's120', 's220') # standard acceleration variables add_properties(pa, 'arho', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'ae') # initial values add_properties(pa, 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'e0') pa.add_constant('G', G1) pa.add_constant('n', 4) kernel = Gaussian(dim=2) wdeltap = kernel.kernel(rij=dx, h=hdx*dx) pa.add_constant('wdeltap', wdeltap) # load balancing properties pa.set_lb_props( list(pa.properties.keys()) ) return projectile def get_plate_particles(): xarr = numpy.arange(0, 0.002+dx, dx) yarr = numpy.arange(-0.020, 0.02+dx, dx) x,y = numpy.meshgrid( xarr, yarr ) x, y = x.ravel(), y.ravel() print('%d Target particles'%len(x)) hf = numpy.ones_like(x) * h mf = numpy.ones_like(x) * dx * dy * ro1 rhof = numpy.ones_like(x) * ro1 csf = numpy.ones_like(x) * cs1 z = numpy.zeros_like(x) pa = plate = get_particle_array(name="plate", x=x, y=y, h=hf, m=mf, rho=rhof, cs=csf) # add requisite properties # sound speed etc. 
add_properties(pa, 'e' ) # velocity gradient properties add_properties(pa, 'v00', 'v01', 'v02', 'v10', 'v11', 'v12', 'v20', 'v21', 'v22') # artificial stress properties add_properties(pa, 'r00', 'r01', 'r02', 'r11', 'r12', 'r22') # deviatoric stress components add_properties(pa, 's00', 's01', 's02', 's11', 's12', 's22') # deviatoric stress accelerations add_properties(pa, 'as00', 'as01', 'as02', 'as11', 'as12', 'as22') # deviatoric stress initial values add_properties(pa, 's000', 's010', 's020', 's110', 's120', 's220') # standard acceleration variables add_properties(pa, 'arho', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'ae') # initial values add_properties(pa, 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'e0') pa.add_constant('G', G2) pa.add_constant('n', 4) kernel = Gaussian(dim=2) wdeltap = kernel.kernel(rij=dx, h=hdx*dx) pa.add_constant('wdeltap', wdeltap) # load balancing properties pa.set_lb_props( list(pa.properties.keys()) ) # removed S_00 and similar components plate.v[:]=0.0 return plate class Impact(Application): def create_particles(self): plate = get_plate_particles() projectile = get_projectile_particles() return [plate, projectile] def create_solver(self): kernel = Gaussian(dim=2) #kernel = WendlandQuintic(dim=2) self.wdeltap = kernel.kernel(rij=dx, h=hdx*dx) integrator = EPECIntegrator(projectile=SolidMechStep(), plate=SolidMechStep()) solver = Solver(kernel=kernel, dim=2, integrator=integrator) dt = 1e-9 tf = 8e-6 solver.set_time_step(dt) solver.set_final_time(tf) solver.set_print_freq(100) return solver def create_equations(self): equations = [ # update smoothing length # Group( # equations = [ # UpdateSmoothingLengthFromVolume(dest='plate', sources=['plate', 'projectile'], dim=2, k=hdx), # UpdateSmoothingLengthFromVolume(dest='projectile', sources=['plate', 'projectile'], dim=2, k=hdx), # ], # update_nnps=True, # ), # compute properties from the current state Group( equations = [ # EOS (compute the pressure using one of the EOSs) 
#MieGruneisenEOS(dest='plate', sources=None, gamma=gamma1, r0=ro1 , c0=C1, S=S1), #MieGruneisenEOS(dest='projectile', sources=None, gamma=gamma2, r0=ro2 , c0=C2, S=S2), StiffenedGasEOS(dest='plate', sources=None, gamma=gamma1, r0=ro1 , c0=C1), StiffenedGasEOS(dest='projectile', sources=None, gamma=gamma2, r0=ro2 , c0=C2), # compute the velocity gradient tensor VelocityGradient2D(dest='plate', sources=['plate']), VelocityGradient2D(dest='projectile', sources=['projectile']), # stress VonMisesPlasticity2D(dest='plate', sources=None, flow_stress=Yo1), VonMisesPlasticity2D(dest='projectile', sources=None, flow_stress=Yo2), # artificial stress to avoid clumping MonaghanArtificialStress(dest='plate', sources=None, eps=0.3), MonaghanArtificialStress(dest='projectile', sources=None, eps=0.3), ] ), # accelerations (rho, u, v, ...) Group( equations = [ # continuity equation ContinuityEquation(dest='plate', sources=['projectile','plate']), ContinuityEquation(dest='projectile', sources=['projectile','plate']), # momentum equation MomentumEquationWithStress(dest='projectile', sources=['projectile','plate',]), MomentumEquationWithStress(dest='plate', sources=['projectile','plate',]), # energy equation: EnergyEquationWithStress(dest='plate', sources=['projectile','plate',], alpha=avisc_alpha, beta=avisc_beta, eta=avisc_eta), EnergyEquationWithStress(dest='projectile', sources=['projectile','plate',], alpha=avisc_alpha, beta=avisc_beta, eta=avisc_eta), # avisc MonaghanArtificialViscosity(dest='plate', sources=['projectile','plate'], alpha=avisc_alpha, beta=avisc_beta), MonaghanArtificialViscosity(dest='projectile', sources=['projectile','plate'], alpha=avisc_alpha, beta=avisc_beta), # updates to the stress term HookesDeviatoricStressRate(dest='plate', sources=None), HookesDeviatoricStressRate(dest='projectile', sources=None), # position stepping XSPHCorrection(dest='plate', sources=['plate'], eps=xsph_eps), XSPHCorrection(dest='projectile', sources=['projectile'], eps=xsph_eps), ] 
            ),
        ]  # End Group list

        return equations


if __name__ == '__main__':
    app = Impact()
    app.run()

# File: pysph/examples/solid_mech/impact3d.py
"""High-velocity impact of a Steel projectile on an Aluminium plate"""

import numpy

# SPH equations
from pysph.sph.equation import Group
from pysph.sph.basic_equations import IsothermalEOS, ContinuityEquation, \
    MonaghanArtificialViscosity, XSPHCorrection, VelocityGradient3D
from pysph.sph.solid_mech.basic import MomentumEquationWithStress, \
    HookesDeviatoricStressRate, MonaghanArtificialStress, EnergyEquationWithStress
from pysph.sph.solid_mech.hvi import VonMisesPlasticity2D, MieGruneisenEOS, \
    StiffenedGasEOS
from pysph.sph.gas_dynamics.basic import UpdateSmoothingLengthFromVolume
from pysph.base.utils import get_particle_array
from pysph.base.kernels import Gaussian, CubicSpline, WendlandQuintic
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator import PECIntegrator, EPECIntegrator
from pysph.sph.integrator_step import SolidMechStep


def add_properties(pa, *props):
    for prop in props:
        pa.add_property(name=prop)


# Parameters
dx = dy = dz = 0.0005  # m
hdx = 1.3
h = hdx*dx
r = 0.005

######################################################################
# Material properties: Table (1) of "A Free Lagrange Augmented Godunov Method
# for the Simulation of Elastic-Plastic Solids", B. P. Howell and G. J.
Ball,
# JCP (2002)

# ALUMINIUM
ro1 = 2785.0        # reference density
C1 = 5328.0         # reference sound-speed
S1 = 1.338          # particle-shock velocity slope
gamma1 = 2.0        # Gruneisen gamma/parameter
G1 = 2.76e7         # shear modulus (kPa)
Yo1 = 0.3e6         # yield stress
E1 = ro1*C1*C1      # Young's modulus

# STEEL
ro2 = 7900.0        # reference density
C2 = 4600.0         # reference sound-speed
S2 = 1.490          # particle-shock velocity slope
gamma2 = 2.17       # Gruneisen gamma/parameter
G2 = 8.530e7        # shear modulus
Yo2 = 0.979e6       # yield stress
E2 = ro2*C2*C2      # Young's modulus

# general
v_s = 3100.0                # projectile velocity 3.1 km/s
cs1 = numpy.sqrt(E1/ro1)    # speed of sound in aluminium
cs2 = numpy.sqrt(E2/ro2)    # speed of sound in steel

######################################################################
# SPH constants and parameters

# Monaghan-type artificial viscosity
avisc_alpha = 1.0; avisc_beta = 1.5; avisc_eta = 0.1

# XSPH epsilon
xsph_eps = 0.5

# SAV1 artificial viscosity coefficients
alpha1 = 1.0
beta1 = 1.5
eta = 0.1  # the pi_ab equation uses eta**2, so the final value is 0.01 (as required)

# SAV2
alpha2 = 2.5
beta2 = 2.5
eta = 0.1  # the pi_ab equation uses eta**2, so the final value is 0.01 (as required)

# XSPH
eps = 0.5

######################################################################
# Particle creation routines

def get_projectile_particles():
    x, y, z = numpy.mgrid[-r:r+1e-6:dx, -r:r+1e-6:dx, -r:r+1e-6:dx]
    x = x.ravel()
    y = y.ravel()
    z = z.ravel()

    d = (x*x + y*y + z*z)
    keep = numpy.flatnonzero(d <= r*r)
    x = x[keep]
    y = y[keep]
    z = z[keep]

    x = x - (r + 2*dx)
    print('%d Projectile particles' % len(x))

    hf = numpy.ones_like(x) * h
    mf = numpy.ones_like(x) * dx * dy * dz * ro2
    rhof = numpy.ones_like(x) * ro2
    csf = numpy.ones_like(x) * cs2
    u = numpy.ones_like(x) * v_s

    pa = projectile = get_particle_array(
        name="projectile", x=x, y=y, z=z, h=hf, m=mf, rho=rhof, cs=csf, u=u)

    # add requisite properties
    # sound speed etc.
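The `(C, S)` pairs in the material table above are the coefficients of the linear shock Hugoniot `Us = C + S * up` (shock speed versus particle speed) that underlies the Mie-Gruneisen equation of state. A trivial sketch of that relation (the helper is hypothetical, not a PySPH API):

```python
# Linear Us-up shock Hugoniot: at zero particle velocity the shock travels
# at the reference sound speed c0; S is the slope of the relation.
def shock_speed(up, c0, s):
    return c0 + s * up
```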
add_properties(pa, 'e') # velocity gradient properties add_properties(pa, 'v00', 'v01', 'v02', 'v10', 'v11', 'v12', 'v20', 'v21', 'v22') # artificial stress properties add_properties(pa, 'r00', 'r01', 'r02', 'r11', 'r12', 'r22') # deviatoric stress components add_properties(pa, 's00', 's01', 's02', 's11', 's12', 's22') # deviatoric stress accelerations add_properties(pa, 'as00', 'as01', 'as02', 'as11', 'as12', 'as22') # deviatoric stress initial values add_properties(pa, 's000', 's010', 's020', 's110', 's120', 's220') # standard acceleration variables add_properties(pa, 'arho', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'ae') # initial values add_properties(pa, 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'e0') pa.add_constant('G', G2) pa.add_constant('n', 4) kernel = Gaussian(dim=3) wdeltap = kernel.kernel(rij=dx, h=hdx*dx) pa.add_constant('wdeltap', wdeltap) # load balancing properties pa.set_lb_props( list(pa.properties.keys()) ) return projectile def get_plate_particles(): xarr = numpy.arange(0, 0.002+dx, dx) yarr = numpy.arange(-0.020, 0.02+dx, dx) zarr = numpy.arange(-0.02, 0.02+dx, dx) x, y, z = numpy.mgrid[0:0.002+dx:dx, -0.02:0.02+dx:dx, -0.02:0.02+dx:dx] x = x.ravel() y = y.ravel() z = z.ravel() print('%d Target particles'%len(x)) hf = numpy.ones_like(x) * h mf = numpy.ones_like(x) * dx * dy * dz * ro1 rhof = numpy.ones_like(x) * ro1 csf = numpy.ones_like(x) * cs1 pa = plate = get_particle_array(name="plate", x=x, y=y, z=z, h=hf, m=mf, rho=rhof, cs=csf) # add requisite properties # sound speed etc. 
add_properties(pa, 'e' ) # velocity gradient properties add_properties(pa, 'v00', 'v01', 'v02', 'v10', 'v11', 'v12', 'v20', 'v21', 'v22') # artificial stress properties add_properties(pa, 'r00', 'r01', 'r02', 'r11', 'r12', 'r22') # deviatoric stress components add_properties(pa, 's00', 's01', 's02', 's11', 's12', 's22') # deviatoric stress accelerations add_properties(pa, 'as00', 'as01', 'as02', 'as11', 'as12', 'as22') # deviatoric stress initial values add_properties(pa, 's000', 's010', 's020', 's110', 's120', 's220') # standard acceleration variables add_properties(pa, 'arho', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'ae') # initial values add_properties(pa, 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'e0') pa.add_constant('G', G1) pa.add_constant('n', 4) kernel = Gaussian(dim=3) wdeltap = kernel.kernel(rij=dx, h=hdx*dx) pa.add_constant('wdeltap', wdeltap) # load balancing properties pa.set_lb_props( list(pa.properties.keys()) ) # removed S_00 and similar components plate.v[:]=0.0 return plate class Impact(Application): def create_particles(self): plate = get_plate_particles() projectile = get_projectile_particles() return [plate, projectile] def create_solver(self): dim=3 kernel = Gaussian(dim=dim) #kernel = WendlandQuintic(dim=dim) integrator = EPECIntegrator(projectile=SolidMechStep(), plate=SolidMechStep()) solver = Solver(kernel=kernel, dim=dim, integrator=integrator) dt = 1e-9 tf = 8e-6 solver.set_time_step(dt) solver.set_final_time(tf) solver.set_print_freq(100) return solver def create_equations(self): equations = [ # update smoothing length # Group( # equations = [ # UpdateSmoothingLengthFromVolume(dest='plate', sources=['plate', 'projectile'], dim=dim, k=hdx), # UpdateSmoothingLengthFromVolume(dest='projectile', sources=['plate', 'projectile'], dim=dim, k=hdx), # ], # update_nnps=True, # ), # compute properties from the current state Group( equations = [ # EOS (compute the pressure using one of the EOSs) #MieGruneisenEOS(dest='plate', sources=None, 
gamma=gamma1, r0=ro1 , c0=C1, S=S1), #MieGruneisenEOS(dest='projectile', sources=None, gamma=gamma2, r0=ro2 , c0=C2, S=S2), StiffenedGasEOS(dest='plate', sources=None, gamma=gamma1, r0=ro1 , c0=C1), StiffenedGasEOS(dest='projectile', sources=None, gamma=gamma2, r0=ro2 , c0=C2), # compute the velocity gradient tensor VelocityGradient3D(dest='plate', sources=['plate']), VelocityGradient3D(dest='projectile', sources=['projectile']), # # stress VonMisesPlasticity2D(dest='plate', sources=None, flow_stress=Yo1), VonMisesPlasticity2D(dest='projectile', sources=None, flow_stress=Yo2), # # artificial stress to avoid clumping MonaghanArtificialStress(dest='plate', sources=None, eps=0.3), MonaghanArtificialStress(dest='projectile', sources=None, eps=0.3), ] ), # accelerations (rho, u, v, ...) Group( equations = [ # continuity equation ContinuityEquation(dest='plate', sources=['projectile','plate']), ContinuityEquation(dest='projectile', sources=['projectile','plate']), # momentum equation MomentumEquationWithStress(dest='projectile', sources=['projectile','plate',]), MomentumEquationWithStress(dest='plate', sources=['projectile','plate',]), # energy equation: EnergyEquationWithStress(dest='plate', sources=['projectile','plate',], alpha=avisc_alpha, beta=avisc_beta, eta=avisc_eta), EnergyEquationWithStress(dest='projectile', sources=['projectile','plate',], alpha=avisc_alpha, beta=avisc_beta, eta=avisc_eta), #avisc MonaghanArtificialViscosity(dest='plate', sources=['projectile','plate'], alpha=avisc_alpha, beta=avisc_beta), MonaghanArtificialViscosity(dest='projectile', sources=['projectile','plate'], alpha=avisc_alpha, beta=avisc_beta), #updates to the stress term HookesDeviatoricStressRate(dest='plate', sources=None), HookesDeviatoricStressRate(dest='projectile', sources=None), #position stepping XSPHCorrection(dest='plate', sources=['plate'], eps=xsph_eps), XSPHCorrection(dest='projectile', sources=['projectile'], eps=xsph_eps), ] ), ] # End Group list return equations if 
__name__ == '__main__': app = Impact() app.run() pysph-master/pysph/examples/solid_mech/oscillating_plate.py000066400000000000000000000127061356347341600246720ustar00rootroot00000000000000import numpy as np from numpy import cos, sin, cosh, sinh # SPH equations from pysph.sph.solid_mech.basic import (ElasticSolidsScheme, get_particle_array_elastic_dynamics) from pysph.base.kernels import CubicSpline from pysph.solver.application import Application class OscillatingPlate(Application): def initialize(self): self.L = 0.2 self.H = 0.02 # wave number K self.KL = 1.875 self.K = 1.875 / self.L # edge velocity of the plate (m/s) self.Vf = 0.05 self.dx_plate = 0.002 self.h = 1.3 * self.dx_plate self.plate_rho0 = 1000. self.plate_E = 2. * 1e6 self.plate_nu = 0.3975 self.plate_inside_wall_length = self.L / 4. self.wall_layers = 3 self.alpha = 1.0 self.beta = 1.0 self.xsph_eps = 0.5 self.artificial_stress_eps = 0.3 self.tf = 1.0 self.dt = 1e-5 def create_plate(self): dx = self.dx_plate xp, yp = np.mgrid[-self.plate_inside_wall_length:self.L + dx / 2.:dx, -self.H / 2.:self.H / 2. + dx / 2.:dx] xp = xp.ravel() yp = yp.ravel() return xp, yp def create_wall(self): xp, yp = self.create_plate() # get the minimum and maximum of the plate xp_min = xp.min() yp_min = yp.min() yp_max = yp.max() dx = self.dx_plate xw_upper, yw_upper = np.mgrid[ -self.plate_inside_wall_length:self.dx_plate / 2.:dx, yp_max + dx:yp_max + dx + (self.wall_layers - 1) * dx + dx / 2.:dx] xw_upper = xw_upper.ravel() yw_upper = yw_upper.ravel() xw_lower, yw_lower = np.mgrid[ -self.plate_inside_wall_length:self.dx_plate / 2.:dx, yp_min - dx:yp_min - dx - (self.wall_layers - 1) * dx - dx / 2.:-dx] xw_lower = xw_lower.ravel() yw_lower = yw_lower.ravel() xw_left_max = xp_min - dx xw_left_min = xw_left_max - (self.wall_layers - 1) * dx - dx / 2. yw_left_max = yw_upper.max() + dx / 2. 
        yw_left_min = yw_lower.min()

        xw_left, yw_left = np.mgrid[xw_left_max:xw_left_min:-dx,
                                    yw_left_min:yw_left_max:dx]
        xw_left = xw_left.ravel()
        yw_left = yw_left.ravel()

        # wall coordinates
        xw, yw = np.concatenate((xw_lower, xw_upper, xw_left)), np.concatenate(
            (yw_lower, yw_upper, yw_left))
        return xw, yw

    def create_particles(self):
        xp, yp = self.create_plate()
        m = self.plate_rho0 * self.dx_plate**2.

        # get the index of the particle which will be used to compute the
        # amplitude
        xp_max = max(xp)
        fltr = np.argwhere(xp == xp_max)
        fltr_idx = int(len(fltr) / 2.)
        amplitude_idx = fltr[fltr_idx][0]

        kernel = CubicSpline(dim=2)
        self.wdeltap = kernel.kernel(rij=self.dx_plate, h=self.h)

        plate = get_particle_array_elastic_dynamics(
            x=xp, y=yp, m=m, h=self.h, rho=self.plate_rho0, name="plate",
            constants=dict(
                wdeltap=self.wdeltap, n=4, rho_ref=self.plate_rho0,
                E=self.plate_E, nu=self.plate_nu,
                amplitude_idx=amplitude_idx))

        ##################################
        # vertical velocity of the plate #
        ##################################
        # initialize with zero at the beginning
        v = np.zeros_like(xp)
        v = v.ravel()

        # set the vertical velocity for particles which are only
        # out of the wall
        K = self.K
        # L = self.L
        KL = self.KL
        M = sin(KL) + sinh(KL)
        N = cos(KL) + cosh(KL)
        Q = 2 * (cos(KL) * sinh(KL) - sin(KL) * cosh(KL))
        # import pudb; pudb.set_trace()
        fltr = xp > 0
        tmp1 = (cos(K * xp[fltr]) - cosh(K * xp[fltr]))
        tmp2 = (sin(K * xp[fltr]) - sinh(K * xp[fltr]))
        v[fltr] = self.Vf * plate.cs[0] * (M * tmp1 - N * tmp2) / Q

        # set vertical velocity
        plate.v = v

        # #########################################
        # #### Create the wall particle array #####
        # #########################################
        # create the particle array
        xw, yw = self.create_wall()
        wall = get_particle_array_elastic_dynamics(
            x=xw, y=yw, m=m, h=self.h, rho=self.plate_rho0, name="wall",
            constants=dict(E=self.plate_E, nu=self.plate_nu))

        return [plate, wall]

    def create_scheme(self):
        s = ElasticSolidsScheme(elastic_solids=['plate'], solids=['wall'],
                                dim=2)
        s.configure_solver(dt=self.dt, tf=self.tf)
        return s

    def post_process(self):
        if len(self.output_files) == 0:
            return

        from pysph.solver.utils import iter_output
        files = self.output_files
        t, amplitude = [], []
        for sd, array in iter_output(files, 'plate'):
            _t = sd['t']
            t.append(_t)
            amplitude.append(array.y[array.amplitude_idx[0]])

        import matplotlib
        import os
        matplotlib.use('Agg')
        from matplotlib import pyplot as plt
        plt.clf()
        plt.plot(t, amplitude)
        plt.xlabel('t')
        plt.ylabel('Amplitude')
        plt.legend()
        fig = os.path.join(self.output_dir, "amplitude.png")
        plt.savefig(fig, dpi=300)


if __name__ == '__main__':
    app = OscillatingPlate()
    app.run()
    app.post_process()

pysph-master/pysph/examples/solid_mech/rings.py

"""Colliding Elastic Balls. (10 minutes)"""

import numpy

# SPH equations
from pysph.sph.equation import Group
from pysph.sph.solid_mech.basic import (get_particle_array_elastic_dynamics,
                                        ElasticSolidsScheme)

from pysph.base.utils import get_particle_array
from pysph.base.kernels import CubicSpline

from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator import PECIntegrator
from pysph.sph.integrator_step import SolidMechStep


class Rings(Application):
    def initialize(self):
        # constants
        self.E = 1e7
        self.nu = 0.3975
        self.rho0 = 1.0

        self.dx = 0.0005
        self.hdx = 1.5
        self.h = self.hdx * self.dx

        # geometry
        self.ri = 0.03
        self.ro = 0.04
        self.spacing = 0.041

        self.dt = 1e-8
        self.tf = 5e-5

    def create_particles(self):
        spacing = self.spacing  # spacing = 2*5cm

        x, y = numpy.mgrid[-self.ro:self.ro:self.dx,
                           -self.ro:self.ro:self.dx]
        x = x.ravel()
        y = y.ravel()

        d = (x * x + y * y)
        ro = self.ro
        ri = self.ri
        keep = numpy.flatnonzero((ri * ri <= d) * (d < ro * ro))
        x = x[keep]
        y = y[keep]

        x = numpy.concatenate([x - spacing, x + spacing])
        y = numpy.concatenate([y, y])

        dx = self.dx
        hdx = self.hdx
        m = numpy.ones_like(x) * dx * dx
        h = numpy.ones_like(x) * hdx * dx
        rho = numpy.ones_like(x)

        # create the particle array
        kernel = CubicSpline(dim=2)
        self.wdeltap = kernel.kernel(rij=dx, h=self.h)
        pa = get_particle_array_elastic_dynamics(
            name="solid", x=x + spacing, y=y, m=m, rho=rho, h=h,
            constants=dict(
                wdeltap=self.wdeltap, n=4, rho_ref=self.rho0,
                E=self.E, nu=self.nu))

        print('Elastic collision with %d particles' % (x.size))
        print("Shear modulus G = %g, Young's modulus = %g, "
              "Poisson's ratio = %g" % (pa.G, pa.E, pa.nu))

        u_f = 0.059
        pa.u = pa.cs * u_f * (2 * (x < 0) - 1)

        return [pa]

    def create_scheme(self):
        s = ElasticSolidsScheme(elastic_solids=['solid'], solids=[], dim=2)
        s.configure_solver(dt=self.dt, tf=self.tf, pfreq=500)
        return s


if __name__ == '__main__':
    app = Rings()
    app.run()

pysph-master/pysph/examples/solid_mech/taylor_bar.py

"""Taylor bar example with SPH. (5 minutes)"""

import numpy

from pysph.sph.equation import Group
from pysph.base.utils import get_particle_array
from pysph.base.kernels import WendlandQuintic

from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator import PECIntegrator
from pysph.sph.integrator_step import SolidMechStep

# basic sph equations
from pysph.sph.basic_equations import ContinuityEquation, \
    MonaghanArtificialViscosity, XSPHCorrection, VelocityGradient2D

# basic stress equations
from pysph.sph.solid_mech.basic import HookesDeviatoricStressRate, \
    MomentumEquationWithStress, EnergyEquationWithStress

# plasticity model and eos
from pysph.sph.solid_mech.hvi import VonMisesPlasticity2D, MieGruneisenEOS

# boundary force
from pysph.sph.boundary_equations import MonaghanBoundaryForce

# Numerical Parameters and constants
dx = dy = 0.000384848
hdx = 2.0
h = hdx * dx

r0 = 7850.0
m0 = dx * dy * r0
v_s = 200.0
ss = 4699.0

C = 3630.0
S = 1800.0
gamma = 1.81

alpha = 0.5
beta = 0.5
eta = 0.01
eps = 0.5

bar_width = 0.0076
G = 8 * 1e10
Yo = 6 * 1e8
ro2 = 2750.0

plate_start = -2.0 * bar_width
plate_end = 2.0 * bar_width


def get_plate_particles():
    x = numpy.arange(plate_start, plate_end + dx, dx)
    y = numpy.zeros_like(x)

    # normals and tangents
    tx = numpy.ones_like(x)
    ty = numpy.zeros_like(x)
    tz = numpy.zeros_like(x)

    ny = numpy.ones_like(x)
    nx = numpy.zeros_like(x)
    nz = numpy.zeros_like(x)

    cs = numpy.ones_like(x) * ss

    pa = get_particle_array(name='plate', x=x, y=y, tx=tx, ty=ty, tz=tz,
                            nx=nx, ny=ny, nz=nz, cs=cs)
    pa.m[:] = m0
    return pa


def get_bar_particles():
    xarr = numpy.arange(-bar_width / 2.0, bar_width / 2.0 + dx, dx)
    yarr = numpy.arange(4 * dx, 0.0254 + 4 * dx, dx)

    x, y = numpy.meshgrid(xarr, yarr)
    x, y = x.ravel(), y.ravel()

    print('Number of bar particles: ', len(x))

    hf = numpy.ones_like(x) * h
    mf = numpy.ones_like(x) * dx * dy * r0
    rhof = numpy.ones_like(x) * r0
    csf = numpy.ones_like(x) * ss
    z = numpy.zeros_like(x)

    pa = get_particle_array(name="bar", x=x, y=y, h=hf, m=mf, rho=rhof,
                            cs=csf, e=z)
    # negative fluid particles
    pa.v[:] = -200
    return pa


class TaylorBar(Application):
    def create_particles(self):
        bar = get_bar_particles()
        plate = get_plate_particles()

        # add requisite properties

        # velocity gradient for the bar
        for name in ('v00', 'v01', 'v02', 'v10', 'v11', 'v12',
                     'v20', 'v21', 'v22'):
            bar.add_property(name)

        # deviatoric stress components
        for name in ('s00', 's01', 's02', 's11', 's12', 's22'):
            bar.add_property(name)

        # deviatoric stress accelerations
        for name in ('as00', 'as01', 'as02', 'as11', 'as12', 'as22'):
            bar.add_property(name)

        # deviatoric stress initial values
        for name in ('s000', 's010', 's020', 's110', 's120', 's220'):
            bar.add_property(name)

        bar.add_property('e0')

        # artificial stress properties
        for name in ('r00', 'r01', 'r02', 'r11', 'r12', 'r22'):
            bar.add_property(name)

        # standard acceleration variables and initial values.
        for name in ('arho', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'ae',
                     'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'e0'):
            bar.add_property(name)

        bar.add_constant('G', G)
        bar.add_constant('n', 4)

        kernel = WendlandQuintic(dim=2)
        wdeltap = kernel.kernel(rij=dx, h=hdx * dx)
        bar.add_constant('wdeltap', wdeltap)

        return [bar, plate]

    def create_solver(self):
        kernel = WendlandQuintic(dim=2)
        self.wdeltap = kernel.kernel(rij=dx, h=hdx * dx)

        integrator = PECIntegrator(bar=SolidMechStep())
        solver = Solver(kernel=kernel, dim=2, integrator=integrator)

        dt = 1e-9
        tf = 2.5e-5
        solver.set_time_step(dt)
        solver.set_final_time(tf)
        return solver

    def create_equations(self):
        equations = [
            # Properties computed set from the current state
            Group(
                equations=[
                    # p
                    MieGruneisenEOS(dest='bar', sources=None, gamma=gamma,
                                    r0=r0, c0=C, S=S),

                    # vi,j : requires properties v00, v01, v10, v11
                    VelocityGradient2D(dest='bar', sources=['bar']),

                    # rij : requires properties s00, s01, s11
                    VonMisesPlasticity2D(flow_stress=Yo, dest='bar',
                                         sources=None),
                ],
            ),

            # Acceleration variables are now computed
            Group(
                equations=[
                    # arho
                    ContinuityEquation(dest='bar', sources=['bar']),

                    # au, av
                    MomentumEquationWithStress(dest='bar', sources=['bar']),

                    # au, av
                    MonaghanArtificialViscosity(dest='bar', sources=['bar'],
                                                alpha=0.5, beta=0.5),

                    # au, av
                    MonaghanBoundaryForce(dest='bar', sources=['plate'],
                                          deltap=dx),

                    # ae
                    EnergyEquationWithStress(dest='bar', sources=['bar'],
                                             alpha=0.5, beta=0.5, eta=0.01),

                    # a_s00, a_s01, a_s11
                    HookesDeviatoricStressRate(dest='bar', sources=None),

                    # ax, ay, az
                    XSPHCorrection(dest='bar', sources=['bar'], eps=0.5),
                ]
            )  # End Acceleration Group

        ]  # End Group list
        return equations


if __name__ == '__main__':
    app = TaylorBar()
    app.run()
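The Taylor bar and impact examples above hand the Mie-Grüneisen parameters (reference density, reference sound speed `C`, Hugoniot slope `S`, Grüneisen `gamma`) to PySPH's `MieGruneisenEOS`. As a minimal standalone sketch of what such an EOS computes, here is one common shock-Hugoniot form of the Mie-Grüneisen pressure; the helper name and the illustrative parameter values are assumptions for this sketch, not PySPH's internal implementation.

```python
def mie_gruneisen_pressure(rho, e, rho0, c0, s, gamma0):
    """One common shock-Hugoniot form of the Mie-Gruneisen EOS (a
    hypothetical helper, not PySPH's MieGruneisenEOS equation).

    rho    : current density
    e      : specific internal energy
    rho0   : reference density
    c0     : reference (bulk) sound speed
    s      : linear Hugoniot slope in the us-up relation
    gamma0 : Gruneisen parameter
    """
    eta = 1.0 - rho0 / rho  # compression measure; zero at the reference state
    # Hugoniot reference pressure from the linear shock-particle relation
    p_h = rho0 * c0 * c0 * eta / (1.0 - s * eta) ** 2
    # Gruneisen correction plus the thermal (internal energy) contribution
    return p_h * (1.0 - 0.5 * gamma0 * eta) + gamma0 * rho0 * e


# at the reference state (rho == rho0, e == 0) the pressure vanishes
p_ref = mie_gruneisen_pressure(rho=7850.0, e=0.0, rho0=7850.0,
                               c0=3630.0, s=1.8, gamma0=1.81)
# mild compression at zero internal energy gives a positive pressure
p_comp = mie_gruneisen_pressure(rho=8000.0, e=0.0, rho0=7850.0,
                                c0=3630.0, s=1.8, gamma0=1.81)
```

Conventions for the compression measure and the thermal term vary between references, so check the form used by the library before comparing numbers.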
pysph-master/pysph/examples/spheric/__init__.py
pysph-master/pysph/examples/spheric/moving_square.py

"""SPHERIC benchmark case 6. (2 hours)

See http://spheric-sph.org/tests/test-6 for more details.
"""

# math
from math import exp

# PySPH imports
from pysph.base.utils import get_particle_array
from pysph.base.kernels import QuinticSpline
from pysph.solver.solver import Solver
from pysph.solver.application import Application
from pysph.sph.integrator_step import TwoStageRigidBodyStep, \
    TransportVelocityStep
from pysph.sph.integrator import Integrator
from pysph.tools import uniform_distribution

# SPH equations for this problem
from pysph.sph.equation import Group, Equation
from pysph.sph.wc.transport_velocity import SummationDensity, \
    StateEquation, MomentumEquationPressureGradient, \
    MomentumEquationViscosity, MomentumEquationArtificialStress, \
    SolidWallPressureBC, SolidWallNoSlipBC, SetWallVelocity

# domain and reference values
Lx = 10.0
Ly = 5.0
Umax = 1.0
c0 = 25.0 * Umax
rho0 = 1.0
p0 = c0 * c0 * rho0

# obstacle dimensions
obstacle_width = 1.0
obstacle_height = 1.0

# Reynolds number and kinematic viscosity
Re = 150
nu = Umax * obstacle_width / Re

# Numerical setup
nx = 50
dx = 0.20 * Lx / nx
nghost_layers = 4
ghost_extent = nghost_layers * dx
hdx = 1.2

# adaptive time steps
h0 = hdx * dx
dt_cfl = 0.25 * h0 / (c0 + Umax)
dt_viscous = 0.125 * h0**2 / nu
dt_force = 1.0

tf = 8.0
dt = 0.8 * min(dt_cfl, dt_viscous, dt_force)


class SPHERICBenchmarkAcceleration(Equation):
    r"""Equation to set the acceleration for the moving square benchmark
    problem.

    We use scipy.optimize to fit the Gaussian:

    .. math::

        a \exp( -\frac{(t-b)^2}{2c^2} ) + d

    to the SPHERIC Motion.dat file.
    The values for the parameters are

        a = 2.8209512
        b = 0.525652151
        c = 0.14142151
        d = -2.55580905e-08

    Notes: This equation must be instantiated with no sources.
    """

    def loop(self, d_idx, d_au, t=0.0):
        a = 2.8209512
        b = 0.525652151
        c = 0.14142151
        d = -2.55580905e-08

        # compute the acceleration and set it for the destination
        d_au[d_idx] = a * exp(-(t - b) * (t - b) / (2.0 * c * c)) + d


def _get_interior(x, y):
    indices = []
    for i in range(x.size):
        if ((x[i] > 0.0) and (x[i] < Lx)):
            if ((y[i] > 0.0) and (y[i] < Ly)):
                indices.append(i)
    return indices


def _get_obstacle(x, y):
    indices = []
    for i in range(x.size):
        if ((1.0 <= x[i] <= 2.0) and (2.0 <= y[i] <= 3.0)):
            indices.append(i)
    return indices


class MovingSquare(Application):
    def _setup_particle_properties(self, particles, volume):
        fluid, solid, obstacle = particles

        #### ADD PROPS FOR THE PARTICLES ###

        # volume from number density
        fluid.add_property('V')
        solid.add_property('V')
        obstacle.add_property('V')

        # extrapolated velocities for the fluid
        for name in ['uf', 'vf', 'wf']:
            solid.add_property(name)
            obstacle.add_property(name)

        # dummy velocities for the solid and obstacle
        # required for the no-slip BC
        for name in ['ug', 'vg', 'wg']:
            solid.add_property(name)
            obstacle.add_property(name)

        # advection velocities and accelerations for fluid
        for name in ('uhat', 'vhat', 'what', 'auhat', 'avhat', 'awhat',
                     'au', 'av', 'aw'):
            fluid.add_property(name)

        # kernel summation correction for solids
        solid.add_property('wij')
        obstacle.add_property('wij')

        # initial velocities and positions needed for the obstacle for
        # rigid-body integration
        obstacle.add_property('u0'); obstacle.u0[:] = 0.
        obstacle.add_property('v0'); obstacle.v0[:] = 0.
        obstacle.add_property('w0'); obstacle.w0[:] = 0.
        obstacle.add_property('x0')
        obstacle.add_property('y0')
        obstacle.add_property('z0')

        # imposed accelerations on the solid and obstacle
        solid.add_property('ax')
        solid.add_property('ay')
        solid.add_property('az')

        obstacle.add_property('ax')
        obstacle.add_property('ay')
        obstacle.add_property('az')

        # magnitude of velocity squared
        fluid.add_property('vmag2')

        #### SETUP PARTICLE PROPERTIES ###

        # mass is set to get the reference density of rho0
        fluid.m[:] = volume * rho0
        solid.m[:] = volume * rho0
        obstacle.m[:] = volume * rho0

        # volume is set as dx^2
        fluid.V[:] = 1. / volume
        solid.V[:] = 1. / volume
        obstacle.V[:] = 1. / volume

        # smoothing lengths
        fluid.h[:] = h0
        solid.h[:] = h0
        obstacle.h[:] = h0

        # set the output arrays
        fluid.set_output_arrays(['x', 'y', 'u', 'v', 'vmag2', 'rho', 'p',
                                 'V', 'm', 'h'])
        solid.set_output_arrays(['x', 'y', 'rho', 'p'])
        obstacle.set_output_arrays(['x', 'y', 'u0', 'rho', 'p', 'u'])

        particles = [fluid, solid, obstacle]
        return particles

    def add_user_options(self, group):
        group.add_argument(
            "--hcp", action="store_true", dest="hcp", default=False,
            help="Use hexagonal close packing of particles."
        )

    def create_particles(self):
        hcp = self.options.hcp

        # Initial distribution using hexagonal close packing of particles
        # create all particles
        global dx
        if hcp:
            x, y, dx, dy, xmin, xmax, ymin, ymax = \
                uniform_distribution.uniform_distribution_hcp2D(
                    dx=dx, xmin=-ghost_extent, xmax=Lx + ghost_extent,
                    ymin=-ghost_extent, ymax=Ly + ghost_extent)
        else:
            x, y, dx, dy, xmin, xmax, ymin, ymax = \
                uniform_distribution.uniform_distribution_cubic2D(
                    dx=dx, xmin=-ghost_extent, xmax=Lx + ghost_extent,
                    ymin=-ghost_extent, ymax=Ly + ghost_extent)

        x = x.ravel()
        y = y.ravel()

        # create the basic particle array
        solid = get_particle_array(name='solid', x=x, y=y)

        # now sort out the interior from all particles
        indices = _get_interior(solid.x, solid.y)
        fluid = solid.extract_particles(indices)
        fluid.set_name('fluid')
        solid.remove_particles(indices)

        # sort out the obstacle from the interior
        indices = _get_obstacle(fluid.x, fluid.y)
        obstacle = fluid.extract_particles(indices)
        obstacle.set_name('obstacle')
        fluid.remove_particles(indices)

        print("SPHERIC benchmark 6 :: Re = %d, nfluid = %d, nsolid=%d, "
              "nobstacle = %d, dt = %g" % (
                  Re, fluid.get_number_of_particles(),
                  solid.get_number_of_particles(),
                  obstacle.get_number_of_particles(), dt))

        # setup requisite particle properties and initial conditions
        if hcp:
            wij_sum = uniform_distribution.get_number_density_hcp(
                dx, dy, kernel, h0)
            volume = 1. / wij_sum
        else:
            volume = dx * dy

        particles = self._setup_particle_properties(
            [fluid, solid, obstacle], volume=volume)
        return particles

    def create_solver(self):
        kernel = QuinticSpline(dim=2)
        integrator = Integrator(fluid=TransportVelocityStep(),
                                obstacle=TwoStageRigidBodyStep())
        solver = Solver(kernel=kernel, dim=2, integrator=integrator,
                        tf=tf, dt=dt, adaptive_timestep=False,
                        output_at_times=[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
        return solver

    def create_equations(self):
        equations = [
            # set the acceleration for the obstacle using the special
            # function mimicking the accelerations provided in the test.
            Group(
                equations=[
                    SPHERICBenchmarkAcceleration(dest='obstacle',
                                                 sources=None),
                ], real=False
            ),

            # Summation density along with volume summation for the fluid
            # phase. This is done for all local and remote particles. At the
            # end of this group, the fluid phase has the correct density
            # taking into consideration the fluid and solid particles.
            Group(
                equations=[
                    SummationDensity(dest='fluid',
                                     sources=['fluid', 'solid', 'obstacle']),
                ], real=False
            ),

            # Once the fluid density is computed, we can use the EOS to set
            # the fluid pressure. Additionally, the dummy velocity for the
            # channel is set, which is later used in the no-slip wall BC.
            Group(
                equations=[
                    StateEquation(dest='fluid', sources=None, p0=p0,
                                  rho0=rho0, b=1.0),
                    SetWallVelocity(dest='solid', sources=['fluid']),
                    SetWallVelocity(dest='obstacle', sources=['fluid']),
                ], real=False
            ),

            # Once the pressure for the fluid phase has been updated, we can
            # extrapolate the pressure to the ghost particles. After this
            # group, the fluid density, pressure and the boundary pressure
            # have been updated and can be used in the integration equations.
            Group(
                equations=[
                    SolidWallPressureBC(dest='obstacle', sources=['fluid'],
                                        b=1.0, rho0=rho0, p0=p0),
                    SolidWallPressureBC(dest='solid', sources=['fluid'],
                                        b=1.0, rho0=rho0, p0=p0),
                ], real=False
            ),

            # The main accelerations block. The acceleration arrays for the
            # fluid phase are updated in this stage for all local particles.
            Group(
                equations=[
                    # Pressure gradient terms
                    MomentumEquationPressureGradient(
                        dest='fluid', sources=['fluid', 'solid', 'obstacle'],
                        pb=p0),

                    # fluid viscosity
                    MomentumEquationViscosity(dest='fluid',
                                              sources=['fluid'], nu=nu),

                    # No-slip boundary condition. This is effectively a
                    # viscous interaction of the fluid with the ghost
                    # particles.
                    SolidWallNoSlipBC(dest='fluid',
                                      sources=['solid', 'obstacle'], nu=nu),

                    # Artificial stress for the fluid phase
                    MomentumEquationArtificialStress(dest='fluid',
                                                     sources=['fluid']),
                ], real=True
            ),
        ]
        return equations


if __name__ == '__main__':
    app = MovingSquare()
    app.run()

pysph-master/pysph/examples/sphysics/INDAT.gz
pysph-master/pysph/examples/sphysics/IPART.gz
;H1111Ç3sjk3C = ; ; ;nj111g1sɅ-@~c9@z9瘳q c9@~!c9>99@zc9@z^9@z9[5v ?1s  ?H1s sH1g =H1s C =H1s }H1s̙1s  ?H1s ýL1s sJk@~v  ?|\x9@z!cCz9;9@z!c9@z!cCz!cΌq c9@z!cez!c瘳Uxߢos C =H1ssss8sz0sss 3333|11Ƹ11ssss33sJk@~v  ?<\x9@z!cCz9;9@z!c9@z!cCz!cΌq c9@z!cez!c瘳UZz^os  ?1s C =H1<sv0s C =1s C =ЇC =ǜ =H1s C =H1C =H1gg1sɅ-@~c9@z9瘳q c9@~!c9>99@zc9@z^9@z9[-9@z!c9f9f9f9fCv9=9f9f9@z@v@v@v>ddSc\dd9f9f9f9fev@v@v9]5e?;@~cN.ms C =H1s !=ǜkC =H1s C =H1!=H1gƸH1s C =H1s 2=H1s*|/9@~c9@z!cCz9;9@z!c9@z!cCz!cΌq c9@z!cez!c瘳UZz[c9 ?1s C =sH1s  ?H1s Cs sfkC =1s C =p/s C =ǜ};c9@~c9@z9瘳q c9@~!c9>99@zc9@z^9@z9[-LJV_9@z!cdddq9`\ddd9f9f9f9fccNq ccddd瘱9f9f9tȖsZc@ rhA9-1G9@~cC~999@~c@ r  ?1!?1'Ƹ1shAc9{c9dZ{O9-1G9 #Ђ ?1s !?ǜk ?1G9@~ccc\9 1s 1s sJ{@[rhA9-1G9 9@~cp9`\9@~9-1s  ? ?ǜ ?1G9@~ce~c瘓UZz|Xc9@~c9@z!cx9`\9@zc9@z9@z935@z!c9@z!c9@z!cViD#ЂsZc@ rhA9-1s  ?s1s sZc9@~9@~915@~c@ r  ?1 ?1'ghA9-1/9 1s s1' ?1shAc9>99@~9-1s  ?`/s  ?ǜsZc@ rhA9-1G9@~cC~999@~c@ r  ?1!?1'Ƹ1shAc9{c9d>#ЂsZc@ rhA9-1s  ?s1s sZc9@~9@~915@~c@ r  ?1 ?1'D9@~c9@z!cCz9;9@z!c9@z!cCz!cΌq c9@z!cez!c瘳Uz^c |m ЂsZc9@~9瘓q c9 1s s sbk ?9@~c9@~cNVi`K9-1G9 #Ђ ?1s !?ǜk ?1G9@~ccc\9 1s 1s sJ{@~v #Ђ/mZc@ r  ?18sr0s  ?9@~cC~cNq c#Ђ ?1s 2?1s*=?=%9@~c9@~!c9琞c5@z!c9@z!cc3c\9@~!c9{c9l-1G9?_x 9@~cp9`\9@~9-1s  ? 
?ǜ ?1G9@~ce~c瘓U %#ЂsZc@ rhAc9琟cN5@~c#Ђ ?1s |1s̉1s sZc9@~^9@~9Y=e?;@ rhAy޶-1G9@~cC~999@~c@ r  ?1!?1'Ƹ1shAc9{c9d>ȖsZc@ rhA9-1G9@~cC~999@~c@ r  ?1!?1'Ƹ1shAc9{c9dր^9@~c9@z!cCz9;9@z!c9@z!cCz!cΌq c9@z!cez!c瘳Uzؒc@ rhA9-1G9 1s s1' ?1shAc9>99@~9-1s  ?`/s  ?ǜ۲9 Ǽ o[#Ђ ?1s !?ǜk ?1G9@~ccc\9 1s 1s sJ{@OdK9-1G9 #Ђ ?1s !?ǜk ?1G9@~ccc\9 1s 1s sJk@~v  ?1s C =H1s !=ǜkC =H1s C =H1!=H1gƸH1s C =H1s 2=H1s*=l1G9 #ЂsZc9@~9瘓q c9 1s s sbk ?9@~c9@~cNViuώЂsZc^-@ rhAc9琟cN5@~c#Ђ ?1s |1s̉1s sZc9@~^9@~9Y=̖sZc@ rhA9-1G9@~cC~999@~c@ r  ?1!?1'Ƹ1shAc9{c9dրT ~_-@ϡ-@&ws| ?{ Dz }.{=CO˞eDzga3z~Y =o{>=C/˞e۲gc3z}Z .{^߷=C˞e˲gM~qA ~H1111ev@v@v@v@v ^?]?]?]?Scccc@z@v@v8@z ^?׏ cccc@z@v@v@v2~ ~ ~ ~ ~HȮȮȮé1111A ~ ~ ~VۂA ~Hx1111A ~ ~ ~ ~w]?]?]?]?]?׏d׏d׏d׏v@v@v@v@v ^?]?]?mA ^?cCz ^?@z_?@zLA ~HA ~Htj~HA ~ȯvP?@~3@z ^?@z ^?A ~H ~HAA ~HA ~H= @~_?>Cz ^?@z_?@zLA ~HA ~Htj~HA ~ȯqP?@~3@z ^?@z ^?A ~H ~HAA ~HA ~Ȯ-A ~gȮȮȮȮȮxcccc@z@v@v@vNm׏d׏d׏d׏dccxǯ ~A ~HA ~Hez ^?@z_?@zS@z ^?@~ ^?׏U~ȯgHA ~HA ~H|@z ^?@~ ^?N@z ^?@z ^?4ۆ ~ȯ!~HA ~ȯA ~]@z ^?@z ^?:^?@z ^?@v8mA ^?cCz ^?@z_?@zLA ~HA ~Htj~HA ~ȯ~ȯgHA ~HA ~H|@z ^?@~ ^?N@z ^?@z ^?>Wq ~ȯ!~HA ~ȯA ~]@z ^?@z ^?:^?@z ^?@v8n A ~H1111ev@v@v@v@v ^?]?]?]?Scccc@z@v@zx>^ ~ȯ|A ~H ~Hw^?@z ^?@z ^?z ^?@z_?ϓ~ȯgHA ~HA ~H|@z ^?@~ ^?N@z ^?@z ^?>Wq ~ȯ!~HA ~ȯA ~]@z ^?@z ^?:^?@z ^?@v8;@z ^?׏ cccc@z@v@v@v2~ ~ ~ ~ ~HȮȮȮé1111A ~ ~ ~<@~_?>Cz ^?@z_?@zLA ~HA ~Htj~HA ~ȯ矪ۆ ~ȯ!~HA ~ȯA ~]@z ^?@z ^?:^?@z ^?@zx?^ ~ȯ|A ~H ~Hw^?@z ^?@z ^?z ^?@z_?xǯ ~A ~HA ~Hez ^?@z_?@zS@z ^?@~ ^?d׏omA ^?c^ ~ȯ|A ~H ~Hw^?@z ^?@z ^?z ^?@z_?u _?ϐ^?@z ^?@z ^?.A ~HA ~HZA ~HA ~/ ~A ~HA ~Hez ^?@z_?@zS@z ^?@~ ^?׏m?@~_?>Cz ^?@z_?@zLA ~HA ~Htj~HA ~ȯt@z ^?3d׏d׏d׏d׏dcccc1111A ~ ~ ~ ~ 11~ȯgHA ~HA ~H|@z ^?@~ ^?N@z ^?@z ^?ޖ@~_? @z ^?@~ ^?2~HA ~ȯA ~ЩA ~H ~HDzA ~ȯ!~HA ~ȯA ~]@z ^?@z ^?:^?@z ^?@v|<^A ~H1111ev@v@v@v@v ^?]?]?]?Scccc@z@v@zx^s?@~3@z ^?@z ^?A ~H ~HAA ~HA ~H~ ~ȯ|A ~H ~Hw^?@z ^?@z ^?z ^?@z_?} _?ϐ^?@z ^?@z ^?.A ~HA ~HZA ~HA ~?z ^?cWqԏ#ЂqZP?@ !~ȯ ~@~_?. ~ȯhA_?N@~_?׏#Ђ ~H~ ~ȯ|A ~H ~Hw^?@z ^?@z ^?z ^?@z_?yЂqZP?@ !~ȯ ~@~_?. ~ȯhA_?N@~_?׏#Ђ ~ȯ~ԏ#ЂqZP? 
@~_?~ȯw_?@~_?@  ~ȯpj~ȯ ~@~8\YP?@ hA8-x ~ȯqZP?@~̯ ~ȯG@~_?8_?@~_?@  ~<=, ~A ~HA ~Hez ^?@z_?@zS@z ^?@~ ^?׏e?@ hA8-x ~ȯqZP?@~̯ ~ȯG@~_?8_?@~_?@  ~,ZP?@ hA3@~_?׏#Ђ ~ȯe~_?@~8- ~ ~ȯqZP?pgA8-G~ ~ȯhA_?2~ȯ ~@~_?~_?@~8-~ԏ#ЂqZP? @~_?~ȯw_?@~_?@  ~ȯpj~ȯ ~@zx~\s?@~3@z ^?@z ^?A ~H ~HAA ~HA ~ȯ~ԏ#ЂqZP? @~_?~ȯw_?@~_?@  ~ȯpj~ȯ ~@~x;\YP?@ hA8-x ~ȯqZP?@~̯ ~ȯG@~_?8_?@~_?@  ~|,ZP?@ hA3@~_?׏#Ђ ~ȯe~_?@~8- ~ ~ȯqZP?a_?ϐ^?@z ^?@z ^?.A ~HA ~HZA ~HA ~<-ZP?@ hA3@~_?׏#Ђ ~ȯe~_?@~8- ~ ~ȯqZP?pgA8-G~ ~ȯhA_?2~ȯ ~@~_?~_?@~8-~hA8-Gϐ_?@~_?@  ~ȯx@~_?~ȯ ~ȯhA_?׏m?~ԏ#Ђgȯ ~ȯG@~_? ~ȯqZP?@~S@~_?~ȯ㲟 ~ȯ!~HA ~ȯA ~]@z ^?@z ^?:^?@z ^?@~x9\YP?@ hA8-x ~ȯqZP?@~̯ ~ȯG@~_?8_?@~_?@  ~-ZP?@ hA3@~_?׏#Ђ ~ȯe~_?@~8- ~ ~ȯqZP?cЂqZP?@ !~ȯ ~@~_?. ~ȯhA_?N@~_?׏#Ђ ~Ȯ^nBO: @/&tXޗ=CV}09z~G?g]ʮg?zzXv e!zYv @Ρe᳏@ΡÇ1+COV3z_v=l;>C/Ρz+COW^kΏ@~8s@~8sCz8?;@z8' s@z8'Cz8'όq  s@z8' sez8' Uz  s@z8'هpN =HYcԀ޾ǁp ?9p ?9!=kpN =H9pN =H!=HgƸH9pN =H92=H*H9pN =HCz8' s1j@_S@~8s@~8s5@z8' s@z8' s 3c\s@z8' s{ sl^s@z8' s!=H9p~5=?s@~8s@~8y`\s@z8' s@z8@z8?35@z8' s@z8'ι@z8'V5@z8' s@z8g9pN =HgQ3@~8s@~8s5@z8' s@z8' s 3c\s@z8' s{ sl^s@z8' s!=H9p~-燯8 s@z8' s@z8O5@v8@v8@v8@v8@v8@v8@v8>ddSc\dddddd||t^dddddчp>p>p>p~5ǯMǁp ?9p ?9!=kpN =H9pN =H!=HgƸH9pN =H92=H*H9pN =HCz8' s1j@_q ?=g,{p ?s@z8' s@z8^s@z8?[@z8' s}H9pN =5F k~9p ?9p ?H9pN =H9pN9p~fkpN =H9pN =s/9pN =kpN =H9p>s@z8'c 著$@p<yP?~8OP={0T P='z8O@pT P='z81j@ w "~7@ w "~7@ w @Gݠ @Gݠ J7(qw7(qw7(qw7(qw7(eGݠd J@Gݠd J@Gݠd J@Gݠg*~P?~8@p<y{5@p<yT P='Tz8O@p<yTi/T ]yT P=>T P='z81j@ w "~7@ w "~7@ w @Gݠ @Gݠ J7(qw7(qw7(qw7(qw7(eGݠd J@Gݠd J@Gݠd J@GݠC @p<yP?~8OP={0T P='z8O@ps@z8'ߢ9pN =H9pN =HÇp>p~jkp>p>p>p>p>p>2; ; ;kp>p>p>p>p>O8s@~8s@~89q  s@z8' s>s@z8' s@z8^s@z8?[@z8' s}H9pN =5F k?s@~8Ο9p ?H9pN =H9pN9p~fkpN =H9pN =s/9pN =kpN =H9p>s@z8'sǁp ?9p ?9!=kpN =H9pN =H!=HgƸH9pN =H92=H*H9pN =HCz8' s1j@_c@~8s@~8s5@z8' s@z8' s 3c\s@z8' s{ sl^s@z8' s!=H9p~5q ?9p ?9psHg =H9pN =H9}H19pN =H9pνL9p~J9pN =H9 s@z8?k- f_:@Oގ/1Щ? 
ǯ_;oo//?N_7t_ =|?=oﯿ_#{cB/wO:ۿ^^N`}>>q9N}0>~矿eOǻ' o/y}׺|ީ_I x|nO}C~}y|[C|~O7tׇ||oz~y }Sm Czީ?_>~zo<zCo'|~~ןϡoԿ }/MwBЩ>^~?^~ @tLH~CϿ"7/EN[?~۞?o:7Я7N S/?|{{x&xSO_#& ~=PoS?|;!=ʵG\9!}s͕J}s͕7qٛ+'ocB:&docB:&docB:&docB:|\9!}s=5Fks71!{s71!{s71!{s+^\9!}s͕7WNH\ o7qٛ+'ocB:&docB:&docB:&docB:|\9!}s=5Fks71!{s71!{s71!{s+U\9!}s͕7WNH\9!}suCJ}s71!{s71!{s71!{s7C 1Z또 ٛ또 ٛ또 ٛ+\!d +&obB +&o7WLH\DŽ͕71!}suLH\9!{s7WrBzf:&o +'docB +t\ +&obB +&o<7WNH\9!}s͕7WNH\9!}s͕>orB +'orB +^抗bB +&obB +!}sń͕7WNH\9!}s͕7WNH\9!}s+&ogƨmrB +'orBWUobB +&obBsH\1!s͕7WNH\9!}s͕7WNH\C 1j+'orB +'o 7W +'orB +'o7WNH\DŽuL\DŽuL\DŽuL\DŽu@\O\DŽuL\DŽuL\DŽuLH\\sy=WN\sy=q{cB +'orB +'o!}sń͕7WNH\9!}s͕7WNH\{. {.:!}s}]vuLH\9!}s͕7WNH\9!}s͕>orB +'orB +~ւbB +&obB +!}sń͕7WNH\9!}s͕7WNH\9!}s+&ogƨmrB +'orB }s}~v7WN\1!}s7WN\yٛ+'o +'docB ٛ또rB:|H\1!}s=5FmsuLH\9!{s7WN\DŽ+'oOrB+'oOsH\s7WN\DŽ͕71!}suLH\C 1Z또rB:&o +'oLJ\_se=WN\_se=q˲{cB +'orB +'o!}sń͕7WNH\9!}s͕7WNH\/7e\9!sń7WL\1!sń͕琾bB +'orB +'o҇73c6WNH\9!}s͕7WNH\9!}s}s\vϕ7WL\1!sń7WL\y+&orB +'orB +}H\1!s=3Fms͕7WNH\9!}s͕7חs}s}y\vϕ7WNH\9!}s͕7WNH\9do ٛ또 ٛ또 ٛ또 ٛ!{suL\DŽuL\DŽuL\DŽi=חe\9!s}^vϕ7e\9!s}^vuC또rB +'orB +}H\1!s=3Fms͕7WNH\9!}s͕7W m=WN\1!sń7WL\1!s9orB +'orB +'o!}sń͕7WNH\9!}s͕7WNH\qC\?s7WL\1!sń7WC +'orB +'orBJ7WL\όQ\9!}s͕7WNH\9!}s\\_sń͕7WLH\9!sń͕7WC +'docB ٛ또rB:&o7WLH\OQ\9!{s7WN\DŽ͕71!{s}}\vi=WN\si=WN\s琾>-:&o +'docB ٛ또͕҇73c61!}suLH\9!{s7WNH\_/7e\9!sń7WL\1!sń͕琾bB +'orB +'o҇73c6WNH\9!}s͕7WNH\9!}s};{bB +&obBsH\1!s͕7WNH\9!}s͕7WNH\C 1j+'orB +'o~{bB +&obBsH\1!s͕7WNH\9!}s͕7WNH\C 1j+'orB +'o=,KWv} +'orB +'o7WNH\DŽuL\DŽuL\DŽuL\DŽurBzj:&docB:&docB:&docB, +&obB +&o<7WNH\9!}s͕7WNH\9!}s͕>orB +'orB +u7T_P}/K@5(>C#%?:61-}(#l\˴ӄo4zNGP= }(M94zjcP; 6NN'qBmɏ>iBnc?qB5(N{YP; 8vk}s͕7WZ\k}s͕7WZ\9do ٛ또 ٛ또 ٛ또 ٛ!{smohmcB:&docB:&docB:&doz}s+'orB +͕:!{suL\DŽuL\DŽuL\DŽuL\ٛ+'ohmcB:&docB:&docB:&doTV}s+'orB +'os\ ocB:&docB:&docB:&docB:|\9!}s=5Fks71!{s71!{s71!{s+L\9!sń͕7WLH\9!sńuC 또rB:&o +'docBJ7WN\ό\DŽ͕71!}suLH\9!}sn+t\1!sń7WL\1!sń͕琾bB +'orB +'o҇73c6WNH\9!}s͕7WNH\9!}s\27WL\1!sń7WL\1!s9orB +'orB 
+'o!}sń͕7WNH\9!}s͕7WNH\7W*7WL\1!sń7WL\y+&orB +'orB +}H\1!s=3Fms͕7WNH\9!}s͕7WʃP\9!}s͕7WNH\9!}suC 또 ٛ또 ٛ또 ٛ또7W1Z또 ٛ또 ٛ또 +++&obB +&o7WL\9!}s͕7WNH\9!}s͕7WbBzf +'orB +'oxu=WN\_su=WN\_s琾.:&orB +'orBJ7WL\όQ\9!}s͕7WNH\9!}sEqrkA\1!sń7WL\1!sń͕琾bB +'orB +'o҇73c6WNH\9!}s͕7WNH\9!}s>?l +'orB +'o<͕7WN\DŽ͕71!}suLH\9!{s>orB:&o +'docBގB}s}Zvϕ7WLH\9!sń͕7WLH\9o +'docB ٛ또rB:&o!{suLH\9!{s7WN\DŽ͕7Co/rB+'o/rB8e=1!}s͕7WNH\9!}s͕7WbBzf +'orB +'oo{bB +&obBsH\1!s͕7WNH\9!}s͕7WNH\C 1j+'orB +'o~^~. +&obB +&o<7WNH\9!}s͕7WNH\9!}s͕>orB +'orB ٛ񏹾<. +'orB +'o7WNH\DŽuL\DŽuL\DŽuL\DŽurBzj:&docB:&docB:&docB{>/ {>/:!}s}^vuLH\9!}s͕7WNH\9!}s͕>orB +'orB ++'obB +&o7WL\9!}s͕7WNH\9!}s͕7WbBzf +'orB +'oסorB+'orB8c=1!}s͕7WNH\9!}s͕7WbBzf +'orB +'o?+&obB +&o7WN\9!{s7WN\DŽ͕71!}subBzj ٛ또rB:&o ٛ{OrB+'oOrB8i=1!}suLH\9!{s7WN\DŽ͕>do +'docB ٛ또rB. +&obB +&o<7WNH\9!}s͕7WNH\9!}s͕>orB +'orB \ߗs}=WN\ߗs}=q{cB +'orB +'o!}sń͕7WNH\9!}s͕7WNH\?s=WN\1!sń7WL\1!s9orB +'orB +'o!}sń͕7WNH\9!}s͕7WN\s+>s͕7WNH\9!}s͕7qٛ+'ocB:&docB:&docB:&docB:|\9!}s=5Fks71!{s71!{s71!}s}ƿP\_s7WL\1!sń7WC +'orB +'orBJ7WL\όQ\9!}s͕7WNH\9!}s:ڛk/ %ڒN Ƙ>o{2V˕LY8MOg4~ ճpP݇҄Y8CmAKg81 pP[aY8NqB,'ԖC,&T:6/,'^Y8N pP{QjgaNkr7WNH\9!}s=-:>7WNH\9!}s琽rB:&docB:&docB:&docB:&doÇ͕7Sc61!{s71!{s71!{s7WZ\k}s͕7WZ\k}s͕7qٛ+ocB:&docB:&docB:&docB:|\ǧCzj:&docB:&docB:&docBJe7W*rB +'orB 8͕:&docB:&docB:&docB:&doÇ͕7Sc61!{s71!{s71!{s7W B͕7WLH\9!sń͕7WLH\9o +'docB ٛ또rB:&o!{suLH\9!{s7WN\DŽ͕7W膿B77WL\1!sń7WL\y+&orB +'orB +}H\1!s=3Fms͕7WNH\9!}s͕7WL/Ssń7WL\1!sń7WC +'orB +'orBJ7WL\όQ\9!}s͕7WNH\9!}s sū\1!sń7WL\1!sń͕琾bB +'orB +'o҇73c6WNH\9!}s͕7WNH\9!{sAo<͕7WNH\9!}s͕7WNH\9do ٛ또 ٛ또 ٛ또 ٛ!{s% ٛ또 ٛ또 ٛ또⏹>/ {>/ {7e\DŽ͕7WNH\9!}s͕7WNH\C 1j+'orB +'o!7W< +&obB +&o<7WNH\9!}s͕7WNH\9!}s͕>orB +'orB +~ւbB +&obB +!}sń͕7WNH\9!}s͕7WNH\9!}s+&ogƨmrB +'orB }s}~v7WN\1!}s7WN\yٛ+'o +'docB ٛ또rB:|H\1!}s=5FmsuLH\9!{s7WN\DŽ+'oOrB+'oOsH\s7WN\DŽ͕71!}suLH\C 1Z또rB:&o +'oLJ\_s7WL\1!sń7WC +'orB +'orBJ7WL\όQ\9!}s͕7WNH\9!}s B}s}_vϕ7WL\1!sń7WL\y+&orB +'orB +}H\1!s=3Fms͕7WNH\9!}s͕7׏m\?s7WL\1!sń7WC +'orB 
+'orBJ7WL\όQ\9!}s͕7WNH\9!}s\\_s͕7WNH\9!}s͕7qٛ+'ocB:&docB:&docB:&docB:|\9!}s=5Fks71!{s71!{s71!}s}:>+'obB +&o7WL\9!}s͕7WNH\9!}s͕7WbBzf +'orB +'o۲{bB +&obBsH\1!s͕7WNH\9!}s͕7WNH\C 1j+'orB +'o/+'orB+'osH\?s7WNH\9!}s͕7WNH\9!}s+&ogƨmrB +'orBycobB +&obB +!{s͕71!}suLH\9!{s7WN\+&oƨm +'docB ٛ또>bs}Zvϕ7WLH\9!sń͕7WLH\9o +'docB ٛ또rB:&o!{suLH\9!{s7WN\DŽ͕7 u=WN\1!sń7WL\1!s9orB +'orB +'o!}sń͕7WNH\9!}s͕7WNH\ߎ+'orB+'osH\ߗs7WNH\9!}s͕7WNH\9!}s+&ogƨmrB +'orBqcorB +&obB +!}sń͕7WNH\9!}s͕7WNH\9!}s+&ogƨmrB +'orBp|̕7WNH\9!}s͕7WNH\9do ٛ또 ٛ또 ٛ또 ٛ!{suL\DŽuL\DŽuL\DŽB}s}Yvϕ7WL\1!sń7WL\y+&orB +'orB +}H\1!s=3Fms͕7WNH\9!}s͕7Wk/ %ڒN Ƙ>o{V˕'1_,&Tۧ ճpP]?҄Y8MCiB,ϡ ճpPY8NqB,'v8vjK~vNowQjAqB,'Y8N( 0'ĵG\+'o@\ orB:!{suL\DŽuL\DŽuL\DŽuL\ٛ+'ohmcB:&docB:&docB:&doz}s+'oQks_#7W+uC 또 ٛ또 ٛ또 ٛ또7WNH\O\DŽuL\DŽuL\DŽuL\k}s͕7WZ\k}s͕7WZ\9do:&do:&do:&do:&doÇu|\O\DŽuL\(;r%Y,nFsRΪx8Rњ2C׾t@( .!urB\;r@˵B\#˵B\#˵B\#u\xFZ!]Ckt!^:xvZ!_q@[:xVHZ!]Ckxfq×k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rԗkn\#5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxVkn\s!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\{! !/ r/ r/ r/qZ!^CH:t!]CH:tyHku91nhVH:t!]CH\߇Z!_s/aVȗ\u\/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\Ms{9= rk|~{u\x~{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+5|?/r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^r}:=r/r/r/^tVȗkt!^:xVHZ!]c!^ˉq@{Z!]Ckt!^:t3] /aVȗ\+u{:=q:=!˵B\/ rB\+ur<˵B\WƁur.!˵B\/ r}_}~k|~k|~k|~r:=!˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/?\{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx^{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkt~3\+˵B\+˵B\+˵B\uHkx!]CH:t!]CH:!]府7UCH:t!]CH:=ׯaVȗ\+}{>=q>=!˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/O /?Z!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\^/ןs/ןs/ןs/ןs!^?:xVkxVkxVkxv!_q@{Z!^Z!^Z!^ϜiFk|Fk|Fk|:˵B\+ur.!˵B\/ r/r]N+ rB\+ur.!\?s/aVȗ\+u{:u{Ckt!^:xVHyHk|&ƍmZ!]Ckt!^/s{!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+\{ϵB\{ϵB\{ϵB\{u\x~Z!^Z!^Z!^yk|&ƍkxVkxVkxV/׿Z!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\:=ƕ~Z!^Z!^Z!^:C\+urB\.!urB\.1r/ĸqrB\.!urB\/w /ׯs/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^tP;]vw3{=C큞o{{'ƻ=>y@Ozn}c\z3}o,|'c =XF·ס =XVwBO?zӏo,|'c[ CO?z;1n?z,|+c; =XN·Bӏ+Z!^rxVkx. 
rB\.!urB\.!uC\+u91rB\.!urB\.\;r/ r/ rx\uHkx!]CH:t!]CH:!]府U!]CH:t!]CHkGV^YyVkx.\*. r/qrB\.!urB\.!uC\+u91nh\.!urB\.!uk>rg_˵}3/~ϼ\g^u|Ϻ\g^u|Ϻ\g^t7x&ƍmZ!]Ckt!^q×k _!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+57S_r/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^Urͭ—k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r텐k/\+˵B\+˵B\+˵B\uHkx!]CH:t!]CH:!]=/ĸqZ!]CH:t!]Ckr}k|{ϵB\߇Z!_s!^sB\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r7/aVȗ\+s{9=q9=!˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/'D\#5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxVk\#˵B\#˵B\#˵B\{Z!_:xVHZ!]Cktykx.'ƍkt!^:xVHv\_Z!_s/aVȗ\u\/ rB\+ur.!˵. r]M*!˵B\/ rB\+}&u{u{u{u{:\/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\ r>= r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^:x== r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^Ϝs/ r/ r/ r!]:t!]CH:t!]CH똇tVrb8V!]CH:t!]C\އZ!_s/aVȗ\u\/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\?\k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r{\{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx3u{Z!_Z!_Z!_r/ rB\+ur.!˵B\<5B\ƁʵB\/ rB\+ur>=\+u{:= r}r}Z!]Ckt!^:xvZ!_q@[:xVHZ!]Ckx~ rk|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rs\+5B\,>r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^sgaVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r.?{q嬟VkxVkxVkx. rB\.!urB\.!uC\+u91nh\.!urB\.!/\+5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxV'u<^7z>o| z:wg=>@hwz|;݁ۧk3>XFo?n,|#C?z:݉q@c;3NwBwٝ㏅; G˵˵B\+˵r/ r!]:t!]CH:t!]CH똇tVrbT:t!]CH:t!]yvZ!^Z!^rx. rB\.!urB\.!uC\+u91rB\.!urB\.׎\;r/ r/ r/ r!]=/!urB\.!urB\<˵B\ƁurB\.!urB\;r@˵B\#uYWr/r!^:xVHZ!]Ckt!^tVȗjb8V!^:xVHZ!^+1V5}/|Ͼ\ٗk>rg^˵y3/~ϼ\k?rg^xwr]M+ r/ r/ r/L}f5B\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkxV5 _!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+˵B^r/ r/ r/ r!]:t!]CH:t!]CH똇t\Ɓkt!]CH:t!^g5}Fȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/|rk|~{ϵB\?Z!_s!^sB\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r"_?XFȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/iFk|Fk|Fk|:˵B\+ur.!˵B\/ r/r]N+ rB\+ur.!y{ϵB\#˵B\#˵B\#u\xFZ!]Ckt!^:xvZ!_q@[:xVHZ!]Ckxľ\{ϵB\{ϵB\{ϵB\{u\x~Z!^Z!^Z!^yk|&ƍkxVkxVkxV|A^߇Z!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\^/׿Z!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\~]?r}{Z!^Z!^Z!^:˵B\.!urB\.!ur. 
r]N*!urB\.!ur}\+}{>= r}r}Z!^Z!^Z!^yk|&ƍkxVkxVkxV'_s/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^yaVȗaVȗaVȗa﹎/ןsB\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r{\#˵B\#˵B\#˵B\{Z!_:xVHZ!]Cktykx.'ƍkt!^:xVH}{aVȗ\+u{:=q:=!˵B\/ rB\+ur<˵B\WƁur.!˵B\/ r9= r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^z_߇Z!_߇Z!_߇Z!_߇:C\{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+ٗs/rYc!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+ϯsg\9gZ!^Z!^Z!^:˵B\.!urB\.!ur. r]N*!urB\.!ur} r:= r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^{́ś=e7z>=>@wz|b9݁qw)>u}ֽ>XFo?n,|#C?z:݉q@c;3NwBwٝ㏅; G˵˵B\+˵r/ r!]:t!]CH:t!]CH똇tVrbT:t!]CH:t!]yvZ!^Z!^rx. rB\.!urB\.!uC\+u91rB\.!urB\.׎\;r/ r/ r/ r!]=/!urB\.!urB\<˵B\ƁurB\.!urB\;r@˵B\#˵B\#˵B\#u\xFZ!]Ckt!^:xvZ!_q@[:xVHZ!]Ckxfq×k|Fȗk#/Q^!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+u\ٗk>rg_5}/|ϼ\k?rg^˵y3/~ϼ\;roA庚7WZ!^Z!^Z!^Urͭ—k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r텐k/\+˵B\+˵B\+˵B\uHkx!]CH:t!]CH:!]=/ĸqZ!]CH:t!]Ckr}k|{ϵB\߇Z!_s!^sB\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r7/|r/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^Og+r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^r}:=r/r/r/^tVȗkt!^:xVHZ!]c!^ˉq@{Z!]Ckt!^:t3] /aVȗ\+u{:=q:=!˵B\/ rB\+ur<˵B\WƁur.!˵B\/ r}_?r:= r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^'_aVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/ןs{{!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+9/ׯ߇Z!^Z!^Z!^:C\+urB\.!urB\.1r/ĸqrB\.!urB\//aVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/O /?Z!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\{\+\+\+\ua!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx3u{Z!_Z!_Z!_r/ rB\+ur.!˵B\<5B\ƁʵB\/ rB\+ur~:= r/ r/ r/q!^Ckt!^:xVHyHk|&ƍmZ!]Ckt!^/s{!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+\{ϵB\{ϵB\{ϵB\{u\x~Z!^Z!^Z!^yk|&ƍkxVkxVkxV/׿Z!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\~ϸr\+˵B\+˵B\+˵B\uHkx!]CH:t!]CH:!]府7UCH:t!]CHu{!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+s9xs@gqwLJ=>S@O7z~@;݁?>է\7 =zǍon,|wz|@z㏅a,|'L}'c;LJy7BWǿ;1n?{,|'x =XN.zx'c5{x6{xVkx\{^Z!^:˵B\.!urB\.!ur. 
r]N\.!urB\.!˵C=/\+˵B\+˵B\{^=/qZ!^CH:t!]CH:tyHkx.'FUCH:t!]CHڑkGV^Z!^Z!^Z!^:˵:t!]CH:t!]CH똇tVrb8V!]CH:t!]CHk2_xVȗkxVȗkxVȗkx/rB\+ur.!˵B\/C\+u51nh\/ rB\+ur/׌\3nr/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^rԗk|Fȗk|er/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^Rrg_5}/|Ͼ\ٗk?rg^˵y3/~ϼ\k?r<˵jb8^VkxVkxVktB˵B^Z!^Z!^Z!^:˵B\.!urB\.!ur.rb8s.!urB\.!5}{ϵB\߇Z!_s/a﹎/a!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkxؗ\+s{9= rrZ!^Z!^Z!^yk|&ƍkxVkxVkxVkr"_!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+5u{Z!_Z!_Z!_r/ rB\+ur.!˵B\<5B\ƁʵB\/ rB\+urgz~A^s/aVȗ\+u{:u{Ckt!^:xVHyHk|&ƍmZ!]Ckt!^~r:= r:= r:= r:=qu{CkxVkxVkxVk!^庚7WZ!^Z!^Z!^y~k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r{\k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rugaVkxVkxVkx. rB\.!urB\.!uC\+u91nh\.!urB\.!u{_s/aVȗ\+}{:}{CkxVkxVkxVk!^庚7WZ!^Z!^Z!^|A^{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx~^?Z!_?Z!_?Z!_?:C\{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+9/ϯs/ r/ r/ ruHk|VHZ!]Ckt!^:!^府7W:xVHZ!]CHa~^Z!_s/aVȗ\u\/ rB\+ur.!˵. r]M*!˵B\/ rB\++_\+5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxV|~k|~k|~k|~r>=!˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/ן?s_{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVktu{+g̵B\+˵B\+˵B\+u\tV:t!]CH:t!]cZ!^ˉq@[:t!]CH:t!^|A^_Z!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\?yr9xs@gqwLJ=>S@O7z~@;݁?ݾUѧ\Mۛ㏅ =XNNwBCwB?== =XVV·BOBO?zzzC= r}r}Z!^Z!^Z!^yk|&ƍkxVkxVkxV'_s/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^yaVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/׿\?N{5B\+5B\+5B\+˵!]Z!]Ckt!^:xVH똇xFrb8^VHZ!]Ckt!]߇yk|{ϵB\_Z!_s!^sB\+ur.!˵B\/C\+u51nh\/ rB\+ur/ׯ|A^s/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^}~k|FȗgV!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\}== r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^a33 r/ r/ r/qZ!^CH:t!]CH:tyHkx.'ƍm:t!]CH:xy~k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r低@χ́ﲛ=AwCW݁L}wGػ=>1@Cwz|;w}5#>XFo?n,|#C?z:݉q@c;3NwBwٝ㏅; G˵˵B\+˵r/ r!]:t!]CH:t!]CH똇tVrbT:t!]CH:t!]yvZ!^Z!^rx. 
rB\.!urB\.!uC\+u91rB\.!urB\.׎\;r/ r/ r/ r!]=/!urB\.!urB\<˵B\ƁurB\.!urB\;r@˵B\#˵B\#˵B\#u\xFZ!]Ckt!^:xvZ!_q@[:xVHZ!]Ckxb\3nr/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^rԗk|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rͭ—kn\#5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxVHk/\{!Z!^Z!^Z!^:C\+urB\.!urB\.1rx.'ƍ= rB\.!urB\\k>rg_5}/|/>sZ!^Z!^Z!^Z!^˵y&ƍk?rg^˵y3/~ob_s/aVȗ\+s{:s{CkxVkxVkxVk!^庚7WZ!^Z!^Z!^Og+r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^r}:=r/r/r/^tVȗkt!^:xVHZ!]c!^ˉq@{Z!]Ckt!^:t3] /aVȗkxVȗkxVȗkx/rB\+ur.!˵B\/C\+u51nh\/ rB\+ur/ؗaVȗaVȗaVȗa﹎/ׯsB\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r/\+5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxV\k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ ruaVkxVkxVkx. rB\.!urB\.!uC\+u91nh\.!urB\.!u{_s/aVȗ\+}{:}{CkxVkxVkxVk!^庚7WZ!^Z!^Z!^|A^{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx~_\+\+\+\ua!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx~u{Z!_Z!_Z!_r/ rB\+ur.!˵B\<5B\ƁʵB\/ rB\+ur>=\+u{:= r}r}Z!]Ckt!^:xvZ!_q@[:xVHZ!]Ckx~ rk|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rs=/s/s/s/s!^߇:xVkxVkxVkxv!_q@{Z!^Z!^Z!^saVȗk|,ձr/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^a33 r/ r/ r/qZ!^CH:t!]CH:tyHkx.'ƍm:t!]CH:xy~k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r\|P9]vs3{;C݁{w'ƛ=?y@wznO]?Io?zn#cǏ =?=X:v#c;'; => =XNNwBw㏅owb8XN z]v'c;CNjrmr/ rx\+˵B\uHkx!]CH:t!]CH:!]府U!]CH:t!]CHkz^yVkxVkx\{^:˵B\.!urB\.!ur. r]N\.!urB\.!˵#+/׎\+˵B\+˵B\+˵B\uHkurB\.!urB\.1r/ĸqrB\.!urB\.d\3r/r/r/r!^:xVHZ!]Ckt!^tVȗjb8V!^:xVHZ!^7|f!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\s3嚛/r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^[/*|Fȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r.^yB˵B\+˵B\+˵B\+u\tV:t!]CH:t!]cr]N{:t!]CH:x柹/aVȗ\+}{>=q>=!˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/bϼ\ٗk>rg_5}/^xv[kxVkxVkxVkx3/~庚7W˵y3/~ϼ\krY5B\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx\߿N{5B\+5B\+5B\+˵!]Z!]Ckt!^:xVH똇xFrb8^VHZ!]Ckt!]Lo/u{:= r}k|{u\x{ur.!˵B\/ rB\;r/ĸqrB\+ur.!˵B\Ͼ\{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx}{!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+\k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rugaVkxVkxVkx. 
rB\.!urB\.!uC\+u91nh\.!urB\.!u}{!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+/aVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/s9= r9= r9= r9=qs{CkxVkxVkxVk!^庚7WZ!^Z!^Z!^y~~kxVȗkxVȗkxVȗkC\+˵B\/ rB\+ur.1r/ĸqr.!˵B\/ rB\?}{ϵB\#˵B\#˵B\#u\xFZ!]Ckt!^:xvZ!_q@[:xVHZ!]Ckx~ rk|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rs=/s/s/s/s!^߇:xVkxVkxVkxv!_q@{Z!^Z!^Z!^sgaVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r.?y33 r/ r/ r/qZ!^CH:t!]CH:tyHkx.'ƍm:t!]CH:xy~k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r\o|(.9t=xu!@wz|;́=XNNwBwB?z|zC= r}k|{u\x{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+5ľ\?Z!_s/aVȗ\u\/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\Wk?rg_5}/|Ͼ\{= r/ r/ r/qZ!^CH:t!]CH:tyHkx.'ƍm:t!]CH:x{}{>= r}k|{u\x{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+/aVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/\+\+\+\ua!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx3u{Z!_Z!_Z!_r/ rB\+ur.!˵B\<5B\ƁʵB\/ rB\+ur>=\+u{:= r}r}Z!]Ckt!^:xvZ!_q@[:xVHZ!]Ckx~ rk|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rs=/s/s/s/s!^߇:xVkxVkxVkxv!_q@{Z!^Z!^Z!^sgaVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r.?{q嬟VkxVkxVkx. rB\.!urB\.!uC\+u91nh\.!urB\.!/\+5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxV'u<^7z>o| z:wg=>@hwz|;݁nާ\߿ߴ>XFo?,|#rg_˵}:u^:xVHZ!]Cktyk?r]N+~:x.!˵u!]Lo/u{:= r}k|{u\x{ur.!˵B\/ rB\;r/ĸqrB\+ur.!˵B\ob__Z!__Z!__Z!__:C\{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+O /s/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^?ys/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!]_y~>= r/ r/ r/qZ!^CH:t!]CH:tyHkx.'ƍm:t!]CH:x{}{>= r}k|{u\x{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+/aVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/\+5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxV?s^_!^!^!^. r.!˵B\/ rB\+uC\#u91nh\+ur.!˵B\.s{ϵB\_Z!_s/a﹎/a!^:xVHZ!]Ck!]庚7UCkt!^:xVW /aVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/?Ͼ\{ϵB\#3+r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^?\k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r\WkxVkxVkxVr/!urB\.!urB\<˵B\ƁurB\.!urB\\{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx~^zwC@w݁Ϡ@Ozz=#큞v{==~ܧ?~]Gz3}o,|'c =XF·ס =XVwBO?zӏo,|'c[ CO?z;1n?z,|+c; =XN·Bӏ+Z!^rxVkx. 
rB\.!urB\.!uC\+u91rB\.!urB\.\;r/ r/ rx\uHkx!]CH:t!]CH:!]府U!]CH:t!]CHkGV^YyVkxVkxVkx.:t!]CH:t!]cZ!^ˉq@[:t!]CH:t!]|f Z!_Z!_Z!_:C\#ur.!˵B\/ rB\;r/ĸqrB\+ur.!˵B\ 1V7|Fȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/L}f5B\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkxV5 _!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+˵B^r/ r/ r/ r!]:t!]CH:t!]CH똇t\Ɓkt!]CH:t!^g5}Fȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/|rk|~{ϵB\?Z!_s!^sB\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r"_?XFȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/iFk|Fk|Fk|:˵B\+ur.!˵B\/ r/r]N+ rB\+ur.!upYuϼ\k?rg^5y/n=]Ckt!^:xVHyH똱y&ƍmZ!]Ckt!^~r:= r:= r:= r:=qu{CkxVkxVkxVk!^庚7WZ!^Z!^Z!^y~k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r{\k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ ruaVkxVkxVkx. rB\.!urB\.!uC\+u91nh\.!urB\.!u{_s/aVȗ\+}{:}{CkxVkxVkxVk!^庚7WZ!^Z!^Z!^|A^{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx~^?Z!_?Z!_?Z!_?:C\{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+3/ϯs/ r/ r/ ruHk|VHZ!]Ckt!^:!^府7W:xVHZ!]CHa~^Z!_s/aVȗ\u\/ rB\+ur.!˵. r]M*!˵B\/ rB\++_\+5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxV|~k|~k|~k|~r>=!˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/ןg_{ϵB\#g񯎕k|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r.?{q嬟VkxVkxVkx. rB\.!urB\.!uC\+u91nh\.!urB\.!/\+5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxV'u<^7z>o| z:wg=>@hwz|;݁ۿrg_˵!^\+˵B\+˵B\+˵B\+˵/tjb8^VkxVkxVkx}{!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+\k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rugaVkxVkxVkx. rB\.!urB\.!uC\+u91nh\.!urB\.!u}{!_!_!_r/ r/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+/aVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/s9= r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^ϜiFk|Fk|Fk|:˵B\+ur.!˵B\/ r/r]N+ rB\+ur.!ٗ\+5B\+5B\+5B\ukx!^:xVHZ!]Ck!]庚7UCkt!^:xVW /aVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/?r>= r/r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^sgaVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r.?y33 r/ r/ r/qZ!^CH:t!]CH:tyHkx.'ƍm:t!]CH:xy~k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r\o|(.9t=xu!@wz|;́=XNNwBwB?z|zC= r}k|{u\x{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+5ľ\?Z!_s/aVȗ\u\/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\ȗkr/r/r/ruk|VkxVkxVkxVk!^庚7WZ!^Z!^Z!^9/s/ r/ r/ ruHk|VHZ!]Ckt!^:!^府7W:xVHZ!]CH? 
r}k|{ϵB\_Z!_s!^sB\+ur.!˵B\/C\+u51nh\/ rB\+ur/ؗaVȗaVȗaVȗa﹎/ׯsB\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rj^)V5}/|Ͼ\{ڿg˵B\+˵B\+˵B\+˵B\;r^#/ĸqr/ r/ r/ r{\k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rugaVkxVkxVkx. rB\.!urB\.!uC\+u91nh\.!urB\.!u{_s/aVȗ\+}{:}{CkxVkxVkxVk!^庚7WZ!^Z!^Z!^|A^{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx~^?Z!_?Z!_?Z!_?:C\{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+9/ϯs/ r/ r/ ruHk|VHZ!]Ckt!^:!^府7W:xVHZ!]CHa~^Z!_s/aVȗ\u\/ rB\+ur.!˵. r]M*!˵B\/ rB\++_\+5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxV|~k|~k|~k|~r>=!˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/ן?s_{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVktu{+g̵B\+˵B\+˵B\+u\tV:t!]CH:t!]cZ!^ˉq@[:t!]CH:t!^|A^_Z!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\?yr9xs@gqwLJ=>S@O7z~@;݁?'Soߴ>XFo?n,|#C?z:݉q@c;3NwBwٝ㏅; G˵˵B\+˵r/ r!]:t!]CH:t!]CH똇tVrbT:t!]CH:t!]yvZ!^Z!^rx. rB\.!urB\.!uC\+u91rB\.!urB\.׎\;r/ r/ r/ r!]=/!urB\.!urB\<˵B\ƁurB\.!urB\;r@˵B\#˵B\#˵B\#u\xFZ!]Ckt!^:xvZ!_q@[:xVHZ!]Ckxfq×k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rԗkn\#5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxVkn\s!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\{! !/ r/ r/ r/qZ!^CH:t!]CH:tyHku91nhVH:t!]CH\߇Z!_s/aVȗ\u\/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\Ms{9= rk|~{u\x~{ur/ r/ r/ r<5B\WƁʵB\+˵B\+˵B\+5|,Z`!_!_!_/r/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\s^_!^!^!^. r.!˵B\/ rB\+uC\#u91nh\+ur.!˵B\.:= r}k|{ϵB\_:C\_:xVHZ!]Ckt!^tVȗjb8V!^:xVHZ!^7/ׯs/ׯs/ׯs/ׯs!^_:xVkxVkxVkxv!_q@{Z!^Z!^Z!^'_aVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r/\^}3i+|Ͼ\ٗk>rg_r_Z!^Z!^Z!^Z!^x<^q@{Z!^Z!^Z!^Ϝs/ r/ r/ r!]:t!]CH:t!]CH똇tVrb8V!]CH:t!]C\އZ!_s/aVȗ\u\/ r/ r/ r/C\#u51nh\+˵B\+˵B\+˵B\?\k|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ r{\{ϵB\#5B\#5B\#˵!^Z!^Z!^Z!^Z!^xFȗjb8^VkxVkxVkx3u{Z!_Z!_Z!_r/ rB\+ur.!˵B\<5B\ƁʵB\/ rB\+ur>=\+u{:= r}r}Z!]Ckt!^:xvZ!_q@[:xVHZ!]Ckx~ rk|Fȗk|Fȗk|FȗkC\#˵B\+˵B\+˵B\+˵B\;r/ĸqr/ r/ r/ rs\+5B\,>r/r/^xFȗkxVkxVkxVkxv!_q@{Z!^Z!^Z!^sgaVȗk|Fȗk|Fȗk|:5B\+˵B\+˵B\+˵B\+˵/r]M+ r/ r/ r.?{q嬟VkxVkxVkx. 
rB\.!urB\.!uC\+u91nh\.!urB\.!/\+5B\#5B\#5B\{!_Z!^Z!^Z!^yk|&ƍkxVkxVkxV'u<^7z>o| z:wg=>@hwz|;݁}@\@\o큞큞큞͵?7\{ ~su|88ko@:o@]~s͵7\88@:o@:o]7\{ ~su|88ko@:o@ s~s͵7\88@:o@:o57\s s@ko=@ko=Vś57\=o=@ko9@koko9{\{ ~s͵7\s s͵7H\t͵7\{ ~sC:o@:o@8857\s s@ko=@ko=wxko9@!~s͵7\{ ~ś͵7\{ ~sͯ7\s ś͵Cko=@ko=@暿o9@k͵7\{ ~s57\{ ~s ko=@:uH\ǁuH\ǁ͵7q }s7q ~s}gko9{\{ ~s͵7\s s͵7\ ko9@!~s͵7\{ ~ś͵7\{ ~s7\s ś͵Cko=@ko=@:~˅.[=|7~7=o@:o@:oc[F\ǁuH\ǁu\{ ś57~ko=@ko=@'_\s ś57~ko=@ko=@a?s57\=o=@ko9@ko57\s s@ko=@ko=~~3\{ ~su|88ko@:o@/o9@k͵7\{ ~s57\{ ~sO~o9@k͵7\{ ~s57\{ ~s'?7\ 57~ko=@ko=@_̵7\{ ~sC:o@:o@8857\s s@ko=@ko=~ko9@!~s͵7\{ ~ś͵7\{o2z߿&G@߻@7mo=??sR~)Bs?߇}uBpP.?"M d^}xL^GBYO {uPz/::B_]=^;_ v{uPzzv/9MîC )}u7v#>.Ol ~OПîC_v)suo|N׎m ~w̦l }v/Ź)U=?7C?sH3"A"jb " * * V}N " * * * * * * * 2H!HP!HP!HP!HGx~ "A"A"A"A"A"A"G X!HP!HP!HP!HP!HP!HP!Hy/G|`51# * * * * * e| B~B|B|B|B|B|B|B|χ"A"A"A"1B|B|B|B|ϩ"A"A"A"A"A"A"A"A"A^ " * * * /|_$_$_$_$_$_$_$X=(W! * _$^$B|Bz`ECzB~`51# * _$^$B|Bz`t_$_$B|Bz`E E! * _$!HP!HP!H0"1B|Bz`E E/D/ !HP!H0"A"ҋC/ezB~Bz`E E<~y * " * " * " * * _$^$B|Bz`E E1E8J/,'c"ҋC/TH/ !H.ӋҋC/TH/ !HP!H0"A"x> " _$^$X>c!HP!HP!HP!HP!HP!H.}> " * * V}N " * * * * * * * 2H!HP!HP!HP!H/ddtf E E E E E E EC|"jbV#l[!>V#l!>Fȏ }]GOZ)a#G6B~a#Gkq!d B|a+G B|a;6B~]MLⰛaGB~a#G6B~a, vaa+G B|a+G a#GȄ^a?~a+GB~a+GvBzB|aa+G!GCzaW׻;8u8f#U:` BW!>V6BzB~aa#G!GvC~ v91:2^tx=@C0~@nke[¨7/f;FEco&CH!#ҿoh2oJE _8_4B1!MCHE!cd.=+4+ҿsh2/>x(74P Po:Fbx(Vbx(7P P P P Po!X!⡘jCbP P\.cx(Vbx(Vb#P P P P^XX!X!1>y( P Po#CqP!C1B:_x8|(FHx5b~?g7xs ?@~6cC=~ȁ>7ׯ?9]kW~U?q:]{W~U? 
pysph-master/pysph/examples/sphysics/__init__.py

pysph-master/pysph/examples/sphysics/beach_geometry.py

import numpy as np

from pysph.tools.geometry import get_2d_wall


def get_beach_geometry_2d(dx=0.1, l=3.0, h=1.0, flat_l=1.0, angle=45.0,
                          num_layers=3):
    """
    Generates a beach like geometry which is commonly used for simulations
    related to SPHysics.
    Parameters
    ----------
    dx : Spacing between the particles
    l : Total length of the beach
    h : Height of the wall used at the beach position
    flat_l : Length of the flat part
    angle : Angle of the inclined part
    num_layers : number of layers

    Returns
    -------
    x : 1d numpy array with x coordinates of the beach
    y : 1d numpy array with y coordinates of the beach
    x4 : 1d numpy array with x coordinates of the obstacle
    y4 : 1d numpy array with y coordinates of the obstacle
    """
    theta = np.pi * angle / 180.0
    x1, y1 = get_2d_wall(dx, np.array([(flat_l + dx) / 2.0, 0.]), flat_l,
                         num_layers, False)
    x2 = np.arange(flat_l - l, 0.0, dx * np.cos(theta))
    h2 = (l - flat_l) * np.tan(theta)
    y2_layer = x2 * np.tan(-theta)
    x2 = np.tile(x2, num_layers)
    y2 = []
    for i in range(num_layers):
        y2.append(y2_layer - i * dx)
    y2 = np.ravel(np.array(y2))
    y3 = np.arange(h2 + dx, h + h2, dx)
    x3_layer = np.ones_like(y3) * (flat_l - l)
    y3 = np.tile(y3, num_layers)
    x3 = []
    for i in range(num_layers):
        x3.append(x3_layer - i * dx)
    x3 = np.ravel(np.array(x3))
    x = np.concatenate([x1, x2, x3])
    y = np.concatenate([y1, y2, y3])
    y4 = np.arange(dx, 2.0 * h, dx)
    x4_layer = np.ones_like(y4) * flat_l
    y4 = np.tile(y4, num_layers)
    x4 = []
    for i in range(num_layers):
        x4.append(x4_layer + i * dx)
    x4 = np.ravel(np.array(x4))
    return x, y, x4, y4

pysph-master/pysph/examples/sphysics/case1.py

""" SPHysics case1 - dambreak (6 minutes) """

from pysph.base.kernels import CubicSpline
from pysph.solver.application import Application
from pysph.sph.integrator_step import WCSPHStep
from pysph.base.utils import get_particle_array
import numpy as np
from pysph.sph.scheme import AdamiHuAdamsScheme, WCSPHScheme, SchemeChooser
from pysph.sph.wc.edac import EDACScheme
from pysph.tools.geometry import remove_overlap_particles, rotate
from pysph.tools.geometry import get_2d_tank, get_2d_block


def get_dam_geometry(dx_tank=0.03, dx_fluid=0.03, r_tank=100.0, h_f=2.0,
l_f=1.0, r_fluid=100.0, hdx=1.5, l_tank=4.0, h_tank=4.0): tank_x, tank_y = get_2d_tank(dx_tank, length=l_tank, height=h_tank, num_layers=5) rho_tank = np.ones_like(tank_x) * r_tank m_tank = rho_tank * dx_tank * dx_tank h_t = np.ones_like(tank_x) * dx_tank * hdx tank = get_particle_array(name='dam', x=tank_x, y=tank_y, h=h_t, rho=rho_tank, m=m_tank) center = np.array([(l_f - l_tank) / 2.0, h_f / 2.0]) fluid_x, fluid_y = get_2d_block(dx_fluid, l_f, h_f, center) fluid_x += dx_tank fluid_y += dx_tank h_fluid = np.ones_like(fluid_x) * dx_fluid * hdx r_f = np.ones_like(fluid_x) * r_fluid m_f = r_f * dx_fluid * dx_fluid fluid = get_particle_array(name='fluid', x=fluid_x, y=fluid_y, h=h_fluid, rho=r_f, m=m_f) return fluid, tank class Dambreak2D(Application): def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.3, help="h/dx value used in SPH to change the smoothness") group.add_argument( "--dx", action="store", type=float, dest="dx", default=0.03, help="spacing between the particles") def consume_user_options(self): self.hdx = self.options.hdx self.dx = self.options.dx self.h0 = self.hdx * self.dx self.dt = 0.15 * self.h0 / co def create_particles(self): fluid, dam = get_dam_geometry(self.dx, self.dx, hdx=self.hdx, h_f=h_fluid, r_fluid=ro, r_tank=ro) self.scheme.setup_properties([fluid, dam]) particles = [fluid, dam] return particles def create_scheme(self): aha = AdamiHuAdamsScheme(['fluid'], ['dam'], dim=2, rho0=ro, c0=co, alpha=alp, gy=-9.81, nu=0.0, h0=0.03, gamma=1.0) wcsph = WCSPHScheme(['fluid'], ['dam'], dim=2, rho0=ro, c0=co, h0=0.03, hdx=1.3, hg_correction=True, gy=-9.81, alpha=alp, gamma=gamma, update_h=True) edac = EDACScheme(['fluid'], ['dam'], dim=2, rho0=ro, c0=co, gy=-9.81, alpha=0.0, nu=0.0, h=0.03, clamp_p=True) return SchemeChooser(default='wcsph', wcsph=wcsph, aha=aha, edac=edac) def configure_scheme(self): s = self.scheme scheme = self.options.scheme if scheme == 'wcsph': 
s.configure(h0=self.h0, hdx=self.hdx) elif scheme == 'aha': s.configure(h0=self.h0) elif scheme == 'edac': s.configure(h=self.h0) s.configure_solver(kernel=CubicSpline(dim=2), dt=self.dt, tf=3.0, adaptive_timestep=False) if __name__ == '__main__': l_dam = 4.0 h_dam = 4.0 h_fluid = 2.0 l_fluid = 1.0 gamma = 7.0 alp = 0.2 ro = 100.0 co = 10.0 * np.sqrt(2.0 * 9.81 * h_fluid) app = Dambreak2D() app.run() pysph-master/pysph/examples/sphysics/case2.py000066400000000000000000000100701356347341600217210ustar00rootroot00000000000000""" SPHysics case2 - dambreak on wet surface (5 minutes) """ from pysph.base.kernels import CubicSpline from pysph.solver.application import Application from pysph.sph.integrator_step import WCSPHStep from pysph.base.utils import get_particle_array import numpy as np from pysph.sph.scheme import AdamiHuAdamsScheme, WCSPHScheme, SchemeChooser from pysph.sph.wc.edac import EDACScheme from pysph.tools.geometry import remove_overlap_particles, rotate from pysph.tools.geometry import get_2d_tank, get_2d_block def get_dam_geometry(dx_tank=0.03, dx_fluid=0.03, r_tank=100.0, h_f=2.0, l_f=1.0, r_fluid=100.0, hdx=1.5, l_tank=4.0, h_tank=4.0, h_f2=1.0): tank_x, tank_y = get_2d_tank(dx_tank, length=l_tank, height=h_tank, num_layers=4) rho_tank = np.ones_like(tank_x) * r_tank m_tank = rho_tank * dx_tank * dx_tank h_t = np.ones_like(tank_x) * dx_tank * hdx tank = get_particle_array(name='dam', x=tank_x, y=tank_y, h=h_t, rho=rho_tank, m=m_tank) center = np.array([(l_f - l_tank) / 2.0, h_f / 2.0]) fluid_x1, fluid_y1 = get_2d_block(dx_fluid, l_f, h_f, center) center = np.array([l_f / 2.0, h_f2 / 2.0]) fluid_x2, fluid_y2 = get_2d_block(dx_fluid, l_tank - l_f - 2.0 * dx_fluid, h_f2, center) fluid_x = np.concatenate([fluid_x1, fluid_x2]) fluid_y = np.concatenate([fluid_y1, fluid_y2]) h_fluid = np.ones_like(fluid_x) * dx_fluid * hdx r_f = np.ones_like(fluid_x) * r_fluid m_f = r_f * dx_fluid * dx_fluid fluid = get_particle_array(name='fluid', x=fluid_x, y=fluid_y, 
h=h_fluid, rho=r_f, m=m_f) remove_overlap_particles(fluid, tank, dx_tank, 2) return fluid, tank class Dambreak_2D(Application): def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.3, help="h/dx value used in SPH to change the smoothness") group.add_argument( "--dx", action="store", type=float, dest="dx", default=0.005, help="spacing between the particles") def consume_user_options(self): self.hdx = self.options.hdx self.dx = self.options.dx self.h0 = self.hdx * self.dx self.dt = 0.15 * self.h0 / co def create_particles(self): fluid, dam = get_dam_geometry(self.dx, self.dx, hdx=self.hdx, h_f=h_fluid, h_f2=h_fluid2, r_fluid=ro, r_tank=ro, l_f=l_fluid, l_tank=l_dam, h_tank=h_dam) self.scheme.setup_properties([fluid, dam]) particles = [fluid, dam] return particles def create_scheme(self): aha = AdamiHuAdamsScheme(['fluid'], ['dam'], dim=2, rho0=ro, c0=co, alpha=alp, gy=-9.81, nu=0.0, h0=0.005, gamma=1.0) wcsph = WCSPHScheme(['fluid'], ['dam'], dim=2, rho0=ro, c0=co, h0=0.005, hdx=1.3, hg_correction=True, gy=-9.81, alpha=alp, gamma=gamma, update_h=True) edac = EDACScheme(['fluid'], ['dam'], dim=2, rho0=ro, c0=co, gy=-9.81, alpha=0.0, nu=0.0, h=0.005, clamp_p=True) return SchemeChooser(default='wcsph', wcsph=wcsph, aha=aha, edac=edac) def configure_scheme(self): s = self.scheme scheme = self.options.scheme if scheme == 'wcsph': s.configure(h0=self.h0, hdx=self.hdx) elif scheme == 'aha': s.configure(h0=self.h0) elif scheme == 'edac': s.configure(h=self.h0) s.configure_solver(kernel=CubicSpline(dim=2), dt=self.dt, tf=1.2, adaptive_timestep=False) if __name__ == '__main__': l_dam = 2.0 h_dam = 0.16 h_fluid = 0.15 l_fluid = 0.376 h_fluid2 = 0.018 gamma = 7.0 alp = 0.2 ro = 100.0 co = 10.0 * np.sqrt(2.0 * 9.81 * h_fluid) app = Dambreak_2D() app.run() pysph-master/pysph/examples/sphysics/case3.py000066400000000000000000000112131356347341600217220ustar00rootroot00000000000000""" SPHysics case3 - wavemaker in beach 
(17 minutes) """ from pysph.base.kernels import CubicSpline from pysph.solver.application import Application from pysph.sph.integrator_step import WCSPHStep from pysph.sph.integrator_step import TwoStageRigidBodyStep from pysph.base.utils import get_particle_array import numpy as np from pysph.sph.scheme import AdamiHuAdamsScheme, WCSPHScheme, SchemeChooser from pysph.sph.wc.edac import EDACScheme from pysph.tools.geometry import remove_overlap_particles from pysph.tools.geometry import get_2d_block from pysph.examples.sphysics.beach_geometry import get_beach_geometry_2d def get_wavespaddle_geometry(hdx=1.5, dx_f=0.1, dx_s=0.1, r_f=100., r_s=100., length=3.75, height=0.3, flat_l=1., angle=4.2364, h_fluid=0.2): x1, y1, x2, y2 = get_beach_geometry_2d(dx_s, length, height, flat_l, angle, 5) r1 = np.ones_like(x1) * r_s m1 = r1 * dx_s * dx_s h1 = np.ones_like(x1) * hdx * dx_s wall = get_particle_array(name='wall', x=x1, y=y1, rho=r1, m=m1, h=h1) r2 = np.ones_like(x2) * r_s m2 = r2 * dx_s * dx_s h2 = np.ones_like(x2) * hdx * dx_s paddle = get_particle_array(name='paddle', x=x2, y=y2, rho=r2, m=m2, h=h2) fluid_center = np.array([flat_l - length / 2.0, h_fluid / 2.0]) x_fluid, y_fluid = get_2d_block(dx_f, length, h_fluid, fluid_center) x3 = [] y3 = [] theta = np.pi * angle / 180.0 for i, xi in enumerate(x_fluid): if y_fluid[i] >= np.tan(-theta) * xi: x3.append(xi) y3.append(y_fluid[i]) x3 = np.array(x3) y3 = np.array(y3) r3 = np.ones_like(x3) * r_f m3 = r3 * dx_f * dx_f h3 = np.ones_like(x3) * hdx * dx_f fluid = get_particle_array(name='fluid', x=x3, y=y3, rho=r3, m=m3, h=h3) remove_overlap_particles(fluid, wall, dx_s, 2) remove_overlap_particles(fluid, paddle, dx_s, 2) return fluid, wall, paddle class WavesPaddle2D(Application): def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.3, help="h/dx value used in SPH to change the smoothness") group.add_argument( "--dx", action="store", type=float, dest="dx", 
default=0.01, help="spacing between the particles") def consume_user_options(self): self.hdx = self.options.hdx self.dx = self.options.dx self.h0 = self.hdx * self.dx self.dt = 0.25 * self.h0 / co def pre_step(self, solver): t = solver.t theta = 2.0 * np.pi * t / period paddle = self.particles[2] paddle.u = amplitude * (paddle.y - self.dx) * np.cos(theta) paddle.v = amplitude * (flat_l - paddle.x) * np.cos(theta) def create_particles(self): fluid, wall, paddle = get_wavespaddle_geometry( self.hdx, self.dx, self.dx, h_fluid=h_fluid) self.scheme.setup_properties([fluid, wall, paddle]) scheme = self.options.scheme if scheme == 'aha' or scheme == 'edac': for p in ['u0', 'v0', 'w0', 'x0', 'y0', 'z0']: paddle.add_property(p) particles = [fluid, wall, paddle] return particles def create_scheme(self): aha = AdamiHuAdamsScheme(['fluid'], ['wall', 'paddle'], dim=2, rho0=ro, c0=co, alpha=alp, gy=-9.81, nu=0.0, h0=0.01, gamma=1.0) wcsph = WCSPHScheme(['fluid'], ['wall', 'paddle'], dim=2, rho0=ro, c0=co, h0=0.01, hdx=1.3, hg_correction=True, gy=-9.81, alpha=alp, gamma=gamma, update_h=True) edac = EDACScheme(['fluid'], ['wall', 'paddle'], dim=2, rho0=ro, c0=co, gy=-9.81, alpha=0.0, nu=0.0, h=0.01, clamp_p=True) return SchemeChooser(default='wcsph', wcsph=wcsph, aha=aha, edac=edac) def configure_scheme(self): s = self.scheme scheme = self.options.scheme if scheme == 'wcsph': s.configure(h0=self.h0, hdx=self.hdx) elif scheme == 'aha': s.configure(h0=self.h0) elif scheme == 'edac': s.configure(h=self.h0) step = dict(paddle=TwoStageRigidBodyStep()) s.configure_solver( kernel=CubicSpline(dim=2), tf=5.0, dt=self.dt, adaptive_timestep=False, extra_steppers=step) if __name__ == '__main__': h_fluid = 0.25 co = 10.0 * np.sqrt(2.0 * 9.81 * h_fluid) flat_l = 1.0 gamma = 7.0 ro = 100.0 alp = 0.2 amplitude = 1.0 period = 1.4 app = WavesPaddle2D() app.run() pysph-master/pysph/examples/sphysics/case4.py000066400000000000000000000134001356347341600217230ustar00rootroot00000000000000""" SPHysics 
case4 - tsunami (6 minutes) """ from pysph.base.kernels import CubicSpline from pysph.solver.application import Application from pysph.sph.integrator_step import WCSPHStep from pysph.sph.integrator_step import TwoStageRigidBodyStep from pysph.base.utils import get_particle_array import numpy as np from pysph.sph.scheme import AdamiHuAdamsScheme, WCSPHScheme, SchemeChooser from pysph.sph.wc.edac import EDACScheme from pysph.tools.geometry import remove_overlap_particles from pysph.tools.geometry import get_2d_block from pysph.examples.sphysics.beach_geometry import get_beach_geometry_2d def get_tsunami_geometry(dx_solid=0.01, dx_fluid=0.01, r_solid=100.0, r_fluid=100.0, hdx=1.3, l_wall=9.5, h_wall=4.0, angle=26.565051, h_fluid=3.5, l_obstacle=0.91, flat_l=2.25): x1, y1, x2, y2 = get_beach_geometry_2d(dx_solid, l_wall, h_wall, flat_l, angle, 4) wall_x = np.concatenate([x1, x2]) wall_y = np.concatenate([y1, y2]) r_wall = np.ones_like(wall_x) * r_solid m_wall = r_wall * dx_solid * dx_solid h_w = np.ones_like(wall_x) * dx_solid * hdx wall = get_particle_array(name='wall', x=wall_x, y=wall_y, h=h_w, rho=r_wall, m=m_wall) theta = np.pi * angle / 180.0 h_obstacle = l_obstacle * np.tan(theta) obstacle_x1, obstacle_y1 = get_2d_block(dx_solid, l_obstacle, h_obstacle) obstacle_x = [] obstacle_y = [] for x, y in zip(obstacle_x1, obstacle_y1): if (y >= (-x * np.tan(theta))): obstacle_x.append(x) obstacle_y.append(y) x_translate = (l_wall - flat_l) * 0.8 y_translate = x_translate * np.tan(theta) obstacle_x = np.asarray(obstacle_x) - x_translate obstacle_y = np.asarray(obstacle_y) + y_translate + dx_solid h_obstacle = np.ones_like(obstacle_x) * dx_solid * hdx r_obstacle = np.ones_like(obstacle_x) * r_solid m_obstacle = r_obstacle * dx_solid * dx_solid obstacle = get_particle_array(name='obstacle', x=obstacle_x, y=obstacle_y, rho=r_obstacle, m=m_obstacle, h=h_obstacle) fluid_center = np.array([flat_l - l_wall / 2.0, h_fluid / 2.0]) x_fluid, y_fluid = get_2d_block(dx_fluid, l_wall, 
h_fluid, fluid_center) x3 = [] y3 = [] for i, xi in enumerate(x_fluid): if y_fluid[i] >= np.tan(-theta) * xi: x3.append(xi) y3.append(y_fluid[i]) fluid_x = np.array(x3) fluid_y = np.array(y3) h_f = np.ones_like(fluid_x) * dx_fluid * hdx r_f = np.ones_like(fluid_x) * r_fluid m_f = r_f * dx_fluid * dx_fluid fluid = get_particle_array(name='fluid', x=fluid_x, y=fluid_y, h=h_f, m=m_f, rho=r_f) remove_overlap_particles(fluid, obstacle, dx_solid * 1.1, 2) remove_overlap_particles(fluid, wall, dx_solid * 1.1, 2) return fluid, wall, obstacle class Tsunami2D(Application): def initialize(self): self.count = 0 def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.3, help="h/dx value used in SPH to change the smoothness") group.add_argument( "--dx", action="store", type=float, dest="dx", default=0.05, help="spacing between the particles") def consume_user_options(self): self.hdx = self.options.hdx self.dx = self.options.dx self.h0 = self.hdx * self.dx self.dt = 0.25 * self.h0 / co def pre_step(self, solver): if self.count == 0 and solver.t >= 2.5: obstacle = self.particles[2] obstacle.v[:] = 0.0 obstacle.u[:] = 0.0 self.count += 1 def create_particles(self): a = 2.0 theta = 26.565051 * np.pi / 180.0 c = np.cos(theta) s = np.sin(theta) fluid, wall, obstacle = get_tsunami_geometry( self.dx, self.dx, hdx=self.hdx, h_fluid=h_fluid) obstacle.u[:] = a * c obstacle.v[:] = -a * s self.scheme.setup_properties([fluid, wall, obstacle]) scheme = self.options.scheme if scheme == 'aha' or scheme == 'edac': for p in ['u0', 'v0', 'w0', 'x0', 'y0', 'z0']: obstacle.add_property(p) particles = [fluid, wall, obstacle] return particles def create_scheme(self): aha = AdamiHuAdamsScheme(['fluid'], ['wall', 'obstacle'], dim=2, rho0=ro, c0=co, alpha=alp, gy=-9.81, nu=0.0, h0=0.05, gamma=1.0) wcsph = WCSPHScheme(['fluid'], ['wall', 'obstacle'], dim=2, rho0=ro, c0=co, h0=0.05, hdx=1.3, hg_correction=True, gy=-9.81, alpha=alp, gamma=gamma, 
update_h=True) edac = EDACScheme(['fluid'], ['wall', 'obstacle'], dim=2, rho0=ro, c0=co, gy=-9.81, alpha=0.0, nu=0.0, h=0.05, clamp_p=True) return SchemeChooser(default='wcsph', wcsph=wcsph, aha=aha, edac=edac) def configure_scheme(self): s = self.scheme scheme = self.options.scheme if scheme == 'wcsph': s.configure(h0=self.h0, hdx=self.hdx) elif scheme == 'aha': s.configure(h0=self.h0) elif scheme == 'edac': s.configure(h=self.h0) step = dict(obstacle=TwoStageRigidBodyStep()) s.configure_solver(kernel=CubicSpline(dim=2), dt=self.dt, tf=3.0, adaptive_timestep=False, extra_steppers=step) if __name__ == '__main__': h_fluid = 3.0 co = 10.0 * np.sqrt(2.0 * 9.81 * h_fluid) ro = 100.0 alp = 0.2 gamma = 7.0 app = Tsunami2D() app.run() pysph-master/pysph/examples/sphysics/case5.py
'''A configurable case5 from the SPHERIC benchmark. (15 mins) The example is similar to the dambreak_sphysics example. The details of the geometry are in "State-of-the-art of classical SPH for free-surface flows" by Moncho Gomez-Gesteira, Benedict D. Rogers, Robert A. Dalrymple & Alex J.C. Crespo. http://dx.doi.org/10.1080/00221686.2010.9641242 ''' import numpy as np from pysph.base.utils import get_particle_array from pysph.solver.application import Application from pysph.sph.scheme import WCSPHScheme def ravel(*args): return tuple(np.ravel(x) for x in args) def rhstack(*args): '''Join the given set of args, such that each element in args has the same shape. Each argument is first ravelled and then stacked. ''' return tuple(np.hstack(ravel(*t)) for t in zip(*args)) class Case5(Application): def add_user_options(self, group): group.add_argument( '--dx', action='store', type=float, dest='dx', default=0.025, help='Particle spacing.' ) hdx = np.sqrt(3)*0.85 group.add_argument( '--hdx', action='store', type=float, dest='hdx', default=hdx, help='Specify the hdx factor where h = hdx * dx.'
) def consume_user_options(self): dx = self.options.dx self.dx = dx self.hdx = self.options.hdx def create_scheme(self): self.c0 = c0 = 10.0 * np.sqrt(2.0*9.81*0.3) self.hdx = hdx = 1.2 dx = 0.01 h0 = hdx*dx alpha = 0.1 beta = 0.0 gamma = 7.0 s = WCSPHScheme( ['fluid'], ['boundary'], dim=3, rho0=1000, c0=c0, h0=h0, hdx=hdx, gz=-9.81, alpha=alpha, beta=beta, gamma=gamma, hg_correction=True, tensile_correction=False ) return s def configure_scheme(self): s = self.scheme h0 = self.dx * self.hdx s.configure(h0=h0, hdx=self.hdx) dt = 0.25*h0/(1.1 * self.c0) tf = 1.5 s.configure_solver( tf=tf, dt=dt, adaptive_timestep=True, n_damp=50 ) def create_particles(self): dx = self.dx dxb2 = dx * 0.5 l, b, h = 1.6, 0.61, 0.4 lw, hw = 0.4, 0.3 # Big filled vessel with staggered points. p1 = np.mgrid[-dx:l+dx*1.5:dx, -dx:b+1.5*dx:dx, -dx:h:dx] p2 = np.mgrid[-dxb2:l+dx*1.5:dx, -dxb2:b+1.5*dx:dx, -dxb2:h:dx] x, y, z = rhstack(p1, p2) # The post p3 = np.mgrid[0.9:1.02:dxb2, 0.25:0.37:dxb2, 0:0.45:dxb2] x3, y3, z3 = ravel(*p3) xmax, ymax = max(x3), max(y3) post_cond = ~((x3 > 0.9) & (x3 < xmax) & (y3 > 0.25) & (y3 < ymax)) p_post = x3[post_cond], y3[post_cond], z3[post_cond] # Masks to extract different parts from the vessel. 
wcond = ((x >= 0) & (x <= lw) & (y >= 0) & (y < b) & (z >= 0) & (z <= hw)) box = ~((x >= 0) & (x <= l) & (y >= 0) & (y < b) & (z >= 0) & (z <= h)) wcond1 = (((x > 0.4) & (x <= l) & (y >= 0) & (y < b) & (z >= 0) & (z <= 0.02)) & ~((x >= (0.9 - dx)) & (x <= (xmax + dx)) & (y >= (0.25 - dx)) & (y <= (ymax + dx)))) p_box = x[box], y[box], z[box] p_water = x[wcond], y[wcond], z[wcond] p_water_floor = x[wcond1], y[wcond1], z[wcond1] xs, ys, zs = rhstack(p_box, p_post) xf, yf, zf = rhstack(p_water, p_water_floor) vol = 0.5*dx**3 m = vol*1000 f = get_particle_array( name='fluid', x=xf, y=yf, z=zf, m=m, h=dx*self.hdx, rho=1000.0 ) b = get_particle_array( name='boundary', x=xs, y=ys, z=zs, m=m, h=dx*self.hdx, rho=1000.0 ) self.scheme.setup_properties([f, b]) return [f, b] def customize_output(self): self._mayavi_config(''' viewer.scalar = 'vmag' b = particle_arrays['boundary'] b.plot.actor.mapper.scalar_visibility = False b.plot.actor.property.opacity = 0.1 ''') if __name__ == '__main__': app = Case5() app.run() pysph-master/pysph/examples/sphysics/case6.py000066400000000000000000000155271356347341600217410ustar00rootroot00000000000000""" SPHysics case6 - wavemaker in beach with moving obstacles (30 minutes) """ from pysph.base.kernels import CubicSpline from pysph.solver.application import Application from pysph.sph.integrator_step import WCSPHStep from pysph.sph.integrator_step import TwoStageRigidBodyStep from pysph.base.utils import get_particle_array, get_particle_array_rigid_body import numpy as np from pysph.sph.equation import Group from pysph.sph.scheme import AdamiHuAdamsScheme, WCSPHScheme, SchemeChooser from pysph.sph.wc.edac import EDACScheme from pysph.tools.geometry import remove_overlap_particles from pysph.tools.geometry import get_2d_block from pysph.examples.sphysics.beach_geometry import get_beach_geometry_2d from pysph.sph.rigid_body import (BodyForce, RigidBodyCollision, RigidBodyMoments, LiuFluidForce, RigidBodyMotion, RK2StepRigidBody) def 
get_wavespaddle_geometry(hdx=1.5, dx_f=0.1, dx_s=0.05, r_f=100., r_s=100., length=3.75, height=0.3, flat_l=1., angle=4.2364, h_fluid=0.2, obstacle_side=0.06): x1, y1, x2, y2 = get_beach_geometry_2d(dx_s, length, height, flat_l, angle, 3) r1 = np.ones_like(x1) * r_s m1 = r1 * dx_s * dx_s h1 = np.ones_like(x1) * hdx * dx_s cs1 = np.zeros_like(x1) rad1 = np.ones_like(x1) * dx_s wall = get_particle_array( name='wall', x=x1, y=y1, rho=r1, m=m1, h=h1, cs=cs1, rad_s=rad1) r2 = np.ones_like(x2) * r_s m2 = r2 * dx_s * dx_s h2 = np.ones_like(x2) * hdx * dx_s paddle = get_particle_array(name='paddle', x=x2, y=y2, rho=r2, m=m2, h=h2) fluid_center = np.array([flat_l - length / 2.0, h_fluid / 2.0]) x_fluid, y_fluid = get_2d_block(dx_f, length, h_fluid, fluid_center) x3 = [] y3 = [] theta = np.pi * angle / 180.0 for i, xi in enumerate(x_fluid): if y_fluid[i] >= np.tan(-theta) * xi: x3.append(xi) y3.append(y_fluid[i]) x3 = np.array(x3) y3 = np.array(y3) r3 = np.ones_like(x3) * r_f m3 = r3 * dx_f * dx_f h3 = np.ones_like(x3) * hdx * dx_f cs3 = np.zeros_like(x3) rad3 = np.ones_like(x3) * dx_f fluid = get_particle_array( name='fluid', x=x3, y=y3, rho=r3, m=m3, h=h3) square_center = np.array([-0.38, 0.16]) x4, y4 = get_2d_block(dx_s, obstacle_side, obstacle_side, square_center) b1 = np.zeros_like(x4, dtype=int) square_center = np.array([-0.7, 0.16]) x5, y5 = get_2d_block(dx_s, obstacle_side, obstacle_side, square_center) b2 = np.ones_like(x5, dtype=int) square_center = np.array([-1.56, 0.22]) x6, y6 = get_2d_block(dx_s, obstacle_side, obstacle_side, square_center) b3 = np.ones_like(x5, dtype=int) * 2 b = np.concatenate([b1, b2, b3]) x4 = np.concatenate([x4, x5, x6]) y4 = np.concatenate([y4, y5, y6]) r4 = np.ones_like(x4) * r_s * 0.5 m4 = r4 * dx_s * dx_s h4 = np.ones_like(x4) * hdx * dx_s cs4 = np.zeros_like(x4) rad4 = np.ones_like(x4) * dx_s obstacle = get_particle_array_rigid_body( name='obstacle', x=x4, y=y4, h=h4, rho=r4, m=m4, cs=cs4, rad_s=rad4, body_id=b) 
remove_overlap_particles(fluid, wall, dx_s, 2) remove_overlap_particles(fluid, paddle, dx_s, 2) remove_overlap_particles(fluid, obstacle, dx_s, 2) return fluid, wall, paddle, obstacle class WavesPaddle2D(Application): def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.3, help="h/dx value used in SPH to change the smoothness") group.add_argument( "--dx", action="store", type=float, dest="dx", default=0.01, help="spacing between the particles") def consume_user_options(self): self.hdx = self.options.hdx self.dx = self.options.dx self.h0 = self.hdx * self.dx self.dt = 0.25 * self.h0 / co def pre_step(self, solver): t = solver.t theta = 2.0 * np.pi * t / period paddle = self.particles[2] paddle.u = amplitude * (paddle.y - self.dx) * np.cos(theta) paddle.v = amplitude * (flat_l - paddle.x) * np.cos(theta) def create_particles(self): f, w, pad, obst = get_wavespaddle_geometry( self.hdx, self.dx, 0.75 * self.dx, length=lx, height=ly, h_fluid=h_fluid, obstacle_side=side, flat_l=flat_l, r_f=ro, r_s=ro) self.scheme.setup_properties([f, w, pad, obst], clean=False) for p in ['u0', 'v0', 'w0', 'x0', 'y0', 'z0']: pad.add_property(p) particles = [f, w, pad, obst] return particles def create_scheme(self): wcsph = WCSPHScheme(['fluid'], ['wall', 'paddle', 'obstacle'], dim=2, rho0=ro, c0=co, h0=0.01, hdx=1.3, gy=-9.81, hg_correction=True, alpha=alp, gamma=gamma, update_h=True) edac = EDACScheme(['fluid'], ['wall', 'paddle', 'obstacle'], dim=2, rho0=ro, c0=co, gy=-9.81, alpha=alp, nu=0.0, h=0.01, clamp_p=True) aha = AdamiHuAdamsScheme(['fluid'], ['wall', 'paddle', 'obstacle'], dim=2, rho0=ro, h0=0.01, gamma=1.0, alpha=alp, gy=-9.81, nu=0.0, c0=co) return SchemeChooser(default='wcsph', aha=aha, wcsph=wcsph, edac=edac) def create_equations(self): eqns = self.scheme.get_equations() eqn1 = Group(equations=[ BodyForce(dest='obstacle', sources=None, gy=-9.81), RigidBodyCollision(dest='obstacle', sources=['wall'], kn=1.0e4, 
en=0.8)], real=False) eqn2 = Group(equations=[ LiuFluidForce(dest='fluid', sources=['obstacle'])]) eqn3 = Group(equations=[ RigidBodyMoments(dest='obstacle', sources=None)]) eqn4 = Group(equations=[ RigidBodyMotion(dest='obstacle', sources=None)]) eqns.append(eqn1) eqns.append(eqn2) eqns.append(eqn3) eqns.append(eqn4) return eqns def configure_scheme(self): s = self.scheme scheme = self.options.scheme if scheme == 'wcsph': s.configure(h0=self.h0, hdx=self.hdx) elif scheme == 'edac': s.configure(h=self.h0) step = dict(paddle=TwoStageRigidBodyStep(), obstacle=RK2StepRigidBody()) s.configure_solver( kernel=CubicSpline(dim=2), tf=7.0, dt=self.dt, adaptive_timestep=False, extra_steppers=step) if __name__ == '__main__': h_fluid = 0.18 co = 10.0 * np.sqrt(2.0 * 9.81 * h_fluid) ro = 1000.0 alp = 0.2 gamma = 7.0 flat_l = 2.0 side = 0.06 lx = 4.75 ly = 0.3 amplitude = 1.5 period = 1.4 app = WavesPaddle2D() app.run() pysph-master/pysph/examples/sphysics/case7.py000066400000000000000000000124361356347341600217360ustar00rootroot00000000000000""" SPHysics case7 - wavemaker in beach with stationary obstacle (25 minutes) """ from pysph.base.kernels import CubicSpline from pysph.solver.application import Application from pysph.sph.integrator_step import WCSPHStep from pysph.sph.integrator_step import TwoStageRigidBodyStep from pysph.base.utils import get_particle_array import numpy as np from pysph.sph.scheme import AdamiHuAdamsScheme, WCSPHScheme, SchemeChooser from pysph.sph.wc.edac import EDACScheme from pysph.tools.geometry import remove_overlap_particles from pysph.tools.geometry import get_2d_block from pysph.examples.sphysics.beach_geometry import get_beach_geometry_2d def get_wavespaddle_geometry(hdx=1., dx_f=0.02, dx_s=0.02, r_f=100., r_s=100., length=13., height=0.8, flat_l=1.48, h_fluid=0.43, angle=2.8624): x1, y1, x2, y2 = get_beach_geometry_2d(dx_s, length, height, flat_l, angle, 3) theta = np.pi * angle / 180.0 p1x = flat_l - 9.605 p1y = 0.40625 p2x = flat_l - 10.065 
p2y = 0.6085 theta1 = np.arctan((p2y - p1y) / (p2x - p1x)) p3x = flat_l - 10.28 p3y = 0.6085 theta2 = np.arctan((p3y - p2y) / (p3x - p2x)) p4x = flat_l - 10.62 p4y = 0.457 theta3 = np.arctan((p4y - p3y) / (p4x - p3x)) obs_x1 = np.arange(p2x, p1x, dx_s / 2.0) obs_y1 = p1y + (obs_x1 - p1x) * np.tan(theta1) obs_x2 = np.arange(p3x, p2x, dx_s / 2.0) obs_y2 = p2y + (obs_x2 - p2x) * np.tan(-theta2) obs_x3 = np.arange(p4x, p3x, dx_s / 2.0) obs_y3 = p3y + (obs_x3 - p3x) * np.tan(theta3) x1 = np.concatenate([x1, obs_x1, obs_x2, obs_x3]) y1 = np.concatenate([y1, obs_y1, obs_y2, obs_y3]) r1 = np.ones_like(x1) * r_s m1 = r1 * dx_s * dx_s h1 = np.ones_like(x1) * hdx * dx_s wall = get_particle_array(name='wall', x=x1, y=y1, rho=r1, m=m1, h=h1) r2 = np.ones_like(x2) * r_s m2 = r2 * dx_s * dx_s h2 = np.ones_like(x2) * hdx * dx_s paddle = get_particle_array(name='paddle', x=x2, y=y2, rho=r2, m=m2, h=h2) fluid_center = np.array([flat_l - length / 2.0, h_fluid / 2.0]) x_fluid, y_fluid = get_2d_block(dx_f, length, h_fluid, fluid_center) x3 = [] y3 = [] for i, xi in enumerate(x_fluid): if y_fluid[i] >= np.tan(-theta) * xi: x3.append(xi) y3.append(y_fluid[i]) x3 = np.array(x3) y3 = np.array(y3) r3 = np.ones_like(x3) * r_f m3 = r3 * dx_f * dx_f h3 = np.ones_like(x3) * hdx * dx_f fluid = get_particle_array(name='fluid', x=x3, y=y3, rho=r3, m=m3, h=h3) remove_overlap_particles(fluid, wall, dx_s, 2) remove_overlap_particles(fluid, paddle, dx_s, 2) return fluid, wall, paddle class WavesPaddle2D(Application): def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.3, help="h/dx value used in SPH to change the smoothness") group.add_argument( "--dx", action="store", type=float, dest="dx", default=0.02, help="spacing between the particles") def consume_user_options(self): self.hdx = self.options.hdx self.dx = self.options.dx self.h0 = self.hdx * self.dx self.dt = 0.25 * self.h0 / co def pre_step(self, solver): t = solver.t theta = 2.0 * 
np.pi * t / period paddle = self.particles[2] paddle.u[:] = -amplitude * np.sin(theta) def create_particles(self): fluid, wall, paddle = get_wavespaddle_geometry( self.hdx, self.dx, self.dx, h_fluid=h_fluid) self.scheme.setup_properties([fluid, wall, paddle]) scheme = self.options.scheme if scheme == 'aha' or scheme == 'edac': for p in ['u0', 'v0', 'w0', 'x0', 'y0', 'z0']: paddle.add_property(p) particles = [fluid, wall, paddle] return particles def create_scheme(self): aha = AdamiHuAdamsScheme(['fluid'], ['wall', 'paddle'], dim=2, rho0=ro, c0=co, alpha=alp, gy=-9.81, nu=0.0, h0=0.02, gamma=1.0) wcsph = WCSPHScheme(['fluid'], ['wall', 'paddle'], dim=2, rho0=ro, c0=co, h0=0.02, hdx=1.3, hg_correction=True, gy=-9.81, alpha=alp, gamma=gamma, update_h=True) edac = EDACScheme(['fluid'], ['wall', 'paddle'], dim=2, rho0=ro, c0=co, gy=-9.81, alpha=alp, nu=0.0, h=0.02, clamp_p=True) return SchemeChooser(default='wcsph', wcsph=wcsph, aha=aha, edac=edac) def configure_scheme(self): s = self.scheme scheme = self.options.scheme if scheme == 'wcsph': s.configure(h0=self.h0, hdx=self.hdx) elif scheme == 'aha': s.configure(h0=self.h0) elif scheme == 'edac': s.configure(h=self.h0) step = dict(paddle=TwoStageRigidBodyStep()) s.configure_solver( kernel=CubicSpline(dim=2), tf=12.0, dt=self.dt, adaptive_timestep=False, extra_steppers=step) if __name__ == '__main__': h_fluid = 0.43 co = 10.0 * np.sqrt(2.0 * 9.81 * h_fluid) ro = 100.0 alp = 0.2 gamma = 7.0 flat_l = 1.0 amplitude = 1.0 period = 0.6 app = WavesPaddle2D() app.run() pysph-master/pysph/examples/sphysics/case8.py000066400000000000000000000153371356347341600217420ustar00rootroot00000000000000""" SPHysics case8 - dambreak with obstacles (30 minutes) """ from pysph.base.kernels import CubicSpline from pysph.solver.solver import Solver from pysph.solver.application import Application from pysph.sph.integrator_step import WCSPHStep from pysph.sph.integrator_step import TwoStageRigidBodyStep from pysph.base.nnps import DomainManager 
from pysph.base.utils import get_particle_array, get_particle_array_rigid_body import numpy as np from pysph.sph.equation import Group, Equation from pysph.sph.scheme import AdamiHuAdamsScheme, WCSPHScheme, SchemeChooser from pysph.sph.wc.edac import EDACScheme from pysph.tools.geometry import remove_overlap_particles from pysph.tools.geometry import get_2d_block, get_2d_wall from pysph.sph.rigid_body import (BodyForce, RigidBodyCollision, RigidBodyMoments, RigidBodyMotion, RK2StepRigidBody, LiuFluidForce) from pysph.examples.sphysics.periodic_rigidbody import GroupParticles def get_geometry(dx_s=0.03, dx_f=0.03, hdx=1.3, r_f=100.0, r_s=100.0, wall_l=4.0, wall_h=2.0, fluid_l=1., fluid_h=2., cube_s=0.25): wall_y1 = np.arange(dx_s, wall_h, dx_s) wall_xlayer = np.ones_like(wall_y1) * 2.0 wall_x1 = [] wall_x2 = [] num_layers = 3 for i in range(num_layers): wall_x1.append(wall_xlayer + i * dx_s) wall_x2.append(wall_xlayer - i * dx_s + wall_l / 4.0) wall_x1, wall_x2 = np.ravel(wall_x1), np.ravel(wall_x2) wall_y1 = np.tile(wall_y1, num_layers) wall_y2 = wall_y1 w_center = np.array([wall_l / 2.0, 0.0]) wall_x3, wall_y3 = get_2d_wall(dx_s, w_center, wall_l, num_layers, False) w_center = np.array([2.5, wall_h + dx_s / 2.0]) wall_x4, wall_y4 = get_2d_wall(dx_s, w_center, 1.0, num_layers) wall_x = np.concatenate([wall_x1, wall_x2, wall_x3, wall_x4]) wall_y = np.concatenate([wall_y1, wall_y2, wall_y3, wall_y4]) r1 = np.ones_like(wall_x) * r_s m1 = r1 * dx_s * dx_s h1 = np.ones_like(wall_x) * dx_s * hdx cs1 = np.zeros_like(wall_x) rad1 = np.ones_like(wall_x) * dx_s wall = get_particle_array(name='wall', x=wall_x, y=wall_y, h=h1, rho=r1, m=m1, cs=cs1, rad_s=rad1) f_center = np.array([3.0 * wall_l / 8.0, wall_h / 2.0]) x2, y2 = get_2d_block(dx_f, fluid_l, fluid_h, f_center) r2 = np.ones_like(x2) * r_f m2 = r2 * dx_f * dx_f h2 = np.ones_like(x2) * dx_f * hdx cs2 = np.zeros_like(x2) rad2 = np.ones_like(x2) * dx_f fluid = get_particle_array(name='fluid', x=x2, y=y2, h=h2, rho=r2, 
m=m2, cs=cs2, rad_s=rad2) center1 = np.array([wall_l / 8.0 + cube_s / 2.0, wall_h / 4.0 + cube_s / 2.0]) cube1_x, cube1_y = get_2d_block(dx_s, cube_s, cube_s, center1) b1 = np.zeros_like(cube1_x, dtype=int) center2 = np.array([3.0 * wall_l / 4.0 + cube_s / 2.0 + 3.0 * dx_s, wall_h + cube_s / 2.0 + (num_layers + 1) * dx_s]) cube2_x, cube2_y = get_2d_block(dx_s, cube_s, cube_s, center2) b2 = np.ones_like(cube2_x, dtype=int) b = np.concatenate([b1, b2]) x3 = np.concatenate([cube1_x, cube2_x]) y3 = np.concatenate([cube1_y, cube2_y]) r3 = np.ones_like(x3) * r_s * 0.5 m3 = r3 * dx_s * dx_s h3 = np.ones_like(x3) * dx_s * hdx cs3 = np.zeros_like(x3) rad3 = np.ones_like(x3) * dx_s cube = get_particle_array_rigid_body( name='cube', x=x3, y=y3, h=h3, cs=cs3, rho=r3, m=m3, rad_s=rad3, body_id=b) remove_overlap_particles(fluid, wall, dx_s, 2) return fluid, wall, cube class Dambreak2D(Application): def add_user_options(self, group): group.add_argument( "--hdx", action="store", type=float, dest="hdx", default=1.3, help="h/dx value used in SPH to change the smoothness") group.add_argument( "--dx", action="store", type=float, dest="dx", default=0.03, help="spacing between the particles") def consume_user_options(self): self.hdx = self.options.hdx self.dx = self.options.dx self.h0 = self.hdx * self.dx self.dt = 0.25 * self.h0 / co def create_domain(self): return DomainManager(xmin=0.0, xmax=4.0, periodic_in_x=True) def create_particles(self): fluid, wall, cube = get_geometry(0.5 * self.dx, self.dx, self.hdx) self.scheme.setup_properties([fluid, wall, cube], clean=False) for p in ['u0', 'v0', 'w0', 'x0', 'y0', 'z0']: wall.add_property(p) for p in ['fx', 'fy', 'fz', 'V', 'arho']: cube.add_property(p) particles = [fluid, wall, cube] return particles def create_scheme(self): wcsph = WCSPHScheme(['fluid'], ['wall', 'cube'], dim=2, rho0=ro, h0=0.03, hdx=1.3, hg_correction=True, c0=co, gy=-9.81, alpha=alp, gamma=gamma, update_h=True) edac = EDACScheme(['fluid'], ['wall', 'cube'], dim=2, 
rho0=ro, c0=co, alpha=alp, nu=0.0, h=0.03, gy=-9.81, clamp_p=True) aha = AdamiHuAdamsScheme(['fluid'], ['wall', 'cube'], dim=2, rho0=ro, h0=0.03, gamma=1.0, alpha=alp, gy=-9.81, nu=0.0, c0=co) return SchemeChooser(default='wcsph', aha=aha, wcsph=wcsph, edac=edac) def configure_scheme(self): s = self.scheme scheme = self.options.scheme if scheme == 'wcsph': s.configure(h0=self.h0, hdx=self.hdx) elif scheme == 'edac': s.configure(h=self.h0) step = dict(cube=RK2StepRigidBody()) s.configure_solver(kernel=CubicSpline(dim=2), dt=self.dt, tf=3.0, adaptive_timestep=False, extra_steppers=step) def create_equations(self): eqns = self.scheme.get_equations() eqn1 = Group(equations=[ BodyForce(dest='cube', sources=None, gy=-9.81), RigidBodyCollision(dest='cube', sources=['wall', 'cube'], kn=1.0e5, en=0.8), LiuFluidForce(dest='fluid', sources=['cube'])], real=False) eqn2 = Group(equations=[ GroupParticles('cube', xmin=0.0, xmax=4.0, periodic_in_x=True)], real=False) eqn3 = Group(equations=[ RigidBodyMoments(dest='cube', sources=None)], real=False) eqn4 = Group(equations=[ RigidBodyMotion(dest='cube', sources=None)], real=False) eqns.append(eqn1) eqns.append(eqn2) eqns.append(eqn3) eqns.append(eqn4) return eqns if __name__ == '__main__': l_dam = 4.0 h_dam = 4.0 h_fluid = 2.0 l_fluid = 1.0 gamma = 7.0 alp = 0.2 ro = 100.0 co = 10.0 * np.sqrt(2.0 * 9.81 * h_fluid) app = Dambreak2D() app.run() pysph-master/pysph/examples/sphysics/dam_break.py000066400000000000000000000066561356347341600226500ustar00rootroot00000000000000'''The standard Sphysics dam break benchmark. (4 hours) This is intended to be the same as the standard dam break from the DualSPhysics code just for comparison. 
'''

import numpy as np

from pysph.base.utils import get_particle_array
from pysph.solver.application import Application
from pysph.sph.scheme import WCSPHScheme


def ravel(*args):
    return tuple(np.ravel(x) for x in args)


def rhstack(*args):
    '''Join the given set of args, where each element in args has the same
    shape. Each argument is first ravelled and then stacked.
    '''
    return tuple(np.hstack(ravel(*t)) for t in zip(*args))


class DamBreak(Application):
    def add_user_options(self, group):
        group.add_argument(
            '--dx', action='store', type=float, dest='dx',
            default=0.0085, help='Particle spacing.'
        )
        hdx = np.sqrt(3)
        group.add_argument(
            '--hdx', action='store', type=float, dest='hdx',
            default=hdx, help='Specify the hdx factor where h = hdx * dx.'
        )

    def consume_user_options(self):
        dx = self.options.dx
        self.dx = dx
        self.hdx = self.options.hdx

    def create_scheme(self):
        self.c0 = c0 = 10.0 * np.sqrt(2.0*9.81*0.3)
        self.hdx = hdx = np.sqrt(3)
        dx = 0.01
        h0 = hdx*dx
        alpha = 0.1
        beta = 0.0
        gamma = 7.0
        s = WCSPHScheme(
            ['fluid'], ['boundary'], dim=3, rho0=1000, c0=c0,
            h0=h0, hdx=hdx, gz=-9.81, alpha=alpha, beta=beta,
            gamma=gamma, hg_correction=True, tensile_correction=False
        )
        return s

    def configure_scheme(self):
        s = self.scheme
        h0 = self.dx * self.hdx
        s.configure(h0=h0, hdx=self.hdx)
        dt = 0.25*h0/(1.1 * self.c0)
        tf = 1.5
        s.configure_solver(
            tf=tf, dt=dt, adaptive_timestep=True, n_damp=50
        )

    def create_particles(self):
        dx = self.dx
        l, b, h = 1.6, 0.67, 0.4
        lw, hw = 0.4, 0.3
        # Big filled vessel with staggered points.
        x, y, z = np.mgrid[0:l+dx:dx, 0:b+dx:dx, 0:h:dx]

        # The post
        x3, y3, z3 = np.mgrid[0.9:1.02:dx, 0.25:0.37:dx, dx:0.45:dx]
        xmax, ymax, zmax = max(x3.flat), max(y3.flat), max(z3.flat)
        post_cond = ~((x3 > 0.9) & (x3 < xmax) & (y3 > 0.25) & (y3 < ymax) &
                      (z3 < zmax))
        p_post = x3[post_cond], y3[post_cond], z3[post_cond]

        # Masks to extract different parts from the vessel.
wcond = ((x > 0) & (x < lw) & (y > 0) & (y < b) & (z > 0) & (z < hw)) box = ~((x > 0) & (x <= l) & (y > 0) & (y < b) & (z > 0) & (z <= h)) p_box = x[box], y[box], z[box] xf, yf, zf = x[wcond], y[wcond], z[wcond] xs, ys, zs = rhstack(p_box, p_post) vol = dx**3 m = vol*1000 f = get_particle_array( name='fluid', x=xf, y=yf, z=zf, m=m, h=dx*self.hdx, rho=1000.0 ) b = get_particle_array( name='boundary', x=xs, y=ys, z=zs, m=m, h=dx*self.hdx, rho=1000.0 ) self.scheme.setup_properties([f, b]) return [f, b] def customize_output(self): self._mayavi_config(''' viewer.scalar = 'vmag' b = particle_arrays['boundary'] b.plot.actor.mapper.scalar_visibility = False b.plot.actor.property.opacity = 0.15 ''') if __name__ == '__main__': app = DamBreak() app.run() pysph-master/pysph/examples/sphysics/dambreak_sphysics.py000066400000000000000000000067141356347341600244310ustar00rootroot00000000000000"""Dam break past an obstacle with data from SPHysics. (40 minutes) For benchmarking, we use the input geometry and discretization as the SPHYSICS Case 5 (https://wiki.manchester.ac.uk/sphysics/index.php/SPHYSICS_Home_Page) We only require the INDAT and IPART files generated by SPHysics. These define respectively, the numerical parameters and the initial particle data used for the run. The rest of the problem is set-up in the usual way. 
""" import os import numpy from pysph.sph.equation import Group from pysph.base.kernels import CubicSpline from pysph.sph.wc.basic import TaitEOS, TaitEOSHGCorrection, MomentumEquation from pysph.sph.basic_equations import ContinuityEquation, XSPHCorrection from pysph.solver.solver import Solver from pysph.solver.application import Application from pysph.sph.integrator import EPECIntegrator, PECIntegrator from pysph.sph.integrator_step import WCSPHStep from pysph.tools.sphysics import sphysics2pysph MY_DIR = os.path.dirname(__file__) INDAT = os.path.join(MY_DIR, 'INDAT.gz') IPART = os.path.join(MY_DIR, 'IPART.gz') # problem dimensionality dim = 3 # suggested initial time step and final time dt = 1e-5 tf = 2.0 # physical constants for the run loaded from SPHysics INDAT indat = numpy.loadtxt(INDAT) H = float( indat[10] ) B = float( indat[11] ) gamma = float( indat[12] ) eps = float( indat[14] ) rho0 = float( indat[15] ) alpha = float( indat[16] ) beta = 0.0 c0 = numpy.sqrt( B*gamma/rho0 ) class DamBreak3DSPhysics(Application): def add_user_options(self, group): group.add_argument( "--test", action="store_true", dest="test", default=False, help="For use while testing of results, uses PEC integrator." 
) def create_particles(self): return sphysics2pysph(IPART, INDAT, vtk=False) def create_solver(self): kernel = CubicSpline(dim=3) if self.options.test: integrator = PECIntegrator(fluid=WCSPHStep(),boundary=WCSPHStep()) adaptive, n_damp = False, 0 else: integrator = EPECIntegrator(fluid=WCSPHStep(),boundary=WCSPHStep()) adaptive, n_damp = True, 0 solver = Solver(dim=dim, kernel=kernel, integrator=integrator, adaptive_timestep=adaptive, tf=tf, dt=dt, n_damp=n_damp) return solver def create_equations(self): equations = [ # Equation of state Group(equations=[ TaitEOS(dest='fluid', sources=None, rho0=rho0, c0=c0, gamma=gamma), TaitEOSHGCorrection(dest='boundary', sources=None, rho0=rho0, c0=c0, gamma=gamma), ], real=False), # Continuity Momentum and XSPH equations Group(equations=[ ContinuityEquation(dest='fluid', sources=['fluid', 'boundary']), ContinuityEquation(dest='boundary', sources=['fluid']), MomentumEquation( dest='fluid', sources=['fluid', 'boundary'], c0=c0, alpha=alpha, beta=beta, gz=-9.81, tensile_correction=True), # Position step with XSPH XSPHCorrection(dest='fluid', sources=['fluid'], eps=eps) ]) ] return equations if __name__ == '__main__': app = DamBreak3DSPhysics() app.run() pysph-master/pysph/examples/sphysics/periodic_rigidbody.py000066400000000000000000000032411356347341600245600ustar00rootroot00000000000000from pysph.sph.equation import Equation import numpy as np class GroupParticles(Equation): def __init__(self, dest, sources=None, xmin=0.0, xmax=0.0, ymin=0.0, ymax=0.0, zmin=0.0, zmax=0.0, periodic_in_x=False, periodic_in_y=False, periodic_in_z=False): self.periodic_in_x = periodic_in_x self.periodic_in_y = periodic_in_y self.periodic_in_z = periodic_in_z self.xlen = abs(xmax - xmin) self.xmin = xmin self.xmax = xmax self.ylen = abs(ymax - ymin) self.ymin = ymin self.ymax = ymax self.zlen = abs(zmax - zmin) self.zmin = zmin self.zmax = zmax super(GroupParticles, self).__init__(dest, sources) def loop(self, d_idx, d_cm, d_body_id, d_x, d_y, 
             d_z):
        b = declare('int')
        b = d_body_id[d_idx]
        if self.periodic_in_x:
            if (abs(d_x[d_idx] - d_cm[3 * b]) > (self.xlen / 2.0)):
                if (d_cm[3 * b] > self.xmin + self.xlen / 2.0):
                    d_x[d_idx] += self.xlen
                else:
                    d_x[d_idx] -= self.xlen
        if self.periodic_in_y:
            if (abs(d_y[d_idx] - d_cm[3 * b + 1]) > (self.ylen / 2.0)):
                if (d_cm[3 * b + 1] > self.ymin + self.ylen / 2.0):
                    d_y[d_idx] += self.ylen
                else:
                    d_y[d_idx] -= self.ylen
        if self.periodic_in_z:
            if (abs(d_z[d_idx] - d_cm[3 * b + 2]) > (self.zlen / 2.0)):
                if (d_cm[3 * b + 2] > self.zmin + self.zlen / 2.0):
                    d_z[d_idx] += self.zlen
                else:
                    d_z[d_idx] -= self.zlen

pysph-master/pysph/examples/surface_tension/
pysph-master/pysph/examples/surface_tension/__init__.py
pysph-master/pysph/examples/surface_tension/capillary_wave.py

from math import sqrt

import numpy as np
import os

from pysph.sph.wc.transport_velocity import SummationDensity, \
    MomentumEquationPressureGradient, StateEquation,\
    MomentumEquationArtificialStress, MomentumEquationViscosity, \
    SolidWallNoSlipBC
from pysph.sph.surface_tension import InterfaceCurvatureFromNumberDensity, \
    ShadlooYildizSurfaceTensionForce, CSFSurfaceTensionForce, \
    SmoothedColor, AdamiColorGradient, MorrisColorGradient, \
    SY11DiracDelta, SY11ColorGradient, MomentumEquationViscosityAdami, \
    AdamiReproducingDivergence, CSFSurfaceTensionForceAdami,\
    MomentumEquationPressureGradientAdami, ColorGradientAdami, \
    ConstructStressMatrix, SurfaceForceAdami, SummationDensitySourceMass, \
    MomentumEquationViscosityMorris, MomentumEquationPressureGradientMorris, \
    InterfaceCurvatureFromDensity, SolidWallPressureBCnoDensity
from pysph.sph.wc.basic import TaitEOS
from pysph.sph.gas_dynamics.basic import ScaleSmoothingLength
from pysph.tools.geometry import get_2d_block, \
remove_overlap_particles, \ get_2d_circle from pysph.base.utils import get_particle_array from pysph.base.kernels import CubicSpline, QuinticSpline from pysph.sph.equation import Group, Equation from pysph.solver.application import Application from pysph.solver.solver import Solver from pysph.sph.integrator_step import TransportVelocityStep from pysph.sph.integrator import PECIntegrator from pysph.base.nnps import DomainManager from pysph.solver.utils import iter_output dim = 2 Lx = 1.0 Ly = 1.0 nu1 = 0.05 nu2 = 0.0005 sigma = 1.0 factor1 = 0.5 factor2 = 1 / factor1 rho1 = 1.0 c0 = 20.0 gamma = 1.4 R = 287.1 rho2 = 0.001 p1 = c0**2 * rho1 p2 = c0*c0*rho2 nx = 60 dx = Lx / nx volume = dx * dx hdx = 1.0 h0 = hdx * dx tf = 0.5 epsilon = 0.01 / h0 v0 = 10.0 r0 = 0.05 dt1 = 0.25*np.sqrt(rho2*h0*h0*h0/(2.0*np.pi*sigma)) dt2 = 0.25*h0/(c0+v0) dt3 = 0.125*rho2*h0*h0/nu2 dt = 0.9*min(dt1, dt2, dt3) d = 2 def r(x, y): return x*x + y*y class MultiPhase(Application): def create_particles(self): fluid_x, fluid_y = get_2d_block( dx=dx, length=Lx, height=Ly, center=np.array([0., 0.])) rho_fluid = np.ones_like(fluid_x) * rho2 m_fluid = rho_fluid * volume h_fluid = np.ones_like(fluid_x) * h0 cs_fluid = np.ones_like(fluid_x) * c0 circle_x, circle_y = get_2d_circle(dx=dx, r=0.2, center=np.array([0.0, 0.0])) rho_circle = np.ones_like(circle_x) * rho1 m_circle = rho_circle * volume h_circle = np.ones_like(circle_x) * h0 cs_circle = np.ones_like(circle_x) * c0 wall_x, wall_y = get_2d_block(dx=dx, length=Lx+6*dx, height=Ly+6*dx, center=np.array([0., 0.])) rho_wall = np.ones_like(wall_x) * rho2 m_wall = rho_wall * volume h_wall = np.ones_like(wall_x) * h0 cs_wall = np.ones_like(wall_x) * c0 additional_props = ['V', 'color', 'scolor', 'cx', 'cy', 'cz', 'cx2', 'cy2', 'cz2', 'nx', 'ny', 'nz', 'ddelta', 'uhat', 'vhat', 'what', 'auhat', 'avhat', 'awhat', 'ax', 'ay', 'az', 'wij', 'vmag2', 'N', 'wij_sum', 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'kappa', 'arho', 'nu', 'wg', 'ug', 'vg', 
'pi00', 'pi01', 'pi02', 'pi10', 'pi11', 'pi12', 'pi20', 'pi21', 'pi22'] gas = get_particle_array( name='gas', x=fluid_x, y=fluid_y, h=h_fluid, m=m_fluid, rho=rho_fluid, cs=cs_fluid, additional_props=additional_props) gas.nu[:] = nu2 gas.color[:] = 0.0 liquid = get_particle_array( name='liquid', x=circle_x, y=circle_y, h=h_circle, m=m_circle, rho=rho_circle, cs=cs_circle, additional_props=additional_props) liquid.nu[:] = nu1 liquid.color[:] = 1.0 wall = get_particle_array( name='wall', x=wall_x, y=wall_y, h=h_wall, m=m_wall, rho=rho_wall, cs=cs_wall, additional_props=additional_props) wall.color[:] = 0.0 remove_overlap_particles(wall, liquid, dx_solid=dx, dim=2) remove_overlap_particles(wall, gas, dx_solid=dx, dim=2) remove_overlap_particles(gas, liquid, dx_solid=dx, dim=2) gas.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny', 'ddelta', 'kappa', 'N', 'scolor', 'p']) liquid.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny', 'ddelta', 'kappa', 'N', 'scolor', 'p']) wall.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny', 'ddelta', 'kappa', 'N', 'scolor', 'p']) for i in range(len(gas.x)): R = sqrt(r(gas.x[i], gas.y[i]) + 0.0001*gas.h[i]*gas.h[i]) f = np.exp(-R/r0) gas.u[i] = v0*gas.x[i]*(1.0-(gas.y[i]*gas.y[i])/(r0*R))*f/r0 gas.v[i] = -v0*gas.y[i]*(1.0-(gas.x[i]*gas.x[i])/(r0*R))*f/r0 for i in range(len(liquid.x)): R = sqrt(r(liquid.x[i], liquid.y[i]) + 0.0001*liquid.h[i]*liquid.h[i]) liquid.u[i] = v0*liquid.x[i] * \ (1.0 - (liquid.y[i]*liquid.y[i])/(r0*R))*f/r0 liquid.v[i] = -v0*liquid.y[i] * \ (1.0 - (liquid.x[i]*liquid.x[i])/(r0*R))*f/r0 return [liquid, gas, wall] def create_solver(self): kernel = QuinticSpline(dim=2) integrator = PECIntegrator(liquid=TransportVelocityStep(), gas=TransportVelocityStep()) solver = Solver( kernel=kernel, dim=dim, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=False) return solver def create_equations(self): adami_stress_equations = [ Group(equations=[ SummationDensity( dest='liquid', sources=[ 'liquid', 
'wall', 'gas']), SummationDensity( dest='gas', sources=[ 'liquid', 'wall', 'gas']), SummationDensity( dest='wall', sources=['liquid', 'wall', 'gas']) ]), Group(equations=[ TaitEOS(dest='liquid', sources=None, rho0=rho1, c0=c0, gamma=1, p0=p1), TaitEOS(dest='gas', sources=None, rho0=rho2, c0=c0, gamma=1, p0=p1), SolidWallPressureBCnoDensity(dest='wall', sources=['liquid', 'gas']), ]), Group(equations=[ ColorGradientAdami(dest='liquid', sources=['liquid', 'wall', 'gas']), ColorGradientAdami(dest='gas', sources=[ 'liquid', 'wall', 'gas']), ]), Group(equations=[ConstructStressMatrix(dest='liquid', sources=None, sigma=sigma, d=2), ConstructStressMatrix(dest='gas', sources=None, sigma=sigma, d=2)]), Group( equations=[ MomentumEquationPressureGradientAdami( dest='liquid', sources=['liquid', 'wall', 'gas']), MomentumEquationPressureGradientAdami( dest='gas', sources=['liquid', 'wall', 'gas']), MomentumEquationViscosityAdami( dest='liquid', sources=['liquid', 'gas']), MomentumEquationViscosityAdami( dest='gas', sources=['liquid', 'gas']), SurfaceForceAdami( dest='liquid', sources=['liquid', 'wall', 'gas']), SurfaceForceAdami( dest='gas', sources=['liquid', 'wall', 'gas']), SolidWallNoSlipBC(dest='liquid', sources=['wall'], nu=nu1), SolidWallNoSlipBC(dest='gas', sources=['wall'], nu=nu2), ]), ] return adami_stress_equations def post_process(self): try: import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt except ImportError: print("Post processing requires Matplotlib") return from pysph.solver.utils import load files = self.output_files t = [] centerx = [] centery = [] for f in files: data = load(f) pa = data['arrays']['liquid'] t.append(data['solver_data']['t']) x = pa.x y = pa.y length = len(x) cx = 0 cy = 0 count = 0 for i in range(length): if x[i] > 0 and y[i] > 0: cx += x[i] cy += y[i] count += 1 else: continue # As the masses are all the same in this case centerx.append(cx/count) centery.append(cy/count) fname = os.path.join(self.output_dir, 
                             'results.npz')
        np.savez(fname, t=t, centerx=centerx, centery=centery)
        plt.plot(t, centerx, label='x position')
        plt.plot(t, centery, label='y position')
        plt.legend()
        fig1 = os.path.join(self.output_dir, 'centerofmassposvst')
        plt.savefig(fig1)
        plt.close()


if __name__ == '__main__':
    app = MultiPhase()
    app.run()
    app.post_process()

pysph-master/pysph/examples/surface_tension/circular_droplet.py

"""Curvature computation for a circular droplet. (5 seconds)

For particles distributed in a box, an initial circular interface is
distinguished by coloring the particles within a circle. The resulting
equations can then be used to check for the numerical curvature and
discretized dirac-delta function based on this artificial interface.

"""

import numpy

# Particle generator
from pysph.base.utils import get_particle_array
from pysph.base.kernels import QuinticSpline

# SPH Equations and Group
from pysph.sph.equation import Group
from pysph.sph.wc.viscosity import ClearyArtificialViscosity
from pysph.sph.wc.transport_velocity import SummationDensity, \
    MomentumEquationPressureGradient,\
    SolidWallPressureBC, SolidWallNoSlipBC, \
    StateEquation, MomentumEquationArtificialStress, MomentumEquationViscosity
from pysph.sph.surface_tension import ColorGradientUsingNumberDensity, \
    InterfaceCurvatureFromNumberDensity, ShadlooYildizSurfaceTensionForce,\
    SmoothedColor, AdamiColorGradient, AdamiReproducingDivergence,\
    MorrisColorGradient
from pysph.sph.gas_dynamics.basic import ScaleSmoothingLength

# PySPH solver and application
from pysph.solver.application import Application
from pysph.solver.solver import Solver

# Integrators and Steppers
from pysph.sph.integrator_step import TransportVelocityStep
from pysph.sph.integrator import PECIntegrator

# Domain manager for periodic domains
from pysph.base.nnps import DomainManager

# problem parameters
dim = 2
domain_width = 1.0
domain_height = 1.0

# numerical constants
wavelength = 1.0 wavenumber = 2*numpy.pi/wavelength rho0 = rho1 = 1000.0 rho2 = 1*rho1 U = 0.5 sigma = 1.0 # discretization parameters dx = dy = 0.0125 dxb2 = dyb2 = 0.5 * dx hdx = 1.5 h0 = hdx * dx rho0 = 1000.0 c0 = 20.0 p0 = c0*c0*rho0 nu = 0.01 # set factor1 to [0.5 ~ 1.0] to simulate a thick or thin # interface. Larger values result in a thick interface. Set factor1 = # 1 for the Morris Method I factor1 = 1.0 factor2 = 1./factor1 # correction factor for Morris's Method I. Set with_morris_correction # to True when using this correction. epsilon = 0.01/h0 # time steps dt_cfl = 0.25 * h0/(1.1*c0) dt_viscous = 0.125 * h0**2/nu dt_force = 1.0 dt = 0.9 * min(dt_cfl, dt_viscous, dt_force) tf = 5*dt staggered = True class CircularDroplet(Application): def create_domain(self): return DomainManager( xmin=0, xmax=domain_width, ymin=0, ymax=domain_height, periodic_in_x=True, periodic_in_y=True) def create_particles(self): if staggered: x1, y1 = numpy.mgrid[dxb2:domain_width:dx, dyb2:domain_height:dy] x2, y2 = numpy.mgrid[dx:domain_width:dx, dy:domain_height:dy] x1 = x1.ravel() y1 = y1.ravel() x2 = x2.ravel() y2 = y2.ravel() x = numpy.concatenate([x1, x2]) y = numpy.concatenate([y1, y2]) volume = dx*dx/2 else: x, y = numpy.mgrid[dxb2:domain_width:dx, dyb2:domain_height:dy] x = x.ravel() y = y.ravel() volume = dx*dx m = numpy.ones_like(x) * volume * rho0 rho = numpy.ones_like(x) * rho0 h = numpy.ones_like(x) * h0 cs = numpy.ones_like(x) * c0 # additional properties required for the fluid. 
additional_props = [ # volume inverse or number density 'V', # color and gradients 'color', 'scolor', 'cx', 'cy', 'cz', 'cx2', 'cy2', 'cz2', # discretized interface normals and dirac delta 'nx', 'ny', 'nz', 'ddelta', # interface curvature 'kappa', # filtered velocities 'uf', 'vf', 'wf', # transport velocities 'uhat', 'vhat', 'what', 'auhat', 'avhat', 'awhat', # imposed accelerations on the solid wall 'ax', 'ay', 'az', 'wij', # velocity of magnitude squared needed for TVF 'vmag2', # variable to indicate reliable normals and normalizing # constant 'N', 'wij_sum' ] # get the fluid particle array fluid = get_particle_array( name='fluid', x=x, y=y, h=h, m=m, rho=rho, cs=cs, additional_props=additional_props) # set the color of the inner circle for i in range(x.size): if (((fluid.x[i]-0.5)**2 + (fluid.y[i]-0.5)**2) <= 0.25**2): fluid.color[i] = 1.0 # particle volume fluid.V[:] = 1./volume # set additional output arrays for the fluid fluid.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny', 'ddelta', 'p', 'kappa', 'N', 'scolor']) print("2D Circular droplet deformation with %d fluid particles" % ( fluid.get_number_of_particles())) return [fluid, ] def create_solver(self): kernel = QuinticSpline(dim=2) integrator = PECIntegrator(fluid=TransportVelocityStep()) solver = Solver( kernel=kernel, dim=dim, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=False, pfreq=1) return solver def create_equations(self): equations = [ # We first compute the mass and number density of the fluid # phase. This is used in all force computations henceforth. The # number density (1/volume) is explicitly set for the solid phase # and this isn't modified for the simulation. Group(equations=[ SummationDensity(dest='fluid', sources=['fluid']), ]), # Given the updated number density for the fluid, we can update # the fluid pressure. Also compute the smoothed color based on the # color index for a particle. 
Group(equations=[ StateEquation(dest='fluid', sources=None, rho0=rho0, p0=p0, b=1.0), SmoothedColor(dest='fluid', sources=['fluid']), ]), ################################################################# # Begin Surface tension formulation ################################################################# # Scale the smoothing lengths to determine the interface # quantities. Group(equations=[ ScaleSmoothingLength(dest='fluid', sources=None, factor=factor1) ], update_nnps=False), # Compute the gradient of the color function with respect to the # new smoothing length. At the end of this Group, we will have the # interface normals and the discretized dirac delta function for # the fluid-fluid interface. Group(equations=[ MorrisColorGradient(dest='fluid', sources=['fluid'], epsilon=0.01/h0), # ColorGradientUsingNumberDensity(dest='fluid',sources=['fluid'], # epsilon=epsilon), # AdamiColorGradient(dest='fluid', sources=['fluid']), ], ), # Compute the interface curvature using the modified smoothing # length and interface normals computed in the previous Group. Group(equations=[ InterfaceCurvatureFromNumberDensity( dest='fluid', sources=['fluid'], with_morris_correction=True ), # AdamiReproducingDivergence(dest='fluid',sources=['fluid'], # dim=2), ], ), # Now rescale the smoothing length to the original value for the # rest of the computations. Group(equations=[ ScaleSmoothingLength(dest='fluid', sources=None, factor=factor2) ], update_nnps=False, ), ################################################################# # End Surface tension formulation ################################################################# # The main acceleration block Group( equations=[ # Gradient of pressure for the fluid phase using the # number density formulation. MomentumEquationPressureGradient( dest='fluid', sources=['fluid'], pb=p0), # Artificial viscosity for the fluid phase. 
MomentumEquationViscosity( dest='fluid', sources=['fluid'], nu=nu), # Surface tension force for the SY11 formulation ShadlooYildizSurfaceTensionForce(dest='fluid', sources=None, sigma=sigma), # Artificial stress for the fluid phase MomentumEquationArtificialStress(dest='fluid', sources=['fluid']), ], ) ] return equations if __name__ == '__main__': app = CircularDroplet() app.run() pysph-master/pysph/examples/surface_tension/equilibrium_rod.py000066400000000000000000000121401356347341600254410ustar00rootroot00000000000000import numpy as np import os from pysph.sph.surface_tension import get_surface_tension_equations from pysph.tools.geometry import get_2d_block from pysph.base.utils import get_particle_array from pysph.base.kernels import CubicSpline, QuinticSpline from pysph.sph.equation import Group, Equation from pysph.solver.application import Application from pysph.solver.solver import Solver from pysph.sph.integrator_step import TransportVelocityStep, \ VelocityVerletSymplecticWCSPHStep from pysph.sph.integrator import PECIntegrator from pysph.base.nnps import DomainManager from pysph.solver.utils import iter_output dim = 2 Lx = 1.0 Ly = 1.0 nu = 0.05 sigma = 1.0 factor1 = 0.8 factor2 = 1 / factor1 rho0 = 1.0 c0 = 20.0 gamma = 1.4 R = 287.1 tf = 10.0 p0 = c0**2 * rho0 nx = 50 dx = Lx / nx volume = dx * dx hdx = 1.5 h0 = hdx * dx epsilon = 0.01 / h0 dt1 = 0.25*np.sqrt(rho0*h0*h0*h0/(2.0*np.pi*sigma)) dt2 = 0.25*h0/(c0) dt3 = 0.125*rho0*h0*h0/nu dt = 0.9*min(dt1, dt2, dt3) def radius(x, y): return x*x + y*y class MultiPhase(Application): def add_user_options(self, group): choices = ['morris', 'tvf', 'adami_stress', 'adami', 'shadloo'] group.add_argument( "--scheme", action="store", dest='scheme', default='morris', choices=choices, help='Specify scheme to use among %s' % choices ) def create_particles(self): fluid_x, fluid_y = get_2d_block( dx=dx, length=Lx - dx, height=Ly - dx, center=np.array([0., 0.])) rho_fluid = np.ones_like(fluid_x) * rho0 m_fluid = 
rho_fluid * volume h_fluid = np.ones_like(fluid_x) * h0 cs_fluid = np.ones_like(fluid_x) * c0 additional_props = ['V', 'color', 'scolor', 'cx', 'cy', 'cz', 'cx2', 'cy2', 'cz2', 'nx', 'ny', 'nz', 'ddelta', 'uhat', 'vhat', 'what', 'auhat', 'avhat', 'awhat', 'ax', 'ay', 'az', 'wij', 'vmag2', 'N', 'wij_sum', 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'kappa', 'arho', 'nu', 'pi00', 'pi01', 'pi02', 'pi10', 'pi11', 'pi12', 'pi20', 'pi21', 'pi22', 'alpha'] fluid = get_particle_array( name='fluid', x=fluid_x, y=fluid_y, h=h_fluid, m=m_fluid, rho=rho_fluid, cs=cs_fluid, additional_props=additional_props) fluid.alpha[:] = sigma for i in range(len(fluid.x)): if (fluid.x[i]*fluid.x[i] + fluid.y[i]*fluid.y[i]) < 0.0625: fluid.color[i] = 1.0 else: fluid.color[i] = 0.0 fluid.V[:] = 1. / volume fluid.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny', 'ddelta', 'kappa', 'N', 'scolor', 'p']) fluid.nu[:] = nu return [fluid] def create_domain(self): return DomainManager( xmin=-0.5 * Lx, xmax=0.5 * Lx, ymin=-0.5*Ly, ymax=0.5*Ly, periodic_in_x=True, periodic_in_y=True) def create_solver(self): kernel = QuinticSpline(dim=2) stepper = TransportVelocityStep() integrator = PECIntegrator(fluid=stepper) solver = Solver( kernel=kernel, dim=dim, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=False) return solver def create_equations(self): return get_surface_tension_equations(['fluid'], [], self.options.scheme, rho0, p0, c0, 0, factor1, factor2, nu, sigma, 2, epsilon, gamma, real=False) def post_process(self): try: import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt except ImportError: print("Post processing requires Matplotlib") return from pysph.solver.utils import load files = self.output_files dp = [] t = [] for f in files: data = load(f) pa = data['arrays']['fluid'] t.append(data['solver_data']['t']) m = pa.m x = pa.x y = pa.y N = pa.N p = pa.p n = len(m) count_in = 0 count_out = 0 p_in = 0 p_out = 0 for i in range(n): r = radius(x[i], y[i]) if N[i] < 1: 
if radius(x[i], y[i]) < 0.0625: p_in += p[i] count_in += 1 else: p_out += p[i] count_out += 1 else: continue dp.append((p_in/count_in) - (p_out/count_out)) fname = os.path.join(self.output_dir, 'results.npz') np.savez(fname, t=t, dp=dp) plt.plot(t, dp) fig = os.path.join(self.output_dir, "dpvst.png") plt.savefig(fig) plt.close() if __name__ == '__main__': app = MultiPhase() app.run() app.post_process() pysph-master/pysph/examples/surface_tension/equilibrium_rod_hex.py000066400000000000000000000121761356347341600263160ustar00rootroot00000000000000import numpy as np import os from pysph.sph.surface_tension import get_surface_tension_equations from pysph.tools.geometry import get_2d_block from pysph.base.utils import get_particle_array from pysph.base.kernels import CubicSpline, QuinticSpline from pysph.solver.application import Application from pysph.solver.solver import Solver from pysph.sph.integrator_step import TransportVelocityStep from pysph.sph.integrator import PECIntegrator from pysph.base.nnps import DomainManager from pysph.solver.utils import iter_output dim = 2 Lx = 1.0 Ly = 1.0 nu = 0.05 sigma = 1.0 factor1 = 0.8 factor2 = 1 / factor1 rho0 = 1.0 c0 = 20.0 gamma = 1.4 R = 287.1 p0 = c0*c0*rho0 nx = 50 dx = Lx / nx volume = dx * dx hdx = 1.5 h0 = hdx*dx tf = 10.0 epsilon = 0.01 / h0 dt1 = 0.25*np.sqrt(rho0*h0*h0*h0/(2.0*np.pi*sigma)) dt2 = 0.25*h0/(c0) dt3 = 0.125*rho0*h0*h0/nu dt = 0.9*min(dt1, dt2, dt3) def radius(x, y): return x*x + y*y class MultiPhase(Application): def add_user_options(self, group): choices = ['morris', 'tvf', 'adami_stress', 'adami', 'shadloo'] group.add_argument( "--scheme", action="store", dest='scheme', default='morris', choices=choices, help='Specify scheme to use among %s' % choices ) def create_particles(self): x, y = np.mgrid[-0.5*Lx:0.5*Lx:dx, -0.5*Ly:0.5*Ly:dx] xc = x.copy() + 0.5*dx yc = y.copy() + 0.5*dx fluid_x = np.concatenate([x.ravel(), xc.ravel()]) + 0.25*dx fluid_y = np.concatenate([y.ravel(), yc.ravel()]) + 0.25*dx 
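The particle placement in `create_particles` above interleaves a base rectangular grid with a copy shifted by half a spacing in x and y to obtain a staggered ("hexagonal") packing; each point then carries half a cell's mass (`m_fluid = rho_fluid*volume*0.5` below). A minimal pure-Python sketch of that staggering — the function name and sample values are illustrative, not PySPH API:

```python
def staggered_lattice(n, dx):
    """Interleave an n x n rectangular grid with a copy shifted by
    dx/2 in both x and y, mimicking the hexagonal-like packing used
    in the example above. Returns a list of (x, y) tuples.
    """
    pts = []
    for i in range(n):
        for j in range(n):
            x, y = i * dx, j * dx
            pts.append((x, y))                        # base grid point
            pts.append((x + 0.5 * dx, y + 0.5 * dx))  # shifted grid point
    return pts
```

Since the staggered arrangement doubles the particle count per cell, each particle is given half the cell volume so that the total mass matches a single unstaggered grid.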
rho_fluid = np.ones_like(fluid_x) * rho0 m_fluid = rho_fluid*volume*0.5 h_fluid = np.ones_like(fluid_x) * h0 cs_fluid = np.ones_like(fluid_x) * c0 additional_props = ['V', 'color', 'scolor', 'cx', 'cy', 'cz', 'cx2', 'cy2', 'cz2', 'nx', 'ny', 'nz', 'ddelta', 'uhat', 'vhat', 'what', 'auhat', 'avhat', 'awhat', 'ax', 'ay', 'az', 'wij', 'vmag2', 'N', 'wij_sum', 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'kappa', 'arho', 'nu', 'pi00', 'pi01', 'pi02', 'pi10', 'pi11', 'pi12', 'pi20', 'pi21', 'pi22', 'alpha'] fluid = get_particle_array( name='fluid', x=fluid_x, y=fluid_y, h=h_fluid, m=m_fluid, rho=rho_fluid, cs=cs_fluid, additional_props=additional_props) for i in range(len(fluid.x)): if (fluid.x[i] * fluid.x[i] + fluid.y[i] * fluid.y[i]) < 0.0625: fluid.color[i] = 1.0 else: fluid.color[i] = 0.0 fluid.alpha[:] = sigma fluid.V[:] = 1. / volume fluid.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny', 'ddelta', 'kappa', 'N', 'scolor', 'p']) fluid.nu[:] = nu return [fluid] def create_domain(self): return DomainManager( xmin=-0.5 * Lx, xmax=0.5 * Lx, ymin=-0.5 * Ly, ymax=0.5 * Ly, periodic_in_x=True, periodic_in_y=True) def create_solver(self): kernel = QuinticSpline(dim=2) integrator = PECIntegrator(fluid=TransportVelocityStep()) solver = Solver( kernel=kernel, dim=dim, integrator=integrator, dt=dt, tf=tf, adaptive_timestep=False) return solver def create_equations(self): return get_surface_tension_equations(['fluid'], [], self.options.scheme, rho0, p0, c0, 0, factor1, factor2, nu, sigma, 2, epsilon, gamma, real=False) def post_process(self): try: import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt except ImportError: print("Post processing requires Matplotlib") return from pysph.solver.utils import load files = self.output_files dp = [] t = [] for f in files: data = load(f) pa = data['arrays']['fluid'] t.append(data['solver_data']['t']) m = pa.m x = pa.x y = pa.y N = pa.N p = pa.p n = len(m) count_in = 0 count_out = 0 p_in = 0 p_out = 0 for i in 
range(n):
                r = radius(x[i], y[i])
                if N[i] < 1:
                    if radius(x[i], y[i]) < 0.0625:
                        p_in += p[i]
                        count_in += 1
                    else:
                        p_out += p[i]
                        count_out += 1
                else:
                    continue
            dp.append((p_in/count_in) - (p_out/count_out))
        fname = os.path.join(self.output_dir, 'results.npz')
        np.savez(fname, t=t, dp=dp)
        plt.plot(t, dp)
        fig = os.path.join(self.output_dir, "dpvst.png")
        plt.savefig(fig)
        plt.close()


if __name__ == '__main__':
    app = MultiPhase()
    app.run()
    app.post_process()

pysph-master/pysph/examples/surface_tension/interface_instability.py

import numpy as np
import os

from pysph.sph.surface_tension import get_surface_tension_equations
from pysph.tools.geometry import get_2d_block
from pysph.base.utils import get_particle_array
from pysph.base.kernels import CubicSpline, QuinticSpline
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator_step import TransportVelocityStep
from pysph.sph.integrator import PECIntegrator
from pysph.base.nnps import DomainManager
from pysph.solver.utils import iter_output

dim = 2
Lx = 0.5
Ly = 1.0
factor1 = 0.8
factor2 = 1.0/factor1
nu = 0.0
sigma = 1.0
rho0 = 1.
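All of these surface-tension examples build their time step the same way: take a safety factor (0.9) times the smallest of the surface-tension, acoustic (CFL) and, when the fluid is viscous, viscous-diffusion constraints computed from the smoothing length. A standalone sketch of that rule — the helper name and the sample values are illustrative, not part of PySPH:

```python
from math import pi, sqrt


def stable_dt(rho, h, sigma, c0, vmax=0.0, nu=0.0, safety=0.9):
    # Surface-tension constraint (dt1 in these examples).
    dts = [0.25 * sqrt(rho * h**3 / (2.0 * pi * sigma))]
    # Acoustic CFL constraint (dt2), allowing for a reference speed vmax.
    dts.append(0.25 * h / (c0 + vmax))
    # Viscous diffusion constraint (dt3); skipped for inviscid runs,
    # matching dt = 0.9*min(dt1, dt2) used when nu = 0.
    if nu > 0.0:
        dts.append(0.125 * rho * h * h / nu)
    return safety * min(dts)
```

For the parameters used in this file (rho0 = 1, h0 = 1.5*dx, sigma = 1, c0 = 20) the surface-tension bound is typically the binding one.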
c0 = 20.0
gamma = 1.4
R = 287.1
p0 = c0**2 * rho0
nx = 50
dx = Lx / nx
volume = dx * dx
hdx = 1.5
h0 = hdx * dx
tf = 0.5
epsilon = 0.01 / h0
KE = 10**(-6.6)*p0*p0*gamma/(c0 * c0 * rho0 * rho0 * nx * nx * (gamma - 1))
Vmax = np.sqrt(2 * KE / (rho0 * dx * dx))
dt1 = 0.25*np.sqrt(rho0*h0*h0*h0/(2.0*np.pi*sigma))
dt2 = 0.25*h0/(c0+Vmax)
dt = 0.9*min(dt1, dt2)


class MultiPhase(Application):
    def add_user_options(self, group):
        choices = ['morris', 'tvf', 'adami_stress', 'adami', 'shadloo']
        group.add_argument(
            "--scheme", action="store", dest='scheme', default='morris',
            choices=choices,
            help='Specify scheme to use among %s' % choices
        )

    def create_particles(self):
        fluid_x, fluid_y = get_2d_block(
            dx=dx, length=Lx-dx, height=Ly-dx, center=np.array([0., 0.5*Ly]))
        rho_fluid = np.ones_like(fluid_x) * rho0
        m_fluid = rho_fluid * volume
        h_fluid = np.ones_like(fluid_x) * h0
        cs_fluid = np.ones_like(fluid_x) * c0
        additional_props = ['V', 'color', 'scolor', 'cx', 'cy', 'cz', 'cx2',
                            'cy2', 'cz2', 'nx', 'ny', 'nz', 'ddelta', 'uhat',
                            'vhat', 'what', 'auhat', 'avhat', 'awhat', 'ax',
                            'ay', 'az', 'wij', 'vmag2', 'N', 'wij_sum',
                            'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0',
                            'kappa', 'arho', 'nu', 'pi00', 'pi01', 'pi02',
                            'pi10', 'pi11', 'pi12', 'pi20', 'pi21', 'pi22']
        fluid = get_particle_array(
            name='fluid', x=fluid_x, y=fluid_y, h=h_fluid, m=m_fluid,
            rho=rho_fluid, cs=cs_fluid, additional_props=additional_props)
        for i in range(len(fluid.x)):
            if fluid.y[i] > 0.25 and fluid.y[i] < 0.75:
                fluid.color[i] = 1.0
            else:
                fluid.color[i] = 0.0
        fluid.V[:] = 1. / volume
        fluid.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny',
                                 'ddelta', 'kappa', 'N', 'scolor', 'p'])
        angles = np.random.random_sample((len(fluid.x),))*2*np.pi
        vel = np.sqrt(2 * KE / fluid.m)
        fluid.u = vel
        fluid.v = vel
        fluid.nu[:] = 0.0
        return [fluid]

    def create_domain(self):
        return DomainManager(
            xmin=-0.5 * Lx, xmax=0.5 * Lx, ymin=0.0, ymax=Ly,
            periodic_in_x=True, periodic_in_y=True, n_layers=6)

    def create_solver(self):
        kernel = CubicSpline(dim=2)
        integrator = PECIntegrator(fluid=TransportVelocityStep())
        solver = Solver(
            kernel=kernel, dim=dim, integrator=integrator,
            dt=dt, tf=tf, adaptive_timestep=False)
        return solver

    def create_equations(self):
        return get_surface_tension_equations(
            ['fluid'], [], self.options.scheme, rho0, p0, c0, 0, factor1,
            factor2, nu, sigma, 2, epsilon, gamma, real=False)

    def post_process(self):
        try:
            import matplotlib
            matplotlib.use('Agg')
            import matplotlib.pyplot as plt
        except ImportError:
            print("Post processing requires Matplotlib")
            return
        from pysph.solver.utils import load
        files = self.output_files
        ke = []
        t = []
        for f in files:
            data = load(f)
            pa = data['arrays']['fluid']
            t.append(data['solver_data']['t'])
            m = pa.m
            u = pa.u
            v = pa.v
            length = len(m)
            ke.append(np.log10(sum(0.5 * m * (u**2 + v**2) / length)))
        fname = os.path.join(self.output_dir, 'results.npz')
        np.savez(fname, t=t, ke=ke)
        plt.plot(t, ke, 'o')
        fig = os.path.join(self.output_dir, "KEvst.png")
        plt.savefig(fig)
        plt.close()


if __name__ == '__main__':
    app = MultiPhase()
    app.run()
    app.post_process()

pysph-master/pysph/examples/surface_tension/khi_sy11.py

from math import sqrt
import numpy as np
import os
import numpy

from pysph.sph.surface_tension import get_surface_tension_equations
from pysph.tools.geometry import get_2d_block, remove_overlap_particles
from pysph.base.utils import get_particle_array
from pysph.base.kernels import CubicSpline, QuinticSpline
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator_step import TransportVelocityStep
from pysph.sph.integrator import PECIntegrator
from pysph.base.nnps import DomainManager
from pysph.solver.utils import iter_output

# problem parameters
dim = 2
domain_width = 1.0
domain_height = 1.0

# numerical constants
alpha = 0.0001
wavelength = 1.0
wavenumber = 2*numpy.pi/wavelength
Ri = 0.1
rho0 = rho1 = 1000.0
rho2 = rho1 = 2000.0
U = 0.5
sigma = Ri * (rho1*rho2) * (2*U)**2/(wavenumber*(rho1 + rho2))
psi0 = 0.03*domain_height
gy = -9.81

# discretization parameters
nghost_layers = 5
dx = dy = 0.0125
dxb2 = dyb2 = 0.5 * dx
volume = dx*dx
hdx = 1.0
h0 = hdx * dx
epsilon = 0.01/h0
rho0 = 1000.0
c0 = 10.0
p0 = c0*c0*rho0
nu = 0.125 * alpha * h0 * c0

# time steps and final time
tf = 3.0
dt1 = 0.25*np.sqrt(rho0*h0*h0*h0/(2.0*np.pi*sigma))
dt2 = 0.25*h0/(c0)
dt3 = 0.125*rho0*h0*h0/nu
dt = 0.9*min(dt1, dt2, dt3)
factor1 = 0.8
factor2 = 1/factor1


class SquareDroplet(Application):
    def create_particles(self):
        ghost_extent = (nghost_layers + 0.5)*dx
        x, y = numpy.mgrid[dxb2:domain_width:dx,
                           -ghost_extent:domain_height+ghost_extent:dy]
        x = x.ravel()
        y = y.ravel()

        m = numpy.ones_like(x) * volume * rho0
        rho = numpy.ones_like(x) * rho0
        p = numpy.ones_like(x) * p0
        h = numpy.ones_like(x) * h0
        cs = numpy.ones_like(x) * c0

        # additional properties required for the fluid.
        additional_props = [
            # volume inverse or number density
            'V', 'pi00', 'pi01', 'pi02', 'pi10', 'pi11', 'pi12', 'pi20',
            'pi21', 'pi22',

            # color and gradients
            'color', 'scolor', 'cx', 'cy', 'cz', 'cx2', 'cy2', 'cz2',

            # discretized interface normals and dirac delta
            'nx', 'ny', 'nz', 'ddelta',

            # interface curvature
            'kappa', 'nu', 'alpha',

            # filtered velocities
            'uf', 'vf', 'wf',

            # transport velocities
            'uhat', 'vhat', 'what', 'auhat', 'avhat', 'awhat',

            # imposed accelerations on the solid wall
            'ax', 'ay', 'az', 'wij',

            # velocity magnitude squared
            'vmag2',

            # variable to indicate reliable normals and normalizing
            # constant
            'N', 'wij_sum', 'wg', 'ug', 'vg'
        ]

        # get the fluid particle array
        fluid = get_particle_array(
            name='fluid', x=x, y=y, h=h, m=m, rho=rho, cs=cs, p=p,
            additional_props=additional_props)

        # set the fluid velocity with respect to the sinusoidal
        # perturbation
        fluid.u[:] = -U
        fluid.N[:] = 0.0
        fluid.nu[:] = nu
        fluid.alpha[:] = sigma
        mode = 1
        for i in range(len(fluid.x)):
            ang = 2*numpy.pi*fluid.x[i]/(mode*domain_width)
            if fluid.y[i] >= domain_height/2+psi0*domain_height*numpy.sin(ang):
                fluid.u[i] = U
                fluid.color[i] = 1.0
                fluid.rho[i] = 2000.0

        # extract the top and bottom boundary particles
        indices = numpy.where(fluid.y > domain_height)[0]
        wall = fluid.extract_particles(indices)
        fluid.remove_particles(indices)

        indices = numpy.where(fluid.y < 0)[0]
        bottom = fluid.extract_particles(indices)
        fluid.remove_particles(indices)

        # concatenate the two boundaries
        wall.append_parray(bottom)
        wall.set_name('wall')

        # set the number density initially for all particles
        fluid.V[:] = 1./volume
        wall.V[:] = 1./volume

        for i in range(len(wall.x)):
            if wall.y[i] > 0.5:
                wall.color[i] = 1.0
            else:
                wall.color[i] = 0.0

        # set additional output arrays for the fluid
        fluid.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny',
                                 'ddelta', 'p', 'rho', 'au', 'av'])
        wall.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny',
                                'ddelta', 'p', 'rho', 'au', 'av'])

        print("2D KHI with %d fluid particles and %d wall particles" % (
            fluid.get_number_of_particles(), wall.get_number_of_particles()))

        return [fluid, wall]

    def create_domain(self):
        return DomainManager(xmin=0, xmax=domain_width,
                            ymin=0, ymax=domain_height,
                            periodic_in_x=True, periodic_in_y=False,
                            n_layers=5.0)

    def create_solver(self):
        kernel = QuinticSpline(dim=2)
        integrator = PECIntegrator(fluid=TransportVelocityStep())
        solver = Solver(
            kernel=kernel, dim=dim, integrator=integrator,
            dt=dt, tf=tf, adaptive_timestep=False)
        return solver

    def add_user_options(self, group):
        choices = ['morris', 'tvf', 'adami_stress', 'adami', 'shadloo']
        group.add_argument(
            "--scheme", action="store", dest='scheme', default='morris',
            choices=choices,
            help='Specify scheme to use among %s' % choices
        )

    def create_equations(self):
        return get_surface_tension_equations(
            ['fluid'], ['wall'], self.options.scheme, rho0, p0, c0, 0,
            factor1, factor2, nu, sigma, 2, epsilon, 1, real=False)


if __name__ == '__main__':
    app = SquareDroplet()
    app.run()

pysph-master/pysph/examples/surface_tension/khi_tvf.py

"""2D Kelvin Helmholtz Instability example using TVF. (1 hour)
"""

import numpy

# Particle generator
from pysph.base.utils import get_particle_array
from pysph.base.kernels import WendlandQuintic

# SPH Equations and Group
from pysph.sph.equation import Group
from pysph.sph.wc.viscosity import ClearyArtificialViscosity
from pysph.sph.wc.transport_velocity import SummationDensity, \
    MomentumEquationPressureGradient, \
    SolidWallPressureBC, SolidWallNoSlipBC, SetWallVelocity, \
    StateEquation, MomentumEquationArtificialStress, MomentumEquationViscosity
from pysph.sph.surface_tension import ColorGradientUsingNumberDensity, \
    InterfaceCurvatureFromNumberDensity, ShadlooYildizSurfaceTensionForce, \
    SmoothedColor
from pysph.sph.gas_dynamics.basic import ScaleSmoothingLength

# PySPH solver and application
from pysph.solver.application import Application
from pysph.solver.solver import Solver

# Integrators and Steppers
from pysph.sph.integrator_step import TransportVelocityStep
from pysph.sph.integrator import PECIntegrator

# Domain manager for periodic domains
from pysph.base.nnps import DomainManager

# problem parameters
dim = 2
domain_width = 1.0
domain_height = 1.0

# numerical constants
gy = -9.81
alpha = 0.001
wavelength = 1.0
wavenumber = 2*numpy.pi/wavelength
Ri = 0.05
rho0 = rho1 = 1000.0
rho2 = 1*rho1
U = 0.5
sigma = Ri * (rho1*rho2) * (2*U)**2/(wavenumber*(rho1 + rho2))

# initial perturbation amplitude
psi0 = 0.03*domain_height

# discretization parameters
nghost_layers = 5
dx = dy = 0.01
dxb2 = dyb2 = 0.5 * dx
volume = dx*dx
hdx = 1.5
h0 = hdx * dx
rho0 = 1000.0
c0 = 25.0
p0 = c0*c0*rho0
nu = 0.125 * alpha * h0 * c0

# time steps
tf = 3.0
dt_cfl = 0.25 * h0/(1.1*c0)
dt_viscous = 0.125 * h0**2/nu
dt_force = 1.0
dt = 0.8 * min(dt_cfl, dt_viscous, dt_force)


class KHITVF(Application):
    def create_particles(self):
        ghost_extent = (nghost_layers + 0.5)*dx

        x, y = numpy.mgrid[dxb2:domain_width:dx,
                           -ghost_extent:domain_height+ghost_extent:dy]
        x = x.ravel()
        y = y.ravel()

        m = numpy.ones_like(x) * volume * rho0
        rho = numpy.ones_like(x) * rho0
        h = numpy.ones_like(x) * h0
        cs = numpy.ones_like(x) * c0

        # additional properties required for the fluid.
        additional_props = [
            # volume inverse or number density
            'V',

            # color and gradients
            'color', 'scolor', 'cx', 'cy', 'cz', 'cx2', 'cy2', 'cz2',

            # discretized interface normals and dirac delta
            'nx', 'ny', 'nz', 'ddelta',

            # interface curvature
            'kappa',

            # transport velocities
            'uhat', 'vhat', 'what', 'auhat', 'avhat', 'awhat',

            # imposed accelerations on the solid wall
            'ax', 'ay', 'az', 'wij',

            # velocity magnitude squared
            'vmag2',

            # variable to indicate reliable normals and normalizing
            # constant
            'N', 'wij_sum',
        ]

        # get the fluid particle array
        fluid = get_particle_array(
            name='fluid', x=x, y=y, h=h, m=m, rho=rho, cs=cs,
            additional_props=additional_props)

        # set the fluid velocity with respect to the sinusoidal
        # perturbation
        fluid.u[:] = -U
        mode = 1
        for i in range(len(fluid.x)):
            ang = 2*numpy.pi*fluid.x[i]/(mode*domain_width)
            temp = domain_height/2 + psi0*domain_height*numpy.sin(ang)
            if fluid.y[i] > temp:
                fluid.u[i] = U
                fluid.color[i] = 1
                fluid.rho[i] = rho1
                fluid.m[i] = volume*rho1
            else:
                fluid.rho[i] = rho2
                fluid.m[i] = rho2/rho1*volume*rho2

        # extract the top and bottom boundary particles
        indices = numpy.where(fluid.y > domain_height)[0]
        wall = fluid.extract_particles(indices)
        fluid.remove_particles(indices)

        indices = numpy.where(fluid.y < 0)[0]
        bottom = fluid.extract_particles(indices)
        fluid.remove_particles(indices)

        # concatenate the two boundaries
        wall.append_parray(bottom)
        wall.set_name('wall')

        # set the number density initially for all particles
        fluid.V[:] = 1./volume
        wall.V[:] = 1./volume

        # set additional output arrays for the fluid
        fluid.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny',
                                 'ddelta', 'kappa', 'N', 'p', 'rho'])

        # extrapolated velocities for the wall
        for name in ['uf', 'vf', 'wf']:
            wall.add_property(name)

        # dummy velocities for the wall
        # required for the no-slip BC
        for name in ['ug', 'vg', 'wg']:
            wall.add_property(name)

        print("2D KHI with %d fluid particles and %d wall particles" % (
            fluid.get_number_of_particles(), wall.get_number_of_particles()))

        return [fluid, wall]

    def create_domain(self):
        return DomainManager(xmin=0, xmax=domain_width,
                            ymin=0, ymax=domain_height,
                            periodic_in_x=True, periodic_in_y=False)

    def create_solver(self):
        kernel = WendlandQuintic(dim=2)
        integrator = PECIntegrator(fluid=TransportVelocityStep())
        solver = Solver(
            kernel=kernel, dim=dim, integrator=integrator,
            dt=dt, tf=tf, adaptive_timestep=False)
        return solver

    def create_equations(self):
        tvf_equations = [

            # We first compute the mass and number density of the fluid
            # phase. This is used in all force computations henceforth. The
            # number density (1/volume) is explicitly set for the solid phase
            # and this isn't modified for the simulation.
            Group(equations=[
                SummationDensity(dest='fluid', sources=['fluid', 'wall'])
            ]),

            # Given the updated number density for the fluid, we can update
            # the fluid pressure. Additionally, we can extrapolate the fluid
            # velocity to the wall for the no-slip boundary
            # condition. Also compute the smoothed color based on the color
            # index for a particle.
            Group(equations=[
                StateEquation(dest='fluid', sources=None, rho0=rho0, p0=p0,
                              b=1.0),
                SetWallVelocity(dest='wall', sources=['fluid']),
                SmoothedColor(dest='fluid', sources=['fluid']),
            ]),

            #################################################################
            # Begin Surface tension formulation
            #################################################################
            # Scale the smoothing lengths to determine the interface
            # quantities. The NNPS need not be updated since the smoothing
            # length is decreased.
            Group(equations=[
                ScaleSmoothingLength(dest='fluid', sources=None, factor=0.8)
            ], update_nnps=False),

            # Compute the gradient of the color function with respect to the
            # new smoothing length. At the end of this Group, we will have the
            # interface normals and the discretized dirac delta function for
            # the fluid-fluid interface.
            Group(equations=[
                ColorGradientUsingNumberDensity(dest='fluid',
                                                sources=['fluid', 'wall'],
                                                epsilon=0.01/h0),
            ], ),

            # Compute the interface curvature using the modified smoothing
            # length and interface normals computed in the previous Group.
            Group(equations=[
                InterfaceCurvatureFromNumberDensity(
                    dest='fluid', sources=['fluid'],
                    with_morris_correction=True),
            ], ),

            # Now rescale the smoothing length to the original value for the
            # rest of the computations.
            Group(equations=[
                ScaleSmoothingLength(dest='fluid', sources=None, factor=1.25)
            ], update_nnps=False, ),
            #################################################################
            # End Surface tension formulation
            #################################################################

            # Once the pressure for the fluid phase has been updated via the
            # state-equation, we can extrapolate the pressure to the wall
            # ghost particles. After this group, the density and pressure of
            # the boundary particles has been updated and can be used in the
            # integration equations.
            Group(
                equations=[
                    SolidWallPressureBC(dest='wall', sources=['fluid'],
                                        p0=p0, rho0=rho0, gy=gy),
                ], ),

            # The main acceleration block
            Group(
                equations=[

                    # Gradient of pressure for the fluid phase using the
                    # number density formulation. No penetration boundary
                    # condition using Adami et al's generalized wall boundary
                    # condition. The extrapolated pressure and density on the
                    # wall particles is used in the gradient of pressure to
                    # simulate a repulsive force.
                    MomentumEquationPressureGradient(
                        dest='fluid', sources=['fluid', 'wall'], pb=p0,
                        gy=gy),

                    # Artificial viscosity for the fluid phase.
                    MomentumEquationViscosity(
                        dest='fluid', sources=['fluid'], nu=nu),

                    # No-slip boundary condition using Adami et al's
                    # generalized wall boundary condition. This equation
                    # basically computes the viscous contribution on the fluid
                    # from the wall particles.
                    SolidWallNoSlipBC(dest='fluid', sources=['wall'], nu=nu),

                    # Surface tension force for the SY11 formulation
                    ShadlooYildizSurfaceTensionForce(dest='fluid',
                                                     sources=None,
                                                     sigma=sigma),

                    # Artificial stress for the fluid phase
                    MomentumEquationArtificialStress(dest='fluid',
                                                     sources=['fluid']),
                ], )
        ]

        return tvf_equations


if __name__ == '__main__':
    app = KHITVF()
    app.run()

pysph-master/pysph/examples/surface_tension/oscillating_rod.py

from math import sqrt
import numpy as np
import os

from pysph.sph.surface_tension import get_surface_tension_equations
from pysph.tools.geometry import get_2d_block, remove_overlap_particles
from pysph.base.utils import get_particle_array
from pysph.base.kernels import CubicSpline, QuinticSpline
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator_step import TransportVelocityStep
from pysph.sph.integrator import PECIntegrator
from pysph.base.nnps import DomainManager
from pysph.solver.utils import iter_output

dim = 2
Lx = 1.0
Ly = 1.0
nu = 0.05
sigma = 1.0
factor1 = 0.8
factor2 = 1 / factor1
rho0 = 1.0
c0 = 20.0
gamma = 1.4
R = 287.1
p0 = c0**2 * rho0
nx = 120
dx = Lx / nx
volume = dx * dx
tf = 0.5
hdx = 1.5
h0 = hdx * dx
epsilon = 0.01 / h0
r0 = 0.05
v0 = 10.0
dt1 = 0.25*np.sqrt(rho0*h0*h0*h0/(2.0*np.pi*sigma))
dt2 = 0.25*h0/(c0+v0)
dt3 = 0.125*rho0*h0*h0/nu
dt = 0.9*min(dt1, dt2, dt3)


def r(x, y):
    return x*x + y*y


class MultiPhase(Application):
    def create_particles(self):
        c0 = 20
        hdx = 1.5
        h0 = hdx * dx
        if self.options.scheme == 'adami_stress':
            c0 = 10
            hdx = 1.0
            h0 = hdx*dx
        dt1 = 0.25*np.sqrt(rho0*h0*h0*h0/(2.0*np.pi*sigma))
        dt2 = 0.25*h0/(c0+v0)
        dt3 = 0.125*rho0*h0*h0/nu
        dt = 0.9*min(dt1, dt2, dt3)
        fluid_x, fluid_y = get_2d_block(
            dx=dx, length=Lx, height=Ly, center=np.array([0., 0.]))
        rho_fluid = np.ones_like(fluid_x) * rho0
        m_fluid = rho_fluid * volume
        h_fluid = np.ones_like(fluid_x) * h0
        cs_fluid = np.ones_like(fluid_x) * c0
        wall_x, wall_y = get_2d_block(dx=dx, length=Lx+6*dx,
                                      height=Ly+6*dx,
                                      center=np.array([0., 0.]))
        rho_wall = np.ones_like(wall_x) * rho0
        m_wall = rho_wall * volume
        h_wall = np.ones_like(wall_x) * h0
        cs_wall = np.ones_like(wall_x) * c0
        additional_props = ['V', 'color', 'scolor', 'cx', 'cy', 'cz', 'cx2',
                            'cy2', 'cz2', 'nx', 'ny', 'nz', 'ddelta', 'uhat',
                            'vhat', 'what', 'auhat', 'avhat', 'awhat', 'ax',
                            'ay', 'az', 'wij', 'vmag2', 'N', 'wij_sum',
                            'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0',
                            'kappa', 'arho', 'nu', 'wg', 'ug', 'vg', 'pi00',
                            'pi01', 'pi02', 'pi10', 'pi11', 'pi12', 'pi20',
                            'pi21', 'pi22', 'alpha']
        consts = {'max_ddelta': np.zeros(1, dtype=float)}
        fluid = get_particle_array(
            name='fluid', x=fluid_x, y=fluid_y, h=h_fluid, m=m_fluid,
            rho=rho_fluid, cs=cs_fluid, additional_props=additional_props,
            constants=consts)
        for i in range(len(fluid.x)):
            if (fluid.x[i]*fluid.x[i] + fluid.y[i]*fluid.y[i]) < 0.04:
                fluid.color[i] = 1.0
            else:
                fluid.color[i] = 0.0
        fluid.alpha[:] = sigma
        wall = get_particle_array(
            name='wall', x=wall_x, y=wall_y, h=h_wall, m=m_wall,
            rho=rho_wall, cs=cs_wall, additional_props=additional_props)
        wall.color[:] = 0.0
        remove_overlap_particles(wall, fluid, dx_solid=dx, dim=2)
        fluid.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny',
                                 'ddelta', 'kappa', 'N', 'scolor', 'p'])
        wall.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny',
                                'ddelta', 'kappa', 'N', 'scolor', 'p'])
        for i in range(len(fluid.x)):
            R = sqrt(r(fluid.x[i], fluid.y[i]) + 0.0001*fluid.h[i]*fluid.h[i])
            f = np.exp(-R/r0)/r0
            fluid.u[i] = v0*fluid.x[i]*(1.0-(fluid.y[i]*fluid.y[i])/(r0*R))*f
            fluid.v[i] = -v0*fluid.y[i]*(1.0-(fluid.x[i]*fluid.x[i])/(r0*R))*f
        fluid.nu[:] = nu
        return [fluid, wall]

    def create_solver(self):
        kernel = QuinticSpline(dim=2)
        integrator = PECIntegrator(fluid=TransportVelocityStep())
        solver = Solver(
            kernel=kernel, dim=dim, integrator=integrator, dt=dt, tf=tf,
            adaptive_timestep=False, output_at_times=[0., 0.08, 0.16, 0.26])
        return solver
    def add_user_options(self, group):
        choices = ['morris', 'tvf', 'adami_stress', 'adami', 'shadloo']
        group.add_argument(
            "--scheme", action="store", dest='scheme', default='morris',
            choices=choices,
            help='Specify scheme to use among %s' % choices
        )

    def create_equations(self):
        return get_surface_tension_equations(
            ['fluid'], ['wall'], self.options.scheme, rho0, p0, c0, 0,
            factor1, factor2, nu, sigma, 2, epsilon, gamma, real=True)

    def post_process(self):
        try:
            import matplotlib
            matplotlib.use('Agg')
            import matplotlib.pyplot as plt
        except ImportError:
            print("Post processing requires Matplotlib")
            return
        from pysph.solver.utils import load
        files = self.output_files
        amat = []
        t = []
        centerx = []
        centery = []
        velx = []
        vely = []
        for f in files:
            data = load(f)
            pa = data['arrays']['fluid']
            t.append(data['solver_data']['t'])
            x = pa.x
            y = pa.y
            u = pa.u
            v = pa.v
            color = pa.color
            length = len(color)
            min_x = 0.0
            max_x = 0.0
            cx = 0
            cy = 0
            vx = 0
            vy = 0
            count = 0
            for i in range(length):
                if color[i] == 1:
                    if x[i] < min_x:
                        min_x = x[i]
                    if x[i] > max_x:
                        max_x = x[i]
                    if x[i] > 0 and y[i] > 0:
                        cx += x[i]
                        cy += y[i]
                        vx += u[i]
                        vy += v[i]
                        count += 1
            amat.append(0.5*(max_x - min_x))
            centerx.append(cx/count)
            centery.append(cy/count)
            velx.append(vx/count)
            vely.append(vy/count)
        fname = os.path.join(self.output_dir, 'results.npz')
        np.savez(fname, t=t, semimajor=amat, centerx=centerx,
                 centery=centery, velx=velx, vely=vely)
        plt.plot(t, amat)
        fig = os.path.join(self.output_dir, 'semimajorvst.png')
        plt.savefig(fig)
        plt.close()
        plt.plot(t, centerx, label='x position')
        plt.plot(t, centery, label='y position')
        plt.legend()
        fig1 = os.path.join(self.output_dir, 'centerofmassposvst')
        plt.savefig(fig1)
        plt.close()
        plt.plot(t, velx, label='x velocity')
        plt.plot(t, vely, label='y velocity')
        plt.legend()
        fig2 = os.path.join(self.output_dir, 'centerofmassvelvst')
        plt.savefig(fig2)
        plt.close()


if __name__ == '__main__':
    app = MultiPhase()
    app.run()
    app.post_process()
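Each of these examples picks its time step as a safety factor times the minimum of three stability limits (surface tension, acoustic CFL, viscous diffusion). The recurring rule can be factored out as below; the specific coefficients (0.25, 0.25, 0.125, 0.9) are the ones used in the files above, and `stable_dt` is a name introduced here for illustration:

```python
from math import sqrt, pi


def stable_dt(rho0, h0, sigma, c0, vmax, nu, safety=0.9):
    """Minimum of the surface-tension, CFL and viscous time-step limits."""
    dt_surface = 0.25 * sqrt(rho0 * h0**3 / (2.0 * pi * sigma))
    dt_cfl = 0.25 * h0 / (c0 + vmax)
    dt_viscous = 0.125 * rho0 * h0**2 / nu
    return safety * min(dt_surface, dt_cfl, dt_viscous)


# Oscillating-rod parameters: rho0=1, h0=1.5/120, sigma=1, c0=20, v0=10,
# nu=0.05 -- here the acoustic CFL constraint turns out to be the binding one.
dt = stable_dt(1.0, 1.5 / 120, 1.0, 20.0, 10.0, 0.05)
```

Being the minimum of all three limits, this `dt` stays stable regardless of which physical effect dominates for a given resolution.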
pysph-master/pysph/examples/surface_tension/square_droplet.py

"""Deformation of a square droplet. (15 minutes)

 _______________________________
|                               |
|                               |
|               0               |
|                               |
|         ___________           |
|        |           |          |
|        |     1     |          |
|        |           |          |
|        |___________|          |
|                               |
|                               |
|                               |
|_______________________________|

Initially, two fluids of the same density are distinguished by a color
index assigned to them and allowed to settle under the effects of
surface tension. It is expected that the surface tension at the
interface between the two fluids deforms the initially square droplet
into a circular droplet to minimize the interface area/length.

The references for this problem are

 - J. Morris "Simulating surface tension with smoothed particle
   hydrodynamics", 2000, IJNMF, 33, pp 333--353 [JM00]

 - S. Adami, X.Y. Hu, N.A. Adams "A new surface tension formulation for
   multi-phase SPH using a reproducing divergence approximation", 2010,
   JCP, 229, pp 5011--5021 [AHA10]

 - M. S. Shadloo, M. Yildiz "Numerical modelling of Kelvin-Helmholtz
   instability using smoothed particle hydrodynamics", IJNME, 2011, 87,
   pp 988--1006 [SY11]

The surface-tension model used currently is the CSF model based on
interface curvature and normals computed from the color function.
"""

import numpy
import numpy as np
import os

from pysph.sph.surface_tension import get_surface_tension_equations
from pysph.tools.geometry import get_2d_block
from pysph.base.utils import get_particle_array
from pysph.base.kernels import CubicSpline, QuinticSpline
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator_step import TransportVelocityStep, \
    VelocityVerletSymplecticWCSPHStep
from pysph.sph.integrator import PECIntegrator
from pysph.base.nnps import DomainManager
from pysph.solver.utils import iter_output

# problem parameters
dim = 2
domain_width = 1.0
domain_height = 1.0

# numerical constants
sigma = 1.0

# set factor1 to [0.5 ~ 1.0] to simulate a thick or thin
# interface. Larger values result in a thick interface.
factor1 = 0.8
factor2 = 1./factor1

# discretization parameters
dx = dy = 0.0125
dxb2 = dyb2 = 0.5 * dx
volume = dx*dx
hdx = 1.3
h0 = hdx * dx
rho0 = 1
c0 = 20.0
p0 = c0*c0*rho0
nu = 0.2

# correction factor for Morris's Method I. Set with_morris_correction
# to True when using this correction.
epsilon = 0.01/h0

# time steps
tf = 1.0
dt_cfl = 0.25 * h0/(1.1*c0)
dt_viscous = 0.125 * h0**2/nu
dt_force = 1.0
dt = 0.9 * min(dt_cfl, dt_viscous, dt_force)


class SquareDroplet(Application):
    def add_user_options(self, group):
        choices = ['morris', 'tvf', 'adami_stress', 'adami', 'shadloo']
        group.add_argument(
            "--scheme", action="store", dest='scheme', default='morris',
            choices=choices,
            help='Specify scheme to use among %s' % choices
        )

    def create_particles(self):
        x, y = numpy.mgrid[dxb2:domain_width:dx, dyb2:domain_height:dy]
        x = x.ravel()
        y = y.ravel()

        m = numpy.ones_like(x) * volume * rho0
        rho = numpy.ones_like(x) * rho0
        h = numpy.ones_like(x) * h0
        cs = numpy.ones_like(x) * c0

        # additional properties required for the fluid.
        additional_props = [
            # volume inverse or number density
            'V', 'alpha',

            # color and gradients
            'color', 'scolor', 'cx', 'cy', 'cz', 'cx2', 'cy2', 'cz2',

            # discretized interface normals and dirac delta
            'nx', 'ny', 'nz', 'ddelta',

            # interface curvature
            'kappa',

            # transport velocities
            'uhat', 'vhat', 'what', 'auhat', 'avhat', 'awhat',

            # imposed accelerations on the solid wall
            'ax', 'ay', 'az', 'wij',

            # velocity magnitude squared
            'vmag2',

            # variable to indicate reliable normals and normalizing
            # constant
            'N', 'wij_sum',
            'pi00', 'pi01', 'pi02', 'pi10', 'pi11', 'pi12', 'pi20',
            'pi21', 'pi22', 'nu'
        ]

        # get the fluid particle array
        fluid = get_particle_array(
            name='fluid', x=x, y=y, h=h, m=m, rho=rho, cs=cs,
            additional_props=additional_props)

        # set the color of the inner square
        for i in range(x.size):
            if ((fluid.x[i] > 0.35) and (fluid.x[i] < 0.65)):
                if ((fluid.y[i] > 0.35) and (fluid.y[i] < 0.65)):
                    fluid.color[i] = 1.0

        # particle volume
        fluid.V[:] = 1./volume
        fluid.nu[:] = nu
        fluid.alpha[:] = sigma

        # set additional output arrays for the fluid
        fluid.add_output_arrays(['V', 'color', 'cx', 'cy', 'nx', 'ny',
                                 'ddelta', 'kappa', 'N', 'scolor', 'p'])

        print("2D Square droplet deformation with %d fluid particles" % (
            fluid.get_number_of_particles()))

        return [fluid, ]

    def create_domain(self):
        return DomainManager(
            xmin=0, xmax=domain_width, ymin=0, ymax=domain_height,
            periodic_in_x=True, periodic_in_y=True)

    def create_solver(self):
        kernel = QuinticSpline(dim=2)
        stepper = TransportVelocityStep()
        if self.options.scheme == 'shadloo':
            stepper = VelocityVerletSymplecticWCSPHStep()
        integrator = PECIntegrator(fluid=stepper)
        solver = Solver(
            kernel=kernel, dim=dim, integrator=integrator,
            dt=dt, tf=tf, adaptive_timestep=False)
        return solver

    def create_equations(self):
        return get_surface_tension_equations(
            ['fluid'], [], self.options.scheme, rho0, p0, c0, 0, factor1,
            factor2, nu, sigma, 2, epsilon, 1, real=False)


if __name__ == '__main__':
    app = SquareDroplet()
    app.run()
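Since the droplet's area is conserved as it rounds off, the equilibrium state can be estimated analytically: the 0.3 x 0.3 inner square (from the 0.35-0.65 color bounds above) relaxes to a circle of equal area. A hedged back-of-the-envelope sketch of the expected radius and Young-Laplace pressure jump, added here for reference and not part of the example itself:

```python
from math import sqrt, pi

side = 0.65 - 0.35      # side of the initially square droplet
area = side * side      # area is conserved as the droplet rounds off
R_eq = sqrt(area / pi)  # radius of the equal-area equilibrium circle
dp_eq = 1.0 / R_eq      # Young-Laplace jump dp = sigma / R with sigma = 1.0
```

Comparing the `p` output array inside and outside the interface against `dp_eq` gives a quick sanity check of the surface-tension schemes.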
pysph-master/pysph/examples/taylor_green.py

"""Taylor Green vortex flow (5 minutes).
"""

import os
import numpy as np
from numpy import pi, sin, cos, exp

from pysph.base.nnps import DomainManager
from pysph.base.utils import get_particle_array
from pysph.base.kernels import QuinticSpline
from pysph.solver.application import Application
from pysph.sph.equation import Group, Equation
from pysph.sph.scheme import TVFScheme, WCSPHScheme, SchemeChooser
from pysph.sph.wc.edac import ComputeAveragePressure, EDACScheme
from pysph.sph.iisph import IISPHScheme
from pysph.sph.wc.kernel_correction import (
    GradientCorrectionPreStep, GradientCorrection,
    MixedKernelCorrectionPreStep, MixedGradientCorrection
)
from pysph.sph.wc.crksph import CRKSPHPreStep, CRKSPH, CRKSPHScheme
from pysph.sph.wc.gtvf import GTVFScheme
from pysph.sph.wc.pcisph import PCISPHScheme
from pysph.sph.wc.shift import ShiftPositions
from pysph.sph.isph.sisph import SISPHScheme
from pysph.sph.isph.isph import ISPHScheme

# domain and constants
L = 1.0
U = 1.0
rho0 = 1.0
c0 = 10 * U
p0 = c0**2 * rho0


def m4p(x=0.0):
    """From the paper by Chaniotis et al. (JCP 2002).
    """
    if x < 0.0:
        return 0.0
    elif x < 1.0:
        return 1.0 - 0.5*x*x*(5.0 - 3.0*x)
    elif x < 2.0:
        return (1 - x)*(2 - x)*(2 - x)*0.5
    else:
        return 0.0


class M4(Equation):
    '''An equation to be used for remeshing.
    '''

    def initialize(self, d_idx, d_prop):
        d_prop[d_idx] = 0.0

    def _get_helpers_(self):
        return [m4p]

    def loop(self, s_idx, d_idx, s_temp_prop, d_prop, d_h, XIJ):
        xij = abs(XIJ[0]/d_h[d_idx])
        yij = abs(XIJ[1]/d_h[d_idx])
        d_prop[d_idx] += m4p(xij)*m4p(yij)*s_temp_prop[s_idx]


def exact_solution(U, b, t, x, y):
    factor = U * exp(b*t)

    u = -cos(2*pi*x) * sin(2*pi*y)
    v = sin(2*pi*x) * cos(2*pi*y)
    p = -0.25 * (cos(4*pi*x) + cos(4*pi*y))

    return factor * u, factor * v, factor * factor * p


class TaylorGreen(Application):
    def add_user_options(self, group):
        group.add_argument(
            "--init", action="store", type=str, default=None,
            help="Initialize particle positions from given file."
        )
        group.add_argument(
            "--perturb", action="store", type=float, dest="perturb",
            default=0,
            help="Random perturbation of initial particles as a fraction "
            "of dx (setting it to zero disables it, the default)."
        )
        group.add_argument(
            "--nx", action="store", type=int, dest="nx", default=50,
            help="Number of points along x direction. (default 50)"
        )
        group.add_argument(
            "--re", action="store", type=float, dest="re", default=100,
            help="Reynolds number (defaults to 100)."
        )
        group.add_argument(
            "--hdx", action="store", type=float, dest="hdx", default=1.0,
            help="Ratio h/dx."
        )
        group.add_argument(
            "--pb-factor", action="store", type=float, dest="pb_factor",
            default=1.0,
            help="Use fraction of the background pressure (default: 1.0)."
        )
        corrections = ['', 'mixed', 'gradient', 'crksph']
        group.add_argument(
            "--kernel-correction", action="store", type=str,
            dest='kernel_correction', default='',
            help="Type of Kernel Correction", choices=corrections
        )
        group.add_argument(
            "--remesh", action="store", type=int, dest="remesh", default=0,
            help="Remeshing frequency (setting it to zero disables it)."
        )
        remesh_types = ['m4', 'sph']
        group.add_argument(
            "--remesh-eq", action="store", type=str, dest="remesh_eq",
            default='m4', choices=remesh_types,
            help="Remeshing strategy to use."
        )
        group.add_argument(
            "--shift-freq", action="store", type=int, dest="shift_freq",
            default=0,
            help="Particle position shift frequency. (set zero to disable)."
        )
        shift_types = ['simple', 'fickian']
        group.add_argument(
            "--shift-kind", action="store", type=str, dest="shift_kind",
            default='simple', choices=shift_types,
            help="Use of fickian shift in positions."
        )
        group.add_argument(
            "--shift-parameter", action="store", type=float,
            dest="shift_parameter", default=None,
            help="Constant used in shift, range for 'simple' is 0.01-0.1, "
            "range for 'fickian' is 1-10."
        )
        group.add_argument(
            "--shift-correct-vel", action="store_true",
            dest="correct_vel", default=False,
            help="Correct velocities after shifting (defaults to false)."
        )

    def consume_user_options(self):
        nx = self.options.nx
        re = self.options.re

        self.nu = nu = U * L / re

        self.dx = dx = L / nx
        self.volume = dx * dx
        self.hdx = self.options.hdx

        h0 = self.hdx * self.dx
        if self.options.scheme.endswith('isph'):
            dt_cfl = 0.25 * h0 / U
        else:
            dt_cfl = 0.25 * h0 / (c0 + U)
        dt_viscous = 0.125 * h0**2 / nu
        dt_force = 0.25 * 1.0

        self.dt = min(dt_cfl, dt_viscous, dt_force)
        self.tf = 2.0
        self.kernel_correction = self.options.kernel_correction

    def configure_scheme(self):
        scheme = self.scheme
        h0 = self.hdx * self.dx
        pfreq = 100
        kernel = QuinticSpline(dim=2)
        if self.options.scheme == 'tvf':
            scheme.configure(pb=self.options.pb_factor * p0, nu=self.nu,
                             h0=h0)
        elif self.options.scheme == 'wcsph':
            scheme.configure(hdx=self.hdx, nu=self.nu, h0=h0)
        elif self.options.scheme == 'edac':
            scheme.configure(h=h0, nu=self.nu,
                             pb=self.options.pb_factor * p0)
        elif self.options.scheme.endswith('isph'):
            pfreq = 10
            scheme.configure(nu=self.nu)
        elif self.options.scheme == 'crksph':
            scheme.configure(h0=h0, nu=self.nu)
        elif self.options.scheme == 'gtvf':
            scheme.configure(pref=p0, nu=self.nu, h0=h0)
        scheme.configure_solver(kernel=kernel, tf=self.tf, dt=self.dt,
                                pfreq=pfreq)

    def create_scheme(self):
        h0 = None
        hdx = None
        wcsph = WCSPHScheme(
            ['fluid'], [], dim=2, rho0=rho0, c0=c0, h0=h0, hdx=hdx,
            nu=None, gamma=7.0, alpha=0.0, beta=0.0
        )
        tvf = TVFScheme(
            ['fluid'], [], dim=2, rho0=rho0, c0=c0, nu=None,
            p0=p0, pb=None, h0=h0
        )
        edac = EDACScheme(
            ['fluid'], [], dim=2, rho0=rho0, c0=c0, nu=None,
            pb=p0, h=h0
        )
        iisph = IISPHScheme(
            fluids=['fluid'], solids=[], dim=2, nu=None,
            rho0=rho0, has_ghosts=True
        )
        crksph = CRKSPHScheme(
            fluids=['fluid'], dim=2, nu=None,
            rho0=rho0, h0=h0, c0=c0, p0=0.0
        )
        gtvf = GTVFScheme(
            fluids=['fluid'], solids=[], dim=2, rho0=rho0, c0=c0,
            nu=None, h0=None, pref=None
        )
        pcisph = PCISPHScheme(
            fluids=['fluid'], dim=2, rho0=rho0, nu=None
        )
        sisph = SISPHScheme(
            fluids=['fluid'], solids=[], dim=2, nu=None, rho0=rho0,
            c0=c0, alpha=0.0, has_ghosts=True, pref=p0,
            rho_cutoff=0.2, internal_flow=True, gtvf=True
        )
        isph = ISPHScheme(
            fluids=['fluid'], solids=[], dim=2, nu=None,
            rho0=rho0, c0=c0, alpha=0.0
        )
        s = SchemeChooser(
            default='tvf', wcsph=wcsph, tvf=tvf, edac=edac, iisph=iisph,
            crksph=crksph, gtvf=gtvf, pcisph=pcisph, sisph=sisph,
            isph=isph
        )
        return s

    def create_equations(self):
        eqns = self.scheme.get_equations()
        # This tolerance needs to be fixed.
tol = 0.5 if self.kernel_correction == 'gradient': cls1 = GradientCorrectionPreStep cls2 = GradientCorrection elif self.kernel_correction == 'mixed': cls1 = MixedKernelCorrectionPreStep cls2 = MixedGradientCorrection elif self.kernel_correction == 'crksph': cls1 = CRKSPHPreStep cls2 = CRKSPH if self.kernel_correction: g1 = Group(equations=[cls1('fluid', ['fluid'], dim=2)]) eq2 = cls2(dest='fluid', sources=['fluid'], dim=2, tol=tol) if self.options.scheme == 'wcsph': eqns.insert(1, g1) eqns[2].equations.insert(0, eq2) elif self.options.scheme == 'tvf': eqns[1].equations.append(g1.equations[0]) eqns[2].equations.insert(0, eq2) elif self.options.scheme == 'edac': eqns.insert(1, g1) eqns[2].equations.insert(0, eq2) return eqns def create_domain(self): return DomainManager( xmin=0, xmax=L, ymin=0, ymax=L, periodic_in_x=True, periodic_in_y=True ) def create_particles(self): # create the particles dx = self.dx _x = np.arange(dx / 2, L, dx) x, y = np.meshgrid(_x, _x) if self.options.init is not None: fname = self.options.init from pysph.solver.utils import load data = load(fname) _f = data['arrays']['fluid'] x, y = _f.x.copy(), _f.y.copy() if self.options.perturb > 0: np.random.seed(1) factor = dx * self.options.perturb x += np.random.random(x.shape) * factor y += np.random.random(x.shape) * factor # Initialize m = self.volume * rho0 h = self.hdx * dx re = self.options.re b = -8.0*pi*pi / re u0, v0, p0 = exact_solution(U=U, b=b, t=0, x=x, y=y) color0 = cos(2*pi*x) * cos(4*pi*y) # create the arrays fluid = get_particle_array(name='fluid', x=x, y=y, m=m, h=h, u=u0, v=v0, rho=rho0, p=p0, color=color0) self.scheme.setup_properties([fluid]) print("Taylor green vortex problem :: nfluid = %d, dt = %g" % ( fluid.get_number_of_particles(), self.dt)) # volume is set as dx^2 if self.options.scheme == 'sisph': nfp = fluid.get_number_of_particles() fluid.gid[:] = np.arange(nfp) fluid.add_output_arrays(['gid']) if self.options.scheme == 'tvf': fluid.V[:] = 1. 
/ self.volume if self.options.scheme == 'iisph': # These are needed to update the ghost particle properties. nfp = fluid.get_number_of_particles() fluid.orig_idx[:] = np.arange(nfp) fluid.add_output_arrays(['orig_idx']) corr = self.kernel_correction if corr in ['mixed', 'crksph']: fluid.add_property('cwij') if corr == 'mixed' or corr == 'gradient': fluid.add_property('m_mat', stride=9) fluid.add_property('dw_gamma', stride=3) elif corr == 'crksph': fluid.add_property('ai') fluid.add_property('gradbi', stride=4) for prop in ['gradai', 'bi']: fluid.add_property(prop, stride=2) if self.options.shift_freq > 0: fluid.add_constant('vmax', [0.0]) fluid.add_property('dpos', stride=3) fluid.add_property('gradv', stride=9) if self.options.scheme == 'isph': gid = np.arange(fluid.get_number_of_particles(real=False)) fluid.add_property('gid') fluid.gid[:] = gid[:] fluid.add_property('dpos', stride=3) fluid.add_property('gradv', stride=9) return [fluid] def create_tools(self): tools = [] options = self.options if options.remesh > 0: if options.remesh_eq == 'm4': equations = [M4(dest='interpolate', sources=['fluid'])] else: equations = None from pysph.solver.tools import SimpleRemesher if options.scheme == 'wcsph' or options.scheme == 'crksph': props = ['u', 'v', 'au', 'av', 'ax', 'ay', 'arho'] elif options.scheme == 'pcisph': props = ['u', 'v', 'p'] elif options.scheme == 'tvf': props = ['u', 'v', 'uhat', 'vhat', 'au', 'av', 'auhat', 'avhat'] elif options.scheme == 'edac': if 'uhat' in self.particles[0].properties: props = ['u', 'v', 'uhat', 'vhat', 'p', 'au', 'av', 'auhat', 'avhat', 'ap'] else: props = ['u', 'v', 'p', 'au', 'av', 'ax', 'ay', 'ap'] elif options.scheme == 'iisph' or options.scheme == 'isph': # The accelerations are not really needed since the current # stepper is a single stage stepper. 
props = ['u', 'v', 'p'] elif options.scheme == 'gtvf': props = [ 'uhat', 'vhat', 'what', 'rho0', 'rhodiv', 'p0', 'auhat', 'avhat', 'awhat', 'arho', 'arho0' ] remesher = SimpleRemesher( self, 'fluid', props=props, freq=self.options.remesh, equations=equations ) tools.append(remesher) if options.shift_freq > 0: shift = ShiftPositions( self, 'fluid', freq=self.options.shift_freq, shift_kind=self.options.shift_kind, correct_velocity=self.options.correct_vel, parameter=self.options.shift_parameter ) tools.append(shift) return tools # The following are all related to post-processing. def _get_post_process_props(self, array): """Return x, y, m, u, v, p. """ if 'pavg' not in array.properties or \ 'pavg' not in array.output_property_arrays: self._add_extra_props(array) sph_eval = self._get_sph_evaluator(array) sph_eval.update_particle_arrays([array]) sph_eval.evaluate() x, y, m, u, v, p, pavg = array.get( 'x', 'y', 'm', 'u', 'v', 'p', 'pavg' ) return x, y, m, u, v, p - pavg def _add_extra_props(self, array): extra = ['pavg', 'nnbr'] for prop in extra: if prop not in array.properties: array.add_property(prop) array.add_output_arrays(extra) def _get_sph_evaluator(self, array): if not hasattr(self, '_sph_eval'): from pysph.tools.sph_evaluator import SPHEvaluator equations = [ ComputeAveragePressure(dest='fluid', sources=['fluid']) ] dm = self.create_domain() sph_eval = SPHEvaluator( arrays=[array], equations=equations, dim=2, kernel=QuinticSpline(dim=2), domain_manager=dm ) self._sph_eval = sph_eval return self._sph_eval def post_process(self, info_fname): info = self.read_info(info_fname) if len(self.output_files) == 0: return from pysph.solver.utils import iter_output decay_rate = -8.0 * np.pi**2 / self.options.re files = self.output_files t, ke, ke_ex, decay, linf, l1, p_l1 = [], [], [], [], [], [], [] for sd, array in iter_output(files, 'fluid'): _t = sd['t'] t.append(_t) x, y, m, u, v, p = self._get_post_process_props(array) u_e, v_e, p_e = exact_solution(U, decay_rate, 
_t, x, y) vmag2 = u**2 + v**2 vmag = np.sqrt(vmag2) ke.append(0.5 * np.sum(m * vmag2)) vmag2_e = u_e**2 + v_e**2 vmag_e = np.sqrt(vmag2_e) ke_ex.append(0.5 * np.sum(m * vmag2_e)) vmag_max = vmag.max() decay.append(vmag_max) theoretical_max = U * np.exp(decay_rate * _t) linf.append(abs((vmag_max - theoretical_max) / theoretical_max)) l1_err = np.average(np.abs(vmag - vmag_e)) avg_vmag_e = np.average(np.abs(vmag_e)) # scale the error by the maximum velocity. l1.append(l1_err / avg_vmag_e) p_e_max = np.abs(p_e).max() p_error = np.average(np.abs(p - p_e)) / p_e_max p_l1.append(p_error) t, ke, ke_ex, decay, l1, linf, p_l1 = list(map( np.asarray, (t, ke, ke_ex, decay, l1, linf, p_l1)) ) decay_ex = U * np.exp(decay_rate * t) fname = os.path.join(self.output_dir, 'results.npz') np.savez( fname, t=t, ke=ke, ke_ex=ke_ex, decay=decay, linf=linf, l1=l1, p_l1=p_l1, decay_ex=decay_ex ) import matplotlib matplotlib.use('Agg') from matplotlib import pyplot as plt plt.clf() plt.semilogy(t, decay_ex, label="exact") plt.semilogy(t, decay, label="computed") plt.xlabel('t') plt.ylabel('max velocity') plt.legend() fig = os.path.join(self.output_dir, "decay.png") plt.savefig(fig, dpi=300) plt.clf() plt.plot(t, linf) plt.xlabel('t') plt.ylabel(r'$L_\infty$ error') fig = os.path.join(self.output_dir, "linf_error.png") plt.savefig(fig, dpi=300) plt.clf() plt.plot(t, l1, label="error") plt.xlabel('t') plt.ylabel(r'$L_1$ error') fig = os.path.join(self.output_dir, "l1_error.png") plt.savefig(fig, dpi=300) plt.clf() plt.plot(t, p_l1, label="error") plt.xlabel('t') plt.ylabel(r'$L_1$ error for $p$') fig = os.path.join(self.output_dir, "p_l1_error.png") plt.savefig(fig, dpi=300) def customize_output(self): self._mayavi_config(''' b = particle_arrays['fluid'] b.scalar = 'vmag' ''') if __name__ == '__main__': app = TaylorGreen() app.run() app.post_process(app.info_filename) 
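The `exact_solution` helper above is what the post-processing compares against: the peak speed of the Taylor-Green vortex decays as `U*exp(b*t)` with `b = -8*pi**2/re`. A standalone sketch of that property, using only NumPy (the constants `U = 1.0` and `re = 100` here are assumptions mirroring the example's defaults; in the real file `U`, `L` etc. are module-level constants defined earlier in `taylor_green.py`, outside this excerpt):

```python
import numpy as np

U = 1.0       # assumed free-stream velocity scale (example default)
re = 100.0    # assumed Reynolds number (example default)
b = -8.0 * np.pi**2 / re


def exact_solution(U, b, t, x, y):
    # Same closed form as in the example, written with NumPy ufuncs.
    factor = U * np.exp(b * t)
    u = -np.cos(2*np.pi*x) * np.sin(2*np.pi*y)
    v = np.sin(2*np.pi*x) * np.cos(2*np.pi*y)
    p = -0.25 * (np.cos(4*np.pi*x) + np.cos(4*np.pi*y))
    return factor * u, factor * v, factor * factor * p


# The maximum speed over the unit domain decays exactly as U*exp(b*t),
# which is the `theoretical_max` used for the L_inf error above.
x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
for t in (0.0, 0.5, 1.0):
    u, v, _ = exact_solution(U, b, t, x, y)
    vmag_max = np.sqrt(u**2 + v**2).max()
    assert abs(vmag_max - U * np.exp(b * t)) < 1e-6
```

This is only a sanity check of the analytical field, not of the SPH solver itself; `post_process` does the same comparison against the simulated particle velocities.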
pysph-master/pysph/examples/tests/__init__.py
pysph-master/pysph/examples/tests/test_examples.py

import os
import tempfile
import shutil
import subprocess
import sys

from pytest import mark

from pysph.examples import run


def check_output(*args, **kw):
    """Simple hack to support Python 2.6 which does not have
    subprocess.check_output.
    """
    if not hasattr(subprocess, 'check_output'):
        subprocess.call(*args, **kw)
    else:
        subprocess.check_output(*args, **kw)


def print_safe(string_or_bytes):
    if type(string_or_bytes) is bytes:
        print(string_or_bytes.decode('utf-8'))
    else:
        print(string_or_bytes)


_orig_ets_toolkit = None


def setup_module():
    # Set the ETS_TOOLKIT to null to avoid errors when importing TVTK.
    global _orig_ets_toolkit
    var = 'ETS_TOOLKIT'
    _orig_ets_toolkit = os.environ.get(var)
    os.environ[var] = 'null'


def teardown_module():
    var = 'ETS_TOOLKIT'
    if _orig_ets_toolkit is None:
        del os.environ[var]
    else:
        os.environ[var] = _orig_ets_toolkit


def run_example(module):
    """This simply runs the example to make sure that the example executes
    correctly. It wipes out the generated output directory.
    """
    out_dir = tempfile.mkdtemp()
    cmd = [sys.executable, "-m", module, "--max-steps", "1",
           "--disable-output", "-q", "-d", out_dir]
    env_vars = dict(os.environ)
    env_vars['ETS_TOOLKIT'] = 'null'
    try:
        check_output(cmd, env=env_vars)
    except subprocess.CalledProcessError as e:
        print_safe(e.stdout)
        print_safe(e.stderr)
        raise
    finally:
        shutil.rmtree(out_dir)


def _has_tvtk():
    try:
        from tvtk.api import tvtk
    except (ImportError, SystemExit):
        return False
    else:
        return True


def _find_examples():
    examples = []
    for module, doc in run.get_all_examples():
        if module == 'pysph.examples.rigid_body.dam_break3D_sph' and \
           not _has_tvtk():
            continue
        examples.append(module)
    return examples


@mark.slow
@mark.parametrize("module", _find_examples())
def test_example_should_run(module):
    run_example(module)

pysph-master/pysph/examples/tests/test_riemann_solver.py

import unittest
import numpy as np
from math import sqrt
from pysph.examples.gas_dynamics import riemann_solver
import numpy.testing as npt

try:
    import scipy
except ImportError:
    scipy = None


class RiemannSolverTestCase(unittest.TestCase):
    def setUp(self):
        riemann_solver.set_gamma(1.4)

    def assert_error(self, given, expected, precision):
        return npt.assert_almost_equal(np.ravel(given), expected, precision)

    @unittest.skipIf(scipy is None, 'No scipy module, skipping this test')
    def test_compute_star_fsolve(self):
        # Sod shock tube
        x = riemann_solver.star_pu_fsolve(
            rho_l=1.0, u_l=0.0, p_l=1.0, c_l=sqrt(1.4),
            rho_r=0.125, u_r=0.0, p_r=0.1, c_r=sqrt(1.12)
        )
        self.assert_error(x, (0.30313, 0.92745), 5)

        # Sjogreen
        x = riemann_solver.star_pu_fsolve(
            rho_l=1.0, u_l=-2.0, p_l=0.4, c_l=sqrt(0.56),
            rho_r=1.0, u_r=2.0, p_r=0.4, c_r=sqrt(0.56)
        )
        self.assert_error(x, (0.00189, 0.00000), 5)

        # Blastwave
        x = riemann_solver.star_pu_fsolve(
            rho_l=1.0, u_l=0.0, p_l=1000.0, c_l=sqrt(1400),
            rho_r=1.0, u_r=0.0, p_r=0.01, c_r=sqrt(0.014)
        )
        self.assert_error(x, (460.894, 19.5975), 3)

        # Woodward and Colella
        x = riemann_solver.star_pu_fsolve(
            rho_l=1.0, u_l=0.0, p_l=0.01, c_l=sqrt(0.014),
            rho_r=1.0, u_r=0.0, p_r=100.0, c_r=sqrt(140)
        )
        self.assert_error(x, (46.0950, -6.19633), 4)

        # Sod shock tube with gamma = 1.2
        riemann_solver.set_gamma(1.2)
        x = riemann_solver.star_pu_fsolve(
            rho_l=1.0, u_l=0.0, p_l=1.0, c_l=sqrt(1.2),
            rho_r=0.125, u_r=0.0, p_r=0.1, c_r=sqrt(0.96)
        )
        self.assert_error(x, (0.31274, 1.01132), 5)

    def test_compute_star_newton_raphson(self):
        # Sod shock tube
        x = riemann_solver.star_pu_newton_raphson(
            rho_l=1.0, u_l=0.0, p_l=1.0, c_l=sqrt(1.4),
            rho_r=0.125, u_r=0.0, p_r=0.1, c_r=sqrt(1.12)
        )
        self.assert_error(x, (0.30313, 0.92745), 5)

        # Sjogreen
        x = riemann_solver.star_pu_newton_raphson(
            rho_l=1.0, u_l=-2.0, p_l=0.4, c_l=sqrt(0.56),
            rho_r=1.0, u_r=2.0, p_r=0.4, c_r=sqrt(0.56)
        )
        self.assert_error(x, (0.00189, 0.00000), 5)

        # Blastwave
        x = riemann_solver.star_pu_newton_raphson(
            rho_l=1.0, u_l=0.0, p_l=1000.0, c_l=sqrt(1400),
            rho_r=1.0, u_r=0.0, p_r=0.01, c_r=sqrt(0.014)
        )
        self.assert_error(x, (460.894, 19.5975), 3)

        # Woodward and Colella
        x = riemann_solver.star_pu_newton_raphson(
            1.0, 0.0, 0.01, sqrt(0.014),
            1.0, 0.0, 100.0, sqrt(140)
        )
        self.assert_error(x, (46.0950, -6.19633), 4)

    def test_left_rarefaction(self):
        # Sod shock tube
        x = riemann_solver.left_rarefaction(
            rho_l=1.0, u_l=0.0, p_l=1.0, c_l=sqrt(1.4),
            p_star=0.30313, u_star=0.92745, s=-2.0
        )
        self.assert_error(x, (1.0, 0.0, 1.0), 10)

        x = riemann_solver.left_rarefaction(
            rho_l=1.0, u_l=0.0, p_l=1.0, c_l=sqrt(1.4),
            p_star=0.30313, u_star=0.92745, s=-1.6
        )
        self.assert_error(x, (1.0, 0.0, 1.0), 10)

        x = riemann_solver.left_rarefaction(
            rho_l=1.0, u_l=0.0, p_l=1.0, c_l=sqrt(1.4),
            p_star=0.30313, u_star=0.92745, s=0.0
        )
        self.assert_error(x, (0.42632, 0.92745, 0.30313), 5)

        # Blastwave
        x = riemann_solver.left_rarefaction(
            rho_l=1.0, u_l=0.0, p_l=1000.0, c_l=sqrt(1400),
            p_star=460.894, u_star=19.5975, s=-0.5 / 0.012
        )
        self.assert_error(x, (1.0, 0.0, 1000.0), 6)

        x = riemann_solver.left_rarefaction(
            rho_l=1.0, u_l=0.0, p_l=1000.0, c_l=sqrt(1400),
            p_star=460.894, u_star=19.5975, s=0.0 / 0.012
        )
        self.assert_error(x, (0.57506, 19.5975, 460.894), 5)

    def test_right_shock(self):
        # Sod shock tube
        x = riemann_solver.right_shock(
            rho_r=0.125, u_r=0.0, p_r=0.1, c_r=sqrt(1.4),
            p_star=0.30313, u_star=0.92745, s=2.0
        )
        self.assert_error(x, (0.125, 0.0, 0.1), 10)

        x = riemann_solver.right_shock(
            rho_r=0.125, u_r=0.0, p_r=0.1, c_r=sqrt(1.4),
            p_star=0.30313, u_star=0.92745, s=1.6
        )
        self.assert_error(x, (0.26557, 0.92745, 0.30313), 5)

        # Blastwave
        x = riemann_solver.right_shock(
            rho_r=1.0, u_r=0.0, p_r=0.01, c_r=sqrt(0.014),
            p_star=460.894, u_star=19.5975, s=0.5 / 0.012
        )
        self.assert_error(x, (1.0, 0.0, 0.01), 10)

        x = riemann_solver.right_shock(
            rho_r=1.0, u_r=0.0, p_r=0.01, c_r=sqrt(0.014),
            p_star=460.894, u_star=19.5975, s=0.2 / 0.012
        )
        self.assert_error(x, (5.99924, 19.5975, 460.894), 5)

    def test_right_rarefaction(self):
        # Sjogreen
        x = riemann_solver.right_rarefaction(
            rho_r=1.0, u_r=2.0, p_r=0.4, c_r=sqrt(0.56),
            p_star=0.00189, u_star=0.000, s=0.5 / 0.15
        )
        self.assert_error(x, (1.0, 2.0, 0.4), 10)

        x = riemann_solver.right_rarefaction(
            rho_r=1.0, u_r=2.0, p_r=0.4, c_r=sqrt(0.56),
            p_star=0.00189, u_star=0.000, s=0.0 / 0.15
        )
        self.assert_error(x, (0.02185, 0.0, 0.00189), 4)

    def test_left_shock(self):
        # Woodward and Colella
        x = riemann_solver.left_shock(
            rho_l=1.0, u_l=0.0, p_l=0.01, c_l=sqrt(0.014),
            p_star=46.095, u_star=-6.1963, s=-0.5 / 0.035
        )
        self.assert_error(x, (1.0, 0.0, 0.01), 10)

        x = riemann_solver.left_shock(
            rho_l=1.0, u_l=0.0, p_l=0.01, c_l=sqrt(0.014),
            p_star=46.095, u_star=-6.1963, s=-0.2 / 0.035
        )
        self.assert_error(x, (5.99242, -6.1963, 46.095), 5)

        # Test case 5 (Toro)
        x = riemann_solver.left_shock(
            rho_l=5.99924, u_l=19.5975, p_l=460.894, c_l=sqrt(107.555),
            p_star=1691.64, u_star=8.68975, s=-0.5 / 0.035
        )
        self.assert_error(x, (5.99924, 19.5975, 460.894), 5)

        x = riemann_solver.left_shock(
            rho_l=5.99924, u_l=19.5975, p_l=460.894, c_l=sqrt(107.555),
            p_star=1691.64, u_star=8.68975, s=0.1 / 0.035
        )
        self.assert_error(x, (14.2823, 8.68975, 1691.64), 4)

    def test_left_contact(self):
        # Woodward and Colella
        x = riemann_solver.left_contact(
            rho_l=1.0, u_l=0.0, p_l=0.01, c_l=sqrt(0.014),
            p_star=46.095, u_star=-6.1963, s=-0.2 / 0.035
        )
        self.assert_error(x, (5.99242, -6.1963, 46.095), 5)

        # Test case 5 (Toro)
        x = riemann_solver.left_contact(
            rho_l=5.99924, u_l=19.5975, p_l=460.894, c_l=sqrt(107.555),
            p_star=1691.64, u_star=8.68975, s=0.1 / 0.035
        )
        self.assert_error(x, (14.2823, 8.68975, 1691.64), 4)

        # Sod shock tube
        x = riemann_solver.left_contact(
            rho_l=1.0, u_l=0.0, p_l=1.0, c_l=sqrt(1.4),
            p_star=0.30313, u_star=0.92745, s=0.0
        )
        self.assert_error(x, (0.42632, 0.92745, 0.30313), 5)

        # Blastwave
        x = riemann_solver.left_contact(
            rho_l=1.0, u_l=0.0, p_l=1000.0, c_l=sqrt(1400),
            p_star=460.894, u_star=19.5975, s=0.0 / 0.012
        )
        self.assert_error(x, (0.57506, 19.5975, 460.894), 5)

    def test_right_contact(self):
        # Sjogreen
        x = riemann_solver.right_contact(
            rho_r=1.0, u_r=2.0, p_r=0.4, c_r=sqrt(0.56),
            p_star=0.00189, u_star=0.000, s=0.0 / 0.15
        )
        self.assert_error(x, (0.02185, 0.0, 0.00189), 4)

        # Sod shock tube
        x = riemann_solver.right_contact(
            rho_r=0.125, u_r=0.0, p_r=0.1, c_r=sqrt(1.4),
            p_star=0.30313, u_star=0.92745, s=1.6
        )
        self.assert_error(x, (0.26557, 0.92745, 0.30313), 5)

        # Blastwave
        x = riemann_solver.right_contact(
            rho_r=1.0, u_r=0.0, p_r=0.01, c_r=sqrt(0.014),
            p_star=460.894, u_star=19.5975, s=0.2 / 0.012
        )
        self.assert_error(x, (5.99924, 19.5975, 460.894), 5)

    def test_different_gamma(self):
        # Sod shock tube (gamma = 1.2)
        riemann_solver.set_gamma(1.2)
        x = riemann_solver.star_pu_newton_raphson(
            rho_l=1.0, u_l=0.0, p_l=1.0, c_l=sqrt(1.2),
            rho_r=0.125, u_r=0.0, p_r=0.1, c_r=sqrt(0.96)
        )
        self.assert_error(x, (0.31274, 1.01132), 5)

        x = riemann_solver.right_contact(
            rho_r=0.125, u_r=0.0, p_r=0.1, c_r=sqrt(1.2),
            p_star=0.31274, u_star=1.01132, s=0.15/0.1
        )
        self.assert_error(x, (0.31323, 1.01132, 0.31274), 5)

        x = riemann_solver.left_contact(
            rho_l=1.0, u_l=0.0, p_l=1.0, c_l=sqrt(1.2),
            p_star=0.31274, u_star=1.01132, s=-0.5/0.1
        )
        self.assert_error(x, (1.0, 0.0, 1.0), 10)


if __name__ == '__main__':
    unittest.main()

pysph-master/pysph/examples/trivial_inlet_outlet.py

"""Demonstrate the inlet and outlet feature in 2D. (1 second)

We first create three particle arrays, an "inlet", "fluid" and "outlet", in
the `create_particles` method. Initially there are no fluid particles. A
block of inlet and outlet particles is created. The inlet occupies the
region between (-1.0, 0.0) and (0.0, 1.0). A velocity is prescribed at the
inlet: particles move along the x-axis with u = 0.25. An outlet is created
in the region (1.0, 2.0), (0.0, 1.0) and as fluid particles enter the
outlet region, they are converted to outlet particles. As outlet particles
leave the outlet they are removed from the simulation.

A `SimpleInletOutlet` manager is created, which in turn creates `Inlet` and
`Outlet` objects that update the particles as they move from inlet to fluid
to outlet. Missing variables in `InletInfo` and `OutletInfo` are also
evaluated by the manager. The following figure should make this clear.

         inlet       fluid       outlet
        ---------   --------    --------
       | * * * * | |        |  | * * * *|
    u  | * * * * | |        |  | * * * *|
   --->| * * * * | |        |  | * * * *|
       | * * * * | |        |  | * * * *|
        ---------   --------    --------

In the figure '*' are the particles at t=0. The particles move to the
right; as they do, new fluid particles are added, and as the fluid
particles flow into the outlet they are converted to the outlet particle
array. Finally, as the particles leave the outlet they are removed from the
simulation.

This example can be run in parallel.

"""

import numpy as np

from pysph.base.kernels import CubicSpline
from pysph.base.utils import get_particle_array
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator import PECIntegrator
from pysph.sph.bc.donothing.simple_inlet_outlet import (
    SimpleInletOutlet)
from pysph.sph.bc.inlet_outlet_manager import (
    InletInfo, OutletInfo, OutletStep, InletStep)
from pysph.sph.basic_equations import SummationDensity


class InletOutletApp(Application):
    def add_user_options(self, group):
        group.add_argument(
            "--speed", action="store", type=float, dest="speed",
            default=0.25, help="Speed of inlet particles.")

    def create_particles(self):
        # Note that you need to create the inlet and outlet arrays in this
        # method.

        # Initially fluid has no particles -- these are generated by the
        # inlet.
        fluid = get_particle_array(name='fluid')

        # Setup the inlet particle array with just the particles we need
        # at the inlet.
        dx = 0.1
        x, y = np.mgrid[-1+dx/2: 0: dx, 0:1:dx]
        m = np.ones_like(x)*dx*dx
        h = np.ones_like(x)*dx*1.5
        rho = np.ones_like(x)

        # Remember to set u otherwise the inlet particles won't move. Here
        # we use the options which may be set by the user from the command
        # line.
        u = np.ones_like(x)*self.options.speed

        inlet = get_particle_array(name='inlet', x=x, y=y, m=m, h=h, u=u,
                                   rho=rho)
        x += 2.0
        outlet = get_particle_array(name='outlet', x=x, y=y, m=m, h=h,
                                    u=u, rho=rho)
        particles = [inlet, fluid, outlet]
        props = ['ioid', 'disp', 'x0']
        for p in props:
            for pa in particles:
                pa.add_property(p)
        return particles

    def _create_inlet_outlet_manager(self):
        from pysph.sph.bc.donothing.inlet import Inlet
        from pysph.sph.bc.donothing.outlet import Outlet
        props_to_copy = ['x', 'y', 'z', 'u', 'v', 'w', 'm', 'h', 'rho',
                         'p', 'ioid']
        inlet_info = InletInfo(
            pa_name='inlet', normal=[-1.0, 0.0, 0.0],
            refpoint=[0.0, 0.0, 0.0], has_ghost=False,
            update_cls=Inlet
        )
        outlet_info = OutletInfo(
            pa_name='outlet', normal=[1.0, 0.0, 0.0],
            refpoint=[1.0, 0.0, 0.0], update_cls=Outlet,
            props_to_copy=props_to_copy
        )
        iom = SimpleInletOutlet(
            fluid_arrays=['fluid'], inletinfo=[inlet_info],
            outletinfo=[outlet_info]
        )
        return iom

    def create_inlet_outlet(self, particle_arrays):
        iom = self.iom
        io = iom.get_inlet_outlet(particle_arrays)
        return io

    def create_equations(self):
        equations = [
            SummationDensity(
                dest='fluid', sources=['inlet', 'outlet', 'fluid']
            )
        ]
        return equations

    def create_solver(self):
        self.iom = self._create_inlet_outlet_manager()
        kernel = CubicSpline(dim=2)
        integrator = PECIntegrator(
            fluid=InletStep(), inlet=InletStep(),
            outlet=OutletStep()
        )
        self.iom.active_stages = [2]
        self.iom.setup_iom(dim=2, kernel=kernel)
        self.iom.update_dx(dx=0.1)
        dt = 1e-2
        tf = 12

        solver = Solver(
            kernel=kernel, dim=2, integrator=integrator, dt=dt, tf=tf,
            adaptive_timestep=False, pfreq=20
        )
        return solver


if __name__ == '__main__':
    app = InletOutletApp()
    app.run()

pysph-master/pysph/examples/two_blocks.py

"""Two square blocks of water colliding with each other.
(20 seconds) """ import numpy from pysph.examples._db_geometry import create_2D_filled_region from pysph.base.utils import get_particle_array from pysph.sph.iisph import IISPHScheme from pysph.solver.application import Application dx = 0.025 hdx = 1.0 rho0 = 1000 class TwoBlocks(Application): def create_particles(self): x1, y1 = create_2D_filled_region(-1, 0, 0, 1, dx) x2, y2 = create_2D_filled_region(0.5, 0, 1.5, 1, dx) x = numpy.concatenate((x1, x2)) y = numpy.concatenate((y1, y2)) u1 = numpy.ones_like(x1) u2 = -numpy.ones_like(x2) u = numpy.concatenate((u1, u2)) rho = numpy.ones_like(u)*rho0 h = numpy.ones_like(u)*hdx*dx m = numpy.ones_like(u)*dx*dx*rho0 fluid = get_particle_array( name='fluid', x=x, y=y, u=u, rho=rho, m=m, h=h ) self.scheme.setup_properties([fluid]) return [fluid] def create_scheme(self): s = IISPHScheme(fluids=['fluid'], solids=[], dim=2, rho0=rho0) return s def configure_scheme(self): dt = 2e-3 tf = 1.0 self.scheme.configure_solver( dt=dt, tf=tf, adaptive_timestep=False, pfreq=10 ) if __name__ == '__main__': app = TwoBlocks() app.run() pysph-master/pysph/parallel/000077500000000000000000000000001356347341600164655ustar00rootroot00000000000000pysph-master/pysph/parallel/__init__.py000066400000000000000000000000001356347341600205640ustar00rootroot00000000000000pysph-master/pysph/parallel/parallel_manager.pxd000066400000000000000000000147221356347341600224760ustar00rootroot00000000000000# Numpy cimport numpy as np from cpython cimport dict from cpython cimport list # PyZoltan from cyarray.carray cimport UIntArray, IntArray, DoubleArray, LongArray from pyzoltan.core.zoltan cimport PyZoltan, ZoltanGeometricPartitioner from pyzoltan.core.zoltan_comm cimport ZComm from pyzoltan.czoltan.czoltan_types cimport ZOLTAN_ID_TYPE, ZOLTAN_ID_PTR, ZOLTAN_OK # PySPH imports from pysph.base.nnps_base cimport NNPSParticleArrayWrapper from pysph.base.particle_array cimport ParticleArray from pysph.base.point cimport * cdef class ParticleArrayExchange: 
############################################################################ # Data Attributes ############################################################################ cdef public int msglength_tag_remote # msg length tag for remote_exchange cdef public int data_tag_remote # data tag for remote_exchange cdef public int msglength_tag_lb # msg length tag for lb_exchange cdef public int data_tag_lb # data tag for lb_exchange cdef public int pa_index # Particle index cdef public ParticleArray pa # Particle data cdef public NNPSParticleArrayWrapper pa_wrapper # wrapper to exchange data # flags to indicate whether data needs to be exchanged cdef public bint lb_exchange cdef public bint remote_exchange cdef public size_t num_local # Total number of particles cdef public size_t num_global # Global number of particles cdef public size_t num_remote # Number of remote particles cdef public size_t num_ghost # Number of ghost particles # mpi.Comm object and associated rank and size cdef public object comm cdef public int rank, size # list of load balancing props cdef public list lb_props cdef public int nprops # Import/Export lists for particles cdef public UIntArray exportParticleGlobalids cdef public UIntArray exportParticleLocalids cdef public IntArray exportParticleProcs cdef public int numParticleExport cdef public UIntArray importParticleGlobalids cdef public UIntArray importParticleLocalids cdef public IntArray importParticleProcs cdef public int numParticleImport ############################################################################ # Member functions ############################################################################ # exchange data given send and receive lists cdef exchange_data(self, ZComm zcomm, dict sendbufs, int count) # base class for all parallel managers cdef class ParallelManager: ############################################################################ # Data Attributes 
############################################################################ # mpi comm, rank and size cdef public object comm cdef public int rank cdef public int size cdef public np.ndarray dt_sendbuf # array for global reduction cdef public np.ndarray dt_recvbuf # for time step calculation cdef public int lb_freq # load balancing frequency cdef public int lb_count # counter for current lb step cdef public int ncells_local # number of local cells cdef public int ncells_remote # number of remote cells cdef public int ncells_total # total number of cells cdef public list cell_list # list of cells cdef public dict cell_map # index structure cdef public int ghost_layers # BOunding box size cdef public double cell_size # cell size used for binning # list of particle arrays, wrappers, exchange and nnps instances cdef public list particles cdef public list pa_wrappers cdef public list pa_exchanges # number of local and remote particles cdef public list num_local cdef public list num_remote cdef public list num_global cdef public double radius_scale # Radius scale for kernel cdef public bint initial_update cdef public bint update_cell_sizes # number of arrays cdef int narrays # boolean for parallel cdef bint in_parallel # Min and max across all processors cdef np.ndarray minx, miny, minz # min and max arrays used cdef np.ndarray maxx, maxy, maxz # for MPI Allreduce cdef np.ndarray maxh # operations cdef public double mx, my, mz # global min and max values cdef public double Mx, My, Mz, Mh # global indices for the cells cdef UIntArray cell_gid # cell coordinate values cdef DoubleArray cx, cy, cz # Import/Export lists for cells cdef public UIntArray exportCellGlobalids cdef public UIntArray exportCellLocalids cdef public IntArray exportCellProcs cdef public int numCellExport cdef public UIntArray importCellGlobalids cdef public UIntArray importCellLocalids cdef public IntArray importCellProcs cdef public int numCellImport 
############################################################################ # Member functions ############################################################################ # Index particles given by a list of indices. The indices are # assumed to be of type unsigned int and local to the NNPS object cdef _bin(self, int pa_index, UIntArray indices) # Compute the cell size across processors. The cell size is taken # as max(h)*radius_scale cpdef compute_cell_size(self) # compute global bounds for the particle distribution. The bounds # are the min and max coordinate values across all processors and # the maximum smoothing length needed for parallel binning. cdef _compute_bounds(self) # nearest neighbor search routines taking into account multiple # particle arrays cpdef get_nearest_particles(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs) # Zoltan based parallel cell manager for SPH simulations cdef class ZoltanParallelManager(ParallelManager): ############################################################################ # Data Attributes ############################################################################ cdef public int changes # logical (0,1) if the partition changes cdef public PyZoltan pz # the PyZoltan wrapper for lb etc # Class of geometric load balancers cdef class ZoltanParallelManagerGeometric(ZoltanParallelManager): pass pysph-master/pysph/parallel/parallel_manager.pyx000066400000000000000000001402371356347341600225240ustar00rootroot00000000000000#cython: embedsignature=True # Numpy import numpy as np cimport numpy as np # MPI4PY import mpi4py.MPI as mpi from cpython.list cimport PyList_Append, PyList_GET_SIZE # PyZoltan from pyzoltan.czoltan cimport czoltan from pyzoltan.czoltan.czoltan cimport Zoltan_Struct from pyzoltan.core import zoltan_utils # PySPH imports from pysph.base.nnps_base cimport DomainManager, Cell, find_cell_id, arange_uint from pysph.base.utils import ParticleTAGS cdef int Local = ParticleTAGS.Local cdef int 
Remote = ParticleTAGS.Remote cdef extern from 'math.h': cdef double ceil(double) nogil cdef double floor(double) nogil cdef double fabs(double) nogil cdef extern from 'limits.h': cdef int INT_MAX cdef unsigned int UINT_MAX cpdef get_strided_indices(np.ndarray indices, int stride): cdef int j cdef list args = [] if stride == 1: return indices else: tmp = indices*stride for j in range(stride): args.append(tmp + j) return np.ravel(np.column_stack(args)) ################################################################ # ParticleArrayExchange ################################################################w cdef class ParticleArrayExchange: def __init__(self, int pa_index, ParticleArray pa, object comm): self.pa_index = pa_index self.pa = pa self.pa_wrapper = NNPSParticleArrayWrapper( pa ) # unique data and message length tags for MPI communications name = pa.name self.msglength_tag_remote = sum( [ord(c) for c in name + '_msglength_remote'] ) self.data_tag_remote = sum( [ord(c) for c in name + '_data_remote'] ) self.msglength_tag_lb = sum( [ord(c) for c in name + '_msglength_lb'] ) self.data_tag_lb = sum( [ord(c) for c in name + '_data_lb'] ) # MPI comm rank and size self.comm = comm self.rank = comm.Get_rank() self.size = comm.Get_size() # num particles self.num_local = pa.get_number_of_particles() self.num_remote = 0 self.num_global = 0 self.num_ghost = 0 # Particle Import/Export lists self.exportParticleGlobalids = UIntArray() self.exportParticleLocalids = UIntArray() self.exportParticleProcs = IntArray() self.numParticleExport = 0 self.importParticleGlobalids = UIntArray() self.importParticleLocalids = UIntArray() self.importParticleProcs = IntArray() self.numParticleImport = 0 # load balancing props are taken from the particle array lb_props = pa.get_lb_props() lb_props.sort() self.lb_props = lb_props self.nprops = len( lb_props ) # exchange flags self.lb_exchange = True self.remote_exchange = True def lb_exchange_data(self): """Share particle info after 
Zoltan_LB_Balance After an initial call to 'Zoltan_LB_Balance', the new size of the arrays should be (num_particles - numExport + numImport) to reflect particles that should be imported and exported. This function should be called after the load balancing lists are defined. The particles to be exported are removed and the arrays re-sized. MPI is then used to send and receive the data """ # data array cdef ParticleArray pa = self.pa # Export lists for the particles cdef UIntArray exportGlobalids = self.exportParticleGlobalids cdef UIntArray exportLocalids = self.exportParticleLocalids cdef IntArray exportProcs = self.exportParticleProcs cdef int numImport, numExport = self.numParticleExport # MPI communicator cdef object comm = self.comm # message tags cdef int dtag = self.data_tag_lb # create the zoltan communicator and get number of objects to import cdef ZComm zcomm = ZComm(comm, dtag, numExport, exportProcs.get_npy_array()) numImport = zcomm.nreturn # extract particles to be exported cdef dict sendbufs = self.get_sendbufs( exportLocalids ) # remove particles to be exported pa.remove_particles(exportLocalids) pa.align_particles() # old and new sizes cdef int current_size = self.num_local cdef int count = current_size - numExport cdef int newsize = count + numImport # resize the particle array to accommodate the received particles pa.resize( newsize ) # exchange data self.exchange_data(zcomm, sendbufs, count) # set all particle tags to local self.set_tag(count, newsize, Local) pa.align_particles() # update the number of particles self.num_local = newsize def remote_exchange_data(self): """Share particle info after computing remote particles. After a load balancing and the corresponding exchange of particles, additional remote (ghost) particles are needed. Calls to 'compute_remote_particles' and 'Zoltan_Invert_Lists' build the lists required. The arrays must now be re-sized to (num_particles + numImport) to reflect the new particles that come in as remote particles.
""" # data arrays cdef ParticleArray pa = self.pa # Export lists cdef UIntArray exportGlobalids = self.exportParticleGlobalids cdef UIntArray exportLocalids = self.exportParticleLocalids cdef IntArray exportProcs = self.exportParticleProcs cdef int numImport, numExport = self.numParticleExport # current number of particles cdef int current_size = self.num_local cdef int count, new_size # MPI communicator cdef object comm = self.comm # zoltan communicator cdef int dtag = self.data_tag_remote cdef ZComm zcomm = ZComm(comm, dtag, numExport, exportProcs.get_npy_array()) numImport = zcomm.nreturn # old and new sizes count = current_size newsize = current_size + numImport # copy particles to be exported cdef dict sendbufs = self.get_sendbufs( exportLocalids ) # update the size of the array pa.resize( newsize ) self.exchange_data(zcomm, sendbufs, count) # set tags for all received particles as Remote self.set_tag(count, newsize, Remote) pa.align_particles() # store the number of remote particles self.num_remote = newsize - current_size cdef exchange_data(self, ZComm zcomm, dict sendbufs, int count): cdef ParticleArray pa = self.pa cdef str prop cdef int prop_tag, stride, nbytes, i, nprops = self.nprops cdef np.ndarray prop_arr, sendbuf, recvbuf cdef list props = self.lb_props for i in range(nprops): prop = props[i] prop_arr = pa.properties[prop].get_npy_array() nbytes = prop_arr.dtype.itemsize stride = pa.stride.get(prop, 1) # the send and receive buffers sendbuf = sendbufs[prop] recvbuf = prop_arr[count*stride:] # tag for this property prop_tag = i # set the nbytes and tag for the zoltan communicator zcomm.set_nbytes( nbytes*stride ) zcomm.set_tag( prop_tag ) # exchange the data zcomm.Comm_Do( sendbuf, recvbuf ) def remove_remote_particles(self): self.num_local = self.pa.get_number_of_particles(real=True) cdef int num_local = self.num_local # resize the particle array self.pa.resize( num_local ) self.pa.align_particles() # reset the number of remote particles 
self.num_remote = 0 def align_particles(self): self.pa.align_particles() def set_tag(self, int start, int end, int value): """Reset the annoying tag value after particles are resized.""" cdef int i cdef IntArray tag = self.pa_wrapper.tag for i in range(start, end): tag[i] = value def update_particle_gids(self): """Update the global indices. We call a utility function to get the new number of particles across the processors and then linearly assign indices to the objects. """ cdef int num_global_objects, num_local_objects, _sum, i cdef np.ndarray[ndim=1, dtype=np.int32_t] num_objects_data # the global indices array cdef UIntArray gid = self.pa_wrapper.gid cdef object comm = self.comm cdef int rank = self.rank cdef int size = self.size num_objects_data = zoltan_utils.get_num_objects_per_proc( comm, self.num_local) num_local_objects = num_objects_data[ rank ] num_global_objects = np.sum( num_objects_data ) _sum = np.sum( num_objects_data[:rank] ) gid.resize( num_local_objects ) for i in range( num_local_objects ): gid.data[i] = ( _sum + i ) # set the number of local and global objects self.num_global = num_global_objects def reset_lists(self): """Reset the particle lists""" self.numParticleExport = 0 self.numParticleImport = 0 self.exportParticleGlobalids.reset() self.exportParticleLocalids.reset() self.exportParticleProcs.reset() self.importParticleGlobalids.reset() self.importParticleLocalids.reset() self.importParticleProcs.reset() def extend(self, int currentsize, int newsize): self.pa.resize( newsize ) def get_sendbufs(self, UIntArray exportIndices): cdef ParticleArray pa = self.pa cdef dict sendbufs = {} cdef str prop cdef int stride cdef np.ndarray indices = exportIndices.get_npy_array() strides = set(pa.stride.values()) s_indices = {1:indices} for stride in strides: if stride > 1: s_indices[stride] = get_strided_indices(indices, stride) for prop in pa.properties: prop_arr = pa.properties[prop].get_npy_array() stride = pa.stride.get(prop, 1) sendbufs[prop] = 
prop_arr[s_indices[stride]] return sendbufs # ################################################################# # # ParallelManager extension classes # ################################################################# cdef class ParallelManager: """Base class for all parallel managers.""" def __init__(self, int dim, list particles, object comm, double radius_scale=2.0, int ghost_layers=2, domain=None, bint update_cell_sizes=True): """Constructor. Parameters ---------- dim : int Dimension particles : list list of particle arrays to be managed. comm : mpi4py.MPI.COMM, default MPI communicator for parallel invocations radius_scale : double, default (2) Optional kernel radius scale. Defaults to 2 ghost_layers : int, default (2) Optional factor for computing bounding boxes for cells. domain : DomainManager, default (None) Optional limits for the domain """ # number of arrays and a reference to the particle list self.narrays = len(particles) self.particles = particles # particle array exchange instances self.pa_exchanges = [ParticleArrayExchange(i, pa, comm) \ for i, pa in enumerate(particles)] # particle array wrappers self.pa_wrappers = [exchange.pa_wrapper for exchange in self.pa_exchanges] # number of local/global/remote particles self.num_local = [exchange.num_local for exchange in self.pa_exchanges] self.num_global = [0] * self.narrays self.num_remote = [0] * self.narrays # global indices used for load balancing self.cell_gid = UIntArray() # cell coordinate values used for load balancing self.cx = DoubleArray() self.cy = DoubleArray() self.cz = DoubleArray() # minimum and maximum arrays for MPI reduce operations. These # are used to find global bounds across processors. 
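The one-element arrays set up below serve as send/recv buffers for `Allreduce` with `mpi.MIN`/`mpi.MAX` in `_compute_bounds`. A serial sketch of the same reduction, with made-up per-rank data standing in for the MPI communicator:

```python
import numpy as np

# x-coordinates on three hypothetical ranks.
local_x = [np.array([0.1, 0.4]), np.array([-0.2, 0.3]), np.array([0.9])]

# Each rank fills a one-element send buffer with its local extremum;
# Allreduce(op=MIN/MAX) then yields the global bounds on every rank.
minx = [np.array([x.min()]) for x in local_x]
maxx = [np.array([x.max()]) for x in local_x]
mx = min(buf[0] for buf in minx)   # what Allreduce(op=mpi.MIN) produces
Mx = max(buf[0] for buf in maxx)   # what Allreduce(op=mpi.MAX) produces
print(mx, Mx)
```

The same pattern applied to the smoothing length gives the global `Mh`, from which `compute_cell_size` derives `cell_size = radius_scale * Mh`.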
self.minx = np.zeros( shape=1, dtype=np.float64 ) self.miny = np.zeros( shape=1, dtype=np.float64 ) self.minz = np.zeros( shape=1, dtype=np.float64 ) # global minimum values for x, y and z self.mx = 0.0; self.my = 0.0; self.mz = 0.0 self.maxx = np.zeros( shape=1, dtype=np.float64 ) self.maxy = np.zeros( shape=1, dtype=np.float64 ) self.maxz = np.zeros( shape=1, dtype=np.float64 ) self.maxh = np.zeros( shape=1, dtype=np.float64 ) # global max values for x, y, z & h self.Mx = 0.0; self.My = 0.0; self.Mz = 0.0; self.Mh = 0.0 # MPI comm rank and size self.comm = comm self.rank = comm.Get_rank() self.size = comm.Get_size() self.in_parallel = True if self.size == 1: self.in_parallel = False # The cell_map dictionary, radius scale for binning and ghost # layers for remote neighbors. self.cell_map = {} self.cell_list = [] # number of local/remote cells self.ncells_local = 0 self.ncells_remote = 0 self.ncells_total = 0 # radius scale and ghost layers self.radius_scale = radius_scale self.ghost_layers = ghost_layers # setup cell import/export lists self._setup_arrays() # flags to re-compute cell sizes self.initial_update = True self.update_cell_sizes = update_cell_sizes # update the particle global ids at startup self.update_particle_gids() self.local_bin() # load balancing frequency and counter self.lb_count = 0 self.lb_freq = 1 # array for global reduction of time steps self.dt_sendbuf = np.array( [1.0], dtype=np.float64 ) def update_time_steps(self, double local_dt): """Perform a reduction to compute the globally stable time steps""" cdef np.ndarray dt_sendbuf = self.dt_sendbuf cdef np.ndarray dt_recvbuf = np.zeros_like(dt_sendbuf) comm = self.comm # set the local time step and perform the global reduction dt_sendbuf[0] = local_dt comm.Allreduce( sendbuf=dt_sendbuf, recvbuf=dt_recvbuf, op=mpi.MIN ) return dt_recvbuf[0] cpdef compute_cell_size(self): """Compute the cell size for the binning.
The cell size is chosen as the kernel radius scale times the maximum smoothing length in the local processor. For parallel runs, we would need to communicate the maximum 'h' on all processors to decide on the appropriate binning size. """ # compute global bounds for the solution self._compute_bounds() cdef double hmax = self.Mh cell_size = self.radius_scale * hmax if cell_size < 1e-6: msg = """Cell size too small %g. Perhaps h = 0? Setting cell size to 1"""%(cell_size) print(msg) cell_size = 1.0 self.cell_size = cell_size def update_cell_gids(self): """Update global indices for the cell_map dictionary. The objects to be partitioned in this class are the cells and we need to number them uniquely across processors. The numbering is done sequentially starting from the processor with rank 0 to the processor with rank size-1 """ # update the cell gids cdef PyZoltan pz = self.pz pz.num_local_objects = PyList_GET_SIZE( self.cell_list ) pz._update_gid( self.cell_gid ) def update_particle_gids(self): """Update individual particle global indices""" for i in range(self.narrays): pa_exchange = self.pa_exchanges[i] pa_exchange.update_particle_gids() self.num_local[i] = pa_exchange.num_local self.num_global[i] = pa_exchange.num_global def update(self): cdef int lb_freq = self.lb_freq cdef int lb_count = self.lb_count lb_count += 1 # remove remote particles from a previous step self.remove_remote_particles() if ( lb_count == lb_freq ): self.update_partition() self.lb_count = 0 else: self.migrate_partition() self.lb_count = lb_count def update_partition(self): """Update the partition. This is the main entry point for the parallel manager. Given particles distributed across processors, we bin them, assign unique global indices for the cells and particles and subsequently perform the following steps: (a) Call a load balancing function to get import and export lists for the cell indices.
for each particle array: (b) Create particle import/export lists using the cell import/export lists (c) Call ParticleArrayExchange's lb_exchange_data with these particle import/export lists to effect the data movement for particles. (d) Update the local cell map after the data has been exchanged. now that we have the updated map, for each particle array: (e) Compute remote particle import/export lists from the cell map (f) Call ParticleArrayExchange's remote_exchange_data with these lists to effect the data movement. now that the data movement is done, (g) Update the local cell map to accommodate remote particles. Notes ----- Although not intended to be a 'parallel' NNPS, the ParallelManager can be used for this purpose. I don't think this is advisable in variable smoothing length simulations where we should be using a very coarse grained partitioning to ensure safe local computations. For two step integrators commonly employed in SPH and with a sufficient number of ghost layers for remote particles, we should call 'update' only at the end of a time step. The idea of multiple ghost layers is to ensure that local particles have a sufficiently large halo region around them.
""" # update particle gids, bin particles and update cell gids #self.update_particle_gids() self.local_bin() self.update_cell_gids() if self.in_parallel: # use Zoltan to get the cell import/export lists self.load_balance() # move the data to the new partitions if self.changes == 1: self.create_particle_lists() # exchange the data for each array for i in range(self.narrays): self.pa_exchanges[i].lb_exchange_data() # update the local cell map after data movement self.update_local_data() # compute remote particles and exchange data self.compute_remote_particles() for i in range(self.narrays): self.pa_exchanges[i].remote_exchange_data() # update the local cell map to accomodate remote particles self.update_remote_data() # set the particle pids now that we have the partitions for i in range(self.narrays): pa = self.particles[i] pa.set_pid( self.rank ) def migrate_partition(self): # migrate particles self.migrate_particles() # update local data self.update_local_data() # compute remote particles and exchange data self.compute_remote_particles() for i in range(self.narrays): self.pa_exchanges[i].remote_exchange_data() # update the local cell map to accomodate remotes self.update_remote_data() # set the particle pids for i in range(self.narrays): self.particles[i].set_pid( self.rank ) def remove_remote_particles(self): """Remove remote particles""" cdef int narrays = self.narrays cdef ParticleArrayExchange pa_exchange for i in range(narrays): pa_exchange = self.pa_exchanges[i] pa_exchange.remove_remote_particles() self.num_local[i] = pa_exchange.num_local self.num_remote[i] = pa_exchange.num_remote def local_bin(self): """Create the local cell map. Bin the particles by deleting any previous cells and re-computing the indexing structure. This corresponds to a local binning process that is called when each processor has a given list of particles to deal with. 
""" cdef dict cell_map = self.cell_map cdef int num_particles cdef UIntArray indices # clear the indices for the cell_map. self.cell_map.clear() self.cell_list = [] self.ncells_total = 0 # compute the cell size if self.initial_update or self.update_cell_sizes: self.compute_cell_size() # # deal with ghosts # if self.is_periodic: # # remove-ghost-particles # self._remove_ghost_particles() # # adjust local particles # self._adjust_particles() # # create new ghosts # self._create_ghosts() # bin each particle array separately for i in range(self.narrays): num_particles = self.num_local[i] # bin the particles indices = arange_uint(num_particles) self._bin( i, indices ) # local number of cells at this point are the total number of cells self.ncells_local = self.ncells_total cdef _bin(self, int pa_index, UIntArray indices): """Bin a given particle array with indices. Parameters ---------- pa_index : int Index of the particle array corresponding to the particles list indices : UIntArray Subset of particles to bin """ cdef NNPSParticleArrayWrapper pa_wrapper = self.pa_wrappers[ pa_index ] cdef DoubleArray x = pa_wrapper.x cdef DoubleArray y = pa_wrapper.y cdef DoubleArray z = pa_wrapper.z cdef UIntArray gid = pa_wrapper.gid cdef dict cell_map = self.cell_map cdef list cell_list = self.cell_list cdef double cell_size = self.cell_size cdef UIntArray lindices, gindices cdef size_t num_particles, indexi cdef ZOLTAN_ID_TYPE i cdef Cell cell cdef cIntPoint _cid cdef IntPoint cid = IntPoint() cdef cPoint pnt cdef int ncells_total = 0 cdef int narrays = self.narrays cdef int layers = self.ghost_layers # now bin the particles num_particles = indices.length for indexi in range(num_particles): i = indices.data[indexi] pnt = cPoint_new( x.data[i], y.data[i], z.data[i] ) _cid = find_cell_id( pnt, cell_size ) cid = IntPoint_from_cIntPoint(_cid) if not cell_map.has_key( cid ): cell = Cell(cid=cid, cell_size=cell_size, narrays=self.narrays, layers=layers) # store this cell in the cells list 
# and cell map PyList_Append( cell_list, cell ) cell_map[ cid ] = cell ncells_total = ncells_total + 1 # add this particle to the list of indices cell = cell_map[ cid ] lindices = cell.lindices[pa_index] gindices = cell.gindices[pa_index] lindices.append( i ) gindices.append( gid.data[i] ) # increment the size of the cells to reflect the added # particle. The size will be used to set the weights. cell.size += 1 # update the total number of cells self.ncells_total = self.ncells_total + ncells_total def update_local_data(self): """Update the cell map after load balance. After the load balancing step, each processor has a new set of local particles which need to be indexed. This new cell structure is then used to compute remote particles. """ cdef ParticleArrayExchange pa_exchange cdef int num_particles, i cdef UIntArray indices # clear the local cell_map dict self.cell_map.clear() self.cell_list = [] self.ncells_total = 0 for i in range(self.narrays): pa_exchange = self.pa_exchanges[i] num_particles = pa_exchange.num_local # set the number of local and global particles self.num_local[i] = pa_exchange.num_local self.num_global[i] = pa_exchange.num_global # bin the particles indices = arange_uint( num_particles ) self._bin(i, indices) self.ncells_local = self.ncells_total def update_remote_data(self): """Update the cell structure after sharing remote particles. After exchanging remote particles, we need to index the new remote particles for a possible neighbor query.
""" cdef int narrays = self.narrays cdef int num_local, num_remote, i cdef ParticleArrayExchange pa_exchange cdef UIntArray indices for i in range(narrays): pa_exchange = self.pa_exchanges[i] # get the number of local and remote particles num_local = pa_exchange.num_local num_remote = pa_exchange.num_remote # update the number of local/remote particles self.num_local[i] = num_local self.num_remote[i] = num_remote indices = arange_uint( num_local, num_local + num_remote ) self._bin( i, indices ) # compute the number of remote cells added self.ncells_remote = self.ncells_total - self.ncells_local def save_partition(self, fname, count=0): """Collect cell data from processors and save""" # get the global number of cells cdef cPoint centroid cdef int ncells_total = self.ncells_total cdef int ncells_local = self.ncells_local # cell centroid arrays cdef double[:] x = np.zeros( shape=ncells_total, dtype=np.float64 ) cdef double[:] y = np.zeros( shape=ncells_total, dtype=np.float64 ) cdef int[:] lid = np.zeros( shape=ncells_total, dtype=np.int32 ) for i in range( ncells_total ): cell = self.cell_list[i]; centroid = cell.centroid x[i] = centroid.x; y[i] = centroid.y lid[i] = i cdef int[:] tag = np.zeros(shape=(ncells_total,), dtype=np.int32) tag[ncells_local:] = 1 cdef double cell_size = self.cell_size # save the partition locally fname = fname + '/partition%03d.%d'%(count, self.rank) np.savez( fname, x=x, y=y, lid=lid, tag=tag, cell_size=cell_size, ncells_local=self.ncells_local, ncells_total=self.ncells_total ) def set_lb_freq(self, int lb_freq): self.lb_freq = lb_freq def load_balance(self): raise NotImplementedError("ParallelManager::load_balance") def create_particle_lists(self, pa_index): raise NotImplementedError("ParallelManager::create_particle_lists") def compute_remote_particles(self, pa_index): raise NotImplementedError("ParallelManager::compute_remote_particles") ####################################################################### # Private interface 
####################################################################### def set_data(self): """Compute the user defined data for use with Zoltan""" raise NotImplementedError("ZoltanParallelManager::_set_data should not be called!") def _setup_arrays(self): # Import/Export lists for cells self.exportCellGlobalids = UIntArray() self.exportCellLocalids = UIntArray() self.exportCellProcs = IntArray() self.importCellGlobalids = UIntArray() self.importCellLocalids = UIntArray() self.importCellProcs = IntArray() self.numCellImport = 0 self.numCellExport = 0 cdef _compute_bounds(self): """Compute the domain bounds for indexing.""" cdef double mx, my, mz cdef double Mx, My, Mz, Mh cdef list pa_wrappers = self.pa_wrappers cdef NNPSParticleArrayWrapper pa cdef DoubleArray x, y, z, h # set some high and low values cdef double high = -1e20, low = 1e20 mx = low; my = low; mz = low Mx = high; My = high; Mz = high; Mh = high # find the local min and max for all arrays on this proc for pa in pa_wrappers: x = pa.x; y = pa.y; z = pa.z; h = pa.h if x.length > 0: x.update_min_max(); y.update_min_max() z.update_min_max(); h.update_min_max() if x.minimum < mx: mx = x.minimum if x.maximum > Mx: Mx = x.maximum if y.minimum < my: my = y.minimum if y.maximum > My: My = y.maximum if z.minimum < mz: mz = z.minimum if z.maximum > Mz: Mz = z.maximum if h.maximum > Mh: Mh = h.maximum self.minx[0] = mx; self.miny[0] = my; self.minz[0] = mz self.maxx[0] = Mx; self.maxy[0] = My; self.maxz[0] = Mz self.maxh[0] = Mh # now compute global min and max if in parallel comm = self.comm if self.in_parallel: # recv buffers for all reduce _minx = np.zeros_like(self.minx) _miny = np.zeros_like(self.miny) _minz = np.zeros_like(self.minz) _maxx = np.zeros_like(self.maxx) _maxy = np.zeros_like(self.maxy) _maxz = np.zeros_like(self.maxz) _maxh = np.zeros_like(self.maxh) # global reduction for minimum values comm.Allreduce(sendbuf=self.minx, recvbuf=_minx, op=mpi.MIN) comm.Allreduce(sendbuf=self.miny, recvbuf=_miny,
op=mpi.MIN) comm.Allreduce(sendbuf=self.minz, recvbuf=_minz, op=mpi.MIN) # global reduction for maximum values comm.Allreduce(sendbuf=self.maxx, recvbuf=_maxx, op=mpi.MAX) comm.Allreduce(sendbuf=self.maxy, recvbuf=_maxy, op=mpi.MAX) comm.Allreduce(sendbuf=self.maxz, recvbuf=_maxz, op=mpi.MAX) comm.Allreduce(sendbuf=self.maxh, recvbuf=_maxh, op=mpi.MAX) self.mx = _minx[0]; self.my = _miny[0]; self.mz = _minz[0] self.Mx = _maxx[0]; self.My = _maxy[0]; self.Mz = _maxz[0] self.Mh = _maxh[0] ###################################################################### # Neighbor location routines ###################################################################### cpdef get_nearest_particles(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs): """Utility function to get near-neighbors for a particle. Parameters ---------- src_index : int Index of the source particle array in the particles list dst_index : int Index of the destination particle array in the particles list d_idx : int (input) Particle for which neighbors are sought. nbrs : UIntArray (output) Neighbors for the requested particle are stored here. 
""" cdef dict cell_map = self.cell_map cdef Cell cell cdef NNPSParticleArrayWrapper src = self.pa_wrappers[ src_index ] cdef NNPSParticleArrayWrapper dst = self.pa_wrappers[ dst_index ] # Source data arrays cdef DoubleArray s_x = src.x cdef DoubleArray s_y = src.y cdef DoubleArray s_z = src.z cdef DoubleArray s_h = src.h # Destination particle arrays cdef DoubleArray d_x = dst.x cdef DoubleArray d_y = dst.y cdef DoubleArray d_z = dst.z cdef DoubleArray d_h = dst.h cdef double radius_scale = self.radius_scale cdef double cell_size = self.cell_size cdef UIntArray lindices cdef size_t indexj cdef ZOLTAN_ID_TYPE j cdef cPoint xi = cPoint_new(d_x.data[d_idx], d_y.data[d_idx], d_z.data[d_idx]) cdef cIntPoint _cid = find_cell_id( xi, cell_size ) cdef IntPoint cid = IntPoint_from_cIntPoint( _cid ) cdef IntPoint cellid = IntPoint(0, 0, 0) cdef cPoint xj cdef double xij cdef double hi, hj hi = radius_scale * d_h.data[d_idx] cdef int nnbrs = 0 cdef int ix, iy, iz for ix in [cid.data.x -1, cid.data.x, cid.data.x + 1]: for iy in [cid.data.y - 1, cid.data.y, cid.data.y + 1]: for iz in [cid.data.z -1, cid.data.z, cid.data.z + 1]: cellid.data.x = ix; cellid.data.y = iy cellid.data.z = iz if cell_map.has_key( cellid ): cell = cell_map[ cellid ] lindices = cell.lindices[src_index] for indexj in range( lindices.length ): j = lindices.data[indexj] xj = cPoint_new( s_x.data[j], s_y.data[j], s_z.data[j] ) xij = cPoint_distance( xi, xj ) hj = radius_scale * s_h.data[j] if ( (xij < hi) or (xij < hj) ): if nnbrs == nbrs.length: nbrs.resize( nbrs.length + 50 ) print """Neighbor search :: Extending the neighbor list to %d"""%(nbrs.length) nbrs.data[ nnbrs ] = j nnbrs = nnbrs + 1 # update the length for nbrs to indicate the number of neighbors nbrs.length = nnbrs cdef class ZoltanParallelManager(ParallelManager): """Base class for Zoltan enabled parallel cell managers. To partition a list of arrays, we do an NNPS like box sort on all arrays to create a global spatial indexing structure. 
The cells are then used as 'objects' to be partitioned by Zoltan. The cell_map dictionary (cell-map) need not be unique across processors. We are responsible for assigning unique global ids for the cells. The Zoltan generated (cell) import/export lists are then used to construct particle import/export lists which are used to perform the data movement of the particles. A ParticleArrayExchange object (lb_exchange_data and remote_exchange_data) is used for each particle array in turn to effect this movement. """ def __init__(self, int dim, list particles, object comm, double radius_scale=2.0, int ghost_layers=2, domain=None, bint update_cell_sizes=True): """Constructor. Parameters ---------- dim : int Dimension particles : list list of particle arrays to be managed. comm : mpi4py.MPI.COMM, default MPI communicator for parallel invocations radius_scale : double, default (2) Optional kernel radius scale. Defaults to 2 ghost_layers : int, default (2) Optional factor for computing bounding boxes for cells. domain : DomainManager, default (None) Optional limits for the domain """ super(ZoltanParallelManager, self).__init__( dim, particles, comm, radius_scale, ghost_layers, domain, update_cell_sizes) # Initialize the base PyZoltan class. self.pz = PyZoltan(comm) def create_particle_lists(self): """Create particle export/import lists For the cell based partitioner, Zoltan generated import and export lists apply to the cells. From this data, we can generate local export indices for the particles which must then be inverted to get the requisite import lists. We first construct the particle export lists in local arrays using the information from Zoltan_LB_Balance and subsequently call Zoltan_Invert_Lists to get the import lists. These arrays are then copied to the ParticleArrayExchange import/export lists.
""" # these are the Zoltan generated lists that correspond to cells cdef UIntArray exportCellLocalids = self.exportCellLocalids cdef UIntArray exportCellGlobalids = self.exportCellGlobalids cdef IntArray exportCellProcs = self.exportCellProcs cdef int numCellExport = self.numCellExport # the local cell map and cellids cdef int narrays = self.narrays cdef list cell_list = self.cell_list cdef ParticleArrayExchange pa_exchange cdef UIntArray exportLocalids, exportGlobalids cdef IntArray exportProcs cdef int indexi, i, j, export_proc, nindices, pa_index cdef UIntArray lindices, gindices cdef IntPoint cellid cdef Cell cell # initialize the particle export lists for pa_index in range(self.narrays): pa_exchange = self.pa_exchanges[pa_index] pa_exchange.reset_lists() # Iterate over the cells to be exported for indexi in range(numCellExport): i = exportCellLocalids[indexi] # local index for the cell to export cell = cell_list[i] # local cell to export export_proc = exportCellProcs.data[indexi] # processor to export cell to # populate the export lists for each array in this cell for pa_index in range(self.narrays): # the ParticleArrayExchange object for this array pa_exchange = self.pa_exchanges[pa_index] exportLocalids = pa_exchange.exportParticleLocalids exportGlobalids = pa_exchange.exportParticleGlobalids exportProcs = pa_exchange.exportParticleProcs # local and global indices in this cell lindices = cell.lindices[pa_index] gindices = cell.gindices[pa_index] nindices = lindices.length for j in range( nindices ): exportLocalids.append( lindices.data[j] ) exportGlobalids.append( gindices.data[j] ) exportProcs.append( export_proc ) # save the number of particles for pa_index in range(narrays): pa_exchange = self.pa_exchanges[pa_index] pa_exchange.numParticleExport = pa_exchange.exportParticleProcs.length def compute_remote_particles(self): """Compute remote particles. 
Particles to be exported are determined by flagging individual cells that need to be shared to meet neighbor requirements. """ # the PyZoltan object used to find intersections cdef PyZoltan pz = self.pz cdef Cell cell cdef cPoint boxmin, boxmax cdef object comm = self.comm cdef int rank = self.rank, narrays = self.narrays cdef list cell_list = self.cell_list cdef UIntArray lindices, gindices cdef np.ndarray nbrprocs cdef np.ndarray[ndim=1, dtype=np.int32_t] procs cdef int nbrproc, num_particles, indexi, pa_index cdef ZOLTAN_ID_TYPE i cdef int numprocs = 0 cdef ParticleArrayExchange pa_exchange cdef UIntArray exportGlobalids, exportLocalids cdef IntArray exportProcs # reset the export lists for pa_index in range(narrays): pa_exchange = self.pa_exchanges[pa_index] pa_exchange.reset_lists() # Check each cell for cell in cell_list: # get the bounding box for this cell boxmin = cell.boxmin; boxmax = cell.boxmax numprocs = pz.Zoltan_Box_PP_Assign( boxmin.x, boxmin.y, boxmin.z, boxmax.x, boxmax.y, boxmax.z) # the array of processors that this box intersects with procs = pz.procs # array of neighboring processors nbrprocs = procs[np.where( (procs != -1) * (procs != rank) )[0]] if nbrprocs.size > 0: cell.is_boundary = True # store the neighboring processors for this cell cell.nbrprocs.resize( nbrprocs.size ) cell.nbrprocs.set_data( nbrprocs ) # populate the particle export lists for each array for pa_index in range(narrays): pa_exchange = self.pa_exchanges[pa_index] exportGlobalids = pa_exchange.exportParticleGlobalids exportLocalids = pa_exchange.exportParticleLocalids exportProcs = pa_exchange.exportParticleProcs # local and global indices in this cell lindices = cell.lindices[pa_index] gindices = cell.gindices[pa_index] num_particles = lindices.length for nbrproc in nbrprocs: for indexi in range( num_particles ): i = lindices.data[indexi] exportLocalids.append( i ) exportGlobalids.append( gindices.data[indexi] ) exportProcs.append( nbrproc ) # set the
numParticleExport for each array for pa_index in range(narrays): pa_exchange = self.pa_exchanges[pa_index] pa_exchange.numParticleExport = pa_exchange.exportParticleProcs.length def load_balance(self): """Use Zoltan to generate import/export lists for the cells. For the Zoltan interface, we require to populate a user defined struct with appropriate data for Zoltan to deal with. For the geometric based partitioners for example, we require the unique cell global indices and arrays for the cell centroid coordinates. Computation of this data must be done prior to calling 'pz.Zoltan_LB_Balance' through a call to 'set_data' """ cdef PyZoltan pz = self.pz # set the data which will be used by the Zoltan wrapper self.set_data() # call Zoltan to get the cell import/export lists self.changes = pz.Zoltan_LB_Balance() # copy the Zoltan export lists to the cell lists self.numCellExport = pz.numExport self.exportCellGlobalids.resize( pz.numExport ) self.exportCellGlobalids.copy_subset( pz.exportGlobalids ) self.exportCellLocalids.resize( pz.numExport ) self.exportCellLocalids.copy_subset( pz.exportLocalids ) self.exportCellProcs.resize( pz.numExport ) self.exportCellProcs.copy_subset( pz.exportProcs ) # copy the Zoltan import lists to the cell lists self.numCellImport = pz.numImport self.importCellGlobalids.resize( pz.numImport ) self.importCellGlobalids.copy_subset( pz.importGlobalids ) self.importCellLocalids.resize( pz.numImport ) self.importCellLocalids.copy_subset( pz.importLocalids ) self.importCellProcs.resize( pz.numImport ) self.importCellProcs.copy_subset( pz.importProcs ) def migrate_particles(self): raise NotImplementedError("ZoltanParallelManager::migrate_particles called") cdef class ZoltanParallelManagerGeometric(ZoltanParallelManager): """Zoltan enabled parallel manager for use with geometric load balancing. Use this class for the Zoltan RCB, RIB and HSFC load balancing algorithms. 
""" def __init__(self, int dim, list particles, object comm, double radius_scale=2.0, int ghost_layers=2, domain=None, str lb_method='RCB', str keep_cuts="1", str obj_weight_dim="1", bint update_cell_sizes=True ): # initialize the base class super(ZoltanParallelManagerGeometric, self).__init__( dim, particles, comm, radius_scale, ghost_layers, domain, update_cell_sizes) # concrete implementation of a PyZoltan class self.pz = ZoltanGeometricPartitioner( dim, self.comm, self.cx, self.cy, self.cz, self.cell_gid, obj_weight_dim=obj_weight_dim, keep_cuts=keep_cuts, lb_method=lb_method) def set_zoltan_rcb_lock_directions(self): """Set the Zoltan flag to fix the directions of the RCB cuts""" self.pz.set_rcb_lock_directions('1') def set_zoltan_rcb_reuse(self): """Use previous cuts as guesses for the current RCB cut""" self.pz.set_rcb_reuse('1') def set_zoltan_rcb_rectilinear_blocks(self): """Flag to control the shape of the RCB regions """ self.pz.set_rcb_rectilinear_blocks('1') def set_zoltan_rcb_directions(self, value): """Option to group the cuts along a given direction Legal values (refer to the Zoltan User Guide): '0' = don't order cuts; '1' = xyz '2' = xzy '3' = yzx '4' = yxz '5' = zxy '6' = zyx """ self.pz.set_rcb_directions(value) def set_data(self): """Set the user defined particle data structure for Zoltan. For the geometric based partitioners, Zoltan requires as input the number of local/global objects, unique global indices for the local objects and coordinate values for the local objects. 
""" cdef ZoltanGeometricPartitioner pz = self.pz cdef list cell_list = self.cell_list cdef int num_local_objects = pz.num_local_objects cdef double num_global_objects1 = 1.0/pz.num_global_objects cdef double weight cdef DoubleArray x = self.cx cdef DoubleArray y = self.cy cdef DoubleArray z = self.cz # the weights array for PyZoltan cdef DoubleArray weights = pz.weights cdef int i cdef Cell cell cdef cPoint centroid # resize the coordinate and PyZoltan weight arrays x.resize( num_local_objects ) y.resize( num_local_objects ) z.resize( num_local_objects ) weights.resize( num_local_objects ) # populate the arrays for i in range( num_local_objects ): cell = cell_list[ i ] centroid = cell.centroid # weights are defined as cell.size/num_total weights.data[i] = num_global_objects1 * cell.size x.data[i] = centroid.x y.data[i] = centroid.y z.data[i] = centroid.z def migrate_particles(self): """Update an existing partition""" cdef PyZoltan pz = self.pz cdef int pa_index, narrays=self.narrays cdef double xi, yi, zi cdef NNPSParticleArrayWrapper pa_wrapper cdef ParticleArrayExchange pa_exchange cdef UIntArray exportGlobalids, exportLocalids cdef IntArray exportParticleProcs cdef DoubleArray x, y, z cdef UIntArray gid cdef ZOLTAN_ID_TYPE local_id, global_id cdef int num_particles, i, proc cdef int rank = self.rank # iterate over all arrays for pa_index in range(narrays): pa_exchange = self.pa_exchanges[ pa_index ] pa_wrapper = self.pa_wrappers[ pa_index ] # reset the export lists pa_exchange.reset_lists() exportGlobalids = pa_exchange.exportParticleGlobalids exportLocalids = pa_exchange.exportParticleLocalids exportProcs = pa_exchange.exportParticleProcs # particle arrays x = pa_wrapper.x; y = pa_wrapper.y; z = pa_wrapper.z gid = pa_wrapper.gid num_particles = self.num_local[pa_index] for i in range(num_particles): xi = x.data[i]; yi = y.data[i]; zi = z.data[i] # find the processor to which this point belongs proc = pz.Zoltan_Point_PP_Assign(xi, yi, zi) if proc != rank: 
exportGlobalids.append( gid.data[i] ) exportLocalids.append( i ) exportProcs.append(proc) # exchange the data given the export lists pa_exchange.numParticleExport = exportProcs.length pa_exchange.lb_exchange_data() pysph-master/pysph/parallel/tests/000077500000000000000000000000001356347341600176275ustar00rootroot00000000000000pysph-master/pysph/parallel/tests/__init__.py000066400000000000000000000000001356347341600217260ustar00rootroot00000000000000pysph-master/pysph/parallel/tests/cavity.py000066400000000000000000000002761356347341600215050ustar00rootroot00000000000000# We do this as we are not interested in the post-processing step. if __name__ == '__main__': from pysph.examples.cavity import LidDrivenCavity app = LidDrivenCavity() app.run() pysph-master/pysph/parallel/tests/check_dump_load.py000066400000000000000000000025131356347341600233030ustar00rootroot00000000000000"""Test if dumping and loading files works in parallel correctly. """ import mpi4py.MPI as mpi import numpy as np from os.path import join import shutil from tempfile import mkdtemp from pysph.base.particle_array import ParticleArray from pysph.solver.utils import dump, load def assert_lists_same(l1, l2): expect = list(sorted(l1)) result = list(sorted(l2)) assert expect == result, "Expected %s, got %s" % (expect, result) def main(): comm = mpi.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() root = mkdtemp() filename = join(root, 'test.npz') x = np.ones(5, dtype=float)*rank pa = ParticleArray(name='fluid', constants={'c1': 0.0, 'c2': [0.0, 0.0]}, x=x) try: dump(filename, [pa], {}, mpi_comm=comm) if rank == 0: data = load(filename) pa1 = data["arrays"]["fluid"] assert_lists_same(pa.properties.keys(), pa1.properties.keys()) assert_lists_same(pa.constants.keys(), pa1.constants.keys()) expect = np.ones(5*size) for i in range(size): expect[5*i:5*(i+1)] = i assert np.allclose(pa1.x, expect, atol=1e-14), \ "Expected %s, got %s" % (expect, pa1.x) finally: shutil.rmtree(root) if __name__ == 
'__main__': main() pysph-master/pysph/parallel/tests/elliptical_drop.py000066400000000000000000000003051356347341600233450ustar00rootroot00000000000000# We do this as we are not interested in the post-processing step. if __name__ == '__main__': from pysph.examples.elliptical_drop import EllipticalDrop app = EllipticalDrop() app.run() pysph-master/pysph/parallel/tests/example_test_case.py000066400000000000000000000126511356347341600236730ustar00rootroot00000000000000 import os import shutil import tempfile import unittest import numpy as np from pysph.solver.utils import load, get_files from pysph.tools import run_parallel_script MY_DIR = run_parallel_script.get_directory(__file__) def get_example_script(script): """Given a relative posix path to a script located inside the pysph.examples package, return the full path to it. """ ex_dir = os.path.join(os.path.dirname(os.path.dirname(MY_DIR)), 'examples') return os.path.join(ex_dir, *script.split('/')) class ExampleTestCase(unittest.TestCase): """ A script to run an example in serial and parallel and compare results. The comparison is performed in the _test function and currently just checks the positions of the particles and tests if they are close. To test an example in parallel, subclass from ExampleTest and write a test function like so: def test_elliptical_drop(self): dt = 1e-5; tf = 100*dt serial_kwargs = dict(timestep=dt, tf=tf, ghost_layers=1, lb_freq=5) self.run_example( 'dambreak3D.py', nprocs=4, timeout=900, serial_kwargs=serial_kwargs ) """ def _kwargs_to_command_line(self, kwargs): """Convert a dictionary of keyword arguments to a list of command-line options. 
""" def _key_to_option(arg): return arg.replace('_', '-') cmd_line = [] for key, value in kwargs.items(): option = _key_to_option(key) if value is None: arg = "--{option}".format(option=option) else: arg = "--{option}={value}".format( option=option, value=str(value) ) cmd_line.append(arg) return cmd_line def run_example(self, filename, nprocs=2, timeout=300, atol=1e-14, serial_kwargs=None, extra_parallel_kwargs=None): """Run an example and compare the results in serial and parallel. Parameters: ----------- filename : str The name of the file to run nprocs : int Number of processors to use for the parallel run. timeout : float Time in seconds to wait for execution before an error is raised. atol: float Absolute tolerance for differences between the runs. serial_kwargs : dict The options to pass for a serial run. Note that if the value of a particular key is None, the option is simply set and no value passed. For example if `openmp=None`, then `--openmp` is used. extra_parallel_kwargs: dict The extra options to pass for the parallel run. 
""" if serial_kwargs is None: serial_kwargs = {} if extra_parallel_kwargs is None: extra_parallel_kwargs = {} parallel_kwargs = dict(serial_kwargs) parallel_kwargs.update(extra_parallel_kwargs) prefix = os.path.splitext(os.path.basename(filename))[0] # dir1 is for the serial run dir1 = tempfile.mkdtemp() serial_kwargs.update(fname=prefix, directory=dir1) # dir2 is for the parallel run dir2 = tempfile.mkdtemp() parallel_kwargs.update(fname=prefix, directory=dir2) serial_args = self._kwargs_to_command_line(serial_kwargs) parallel_args = self._kwargs_to_command_line(parallel_kwargs) try: # run the example script in serial run_parallel_script.run( filename=filename, args=serial_args, nprocs=1, timeout=timeout, path=MY_DIR ) # run the example script in parallel run_parallel_script.run( filename=filename, args=parallel_args, nprocs=nprocs, timeout=timeout, path=MY_DIR ) # get the serial and parallel results dir1path = os.path.abspath(dir1) dir2path = os.path.abspath(dir2) # load the serial output file = get_files(dirname=dir1path, fname=prefix)[-1] serial = load(file) serial = serial['arrays']['fluid'] # load the parallel output file = get_files(dirname=dir2path, fname=prefix)[-1] parallel = load(file) parallel = parallel['arrays']['fluid'] finally: shutil.rmtree(dir1, True) shutil.rmtree(dir2, True) # test self._test(serial, parallel, atol, nprocs) def _test(self, serial, parallel, atol, nprocs): # make sure the arrays are at the same time self.assertAlmostEqual( serial.time, parallel.time, 12) # test the results. xs, ys, zs, rhos = serial.get("x","y", "z", "rho") xp, yp, zp, rhop, gid = parallel.get("x", "y", "z", "rho", 'gid') if nprocs == 1: # Not really a parallel run (used for openmp support). 
gid = np.arange(xs.size) self.assertEqual( xs.size, xp.size ) x_err = np.max(xs[gid] - xp) y_err = np.max(ys[gid] - yp) z_err = np.max(zs[gid] - zp) self.assertTrue(np.allclose(xs[gid], xp, atol=atol, rtol=0), "Max x_error %s"%x_err) self.assertTrue(np.allclose(ys[gid], yp, atol=atol, rtol=0), "Max y_error %s"%y_err) self.assertTrue(np.allclose(zs[gid], zp, atol=atol, rtol=0), "Max z_error %s"%z_err) pysph-master/pysph/parallel/tests/lb_exchange.py000066400000000000000000000116501356347341600224430ustar00rootroot00000000000000"""Test the ParticleArrayExchange object with the following data 25 particles are created on a rectangular grid with the following processor assignment: 2---- 3---- 3---- 3---- 3 | | | | | 1---- 1---- 1---- 2---- 2 | | | | | 0---- 0---- 1---- 1---- 1 | | | | | 0---- 0---- 0---- 0---- 0 | | | | | 0---- 0---- 0---- 0---- 0 We assume that after a load balancing step, we have the following distribution: 1---- 1---- 1---- 3---- 3 | | | | | 1---- 1---- 1---- 3---- 3 | | | | | 0---- 0---- 0---- 3---- 3 | | | | | 0---- 0---- 2---- 2---- 2 | | | | | 0---- 0---- 2---- 2---- 2 We create a ParticleArray to represent the initial distribution and use ParticleArrayExchange to move the data by manually setting the particle import/export lists. We require that the test be run with 4 processors. 
""" import mpi4py.MPI as mpi import numpy as np from pysph.parallel.parallel_manager import ParticleArrayExchange from pysph.base.utils import get_particle_array_wcsph def main(): comm = mpi.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() if size != 4: if rank == 0: raise RuntimeError("Run this test with 4 processors") # create the initial distribution if rank == 0: numPoints = 12 x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 0.0, 1.0, 2.0, 3.0, 4.0, 0.0, 1.0]) y = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 2.0, 2.0]) gid = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], dtype=np.uint32) exportLocalids = np.array([2, 3, 4, 7, 8, 9], dtype=np.uint32) exportProcs = np.array([2, 2, 2, 2, 2, 2], dtype=np.int32) numExport = 6 parts = np.ones(shape=numPoints, dtype=np.int32) * rank if rank == 1: numPoints = 6 x = np.array([2.0, 3.0, 4.0, 0.0, 1.0, 2.0]) y = np.array([2.0, 2.0, 2.0, 3.0, 3.0, 3.0]) gid = np.array([12, 13, 14, 15, 16, 17], dtype=np.uint32) exportLocalids = np.array([0, 1, 2], dtype=np.uint32) exportProcs = np.array([0, 3, 3], dtype=np.int32) numExport = 3 parts = np.ones(shape=numPoints, dtype=np.int32) * rank if rank == 2: numPoints = 3 x = np.array([4.0, 5.0, 0.0]) y = np.array([3.0, 3.0, 4.0]) gid = np.array([18, 19, 20], dtype=np.uint32) exportLocalids = np.array([0, 1, 2], dtype=np.uint32) exportProcs = np.array([3, 3, 1], dtype=np.int32) numExport = 3 parts = np.ones(shape=numPoints, dtype=np.int32) * rank if rank == 3: numPoints = 4 x = np.array([1.0, 2.0, 3.0, 4.0]) y = np.array([4.0, 4.0, 4.0, 4.0]) gid = np.array([21, 22, 23, 24], dtype=np.uint32) exportLocalids = np.array([0, 1], dtype=np.uint32) exportProcs = np.array([1, 1], dtype=np.int32) numExport = 2 parts = np.ones(shape=numPoints, dtype=np.int32) * rank # Gather the Global data on root X = np.zeros(shape=25, dtype=np.float64) Y = np.zeros(shape=25, dtype=np.float64) GID = np.zeros(shape=25, dtype=np.uint32) displacements = np.array([12, 6, 3, 4], dtype=np.int32) 
comm.Gatherv( sendbuf=[x, mpi.DOUBLE], recvbuf=[X, (displacements, None)], root=0) comm.Gatherv( sendbuf=[y, mpi.DOUBLE], recvbuf=[Y, (displacements, None)], root=0) comm.Gatherv( sendbuf=[gid, mpi.UNSIGNED_INT], recvbuf=[GID, (displacements, None)], root=0) # broadcast global X, Y and GID to everyone comm.Bcast(buf=X, root=0) comm.Bcast(buf=Y, root=0) comm.Bcast(buf=GID, root=0) # create the local particle arrays and exchange objects pa = get_particle_array_wcsph(name='test', x=x, y=y, gid=gid) pae = ParticleArrayExchange(pa_index=0, pa=pa, comm=comm) # set the export indices for each array pae.reset_lists() pae.numParticleExport = numExport pae.exportParticleLocalids.resize(numExport) pae.exportParticleLocalids.set_data(exportLocalids) pae.exportParticleProcs.resize(numExport) pae.exportParticleProcs.set_data(exportProcs) # call lb_balance with these lists pae.lb_exchange_data() # All arrays must be local after lb_exchange_data assert (pa.num_real_particles == pa.get_number_of_particles()) assert (np.allclose(pa.tag, 0)) # now check the data on each array numParticles = 6 if rank == 0: numParticles = 7 assert (pa.num_real_particles == numParticles) for i in range(numParticles): assert (abs(X[pa.gid[i]] - pa.x[i]) < 1e-15) assert (abs(Y[pa.gid[i]] - pa.y[i]) < 1e-15) assert (GID[pa.gid[i]] == pa.gid[i]) if __name__ == '__main__': main() pysph-master/pysph/parallel/tests/reduce_array.py000066400000000000000000000014031356347341600226440ustar00rootroot00000000000000"""Test if the mpi_reduce_array function works correctly. 
""" import mpi4py.MPI as mpi import numpy as np from pysph.base.reduce_array import serial_reduce_array, mpi_reduce_array def main(): comm = mpi.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() n = 5 data = np.ones(n)*(rank + 1) full_data = [] for i in range(size): full_data = np.concatenate([full_data, np.ones(n)*(i+1)]) for op in ('sum', 'prod', 'min', 'max'): serial_data = serial_reduce_array(data, op) result = mpi_reduce_array(serial_data, op) expect = getattr(np, op)(full_data) msg = "For op %s: Expected %s, got %s" % (op, expect, result) assert expect == result, msg if __name__ == '__main__': main() pysph-master/pysph/parallel/tests/remote_exchange.py000066400000000000000000000115541356347341600233440ustar00rootroot00000000000000"""Test the remote exchange data of ParticleArrayExchange We assume that after a load balancing step, we have the following distribution: 1---- 1---- 1---- 3---- 3 | | | | | 1---- 1---- 1---- 3---- 3 | | | | | 0---- 0---- 0---- 3---- 3 | | | | | 0---- 0---- 2---- 2---- 2 | | | | | 0---- 0---- 2---- 2---- 2 Remote neighbors are those nodes which share an edge with a node belonging to another processor. For example, the number of remote neighbors for processor 0 in this case is 6. We create a ParticleArray to represent the initial distribution and use ParticleArrayExchange to move the data by manually setting the particle import/export lists. We require that the test be run with 4 processors. 
""" import mpi4py.MPI as mpi import numpy as np from pysph.parallel.parallel_manager import ParticleArrayExchange from pysph.base.utils import get_particle_array_wcsph def main(): comm = mpi.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() if size != 4: if rank == 0: raise RuntimeError("Run this test with 4 processors") # create the data gid0 = np.array([0, 1, 5, 6, 10, 11, 12], dtype=np.uint32) gid1 = np.array([15, 16, 17, 20, 21, 22], dtype=np.uint32) gid2 = np.array([2, 3, 4, 7, 8, 9], dtype=np.uint32) gid3 = np.array([13, 14, 18, 19, 23, 24], dtype=np.uint32) if rank == 0: numPoints = 7 x = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 2.0]) y = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 2.0]) gid = gid0 exportLocalids = np.array([1, 3, 4, 5, 6, 6, 6], dtype=np.uint32) exportProcs = np.array([2, 2, 1, 1, 1, 2, 3], dtype=np.int32) numExport = 7 numRemote = 6 elif rank == 1: numPoints = 6 x = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0]) y = np.array([3.0, 3.0, 3.0, 4.0, 4.0, 4.0]) gid = gid1 exportLocalids = np.array([0, 1, 2, 2, 5], dtype=np.uint32) exportProcs = np.array([0, 0, 0, 3, 3], dtype=np.int32) numExport = 5 numRemote = 5 elif rank == 2: numPoints = 6 x = np.array([2.0, 3.0, 4.0, 2.0, 3.0, 4.0]) y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0]) gid = gid2 exportLocalids = np.array([0, 3, 4, 5], dtype=np.uint32) exportProcs = np.array([0, 0, 3, 3], dtype=np.int32) numExport = 4 numRemote = 5 elif rank == 3: numPoints = 6 x = np.array([3.0, 4.0, 3.0, 4.0, 3.0, 4.0]) y = np.array([2.0, 2.0, 3.0, 3.0, 4.0, 4.0]) gid = gid3 exportLocalids = np.array([0, 0, 1, 2, 4], dtype=np.uint32) exportProcs = np.array([0, 2, 2, 1, 1], dtype=np.int32) numExport = 5 numRemote = 5 stride = 2 a = np.ones(x.shape[0]*stride)*rank # Global data X = np.array([0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4], dtype=np.float64) Y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4], dtype=np.float64) A = np.ones(X.shape[0]*stride) 
A[::stride][gid0] = 0 A[1::stride][gid0] = 0 A[::stride][gid1] = 1 A[1::stride][gid1] = 1 A[::stride][gid2] = 2 A[1::stride][gid2] = 2 A[::stride][gid3] = 3 A[1::stride][gid3] = 3 GID = np.array(range(25), dtype=np.uint32) # create the local particle arrays and exchange objects pa = get_particle_array_wcsph(name='test', x=x, y=y, gid=gid) pa.add_property('a', data=a, stride=stride) pae = ParticleArrayExchange(pa_index=0, pa=pa, comm=comm) # set the export indices for each array pae.reset_lists() pae.numParticleExport = numExport pae.exportParticleLocalids.resize(numExport) pae.exportParticleLocalids.set_data(exportLocalids) pae.exportParticleProcs.resize(numExport) pae.exportParticleProcs.set_data(exportProcs) # call remote_exchange data with these lists pae.remote_exchange_data() # the added particles should be remote tag = pa.get('tag', only_real_particles=False) assert (pa.num_real_particles == numPoints) assert (pa.get_number_of_particles() == numPoints + numRemote) assert (np.allclose(tag[numPoints:], 1)) # now check the data on each array numParticles = numPoints + numRemote x, y, a, gid = pa.get('x', 'y', 'a', 'gid', only_real_particles=False) np.testing.assert_array_almost_equal(X[gid], x) np.testing.assert_array_almost_equal(Y[gid], y) np.testing.assert_array_equal(GID[gid], gid) np.testing.assert_array_almost_equal( A[::stride][gid], a[::2] ) np.testing.assert_array_almost_equal( A[1::stride][gid], a[1::2] ) if __name__ == '__main__': main() pysph-master/pysph/parallel/tests/simple_reduction.py000066400000000000000000000034221356347341600235470ustar00rootroot00000000000000import numpy as np from pysph.base.kernels import CubicSpline from pysph.base.particle_array import ParticleArray from pysph.sph.equation import Equation from pysph.sph.integrator_step import IntegratorStep from pysph.sph.integrator import EulerIntegrator from pysph.solver.application import Application from pysph.solver.solver import Solver from pysph.base.reduce_array import 
parallel_reduce_array, serial_reduce_array def create_particles(): x = np.linspace(0, 10, 10) m = np.ones_like(x) y = np.zeros_like(x) z = np.zeros_like(x) h = np.ones_like(x) * 0.2 fluid = ParticleArray(name='fluid', x=x, y=y, z=z, m=m, h=h) fluid.add_constant('total_mass', 0.0) return [fluid] class TotalMass(Equation): def reduce(self, dst, t, dt): m = serial_reduce_array(dst.m, op='sum') dst.total_mass[0] = parallel_reduce_array(m, op='sum') class DummyStepper(IntegratorStep): def initialize(self): pass def stage1(self): pass def main(): # Create the application. app = Application() dim = 1 # Create the kernel kernel = CubicSpline(dim=dim) # Create the integrator. integrator = EulerIntegrator(fluid=DummyStepper()) solver = Solver(kernel=kernel, dim=dim, integrator=integrator) solver.set_time_step(0.1) solver.set_final_time(0.1) equations = [TotalMass(dest='fluid', sources=['fluid'])] app.setup( solver=solver, equations=equations, particle_factory=create_particles) # There is no need to write any output as the test below # computes the total mass. 
solver.set_disable_output(True) app.run() fluid = solver.particles[0] err = fluid.total_mass[0] - 10.0 assert abs(err) < 1e-16, "Error: %s" % err if __name__ == '__main__': main() pysph-master/pysph/parallel/tests/summation_density.py000066400000000000000000000210041356347341600237510ustar00rootroot00000000000000"""Summation density example using the cell based NNPS partitioner.""" import mpi4py.MPI as mpi import numpy from numpy import random # Carray from PyZoltan from cyarray.carray import UIntArray # PySPH imports from pysph.base.nnps import BoxSortNNPS from pysph.parallel.parallel_manager import ZoltanParallelManagerGeometric from pysph.base.utils import get_particle_array_wcsph from pysph.base.kernels import CubicSpline, get_compiled_kernel """Utility to compute summation density""" dim = 3 def sd_evaluate(nnps, pm, mass, src_index, dst_index): # the destination particle array dst = nnps.particles[dst_index] src = nnps.particles[src_index] # particle coordinates dx, dy, dz, dh, drho = dst.get( 'x', 'y', 'z', 'h', 'rho', only_real_particles=True) sx, sy, sz, sh, srho = src.get( 'x', 'y', 'z', 'h', 'rho', only_real_particles=False) neighbors = UIntArray() cubic = get_compiled_kernel(CubicSpline(dim=dim)) # compute density for each destination particle num_particles = dst.num_real_particles # the number of local particles should have tag Local assert (num_particles == pm.num_local[dst_index]) for i in range(num_particles): hi = dh[i] nnps.get_nearest_particles(src_index, dst_index, i, neighbors) nnbrs = neighbors.length rho_sum = 0.0 for indexj in range(nnbrs): j = neighbors[indexj] wij = cubic.kernel(dx[i], dy[i], dz[i], sx[j], sy[j], sz[j], hi) rho_sum = rho_sum + mass * wij drho[i] += rho_sum def main(): # Initialize MPI and find out number of local particles comm = mpi.COMM_WORLD rank = comm.Get_rank() size = comm.Get_size() # number of particles per array numMyPoints = 1 << 10 numGlobalPoints = size * numMyPoints avg_vol = 1.0 / numGlobalPoints dx = 
numpy.power(avg_vol, 1.0 / dim) mass = avg_vol hdx = 1.3 if numGlobalPoints % size != 0: raise RuntimeError("Run with 2^n num procs!") # everybody creates two particle arrays with numMyPoints x1 = random.random(numMyPoints) y1 = random.random(numMyPoints) z1 = random.random(numMyPoints) h1 = numpy.ones_like(x1) * hdx * dx rho1 = numpy.zeros_like(x1) x2 = random.random(numMyPoints) y2 = random.random(numMyPoints) z2 = random.random(numMyPoints) h2 = numpy.ones_like(x2) * hdx * dx rho2 = numpy.zeros_like(x2) # z1[:] = 1.0 # z2[:] = 0.5 # local particle arrays pa1 = get_particle_array_wcsph(x=x1, y=y1, h=h1, rho=rho1, z=z1) pa2 = get_particle_array_wcsph(x=x2, y=y2, h=h2, rho=rho2, z=z2) # gather the data on root X1 = numpy.zeros(numGlobalPoints) Y1 = numpy.zeros(numGlobalPoints) Z1 = numpy.zeros(numGlobalPoints) H1 = numpy.ones_like(X1) * hdx * dx RHO1 = numpy.zeros_like(X1) gathers = (numpy.ones(size) * numMyPoints, None) comm.Gatherv(sendbuf=[x1, mpi.DOUBLE], recvbuf=[X1, gathers, mpi.DOUBLE]) comm.Gatherv(sendbuf=[y1, mpi.DOUBLE], recvbuf=[Y1, gathers, mpi.DOUBLE]) comm.Gatherv(sendbuf=[z1, mpi.DOUBLE], recvbuf=[Z1, gathers, mpi.DOUBLE]) comm.Gatherv( sendbuf=[rho1, mpi.DOUBLE], recvbuf=[RHO1, gathers, mpi.DOUBLE] ) X2 = numpy.zeros(numGlobalPoints) Y2 = numpy.zeros(numGlobalPoints) Z2 = numpy.zeros(numGlobalPoints) H2 = numpy.ones_like(X2) * hdx * dx RHO2 = numpy.zeros_like(X2) comm.Gatherv(sendbuf=[x2, mpi.DOUBLE], recvbuf=[X2, gathers, mpi.DOUBLE]) comm.Gatherv(sendbuf=[y2, mpi.DOUBLE], recvbuf=[Y2, gathers, mpi.DOUBLE]) comm.Gatherv(sendbuf=[z2, mpi.DOUBLE], recvbuf=[Z2, gathers, mpi.DOUBLE]) comm.Gatherv( sendbuf=[rho2, mpi.DOUBLE], recvbuf=[RHO2, gathers, mpi.DOUBLE] ) # create the particle arrays and PM PA1 = get_particle_array_wcsph(x=X1, y=Y1, z=Z1, h=H1, rho=RHO1) PA2 = get_particle_array_wcsph(x=X2, y=Y2, z=Z2, h=H2, rho=RHO2) # create the parallel manager PARTICLES = [PA1, PA2] PM = ZoltanParallelManagerGeometric( dim=dim, particles=PARTICLES, 
comm=comm ) # create the local NNPS object with all the particles Nnps = BoxSortNNPS(dim=dim, particles=PARTICLES) Nnps.update() # only root computes summation density if rank == 0: assert numpy.allclose(PA1.rho, 0) sd_evaluate(Nnps, PM, mass, src_index=1, dst_index=0) sd_evaluate(Nnps, PM, mass, src_index=0, dst_index=0) RHO1 = PA1.rho assert numpy.allclose(PA2.rho, 0) sd_evaluate(Nnps, PM, mass, src_index=0, dst_index=1) sd_evaluate(Nnps, PM, mass, src_index=1, dst_index=1) RHO2 = PA2.rho # wait for the root... comm.barrier() # create the local particle arrays particles = [pa1, pa2] # create the local nnps object and parallel manager pm = ZoltanParallelManagerGeometric( dim=dim, comm=comm, particles=particles ) nnps = BoxSortNNPS(dim=dim, particles=particles) # set the Zoltan parameters (Optional) pz = pm.pz pz.set_lb_method("RCB") pz.Zoltan_Set_Param("DEBUG_LEVEL", "0") # Update the parallel manager (distribute particles) pm.update() # update the local nnps nnps.update() # Compute summation density individually on each processor sd_evaluate(nnps, pm, mass, src_index=0, dst_index=1) sd_evaluate(nnps, pm, mass, src_index=1, dst_index=1) sd_evaluate(nnps, pm, mass, src_index=0, dst_index=0) sd_evaluate(nnps, pm, mass, src_index=1, dst_index=0) # gather the density and global ids rho1 = pa1.rho tmp = comm.gather(rho1) if rank == 0: global_rho1 = numpy.concatenate(tmp) assert (global_rho1.size == numGlobalPoints) rho2 = pa2.rho tmp = comm.gather(rho2) if rank == 0: global_rho2 = numpy.concatenate(tmp) assert (global_rho2.size == numGlobalPoints) # gather global x1 and y1 x1 = pa1.x tmp = comm.gather(x1) if rank == 0: global_x1 = numpy.concatenate(tmp) assert (global_x1.size == numGlobalPoints) y1 = pa1.y tmp = comm.gather(y1) if rank == 0: global_y1 = numpy.concatenate(tmp) assert (global_y1.size == numGlobalPoints) z1 = pa1.z tmp = comm.gather(z1) if rank == 0: global_z1 = numpy.concatenate(tmp) assert (global_z1.size == numGlobalPoints) # gather global x2 and y2 x2 
= pa2.x tmp = comm.gather(x2) if rank == 0: global_x2 = numpy.concatenate(tmp) assert (global_x2.size == numGlobalPoints) y2 = pa2.y tmp = comm.gather(y2) if rank == 0: global_y2 = numpy.concatenate(tmp) assert (global_y2.size == numGlobalPoints) z2 = pa2.z tmp = comm.gather(z2) if rank == 0: global_z2 = numpy.concatenate(tmp) assert (global_z2.size == numGlobalPoints) # gather global indices gid1 = pa1.gid tmp = comm.gather(gid1) if rank == 0: global_gid1 = numpy.concatenate(tmp) assert (global_gid1.size == numGlobalPoints) gid2 = pa2.gid tmp = comm.gather(gid2) if rank == 0: global_gid2 = numpy.concatenate(tmp) assert (global_gid2.size == numGlobalPoints) # check rho1 if rank == 0: # make sure the arrays are of the same size assert (global_x1.size == X1.size) assert (global_y1.size == Y1.size) assert (global_z1.size == Z1.size) for i in range(numGlobalPoints): # make sure we're checking the right point assert abs(global_x1[i] - X1[global_gid1[i]]) < 1e-14 assert abs(global_y1[i] - Y1[global_gid1[i]]) < 1e-14 assert abs(global_z1[i] - Z1[global_gid1[i]]) < 1e-14 diff = abs(global_rho1[i] - RHO1[global_gid1[i]]) condition = diff < 1e-14 assert condition, "diff = %g" % (diff) # check rho2 if rank == 0: # make sure the arrays are of the same size assert (global_x2.size == X2.size) assert (global_y2.size == Y2.size) assert (global_z2.size == Z2.size) for i in range(numGlobalPoints): # make sure we're checking the right point assert abs(global_x2[i] - X2[global_gid2[i]]) < 1e-14 assert abs(global_y2[i] - Y2[global_gid2[i]]) < 1e-14 assert abs(global_z2[i] - Z2[global_gid2[i]]) < 1e-14 diff = abs(global_rho2[i] - RHO2[global_gid2[i]]) condition = diff < 1e-14 assert condition, "diff = %g" % (diff) if rank == 0: print("Summation density test: OK") if __name__ == '__main__': main() pysph-master/pysph/parallel/tests/test_openmp.py000066400000000000000000000037411356347341600225430ustar00rootroot00000000000000""" Module to run the example files and report their 
success/failure results. Add a function to the ExampleTest class corresponding to an example script to be tested. This is done until a better strategy for parallel testing is implemented. """ from pytest import mark from .example_test_case import ExampleTestCase, get_example_script from pysph.base.nnps import get_number_of_threads @mark.skipif(get_number_of_threads() == 1, reason= "N_threads=1; OpenMP does not seem available.") class TestOpenMPExamples(ExampleTestCase): @mark.slow def test_3Ddam_break_example(self): dt = 2e-5; tf = 13*dt serial_kwargs = dict( timestep=dt, tf=tf, pfreq=100, test=None ) extra_parallel_kwargs = dict(openmp=None) # Note that we set nprocs=1 here since we do not want # to run this with mpirun. self.run_example( get_example_script('sphysics/dambreak_sphysics.py'), nprocs=1, atol=1e-14, serial_kwargs=serial_kwargs, extra_parallel_kwargs=extra_parallel_kwargs ) @mark.slow def test_elliptical_drop_example(self): tf = 0.0076*0.25 serial_kwargs = dict(kernel='CubicSpline', tf=tf) extra_parallel_kwargs = dict(openmp=None) # Note that we set nprocs=1 here since we do not want # to run this with mpirun. self.run_example( 'elliptical_drop.py', nprocs=1, atol=1e-14, serial_kwargs=serial_kwargs, extra_parallel_kwargs=extra_parallel_kwargs ) def test_ldcavity_example(self): dt=1e-4; tf=200*dt serial_kwargs = dict(timestep=dt, tf=tf, pfreq=500) extra_parallel_kwargs = dict(openmp=None) # Note that we set nprocs=1 here since we do not want # to run this with mpirun. 
self.run_example( 'cavity.py', nprocs=1, atol=1e-14, serial_kwargs=serial_kwargs, extra_parallel_kwargs=extra_parallel_kwargs ) pysph-master/pysph/parallel/tests/test_parallel.py000066400000000000000000000055531356347341600230440ustar00rootroot00000000000000"""Tests for the PySPH parallel module""" import shutil import tempfile import unittest import numpy as np from pytest import mark, importorskip from pysph.tools import run_parallel_script path = run_parallel_script.get_directory(__file__) class ParticleArrayTestCase(unittest.TestCase): @classmethod def setUpClass(cls): importorskip("pysph.parallel.parallel_manager") def test_get_strided_indices(self): # Given from pysph.parallel.parallel_manager import get_strided_indices indices = np.array([1, 5, 3]) # When idx = get_strided_indices(indices, 1) # Then np.testing.assert_array_equal(idx, indices) # When idx = get_strided_indices(indices, 2) # Then np.testing.assert_array_equal( idx, [2, 3, 10, 11, 6, 7] ) # When idx = get_strided_indices(indices, 3) # Then np.testing.assert_array_equal( idx, [3, 4, 5, 15, 16, 17, 9, 10, 11] ) class ParticleArrayExchangeTestCase(unittest.TestCase): @classmethod def setUpClass(cls): importorskip("mpi4py.MPI") importorskip("pyzoltan.core.zoltan") @mark.parallel def test_lb_exchange(self): run_parallel_script.run(filename='lb_exchange.py', nprocs=4, path=path) @mark.parallel def test_remote_exchange(self): run_parallel_script.run( filename='remote_exchange.py', nprocs=4, path=path ) class SummationDensityTestCase(unittest.TestCase): @classmethod def setUpClass(cls): importorskip("mpi4py.MPI") importorskip("pyzoltan.core.zoltan") @mark.slow @mark.parallel def test_summation_density(self): run_parallel_script.run( filename='summation_density.py', nprocs=4, path=path ) class MPIReduceArrayTestCase(unittest.TestCase): @classmethod def setUpClass(cls): importorskip("mpi4py.MPI") importorskip("pyzoltan.core.zoltan") def setUp(self): self.root = tempfile.mkdtemp() def tearDown(self): 
shutil.rmtree(self.root) @mark.parallel def test_mpi_reduce_array(self): run_parallel_script.run( filename='reduce_array.py', nprocs=4, path=path ) @mark.parallel def test_parallel_reduce(self): args = ['--directory=%s' % self.root] run_parallel_script.run( filename='simple_reduction.py', args=args, nprocs=4, path=path ) class DumpLoadTestCase(unittest.TestCase): @classmethod def setUpClass(cls): importorskip("mpi4py.MPI") importorskip("pyzoltan.core.zoltan") @mark.parallel def test_dump_and_load_work_in_parallel(self): run_parallel_script.run( filename='check_dump_load.py', nprocs=4, path=path ) if __name__ == '__main__': unittest.main() pysph-master/pysph/parallel/tests/test_parallel_run.py000066400000000000000000000035761356347341600237330ustar00rootroot00000000000000""" Module to run the example files and report their success/failure results Add a function to the ExampleTest class corresponding to an example script to be tested. This is done till better strategy for parallel testing is implemented """ from pytest import mark, importorskip from pysph.tools import run_parallel_script from pysph.parallel.tests.example_test_case import ExampleTestCase, get_example_script class ParallelTests(ExampleTestCase): @classmethod def setUpClass(cls): importorskip("mpi4py.MPI") importorskip("pyzoltan.core.zoltan") @mark.slow @mark.parallel def test_3Ddam_break_example(self): serial_kwargs = dict( max_steps=50, pfreq=200, sort_gids=None, test=None ) extra_parallel_kwargs = dict(ghost_layers=1, lb_freq=5) self.run_example( get_example_script('sphysics/dambreak_sphysics.py'), nprocs=4, atol=1e-12, serial_kwargs=serial_kwargs, extra_parallel_kwargs=extra_parallel_kwargs ) @mark.slow @mark.parallel def test_elliptical_drop_example(self): serial_kwargs = dict(sort_gids=None, kernel='CubicSpline', tf=0.0038) extra_parallel_kwargs = dict(ghost_layers=1, lb_freq=5) self.run_example( 'elliptical_drop.py', nprocs=2, atol=1e-11, serial_kwargs=serial_kwargs, 
extra_parallel_kwargs=extra_parallel_kwargs ) @mark.parallel def test_ldcavity_example(self): max_steps = 150 serial_kwargs = dict(max_steps=max_steps, pfreq=500, sort_gids=None) extra_parallel_kwargs = dict(ghost_layers=2, lb_freq=5) self.run_example( 'cavity.py', nprocs=4, atol=1e-14, serial_kwargs=serial_kwargs, extra_parallel_kwargs=extra_parallel_kwargs ) if __name__ == '__main__': import unittest unittest.main() pysph-master/pysph/solver/000077500000000000000000000000001356347341600162035ustar00rootroot00000000000000pysph-master/pysph/solver/__init__.py000066400000000000000000000000001356347341600203020ustar00rootroot00000000000000pysph-master/pysph/solver/application.py000077500000000000000000001667661356347341600211110ustar00rootroot00000000000000# Standard imports. from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter import glob import inspect import json import logging import os from os.path import (abspath, basename, dirname, isdir, join, realpath, splitext) import socket import sys from textwrap import dedent import time import numpy as np import warnings # PySPH imports. from pysph.base import utils from pysph.base.utils import is_overloaded_method from pysph.base.nnps import LinkedListNNPS, BoxSortNNPS, SpatialHashNNPS, \ ExtendedSpatialHashNNPS, CellIndexingNNPS, StratifiedHashNNPS, \ StratifiedSFCNNPS, OctreeNNPS, CompressedOctreeNNPS, ZOrderNNPS from pysph.base import kernels from compyle.config import get_config from pysph.solver.controller import CommandManager from pysph.solver.utils import mkdir, load, get_files # conditional parallel imports from pysph import has_mpi, has_zoltan, in_parallel if in_parallel(): from pysph.parallel.parallel_manager import ZoltanParallelManagerGeometric import mpi4py.MPI as mpi logger = logging.getLogger(__name__) def is_using_ipython(): """Return True if the code is being run from an IPython session or notebook. 
""" try: # If this is being run inside an IPython console or notebook # then this is defined. __IPYTHON__ except NameError: return False else: return True def list_all_kernels(): """Return list of available kernels. """ return [n for n in dir(kernels) if inspect.isclass(getattr(kernels, n))] ############################################################################## # `Application` class. ############################################################################## class Application(object): """Subclass this to run any SPH simulation. There are several important methods that this class provides. The application is typically used as follows:: class EllipticalDrop(Application): def create_particles(self): # ... def create_scheme(self): # ... ... app = EllipticalDrop() app.run() app.post_process(app.info_filename) .. py:currentmodule:: pysph.solver.application The :py:meth:`post_process` method is entirely optional and typically performs the post-processing. It is important to understand the correct sequence of the method calls. When the ``Application`` instance is created, the following methods are invoked by the :py:meth:`__init__` method: 1. :py:meth:`initialize()`: use this to setup any constants etc. 2. :py:meth:`create_scheme()`: this needs to be overridden if one wishes to use a :py:class:`pysph.sph.scheme.Scheme`. If one does not want to use a scheme, the :py:meth:`create_equations` and :py:meth:`create_solver` methods must be overridden. 3. ``self.scheme.add_user_options()``: i.e. the scheme's command line options are added, if there is a scheme. 4. :py:meth:`add_user_options()`: add any user specified command line options. When ``app.run()`` is called, the following methods are called in order: 1. ``_parse_command_line()``: this is a private method but it is important to note that the command line arguments are first parsed. 2. :py:meth:`consume_user_options()`: this is called right after the command line args are parsed. 3. 
:py:meth:`configure_scheme()`: This is where one may configure the scheme according to the passed command line arguments. 4. :py:meth:`create_solver()`: Create the solver, note that this is needed only if one has not used a scheme, otherwise, this will by default return the solver created by the scheme chosen. 5. :py:meth:`create_equations()`: Create any equations. Defaults to letting the scheme generate and return the desired equations. 6. :py:meth:`create_particles()` 7. :py:meth:`create_inlet_outlet()` 8. :py:meth:`create_domain()`: Not needed for non-periodic domains. 9. :py:meth:`create_nnps()`: Not needed unless one wishes to override the default NNPS. 10. :py:meth:`create_tools()`: Add any ``pysph.solver.tools.Tool`` instances. 11. :py:meth:`customize_output()`: Customize the output visualization. Additionally, as the application runs there are several convenient optional callbacks setup: 1. :py:meth:`pre_step`: Called before each time step. 2. :py:meth:`post_stage`: Called after every stage of the integration. 3. :py:meth:`post_step`: Called after each time step. Finally, it is a good idea to overload the :py:meth:`post_process` method to perform any post processing for the generated data. The application instance also has several important attributes, some of these are as follows: - ``args``: command line arguments, typically ``sys.argv[1:]``. - ``domain``: optional :py:class:`pysph.base.nnps_base.DomainManager` instance. - ``fname``: filename pattern to use when dumping output. - ``inlet_outlet``: list of inlet/outlets. - ``nnps``: instance of :py:class:`pysph.base.nnps_base.NNPS`. - ``num_procs``: total number of processes running. - ``output_dir``: Output directory. - ``parallel_manager``: in parallel, an instance of :py:class:`pysph.parallel.parallel_manager.ParallelManager`. - ``particles``: list of :py:class:`pysph.base.particle_array.ParticleArray`s. - ``rank``: Rank of this process. 
- ``scheme``: the optional :py:class:`pysph.sph.scheme.Scheme` instance. - ``solver``: the solver instance, :py:class:`pysph.solver.solver.Solver`. - ``tools``: a list of possible :py:class:`pysph.solver.tools.Tool`s. """ def __init__(self, fname=None, output_dir=None, domain=None): """ Constructor Parameters ---------- fname : str file name to use for the output files. output_dir : str output directory name. domain : pysph.base.nnps_base.DomainManager A domain manager to use. This is used for periodic domains etc. """ self.domain = domain self.solver = None self.nnps = None self.scheme = None self.tools = [] self.parallel_manager = None if fname is None: fname = self._guess_output_filename() self.fname = fname self.args = sys.argv[1:] # MPI related vars. self.comm = None self.num_procs = 1 self.rank = 0 if in_parallel(): if not mpi.Is_initialized(): mpi.Init() self.comm = comm = mpi.COMM_WORLD self.num_procs = comm.Get_size() self.rank = comm.Get_rank() self._log_levels = { 'debug': logging.DEBUG, 'info': logging.INFO, 'warning': logging.WARNING, 'error': logging.ERROR, 'critical': logging.CRITICAL, 'none': None } if output_dir is None: self.output_dir = abspath(self._get_output_dir_from_fname()) else: self.output_dir = output_dir self.particles = [] self.inlet_outlet = [] # The default value that is overridden by the command line # options passed or in initialize. self.cache_nnps = False self.iom = None self.initialize() self.scheme = self.create_scheme() self._setup_argparse() def _get_output_dir_from_fname(self): return self.fname + '_output' def _guess_output_filename(self): """Try to guess the output filename to use. 
""" module = self.__module__.rsplit('.', 1)[-1] if is_using_ipython(): return module else: if len(sys.argv[0]) == 0: return module else: return splitext(basename(abspath(sys.argv[0])))[0] def _setup_argparse(self): usage = '%(prog)s [options]' description = """ Note that you may run this program via MPI and the run will be automatically parallelized. To do this run:: $ mpirun -n 4 /path/to/your/python %prog [options] Replace '4' above with the number of processors you have. Below are the options you may pass. """ parser = ArgumentParser( usage=usage, description=description, formatter_class=ArgumentDefaultsHelpFormatter) self.arg_parse = parser # Add some default options. # -v valid_vals = "Valid values: %s" % self._log_levels.keys() parser.add_argument( "-v", "--loglevel", action="store", dest="loglevel", default='info', help="Log-level to use for log messages. " + valid_vals) # --logfile parser.add_argument( "--logfile", action="store", dest="logfile", default=None, help="Log file to use for logging, set to " + "empty ('') for no file logging.") # -l parser.add_argument( "-l", "--print-log", action="store_true", dest="print_log", default=False, help="Print log messages to stderr.") # --final-time parser.add_argument( "--tf", action="store", type=float, dest="final_time", default=None, help="Total time for the simulation.") # --timestep parser.add_argument( "--timestep", action="store", type=float, dest="time_step", default=None, help="Timestep to use for the simulation.") # --max-steps parser.add_argument( "--max-steps", action="store", type=int, dest="max_steps", default=1 << 31, help="Maximum number of iteration steps to take (defaults to a " "very large value).") # --n-damp parser.add_argument( "--n-damp", action="store", type=int, dest="n_damp", default=None, help="Number of iterations to damp timesteps initially.") # --adaptive-timestep parser.add_argument( "--adaptive-timestep", action="store_true", dest="adaptive_timestep", default=None, help="Use adaptive 
time stepping.") parser.add_argument( "--no-adaptive-timestep", action="store_false", dest="adaptive_timestep", default=None, help="Do not use adaptive time stepping.") # --cfl parser.add_argument( "--cfl", action="store", dest="cfl", type=float, default=0.3, help="CFL number for adaptive time steps") # -q/--quiet. parser.add_argument( "-q", "--quiet", action="store_true", dest="quiet", default=False, help="Do not print any progress information.") # --disable-output parser.add_argument( "--disable-output", action="store_true", dest="disable_output", default=False, help="Do not dump any output files.") # -o/ --fname parser.add_argument( "-o", "--fname", action="store", dest="fname", default=self.fname, help="File name to use for output") # --pfreq. parser.add_argument( "--pfreq", action="store", dest="freq", default=None, type=int, help="Printing frequency for the output") parser.add_argument( '--reorder-freq', action="store", dest="reorder_freq", default=None, type=int, help="Frequency between spatially reordering particles." ) # --detailed-output. 
parser.add_argument( "--detailed-output", action="store_true", dest="detailed_output", default=None, help="Dump detailed output.") # -z/--compress-output parser.add_argument( "-z", "--compress-output", action="store_true", dest="compress_output", default=False, help="Compress generated output files.") # --output-remote parser.add_argument( "--output-dump-remote", action="store_true", dest="output_dump_remote", default=False, help="Save Remote particles in parallel") # -d/--directory parser.add_argument( "-d", "--directory", action="store", dest="output_dir", default=self.output_dir, help="Dump output in the specified directory.") # --openmp parser.add_argument( "--openmp", action="store_true", dest="with_openmp", default=None, help="Use OpenMP to run the " "simulation using multiple cores.") parser.add_argument( "--no-openmp", action="store_false", dest="with_openmp", default=None, help="Do not use OpenMP to run the " "simulation using multiple cores.") # --omp-schedule parser.add_argument( "--omp-schedule", action="store", dest="omp_schedule", default="dynamic,64", help="""Schedule how loop iterations are divided amongst multiple threads""") # --opencl parser.add_argument( "--opencl", action="store_true", dest="with_opencl", default=False, help="Use OpenCL to run the simulation.") # --cuda parser.add_argument( "--cuda", action="store_true", dest="with_cuda", default=False, help="Use CUDA to run the simulation." 
) # --use-local-memory parser.add_argument( "--use-local-memory", action="store_true", dest="with_local_memory", default=False, help="Use local memory with OpenCL (Experimental)" ) # --profile parser.add_argument( "--profile", action="store_true", dest="profile", default=False, help="Enable profiling with OpenCL.") # --use-double parser.add_argument( "--use-double", action="store_true", dest="use_double", default=False, help="Use double precision for OpenCL/CUDA code.") # --kernel all_kernels = list_all_kernels() parser.add_argument( "--kernel", action="store", dest="kernel", default=None, choices=all_kernels, help="Use specified kernel from %s" % all_kernels) parser.add_argument( '--post-process', action="store", dest="post_process", default=None, help="Only perform post-processing and exit." ) # Restart options restart = parser.add_argument_group("Restart options", "Restart options for PySPH") restart.add_argument( "--restart-file", action="store", dest="restart_file", default=None, help=("""Restart a PySPH simulation using a specified file """)) restart.add_argument( "--rescale-dt", action="store", dest="rescale_dt", default=1.0, type=float, help=("Scale dt upon restarting by a numerical constant")) # NNPS options nnps_options = parser.add_argument_group("NNPS", "Nearest Neighbor searching") # --nnps nnps_options.add_argument( "--nnps", dest="nnps", choices=[ 'box', 'll', 'sh', 'esh', 'ci', 'sfc', 'comp_tree', 'strat_hash', 'strat_sfc', 'tree', 'gpu_octree' ], default='ll', help="Use one of box-sort ('box') or " "the linked list algorithm ('ll') or " "the spatial hash algorithm ('sh') or " "the extended spatial hash algorithm ('esh') or " "the cell indexing algorithm ('ci') or " "the z-order space filling curve based algorithm ('sfc') or " "the stratified hash algorithm ('strat_hash') or " "the stratified sfc algorithm ('strat_sfc') or " "the octree algorithm ('tree') or " "the compressed octree algorithm ('comp_tree') or " "the gpu octree algorithm 
('gpu_octree')") nnps_options.add_argument( "--spatial-hash-sub-factor", dest="H", type=int, default=3, help="Sub division factor for ExtendedSpatialHashNNPS") nnps_options.add_argument( "--approximate-nnps", dest="approximate_nnps", action="store_true", default=False, help="Use for approximate NNPS") nnps_options.add_argument( "--spatial-hash-table-size", dest="table_size", type=int, default=131072, help="Table size for SpatialHashNNPS and ExtendedSpatialHashNNPS") nnps_options.add_argument( "--stratified-grid-num-levels", dest="num_levels", type=int, default=1, help="Number of levels for StratifiedHashNNPS and \ StratifiedSFCNNPS") nnps_options.add_argument( "--tree-leaf-max-particles", dest="leaf_max_particles", type=int, default=10, help="Maximum number of particles in leaf of octree") # --fixed-h nnps_options.add_argument( "--fixed-h", dest="fixed_h", action="store_true", default=False, help="Option for fixed smoothing lengths") nnps_options.add_argument( "--cache-nnps", dest="cache_nnps", action="store_true", default=self.cache_nnps, help="Option to enable the use of neighbor caching.") nnps_options.add_argument( "--sort-gids", dest="sort_gids", action="store_true", default=False, help="Sort neighbors by the GIDs to get " + "consistent results in serial and parallel (slows down a bit).") # Zoltan Options zoltan = parser.add_argument_group("PyZoltan", "Zoltan load balancing options") zoltan.add_argument( "--with-zoltan", action="store_true", dest="with_zoltan", default=True, help="Use PyZoltan for dynamic load balancing") zoltan.add_argument( "--zoltan-lb-method", action="store", dest="zoltan_lb_method", default="RCB", help="Choose the Zoltan load balancing method") zoltan.add_argument( "--rcb-lock", action="store_true", dest="zoltan_rcb_lock_directions", default=False, help="Lock the directions of the RCB cuts") zoltan.add_argument( "--rcb-reuse", action='store_true', dest="zoltan_rcb_reuse", default=False, help="Reuse previous RCB cuts") zoltan.add_argument(
"--rcb-rectilinear", action="store_true", dest='zoltan_rcb_rectilinear', default=False, help="Produce nice rectilinear blocks without projections") zoltan.add_argument( "--rcb-set-direction", action='store', dest="zoltan_rcb_set_direction", default=0, type=int, help="Set the order of the RCB cuts") zoltan.add_argument( "--zoltan-weights", action="store_false", dest="zoltan_weights", default=True, help=("""Switch between using weights for input to Zoltan. defaults to True""")) zoltan.add_argument( "--ghost-layers", action='store', dest='ghost_layers', default=3.0, type=float, help=('Number of ghost cells to share for remote neighbors')) zoltan.add_argument( "--lb-freq", action='store', dest='lb_freq', default=10, type=int, help=('The frequency for load balancing')) zoltan.add_argument( "--zoltan-debug-level", action="store", dest="zoltan_debug_level", default="0", help=("""Zoltan debugging level""")) # Options to control parallel execution parallel_options = parser.add_argument_group("Parallel Options") # --update-cell-sizes parallel_options.add_argument( "--update-cell-sizes", action='store_true', dest='update_cell_sizes', default=False, help=("Recompute cell sizes for binning in parallel")) # --parallel-scale-factor parallel_options.add_argument( "--parallel-scale-factor", action="store", dest="parallel_scale_factor", default=2.0, type=float, help=("""Kernel scale factor for the parallel update""")) # --parallel-output-mode parallel_options.add_argument( "--parallel-output-mode", action="store", dest="parallel_output_mode", default='collected', choices=['collected', 'distributed'], help="""Use 'collected' to dump one output at root or 'distributed' for every processor. 
""") # solver interfaces interfaces = parser.add_argument_group("Interfaces", "Add interfaces to the solver") interfaces.add_argument( "--interactive", action="store_true", dest="cmd_line", default=False, help=("Add an interactive commandline interface to the solver")) interfaces.add_argument( "--xml-rpc", action="store", dest="xml_rpc", metavar="[HOST:] PORT", help=("Add an XML-RPC interface to the solver;" "HOST=0.0.0.0 by default")) interfaces.add_argument( "--multiproc", action="store", dest="multiproc", metavar='[[AUTHKEY@] HOST:] PORT[+] ', default="pysph@0.0.0.0:8800+", help=("Add a python multiprocessing interface " "to the solver; " "AUTHKEY=pysph, HOST=0.0.0.0, PORT=8800+ by" " default (8800+ means first available port " "number 8800 onwards)")) interfaces.add_argument( "--no-multiproc", action="store_const", dest="multiproc", const=None, help=("Disable multiprocessing interface " "to the solver")) interfaces.add_argument( "--octree-leaf-size", dest="octree_leaf_size", default=32, help=("Specify leaf size of octree. " "Must be multiples of 32 (Experimental)") ) interfaces.add_argument( "--octree-elementwise-nnps", action="store_const", dest="octree_elementwise", default=False, const=True, help=("Run NNPS for different particles " "on different threads (Experimental)") ) # Scheme options. if self.scheme is not None: scheme_options = parser.add_argument_group( "SPH Scheme options", "Scheme related command line arguments", conflict_handler="resolve") self.scheme.add_user_options(scheme_options) # User options. user_options = parser.add_argument_group( "User", "User defined command line arguments") self.add_user_options(user_options) def _parse_command_line(self, force=False): """If force is True, it will parse the arguments regardless of whether it is running in IPython or not. This is handy when you want to parse the command line for a previously run case. """ if is_using_ipython() and not force: # Don't parse the command line args. 
options = self.arg_parse.parse_args([]) else: options = self.arg_parse.parse_args(self.args) self.options = options def _process_command_line(self): """Process the parsed command line arguments. This method calls the scheme's ``consume_user_options`` and :py:meth:`consume_user_options` as well as the :py:meth:`configure_scheme`. """ options = self.options if options.post_process: self._message('-'*70) self._message('Performing post processing alone.') self.post_process(options.post_process) # Exit right after this so even if the user # has an app.post_process call, it doesn't call it. sys.exit(0) # save the path where we want to dump output self.output_dir = abspath(options.output_dir) mkdir(self.output_dir) if self.scheme is not None: self.scheme.consume_user_options(options) self.consume_user_options() if self.scheme is not None: self.configure_scheme() def _setup_logging(self): """Setup logging for the application. """ options = self.options # Setup logging based on command line options. level = self._log_levels[options.loglevel] if level is None: return # logging setup logger.setLevel(level) filename = options.logfile # Setup the log file. if filename is None: filename = self.fname + '.log' if len(filename) > 0: # This is needed if the application is launched twice, # as in that case, the old handlers must be removed. for handler in logging.root.handlers[:]: logging.root.removeHandler(handler) lfn = os.path.join(self.output_dir, filename) mkdir(self.output_dir) format = '%(levelname)s|%(asctime)s|%(name)s|%(message)s' logging.basicConfig( level=level, format=format, filename=lfn, filemode='a') if options.print_log: logger.addHandler(logging.StreamHandler()) host = socket.gethostname() try: ip = socket.gethostbyname(host) except socket.gaierror: ip = host logger.info( 'Running on {host} with address {ip}'.format(host=host, ip=ip)) def _create_inlet_outlet(self, inlet_outlet_factory): """Create the inlets and outlets if needed. 
This method requires that the particles be already created. The `inlet_outlet_factory` is passed a dictionary of the particle arrays. The factory should return a list of inlets and outlets. """ if inlet_outlet_factory is not None: solver = self.solver particle_arrays = dict([(p.name, p) for p in self.particles]) self.inlet_outlet = inlet_outlet_factory(particle_arrays) # Hook up the inlet/outlet's update method to be called after # each stage. for obj in self.inlet_outlet: solver.add_post_stage_callback(obj.update) def _create_particles(self, particle_factory, *args, **kw): """ Create particles given a callable `particle_factory` and any arguments to it. """ options = self.options rank = self.rank # particle array info that is used to create dummy particles # on non-root processors particles_info = {} # Only master actually calls the particle factory, the rest create # dummy particle arrays. if rank == 0: if options.restart_file is not None: # FIXME: not tested, probably does not work! solver = self.solver data = load(options.restart_file) arrays = data['arrays'] solver_data = data['solver_data'] # arrays and particles particles = [] for array_name in arrays: particles.append(arrays[array_name]) # save the particles list self.particles = particles # time, timestep and solver iteration count at restart t, dt, count = solver_data['t'], solver_data[ 'dt'], solver_data['count'] # rescale dt at restart dt *= options.rescale_dt solver.t, solver.dt, solver.count = t, dt, count else: self.particles = particle_factory(*args, **kw) for pa in self.particles: if len(pa.x) > 0: if np.max(pa.h) < 1e-12: warnings.warn( "'h' for particle array '{}' is 0.0".format( pa.name), UserWarning) if np.max(pa.m) < 1e-12: warnings.warn( "Mass 'm' for particle array '{}' is 0.0".format( pa.name), UserWarning) # get the array info which will be b'casted to other procs particles_info = utils.get_particles_info(self.particles) # Broadcast the particles_info to other processors for parallel runs 
if self.num_procs > 1: particles_info = self.comm.bcast(particles_info, root=0) # now all processors other than root create dummy particle arrays if rank != 0: self.particles = utils.create_dummy_particles(particles_info) def _configure_global_config(self): options = self.options # Setup configuration options. config = get_config() if options.with_openmp is not None: config.use_openmp = options.with_openmp logger.info('Using OpenMP') if options.omp_schedule is not None: config.set_omp_schedule(options.omp_schedule) logger.info('Using OpenMP schedule %s', options.omp_schedule) if options.with_opencl: config.use_opencl = True logger.info('Using OpenCL') elif options.with_cuda: config.use_cuda = True logger.info('Using CUDA') if options.with_local_memory: leaf_size = int(options.octree_leaf_size) config.wgs = leaf_size config.use_local_memory = True if options.use_double: config.use_double = options.use_double logger.info('Using double precision') if options.profile: config.profile = options.profile def _configure_solver(self): """Configures the application using the options from the command-line. """ options = self.options # setup the solver using any options self.solver.setup_solver(options.__dict__) solver = self.solver # fixed smoothing lengths fixed_h = solver.fixed_h or options.fixed_h kernel = solver.kernel if options.kernel is not None: kernel = getattr(kernels, options.kernel)(dim=solver.dim) solver.kernel = kernel # This should be called before an NNPS is created as the particles are # changed after the initial load-balancing. 
self._setup_parallel_manager_and_initial_load_balance() if self.nnps is None: cache = options.cache_nnps # create the NNPS object if options.with_opencl or options.with_cuda: if options.nnps == 'gpu_octree': leaf_size = int(options.octree_leaf_size) # if leaf_size % 32 != 0: # raise ValueError("GPU Octree leaf size must " # "be a multiple of 32") from pysph.base.octree_gpu_nnps import OctreeGPUNNPS # Sorting enabled by default print("Using elementwise: ", options.octree_elementwise) nnps = OctreeGPUNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, cache=True, sort_gids=options.sort_gids, allow_sort=True, leaf_size=leaf_size, use_elementwise=options.octree_elementwise, ) else: from pysph.base.gpu_nnps import ZOrderGPUNNPS nnps = ZOrderGPUNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, cache=True, sort_gids=options.sort_gids) elif options.nnps == 'box': nnps = BoxSortNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, cache=cache, sort_gids=options.sort_gids) elif options.nnps == 'll': nnps = LinkedListNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, fixed_h=fixed_h, cache=cache, sort_gids=options.sort_gids) elif options.nnps == 'sh': nnps = SpatialHashNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, fixed_h=fixed_h, cache=cache, table_size=options.table_size, sort_gids=options.sort_gids) elif options.nnps == 'esh': nnps = ExtendedSpatialHashNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, fixed_h=fixed_h, cache=cache, H=options.H, table_size=options.table_size, sort_gids=options.sort_gids, approximate=options.approximate_nnps) elif options.nnps == 'strat_hash': nnps = StratifiedHashNNPS( dim=solver.dim, particles=self.particles, 
radius_scale=kernel.radius_scale, domain=self.domain, fixed_h=fixed_h, cache=cache, table_size=options.table_size, sort_gids=options.sort_gids, num_levels=options.num_levels) elif options.nnps == 'strat_sfc': nnps = StratifiedSFCNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, fixed_h=fixed_h, cache=cache, sort_gids=options.sort_gids, num_levels=options.num_levels) elif options.nnps == 'tree': nnps = OctreeNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, fixed_h=fixed_h, cache=cache, leaf_max_particles=options.leaf_max_particles, sort_gids=options.sort_gids) elif options.nnps == 'ci': nnps = CellIndexingNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, fixed_h=fixed_h, cache=cache, sort_gids=options.sort_gids) elif options.nnps == 'sfc': nnps = ZOrderNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, fixed_h=fixed_h, cache=cache, sort_gids=options.sort_gids) elif options.nnps == 'comp_tree': nnps = CompressedOctreeNNPS( dim=solver.dim, particles=self.particles, radius_scale=kernel.radius_scale, domain=self.domain, fixed_h=fixed_h, cache=cache, leaf_max_particles=options.leaf_max_particles, sort_gids=options.sort_gids) self.nnps = nnps nnps = self.nnps # inform NNPS if it's working in parallel if self.num_procs > 1: nnps.set_in_parallel(True) dt = options.time_step if dt is not None: solver.set_time_step(dt) tf = options.final_time if tf is not None: solver.set_final_time(tf) solver.set_max_steps(self.options.max_steps) # Setup the solver output file name fname = options.fname if in_parallel(): rank = self.rank if self.num_procs > 1: fname += '_' + str(rank) # set the rank for the solver solver.rank = self.rank solver.pid = self.rank solver.comm = self.comm # set the in parallel flag for the solver if self.num_procs > 1: solver.in_parallel = True # output file 
name solver.set_output_fname(fname) solver.set_compress_output(options.compress_output) # disable_output solver.set_disable_output(options.disable_output) if options.reorder_freq is None: if options.with_opencl: solver.set_reorder_freq(50) else: solver.set_reorder_freq(options.reorder_freq) # output print frequency if options.freq is not None: solver.set_print_freq(options.freq) # output printing level (default is not detailed) if options.detailed_output is not None: solver.set_output_printing_level(options.detailed_output) # solver output behaviour in parallel if options.output_dump_remote: solver.set_output_only_real(False) # output directory solver.set_output_directory(abspath(options.output_dir)) self._message("Generating output in %s" % self.output_dir) # set parallel output mode solver.set_parallel_output_mode(options.parallel_output_mode) # Set the adaptive timestep if options.adaptive_timestep is not None: solver.set_adaptive_timestep(options.adaptive_timestep) # set solver cfl number solver.set_cfl(options.cfl) if options.n_damp is not None: solver.set_n_damp(options.n_damp) # setup the solver. 
This is where the code is compiled solver.setup( particles=self.particles, equations=self.equations, nnps=nnps, kernel=kernel, fixed_h=fixed_h) self._log_solver_info(solver) # add solver interfaces self.command_manager = CommandManager(solver, self.comm) solver.set_command_handler(self.command_manager.execute_commands) if self.rank == 0: # commandline interface if options.cmd_line: from pysph.solver.solver_interfaces import CommandlineInterface self.command_manager.add_interface( CommandlineInterface().start) # XML-RPC interface if options.xml_rpc: from pysph.solver.solver_interfaces import XMLRPCInterface addr = options.xml_rpc idx = addr.find(':') host = "0.0.0.0" if idx == -1 else addr[:idx] port = int(addr[idx + 1:]) self.command_manager.add_interface( XMLRPCInterface((host, port)).start) # python MultiProcessing interface if options.multiproc: from pysph.solver.solver_interfaces import ( MultiprocessingInterface ) addr = options.multiproc idx = addr.find('@') authkey = "pysph" if idx == -1 else addr[:idx] addr = addr[idx + 1:] idx = addr.find(':') host = "0.0.0.0" if idx == -1 else addr[:idx] port = addr[idx + 1:] if port[-1] == '+': try_next_port = True port = port[:-1] else: try_next_port = False port = int(port) interface = MultiprocessingInterface((host, port), authkey.encode(), try_next_port) self.command_manager.add_interface(interface.start) logger.info('started multiprocessing interface on %s' % (interface.address, )) def _configure(self): """Configures the application using the options from the command-line. """ self._configure_global_config() self._configure_solver() def _setup_parallel_manager_and_initial_load_balance(self): """This will automatically distribute the particles among processors if this is a parallel run. 
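The `--multiproc` address handling above accepts strings of the form `authkey@host:port`, with a trailing `+` meaning "try successive ports if this one is busy". That parsing logic can be isolated as a small standalone function (the function name here is ours, not part of PySPH; the defaults mirror the code above):

```python
def parse_multiproc_address(addr):
    """Parse '[authkey@]host:port[+]' as accepted by the --multiproc option.

    Defaults mirror the application code: authkey 'pysph', host '0.0.0.0'.
    A trailing '+' requests trying the next free port if this one is taken.
    """
    idx = addr.find('@')
    authkey = "pysph" if idx == -1 else addr[:idx]
    addr = addr[idx + 1:]
    idx = addr.find(':')
    host = "0.0.0.0" if idx == -1 else addr[:idx]
    port = addr[idx + 1:]
    try_next_port = port.endswith('+')
    if try_next_port:
        port = port[:-1]
    return host, int(port), authkey, try_next_port
```

For example, an option value of `secret@127.0.0.1:8900+` yields the host `127.0.0.1`, port `8900`, authkey `secret`, and the try-next-port flag set.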
""" # Instantiate the Parallel Manager here and do an initial LB num_procs = self.num_procs options = self.options solver = self.solver comm = self.comm self.parallel_manager = None if num_procs > 1: options = self.options if options.with_zoltan: if not (has_zoltan() and has_mpi()): raise RuntimeError("Cannot run in parallel!") else: raise ValueError("""Sorry. You're stuck with Zoltan for now use the option '--with-zoltan' for parallel runs """) # create the parallel manager obj_weight_dim = "0" if options.zoltan_weights: obj_weight_dim = "1" zoltan_lb_method = options.zoltan_lb_method # ghost layers ghost_layers = options.ghost_layers # radius scale for the parallel update radius_scale = (options.parallel_scale_factor * solver.kernel.radius_scale) self.parallel_manager = pm = ZoltanParallelManagerGeometric( dim=solver.dim, particles=self.particles, comm=comm, lb_method=zoltan_lb_method, obj_weight_dim=obj_weight_dim, ghost_layers=ghost_layers, update_cell_sizes=options.update_cell_sizes, radius_scale=radius_scale ) # ## ADDITIONAL LOAD BALANCING FUNCTIONS FOR ZOLTAN ### # RCB lock directions if options.zoltan_rcb_lock_directions: pm.set_zoltan_rcb_lock_directions() if options.zoltan_rcb_reuse: pm.set_zoltan_rcb_reuse() if options.zoltan_rcb_rectilinear: pm.set_zoltan_rcb_rectilinear_blocks() if options.zoltan_rcb_set_direction > 0: pm.set_zoltan_rcb_directions( str(options.zoltan_rcb_set_direction)) # set zoltan options pm.pz.Zoltan_Set_Param("DEBUG_LEVEL", options.zoltan_debug_level) pm.pz.Zoltan_Set_Param("DEBUG_MEMORY", "0") # do an initial load balance pm.update() pm.initial_update = False # set subsequent load balancing frequency lb_freq = options.lb_freq if lb_freq < 1: raise ValueError("Invalid lb_freq %d" % lb_freq) pm.set_lb_freq(lb_freq) # wait till the initial partition is done comm.barrier() # set the solver's parallel manager solver.set_parallel_manager(self.parallel_manager) def _setup_solver_callbacks(self, obj): """Setup any solver callbacks given 
an object with any of `pre_step`, `post_step' and `post_stage` """ if is_overloaded_method(obj.pre_step): self.solver.add_pre_step_callback(obj.pre_step) if is_overloaded_method(obj.post_stage): self.solver.add_post_stage_callback(obj.post_stage) if is_overloaded_method(obj.post_step): self.solver.add_post_step_callback(obj.post_step) def _message(self, msg): if self.num_procs == 1: logger.info(msg) if not self.options.quiet: print(msg) elif (self.num_procs > 1 and self.rank in (0, 1)): s = "Rank %d: %s" % (self.rank, msg) logger.info(s) if not self.options.quiet: print(s) def _write_info(self, filename, **kw): """Write the information dictionary to given filename. Any extra keyword arguments are written to the file. """ info = dict( fname=self.fname, output_dir=self.output_dir, args=self.args) info.update(kw) json.dump(info, open(filename, 'w')) def _log_solver_info(self, solver): sep = '-'*70 pa_info = {p.name: p.get_number_of_particles() for p in solver.particles} particle_info = '\n '.join( ['%s: %d' % (k, v) for k, v in pa_info.items()] ) total = sum(pa_info.values()) if len(pa_info) > 1: particle_info += '\n Total: %d' % total p_msg = '%s\nNo of particles:\n %s\n%s' % (sep, particle_info, sep) self._message(p_msg) kernel_name = solver.kernel.__class__.__name__ kernel_info = '%s(dim=%s)' % (kernel_name, solver.dim) logger.info('Using kernel:\n%s\n %s\n%s', sep, kernel_info, sep) nnps_name = self.nnps.__class__.__name__ nnps_info = '%s(dim=%s)' % (nnps_name, solver.dim) logger.info('Using nnps:\n%s\n %s\n%s', sep, nnps_info, sep) logger.info( 'Using integrator:\n%s\n %s\n%s', sep, solver.integrator, sep ) equations = self.equations if isinstance(equations, list): eqn_info = '[\n' + ',\n'.join([str(e) for e in equations]) + '\n]' else: eqn_info = equations logger.info('Using equations:\n%s\n%s\n%s', sep, eqn_info, sep) def _mayavi_config(self, code): """Write out the given code to a `mayavi_config.py` in the output directory. 
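`_setup_solver_callbacks` registers `pre_step`/`post_step`/`post_stage` only when the user actually overrode them, using `is_overloaded_method`. One way such a check can be implemented (an illustrative sketch, not necessarily PySPH's own helper of that name) is to compare the bound method's function object with the one defined on a base class:

```python
def is_overloaded_method(method):
    """Return True if a bound method overrides a base-class version.

    Illustrative reimplementation only; PySPH ships its own helper.
    """
    klass = type(method.__self__)
    name = method.__name__
    # Walk the MRO past the instance's own class; the first base that
    # defines the attribute gives us the "inherited" function to compare.
    for base in klass.mro()[1:]:
        base_func = getattr(base, name, None)
        if base_func is not None:
            return method.__func__ is not base_func
    # No base defines it at all, so the class introduced it itself.
    return True
```

This matches the callback use case: `Application` defines no-op `pre_step` etc., and only subclass overrides should be registered.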
Note that this will call `textwrap.dedent` on the code. """ cfg = os.path.join(self.output_dir, 'mayavi_config.py') if not os.path.exists(cfg): with open(cfg, 'w') as fp: fp.write(dedent(code)) ###################################################################### # Public interface. ###################################################################### def add_tool(self, tool): """Add a :py:class:`pysph.solver.tools.Tool` instance to the application. """ self._setup_solver_callbacks(tool) self.tools.append(tool) def dump_code(self, file): """Dump the generated code to given file. """ file.write(self.solver.sph_eval.ext_mod.code) @property def info_filename(self): return abspath(join(self.output_dir, self.fname + '.info')) def initialize(self): """Called on the constructor, set constants etc. up here if needed. """ pass @property def output_files(self): return get_files(self.output_dir, self.fname) def read_info(self, fname_or_dir): """Read the information from the given info file (or directory containing the info file, the first found info file will be used). """ if isdir(fname_or_dir): fname_or_dir = glob.glob(join(fname_or_dir, "*.info"))[0] info_dir = dirname(fname_or_dir) with open(fname_or_dir, 'r') as f: info = json.load(f) self.args = info.get('args', self.args) self._parse_command_line(force=True) self.fname = info.get('fname', self.fname) output_dir = info.get('output_dir', self.output_dir) if realpath(info_dir) != realpath(output_dir): # Happens if someone moved the directory! self.output_dir = info_dir info['output_dir'] = info_dir else: self.output_dir = output_dir # Set the output directory of the options so it is corrected as per the # info file. self.options.output_dir = self.output_dir self._process_command_line() return info def run(self, argv=None): """Run the application. 
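`read_info` guards against output directories that were moved after the run: if the directory actually containing the `.info` file differs (by `realpath`) from the recorded `output_dir`, the recorded value is replaced by the file's real location. The essential logic, as a standalone sketch (the function name is ours):

```python
import json
import os


def corrected_info(info_path):
    """Load a .info file, fixing output_dir if the directory was moved."""
    info_dir = os.path.dirname(info_path)
    with open(info_path) as fp:
        info = json.load(fp)
    if os.path.realpath(info_dir) != os.path.realpath(info['output_dir']):
        # The run directory was moved; trust the info file's location.
        info['output_dir'] = info_dir
    return info
```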
""" if argv is not None: self.set_args(argv) if self.solver is None: start_time = time.time() self._parse_command_line(force=argv is not None) self._process_command_line() self._setup_logging() self._configure_global_config() self.solver = self.create_solver() msg = "Solver is None, you may have forgotten to return it!" assert self.solver is not None, msg self.equations = self.create_equations() self._create_particles(self.create_particles) # This must be done before the initial load balancing # as the inlets will create new particles. if is_overloaded_method(self.create_inlet_outlet): self._create_inlet_outlet(self.create_inlet_outlet) if self.domain is None: self.domain = self.create_domain() self.nnps = self.create_nnps() self._configure_solver() self._setup_solver_callbacks(self) for tool in self.create_tools(): self.add_tool(tool) end_time = time.time() setup_duration = end_time - start_time self._message("Setup took: %.5f secs" % (setup_duration)) self._write_info(self.info_filename, completed=False, cpu_time=0) self.customize_output() start_time = time.time() self.solver.solve(not self.options.quiet) end_time = time.time() run_duration = end_time - start_time self._message("Run took: %.5f secs" % (run_duration)) if self.options.with_opencl and self.options.profile: from compyle.opencl import print_profile print_profile() self._write_info( self.info_filename, completed=True, cpu_time=run_duration) def set_args(self, args): self.args = args def setup(self, solver, equations, nnps=None, inlet_outlet_factory=None, particle_factory=None, *args, **kwargs): """Setup the application's solver. This will parse the command line arguments (if this is not called from within an IPython notebook or shell) and then using those parameters and any additional parameters and call the solver's setup method. Parameters ---------- solver: pysph.solver.solver.Solver The solver instance. equations: list A list of Groups/Equations. 
nnps: pysph.base.nnps_base.NNPS Optional NNPS instance. If None is given a default NNPS is created. inlet_outlet_factory: callable or None The `inlet_outlet_factory` is passed a dictionary of the particle arrays. The factory should return a list of inlets and outlets. particle_factory : callable or None If supplied, particles will be created for the solver using the particle arrays returned by the callable. Else particles for the solver need to be set before calling this method args: extra positional arguments passed on to the `particle_factory`. kwargs: extra keyword arguments passed to the `particle_factory`. Examples -------- >>> def create_particles(): ... ... ... >>> solver = Solver(...) >>> equations = [...] >>> app = Application() >>> app.setup(solver=solver, equations=equations, ... particle_factory=create_particles) >>> app.run() """ start_time = time.time() self.solver = solver self.equations = equations solver.get_options(self.arg_parse) self._parse_command_line() self._process_command_line() self._setup_logging() self._configure_global_config() # Create particles either from scratch or restart self._create_particles(particle_factory, *args, **kwargs) # This must be done before the initial load balancing # as the inlets will create new particles. self._create_inlet_outlet(inlet_outlet_factory) if nnps is not None: self.nnps = nnps self._configure_solver() end_time = time.time() setup_duration = end_time - start_time self._message("Setup took: %.5f secs" % (setup_duration)) self._write_info(self.info_filename, completed=False, cpu_time=0) ###################################################################### # User methods that could be overloaded. ###################################################################### def add_user_options(self, group): """Add any user-defined options to the given option group. Note ---- This uses the `argparse` module. """ pass def configure_scheme(self): """This is called after :py:meth:`consume_user_options` is called. 
One can configure the SPH scheme here as at this point all the command line options are known. """ pass def consume_user_options(self): """This is called right after the command line arguments are parsed. All the parsed options are available in ``self.options`` and can be used in this method. This is meant to be overridden by users to setup any internal variables etc. that depend on the command line arguments passed. Note that this method is called well before the solver or particles are created. """ pass def create_domain(self): """Create a `pysph.base.nnps_base.DomainManager` and return it if needed. This is used for periodic domains etc. Note that if the domain is passed to :py:meth:`__init__`, then this method is not called. """ return None def create_inlet_outlet(self, particle_arrays): """Create inlet and outlet objects and return them as a list. The method is passed a dictionary of particle arrays keyed on the name of the particle array. """ pass def create_equations(self): """Create the equations to be used and return them. """ if self.scheme is not None: return self.scheme.get_equations() else: msg = "Application.create_equations method must be overloaded." raise NotImplementedError(msg) def create_nnps(self): """Create any NNPS if desired and return it, else a default NNPS will be created automatically. """ return None def create_particles(self): """Create particle arrays and return a list of them. """ message = "Application.create_particles method must be overloaded." raise NotImplementedError(message) def create_scheme(self): """Create a suitable SPH scheme and return it. Note that this method is called after the arguments are all processed and after :py:meth:`consume_user_options` is called. """ return None def create_solver(self): """Create the solver and return it. """ if self.scheme is not None: return self.scheme.get_solver() else: msg = "Application.create_solver method must be overloaded." 
raise NotImplementedError(msg) def create_tools(self): """Create any tools and return a sequence of them. This method is called after particles/inlets etc. are all setup, configured etc. """ return [] def customize_output(self): """Customize the output file visualization by adding any files. For example, the pysph view command will look for a ``mayavi_config.py`` file that can be used to script the viewer. You can use self._mayavi_config('code') to add a default customization here. Note that this is executed before the simulation starts. """ pass def pre_step(self, solver): """If overloaded, this is called automatically before each integrator step. The method is passed the solver instance. """ pass def post_stage(self, current_time, dt, stage): """If overloaded, this is called automatically after each integrator stage, i.e. if the integrator is a two stage integrator it will be called after the first and second stages. The method is passed (current_time, dt, stage). See the the :py:meth:`pysph.sph.integrator.Integrator.one_timestep` methods for examples of how this is called. """ pass def post_step(self, solver): """If overloaded, this is called automatically after each integrator step. The method is passed the solver instance. """ pass def post_process(self, info_fname_or_directory): """Given an info filename or a directory containing the info file, read the information and do any post-processing of the results. Please overload the method to perform any processing. The info file has a few useful attributes and can be read using the :py:meth:`read_info` method. The `output_files` property should provide the output files generated. 
""" print('Overload this method to post-process the results.') pysph-master/pysph/solver/controller.py000066400000000000000000000372241356347341600207500ustar00rootroot00000000000000''' Implement infrastructure for the solver to add various interfaces ''' from functools import wraps import threading try: from thread import LockType except ImportError: from _thread import LockType from pysph.base.particle_array import ParticleArray import logging logger = logging.getLogger(__name__) class DummyComm(object): ''' A dummy MPI.Comm implementation as placeholder for for serial runs ''' def Get_size(self): ''' return the size of the comm (1) ''' return 1 def Get_rank(self): ''' return the rank of the process (0) ''' return 0 def send(self, data, pid): ''' dummy send implementation ''' self.data = data def recv(self, pid): ''' dummy recv implementation ''' data = self.data del self.data return data def bcast(self, data): ''' bcast (broadcast) implementation for serial run ''' return data def gather(self, data): ''' gather implementation for serial run ''' return [data] def synchronized(lock_or_func): ''' decorator for synchronized (thread safe) function Usage: - sync_func = synchronized(lock)(func) # sync with an existing lock - sync_func = synchronized(func) # sync with a new private lock ''' if isinstance(lock_or_func, LockType): lock = lock_or_func def synchronized_inner(func): @wraps(func) def wrapped(*args, **kwargs): with lock: return func(*args, **kwargs) return wrapped return synchronized_inner else: func = lock_or_func lock = threading.Lock() return synchronized(lock)(func) def wrap_dispatcher(obj, meth, *args2, **kwargs2): @wraps(meth) def wrapped(*args, **kwargs): kw = {} kw.update(kwargs2) kw.update(kwargs) return meth(obj.block, *(args2+args), **kw) return wrapped class Controller(object): ''' Controller class acts a a proxy to control the solver This is passed as an argument to the interface **Methods available**: - get -- get the value of a solver parameter 
- set -- set the value of a solver parameter - get_result -- return result of a queued command - pause_on_next -- pause solver thread on next iteration - wait -- wait (block) calling thread till solver is paused (call after `pause_on_next`) - cont -- continue solver thread (call after `pause_on_next`) Various other methods are also available as listed in :data:`CommandManager.dispatch_dict` which perform different functions. - The methods in CommandManager.active_methods do their operation and return the result (if any) immediately - The methods in CommandManager.lazy_methods do their later when solver thread is available and return a task-id. The result of the task can be obtained later using the blocking call `get_result()` which waits till result is available and returns the result. The availability of the result can be checked using the lock returned by `get_task_lock()` method FIXME: wait/cont currently do not work in parallel ''' def __init__(self, command_manager, block=True): super(Controller, self).__init__() self.__command_manager = command_manager self.daemon = True self.block = block self._set_methods() def _set_methods(self): for prop in self.__command_manager.solver_props: setattr(self, 'get_'+prop, wrap_dispatcher(self, self.__command_manager.dispatch, 'get', prop)) setattr(self, 'set_'+prop, wrap_dispatcher(self, self.__command_manager.dispatch, 'set', prop)) for meth in self.__command_manager.solver_methods: setattr(self, meth, wrap_dispatcher(self, self.__command_manager.dispatch, meth)) for meth in self.__command_manager.lazy_methods: setattr(self, meth, wrap_dispatcher(self, self.__command_manager.dispatch, meth)) for meth in self.__command_manager.active_methods: setattr(self, meth, wrap_dispatcher(self, self.__command_manager.dispatch, meth)) def get(self, name): ''' get a solver property; returns immediately ''' return self.__command_manager.dispatch(self.block, 'get', name) def set(self, name, value): ''' set a solver property; returns 
immediately; ''' return self.__command_manager.dispatch(self.block, 'set', name, value) def pause_on_next(self): ''' pause the solver thread on next iteration ''' return self.__command_manager.pause_on_next() def wait(self): ''' block the calling thread until the solver thread pauses call this only after calling the `pause_on_next` method to tell the controller to pause the solver thread''' self.__command_manager.wait() return True def get_prop_names(self): return list(self.__command_manager.solver_props) def cont(self): ''' continue solver thread after it has been paused by `pause_on_next` call this only after calling the `pause_on_next` method ''' return self.__command_manager.cont() def get_result(self, task_id): ''' get the result of a previously queued command ''' return self.__command_manager.get_result(task_id) def set_blocking(self, block): ''' set the blocking mode to True/False In blocking mode (block=True) all methods other than getting of solver properties block until the command is executed by the solver and return the results. The blocking time can vary depending on the time taken by solver per iteration and the command_interval In non-blocking mode, these methods queue the command for later and return a string corresponding to the task_id of the operation. 
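The non-blocking mode described here rests on a per-task lock handshake: `dispatch` acquires a fresh lock and returns a task id, the solver thread releases the lock once the result is stored, and `get_result` blocks on that lock until then. A toy standalone version of that mechanism (class and method names are ours, mirroring `CommandManager`'s queue):

```python
import threading


class TinyDispatcher:
    """Toy model of the queued-command handshake in CommandManager."""

    def __init__(self):
        self.queue = []
        self.locks = {}
        self.results = {}

    def dispatch(self, func, *args):
        lock = threading.Lock()
        lock.acquire()              # released only when the result is ready
        task_id = id(lock)
        self.locks[task_id] = lock
        self.queue.append((task_id, func, args))
        return task_id

    def run_queued(self):
        # Called from the "solver" thread between timesteps.
        while self.queue:
            task_id, func, args = self.queue.pop(0)
            self.results[task_id] = func(*args)
            self.locks[task_id].release()

    def get_result(self, task_id):
        # Blocks until run_queued() has released the task's lock.
        with self.locks.pop(task_id):
            return self.results.pop(task_id)
```

In PySPH the same pattern lets a controller thread queue a command, continue, and later collect the result with a blocking `get_result(task_id)` call.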
The result can be later obtained by a (blocking) call to get_result with the task_id as argument ''' if block != self.block: self.block = block self._set_methods() return self.block def get_blocking(self): ''' get the blocking mode ( True/False ) ''' return self.block def ping(self): return True def on_root_proc(f): ''' run the decorated function only on the root proc ''' @wraps(f) def wrapper(self, *args, **kwds): if self.comm.Get_rank()==0: return f(self, *args, **kwds) return wrapper def in_parallel(f): ''' return a list of results of running decorated function on all procs ''' @wraps(f) def wrapper(self, *args, **kwds): return self.comm.gather(f(self, *args, **kwds)) return wrapper class CommandManager(object): ''' Class to manage and synchronize commands from various Controllers ''' solver_props = set(('t', 'tf', 'dt', 'count', 'pfreq', 'fname', 'detailed_output', 'output_directory', 'command_interval')) solver_methods = set(('dump_output',)) lazy_methods = set(('get_particle_array_names', 'get_named_particle_array', 'get_particle_array_combined', 'get_particle_array_from_procs')) active_methods = set(('get_status', 'get_task_lock', 'set_log_level')) def __init__(self, solver, comm=None): if comm is not None: self.comm = comm self.rank = comm.Get_rank() else: try: self.comm = solver.particles.cell_manager.parallel_controller.comm except AttributeError: self.comm = DummyComm() self.rank = 0 logger.debug('CommandManager: using comm: %s'%self.comm) self.solver = solver self.interfaces = [] self.func_dict = {} self.rlock = threading.RLock() self.res_lock = threading.Lock() self.plock = threading.Condition() self.qlock = threading.Condition() # queue lock self.queue = [] self.queue_dict = {} self.queue_lock_map = {} self.results = {} self.pause = set([]) @on_root_proc def add_interface(self, callable, block=True): ''' Add a callable interface to the controller The callable must accept an Controller instance argument. 
        The callable is called in a new thread of its own and it can do
        various actions with methods defined on the Controller instance
        passed to it. The newly created thread is set to daemon mode and
        returned.
        '''
        logger.debug('adding_interface: %s' % callable)
        control = Controller(self, block)
        thr = threading.Thread(target=callable, args=(control,))
        thr.daemon = True
        thr.start()
        return thr

    def add_function(self, callable, interval=1):
        ''' add a function to be called every `interval` iterations '''
        l = self.func_dict[interval] = self.func_dict.get(interval, [])
        l.append(callable)

    def execute_commands(self, solver):
        ''' called by the solver after each timestep '''
        # TODO: first synchronize all the controllers in different processes
        # using mpi
        self.sync_commands()
        with self.qlock:
            self.run_queued_commands()
        if self.rank == 0:
            logger.debug('control handler: count=%d' % solver.count)
        for interval in self.func_dict:
            if solver.count % interval == 0:
                for func in self.func_dict[interval]:
                    func(solver)
        self.wait_for_cmd()

    def wait_for_cmd(self):
        ''' wait for a command from any interface '''
        with self.qlock:
            while self.pause:
                with self.plock:
                    self.plock.notify_all()
                self.qlock.wait()
                self.run_queued_commands()

    def sync_commands(self):
        ''' send the pending commands to all the procs in a parallel run '''
        self.queue_dict, self.queue, self.pause = self.comm.bcast(
            (self.queue_dict, self.queue, self.pause))

    def run_queued_commands(self):
        while self.queue:
            lock_id = self.queue.pop(0)
            meth, args, kwargs = self.queue_dict[lock_id]
            with self.res_lock:
                try:
                    self.results[lock_id] = self.run_command(
                        meth, args, kwargs)
                finally:
                    del self.queue_dict[lock_id]
            if self.comm.Get_rank() == 0:
                self.queue_lock_map[lock_id].release()

    def run_command(self, cmd, args=[], kwargs={}):
        res = self.dispatch_dict[cmd](self, *args, **kwargs)
        logger.debug('controller: running_command: %s %s %s %s' % (
            cmd, args, kwargs, res))
        return res

    def pause_on_next(self):
        ''' pause and wait for a command on the next control interval '''
        if self.comm.Get_size() > 1:
            logger.debug('pause/continue not yet supported in parallel runs')
            return False
        with self.plock:
            self.pause.add(threading.current_thread().ident)
            self.plock.notify()
        return True

    def wait(self):
        with self.plock:
            self.plock.wait()

    def cont(self):
        ''' continue after a pause command '''
        if self.comm.Get_size() > 1:
            logger.debug('pause/continue not yet supported in parallel runs')
            return
        with self.plock:
            self.pause.remove(threading.current_thread().ident)
            self.plock.notify()
        with self.qlock:
            self.qlock.notify_all()

    def get_result(self, lock_id):
        ''' get the result of a previously queued command '''
        lock_id = int(lock_id)
        lock = self.queue_lock_map[lock_id]
        with lock:
            with self.res_lock:
                ret = self.results[lock_id]
                del self.results[lock_id]
                del self.queue_lock_map[lock_id]
        return ret

    def get_task_lock(self, lock_id):
        ''' get the Lock instance associated with a command '''
        return self.queue_lock_map[int(lock_id)]

    def get_prop(self, name):
        ''' get a solver property '''
        return getattr(self.solver, name)

    def set_prop(self, name, value):
        ''' set a solver property '''
        return setattr(self.solver, name, value)

    def solver_method(self, name, *args, **kwargs):
        ''' execute a method on the solver '''
        ret = getattr(self.solver, name)(*args, **kwargs)
        ret = self.comm.gather(ret)
        return ret

    def get_particle_array_names(self):
        ''' get the names of the particle arrays '''
        return [pa.name for pa in self.solver.particles]

    def get_named_particle_array(self, name, props=None):
        for pa in self.solver.particles:
            if pa.name == name:
                if props:
                    return [getattr(pa, p) for p in props if hasattr(pa, p)]
                else:
                    return pa

    def get_particle_array_index(self, name):
        ''' get the index of the named particle array '''
        for i, pa in enumerate(self.solver.particles):
            if pa.name == name:
                return i

    def get_particle_array_from_procs(self, idx, procs=None):
        ''' get the particle array at index from all processes

        specifying processes is currently not implemented
        '''
        if procs is None:
            procs = list(range(self.comm.size))
        pa = self.solver.particles[idx]
        pas = self.comm.gather(pa)
        return pas

    def get_particle_array_combined(self, idx, procs=None):
        ''' get a single particle array with combined data from all procs

        specifying processes is currently not implemented
        '''
        if procs is None:
            procs = list(range(self.comm.size))
        pa = self.solver.particles[idx]
        pas = self.comm.gather(pa)
        pa = ParticleArray(name=pa.name)
        for p in pas:
            pa.append_parray(p)
        return pa

    def get_status(self):
        ''' get the status of the controller '''
        return 'commands queued: %d' % len(self.queue)

    def set_log_level(self, level):
        ''' set the logging level '''
        logger.setLevel(level)

    dispatch_dict = {'get': get_prop, 'set': set_prop}
    for meth in solver_methods:
        dispatch_dict[meth] = solver_method
    for meth in lazy_methods:
        dispatch_dict[meth] = locals()[meth]
    for meth in active_methods:
        dispatch_dict[meth] = locals()[meth]

    @synchronized
    def dispatch(self, block, meth, *args, **kwargs):
        ''' execute/queue a command with the specified arguments '''
        if meth in self.dispatch_dict:
            if meth == 'get' or meth == 'set':
                prop = args[0]
                if prop not in self.solver_props:
                    raise RuntimeError('Invalid dispatch on method: %s with '
                                       'non-existent property: %s ' %
                                       (meth, prop))
            if block or meth == 'get' or meth in self.active_methods:
                logger.debug('controller: immediate dispatch(): %s %s %s' % (
                    meth, args, kwargs))
                return self.dispatch_dict[meth](self, *args, **kwargs)
            else:
                lock = threading.Lock()
                lock.acquire()
                lock_id = id(lock)
                with self.qlock:
                    self.queue_lock_map[lock_id] = lock
                    self.queue_dict[lock_id] = (meth, args, kwargs)
                    self.queue.append(lock_id)
                logger.debug('controller: dispatch(%d): %s %s %s' % (
                    lock_id, meth, args, kwargs))
                return str(lock_id)
        else:
            raise RuntimeError('Invalid dispatch on method: ' + meth)


# ==== pysph-master/pysph/solver/output.py ====
""" An interface to output the data in various formats """

import numpy
import os
import sys

from
pysph.base.particle_array import ParticleArray from pysph.base.utils import get_particles_info, get_particle_array from pysph import has_h5py output_formats = ('hdf5', 'npz') def _to_str(s): if isinstance(s, bytes) and sys.version_info[0] > 2: return s.decode('utf-8') else: return str(s) def gather_array_data(all_array_data, comm): """Given array_data from the current processor and an MPI communicator,return a joined array_data from all processors on rank 0 and the same array_data on the other machines. """ array_names = all_array_data.keys() # gather the data from all processors collected_data = comm.gather(all_array_data, root=0) if comm.Get_rank() == 0: all_array_data = {} size = comm.Get_size() # concatenate the arrays for array_name in array_names: array_data = {} all_array_data[array_name] = array_data _props = collected_data[0][array_name].keys() for prop in _props: data = [collected_data[pid][array_name][prop] for pid in range(size)] prop_arr = numpy.concatenate(data) array_data[prop] = prop_arr return all_array_data class Output(object): """ Class that handles output for simulation """ def __init__(self, detailed_output=False, only_real=True, mpi_comm=None, compress=False): self.compress = compress self.detailed_output = detailed_output self.only_real = only_real self.mpi_comm = mpi_comm def dump(self, fname, particles, solver_data): self.particle_data = dict(get_particles_info(particles)) self.all_array_data = {} for array in particles: self.all_array_data[array.name] = array.get_property_arrays( all=self.detailed_output, only_real=self.only_real ) mpi_comm = self.mpi_comm if mpi_comm is not None: self.all_array_data = gather_array_data( self.all_array_data, mpi_comm ) self.solver_data = solver_data if mpi_comm is None or mpi_comm.Get_rank() == 0: self._dump(fname) def load(self, fname): return self._load(fname) def _dump(self, fname): """ Implement the method for writing the output to a file here """ raise NotImplementedError() def _load(self, fname): 
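`gather_array_data` concatenates, per particle array and per property, the contributions gathered from every rank onto rank 0. The same merge can be sketched with plain Python lists so it runs serially without MPI or numpy (function name is ours; `collected` plays the role of `comm.gather`'s result):

```python
def merge_gathered(collected):
    """Concatenate per-rank {array: {prop: list}} dicts, as done on rank 0."""
    merged = {}
    for array_name in collected[0]:
        props = collected[0][array_name]
        merged[array_name] = {
            # Concatenate this property's data in rank order.
            prop: [v for rank_data in collected
                   for v in rank_data[array_name][prop]]
            for prop in props
        }
    return merged
```

With numpy arrays the inner concatenation would be `numpy.concatenate`, exactly as in the code above.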
""" Implement the method for loading from file here """ raise NotImplementedError() def _dict_bytes_to_str(d): # This craziness is needed as if the npz file is saved in Python2 # then all the strings are bytes and if this is loaded in Python 3, # the keys will be bytes and not strings leading to strange errors. res = {} for key, value in d.items(): if isinstance(value, dict): value = _dict_bytes_to_str(value) if isinstance(value, bytes): value = _to_str(value) if isinstance(value, list): if value and isinstance(value[0], bytes): value = [_to_str(x) for x in value] res[_to_str(key)] = value return res def _get_dict_from_arrays(arrays): arrays.shape = (1,) res = arrays[0] if res and isinstance(list(res.keys())[0], bytes): return _dict_bytes_to_str(res) else: return res class NumpyOutput(Output): def _dump(self, filename): save_method = numpy.savez_compressed if self.compress else numpy.savez output_data = {"particles": self.particle_data, "solver_data": self.solver_data} for name, arrays in self.all_array_data.items(): self.particle_data[name]["arrays"] = arrays save_method(filename, version=2, **output_data) def _load(self, fname): data = numpy.load(fname, encoding='bytes', allow_pickle=True) if 'version' not in data.files: msg = "Wrong file type! No version number recorded." 
raise RuntimeError(msg) ret = {} ret["arrays"] = {} version = data['version'] solver_data = _get_dict_from_arrays(data["solver_data"]) ret["solver_data"] = solver_data if version == 1: arrays = _get_dict_from_arrays(data["arrays"]) for array_name in arrays: array = get_particle_array(name=array_name, **arrays[array_name]) ret["arrays"][array_name] = array elif version == 2: particles = _get_dict_from_arrays(data["particles"]) for array_name, array_info in particles.items(): for prop, data in array_info['arrays'].items(): array_info['properties'][prop]['data'] = data array = ParticleArray(name=array_name, constants=array_info["constants"], **array_info["properties"]) array.set_output_arrays( array_info.get('output_property_arrays', []) ) ret["arrays"][array_name] = array else: raise RuntimeError("Version not understood!") return ret class HDFOutput(Output): def _dump(self, filename): import h5py with h5py.File(filename, 'w') as f: solver_grp = f.create_group('solver_data') particles_grp = f.create_group('particles') for ptype, pdata in self.particle_data.items(): ptype_grp = particles_grp.create_group(ptype) arrays_grp = ptype_grp.create_group('arrays') data = self.all_array_data[ptype] self._set_constants(pdata, ptype_grp) self._set_properties(pdata, arrays_grp, data) self._set_solver_data(solver_grp) def _load(self, fname): if has_h5py(): import h5py else: msg = "Install python-h5py to load this file" raise ImportError(msg) ret = {} with h5py.File(fname, 'r') as f: solver_grp = f['solver_data'] particles_grp = f['particles'] ret["solver_data"] = self._get_solver_data(solver_grp) ret["arrays"] = self._get_particles(particles_grp) return ret def _get_particles(self, grp): particles = {} for name, prop_array in grp.items(): output_array = [] const_grp = prop_array['constants'] arrays_grp = prop_array['arrays'] constants = self._get_constants(const_grp) array = ParticleArray(_to_str(name), constants=constants) for pname, h5obj in arrays_grp.items(): prop_name = 
_to_str(h5obj.attrs['name']) type_ = _to_str(h5obj.attrs['type']) default = h5obj.attrs['default'] stride = h5obj.attrs.get('stride', 1) if h5obj.attrs['stored']: output_array.append(_to_str(pname)) array.add_property( prop_name, type=type_, default=default, data=numpy.array(h5obj), stride=stride ) else: array.add_property(prop_name, type=type_, stride=stride) array.set_output_arrays(output_array) particles[str(name)] = array return particles def _get_solver_data(self, grp): solver_data = {} for name, value in grp.attrs.items(): solver_data[_to_str(name)] = value return solver_data def _get_constants(self, grp): constants = {} for const_name, const_data in grp.items(): constants[_to_str(const_name)] = numpy.array(const_data) return constants def _set_constants(self, pdata, ptype_grp): pconstants = pdata['constants'] constGroup = ptype_grp.create_group('constants') for constName, constArray in pconstants.items(): constGroup.create_dataset(constName, data=constArray) def _set_properties(self, pdata, ptype_grp, data): for propname, attributes in pdata['properties'].items(): if propname in data: array = data[propname] if self.compress: prop = ptype_grp.create_dataset( propname, data=array) else: prop = ptype_grp.create_dataset( propname, data=array, compression="gzip", compression_opts=9 ) prop.attrs['stored'] = True else: prop = ptype_grp.create_dataset(propname, (0,)) prop.attrs['stored'] = False for attname, value in attributes.items(): if value is None: value = 'None' prop.attrs[attname] = value def _set_solver_data(self, grp): for name, data in self.solver_data.items(): grp.attrs[name] = data def load(fname): """ Load the output data Parameters ---------- fname: str Name of the file or full path Examples -------- >>> data = load('elliptical_drop_100.npz') >>> data.keys() ['arrays', 'solver_data'] >>> arrays = data['arrays'] >>> arrays.keys() ['fluid'] >>> fluid = arrays['fluid'] >>> type(fluid) pysph.base.particle_array.ParticleArray >>> data['solver_data'] 
{'count': 100, 'dt': 4.6416394784204199e-05, 't': 0.0039955855395528766} """ if fname.endswith('npz'): output = NumpyOutput() elif fname.endswith('hdf5'): output = HDFOutput() if os.path.isfile(fname): return output.load(fname) else: msg = "File not present" raise RuntimeError(msg) def dump(filename, particles, solver_data, detailed_output=False, only_real=True, mpi_comm=None, compress=False): """ Dump the given particles and solver data to the given filename. Parameters ---------- filename: str Filename to dump to. particles: sequence(ParticleArray) Sequence of particle arrays to dump. solver_data: dict Additional information to dump about solver state. detailed_output: bool Specifies if all arrays should be dumped. only_real: bool Only dump the real particles. mpi_comm: mpi4pi.MPI.Intracomm An MPI communicator to use for parallel commmunications. compress: bool Specify if the file is to be compressed or not. If `mpi_comm` is not passed or is set to None the local particles alone are dumped, otherwise only rank 0 dumps the output. """ if filename.endswith(output_formats): fname = os.path.splitext(filename)[0] else: fname = filename filename = fname + '.hdf5' if filename.endswith('hdf5') and has_h5py(): file_format = 'hdf5' output = HDFOutput(detailed_output, only_real, mpi_comm, compress) else: output = NumpyOutput(detailed_output, only_real, mpi_comm, compress) file_format = 'npz' filename = fname + '.' + file_format output.dump(filename, particles, solver_data) pysph-master/pysph/solver/solver.py000066400000000000000000000602261356347341600200750ustar00rootroot00000000000000""" An implementation of a general solver base class """ from __future__ import print_function # System library imports. 
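The byte-key normalization performed by `_dict_bytes_to_str` above can be sketched in isolation. This is a hypothetical standalone helper (`normalize_keys` is not part of PySPH) illustrating why loading a Python 2 `.npz` in Python 3 needs the recursive `bytes` to `str` conversion:

```python
def normalize_keys(d):
    """Recursively convert bytes keys and values (as produced when a
    Python 2 .npz archive is read in Python 3) to str, mirroring the
    idea behind _dict_bytes_to_str."""
    def to_str(x):
        return x.decode('utf-8') if isinstance(x, bytes) else x

    out = {}
    for key, value in d.items():
        if isinstance(value, dict):
            value = normalize_keys(value)
        elif isinstance(value, bytes):
            value = to_str(value)
        elif isinstance(value, list):
            value = [to_str(item) for item in value]
        out[to_str(key)] = value
    return out


print(normalize_keys({b'dt': 0.1, b'name': b'fluid',
                      b'nested': {b'props': [b'x', b'y']}}))
```

Without this normalization, lookups such as `data['solver_data']` would fail because the stored key is `b'solver_data'`, not `'solver_data'`.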
import os
import numpy

# PySPH imports
from pysph.base.kernels import CubicSpline
from pysph.sph.acceleration_eval import make_acceleration_evals
from pysph.sph.sph_compiler import SPHCompiler
from pysph.solver.utils import ProgressBar, load, dump

import logging
logger = logging.getLogger(__name__)

EPSILON = numpy.finfo(float).eps*2


class Solver(object):
    """Base class for all PySPH Solvers
    """
    def __init__(self, dim=2, integrator=None, kernel=None, n_damp=0,
                 tf=1.0, dt=1e-3, adaptive_timestep=False, cfl=0.3,
                 output_at_times=(), fixed_h=False, **kwargs):
        """**Constructor**

        Any additional keyword args are used to set the values of any of
        the attributes.

        Parameters
        ----------

        dim : int
            Dimension of the problem

        integrator : pysph.sph.integrator.Integrator
            Integrator to use

        kernel : pysph.base.kernels.Kernel
            SPH kernel to use

        n_damp : int
            Number of timesteps for which the initial damping is required.
            This is used to improve stability for problems with a strong
            discontinuity in the initial condition. Setting it to zero will
            disable damping of the timesteps.

        dt : double
            Suggested initial time step for integration

        tf : double
            Final time for integration

        adaptive_timestep : bint
            Flag to use adaptive time steps

        cfl : double
            CFL number for adaptive time stepping

        pfreq : int
            Output file dumping frequency.

        output_at_times : list/array
            Optional list of output times to force dump the output file

        fixed_h : bint
            Flag for constant smoothing lengths `h`

        reorder_freq : int
            The number of iterations after which particles should be
            re-ordered. If zero, do not do this.

        Example
        -------

        >>> integrator = PECIntegrator(fluid=WCSPHStep())
        >>> kernel = CubicSpline(dim=2)
        >>> solver = Solver(dim=2, integrator=integrator, kernel=kernel,
        ...                 n_damp=50, tf=1.0, dt=1e-3,
        ...                 adaptive_timestep=True, pfreq=100, cfl=0.5,
        ...                 output_at_times=[1e-1, 1.0])
        """
        self.integrator = integrator
        self.dim = dim
        if kernel is not None:
            self.kernel = kernel
        else:
            self.kernel = CubicSpline(dim)

        # set the particles to None
        self.particles = None
        self.acceleration_evals = None
        self.nnps = None

        # solver time and iteration count
        self.t = 0
        self.count = 0

        self.execute_commands = None

        # list of functions to be called before and after an integration step
        self.pre_step_callbacks = []
        self.post_step_callbacks = []

        # List of functions to be called after each stage of the integrator.
        self.post_stage_callbacks = []

        # default output printing frequency
        self.pfreq = 100

        # Compress generated files.
        self.compress_output = False
        self.disable_output = False

        # the process id for parallel runs
        self.pid = None

        # set the default rank to 0
        self.rank = 0

        # set the default comm to None.
        self.comm = None

        # set the default mode to serial
        self.in_parallel = False

        # arrays to print output
        self.arrays_to_print = []

        # the default parallel output mode
        self.parallel_output_mode = "collected"

        # flag to print all arrays
        self.detailed_output = False

        # flag to save Remote arrays
        self.output_only_real = True

        # output filename
        self.fname = self.__class__.__name__

        # output directory
        self.output_directory = self.fname + '_output'

        # solution damping to avoid impulsive starts
        self.n_damp = n_damp

        # Use adaptive time steps and cfl number
        self.adaptive_timestep = adaptive_timestep
        self.cfl = cfl

        # list of output times
        self.output_at_times = numpy.asarray(output_at_times)
        self.force_output = False

        # default time step constants
        self.tf = tf
        self.dt = dt
        self.max_steps = 1 << 31
        self._prev_dt = None
        self._damping_factor = 1.0
        self._epsilon = EPSILON*tf

        # flag for constant smoothing lengths
        self.fixed_h = fixed_h

        self.reorder_freq = 0

        # Set all extra keyword arguments
        for attr, value in kwargs.items():
            if hasattr(self, attr):
                setattr(self, attr, value)
            else:
                msg = 'Unknown keyword arg "%s" passed to constructor' % attr
                raise TypeError(msg)

    ##########################################################################
    # Public interface.
    ##########################################################################
    def setup(self, particles, equations, nnps, kernel=None, fixed_h=False):
        """ Setup the solver.

        The solver's processor id is set if the in_parallel flag is set to
        true. The order of the integrating calcs is determined by the
        solver's order attribute.

        This is usually called at the start of a PySPH simulation.
        """
        self.particles = particles
        if kernel is not None:
            self.kernel = kernel

        mode = 'mpi' if self.in_parallel else 'serial'
        self.acceleration_evals = make_acceleration_evals(
            particles, equations, self.kernel, mode
        )

        sph_compiler = SPHCompiler(
            self.acceleration_evals, self.integrator
        )
        sph_compiler.compile()

        # Set the nnps for all concerned objects.
        self.nnps = nnps
        for ae in self.acceleration_evals:
            ae.set_nnps(nnps)
        self.integrator.set_nnps(nnps)

        # set the parallel manager for the integrator
        self.integrator.set_parallel_manager(self.pm)

        # Set the post_stage_callback.
        self.integrator.set_post_stage_callback(self._post_stage_callback)

        # set integrator option for constant smoothing length
        self.fixed_h = fixed_h
        self.integrator.set_fixed_h(fixed_h)

        logger.debug("Solver setup complete.")

    def add_post_stage_callback(self, callback):
        """These callbacks are called *after* each integrator stage.

        The callbacks are passed (current_time, dt, stage).  See the
        `Integrator.one_timestep` methods for examples of how this is
        called.

        Example
        -------

        >>> def post_stage_callback_function(t, dt, stage):
        >>>     # This function is called after every stage of integrator.
        >>>     print(t, dt, stage)
        >>>     # Do something
        >>> solver.add_post_stage_callback(post_stage_callback_function)
        """
        self.post_stage_callbacks.append(callback)

    def add_post_step_callback(self, callback):
        """These callbacks are called *after* each timestep is performed.

        The callbacks are passed the solver instance (i.e. self).

        Example
        -------

        >>> def post_step_callback_function(solver):
        >>>     # This function is called after every time step.
        >>>     print(solver.t, solver.dt)
        >>>     # Do something
        >>> solver.add_post_step_callback(post_step_callback_function)
        """
        self.post_step_callbacks.append(callback)

    def add_pre_step_callback(self, callback):
        """These callbacks are called *before* each timestep is performed.

        The callbacks are passed the solver instance (i.e. self).

        Example
        -------

        >>> def pre_step_callback_function(solver):
        >>>     # This function is called before every time step.
        >>>     print(solver.t, solver.dt)
        >>>     # Do something
        >>> solver.add_pre_step_callback(pre_step_callback_function)
        """
        self.pre_step_callbacks.append(callback)

    def append_particle_arrrays(self, arrays):
        """ Append the particle arrays to the existing particle arrays
        """
        if not self.particles:
            print('Warning! Particles not defined.')
            return

        for array in self.particles:
            array_name = array.name
            for arr in arrays:
                if array_name == arr.name:
                    array.append_parray(arr)

        self.setup(self.particles)

    def reorder_particles(self):
        """Re-order particles so as to coalesce memory access.
        """
        for i in range(len(self.particles)):
            self.nnps.spatially_order_particles(i)
        # We must update after the reorder.
        self.nnps.update()

    def set_adaptive_timestep(self, value):
        """Set it to True to use adaptive timestepping based on cfl, viscous
        and force factor.

        Look at pysph.sph.integrator.compute_time_step for more details.
        """
        self.adaptive_timestep = value

    def set_cfl(self, value):
        'Set the CFL number for adaptive time stepping'
        self.cfl = value

    def set_final_time(self, tf):
        """ Set the final time for the simulation """
        self.tf = tf
        self._epsilon = EPSILON*tf

    def set_n_damp(self, ndamp):
        """Set the number of timesteps for which the timestep should be
        initially damped.
        """
        self.n_damp = ndamp

    def set_time_step(self, dt):
        """ Set the time step to use """
        self.dt = dt

    def set_print_freq(self, n):
        """ Set the output print frequency """
        self.pfreq = n

    def set_disable_output(self, value):
        """Disable file output.
        """
        self.disable_output = value

    def set_arrays_to_print(self, array_names=None):
        """Only print the arrays with the given names.
        """
        available_arrays = [array.name for array in self.particles]

        if array_names:
            for name in array_names:
                if name not in available_arrays:
                    raise RuntimeError("Array %s not available" % (name))

                for arr in self.particles:
                    if arr.name == name:
                        array = arr
                        break
                self.arrays_to_print.append(array)
        else:
            self.arrays_to_print = self.particles

    def set_output_fname(self, fname):
        """ Set the output file name """
        self.fname = fname

    def set_output_printing_level(self, detailed_output):
        """ Set the output printing level """
        self.detailed_output = detailed_output

    def set_output_only_real(self, output_only_real):
        """ Set the flag to save out only real particles """
        self.output_only_real = output_only_real

    def set_output_directory(self, path):
        """ Set the output directory """
        self.output_directory = path

    def set_output_at_times(self, output_at_times):
        """ Set a list of output times """
        self.output_at_times = numpy.asarray(output_at_times)

    def set_max_steps(self, max_steps):
        """Set the maximum number of iterations to perform.
        """
        self.max_steps = max_steps

    def set_compress_output(self, compress):
        """Compress the dumped output files.
        """
        self.compress_output = compress

    def set_parallel_output_mode(self, mode="collected"):
        """Set the default solver dump mode in parallel.

        The available modes are:

        collected : Collect array data from all processors on root and
                    dump a single file.

        distributed : Each processor dumps a file locally.
        """
        assert mode in ("collected", "distributed")
        self.parallel_output_mode = mode

    def set_command_handler(self, callable, command_interval=1):
        """ Set the `callable` to be called at every `command_interval`
        iteration.

        The `callable` is called with the solver instance as an argument.
        """
        self.execute_commands = callable
        self.command_interval = command_interval

    def set_parallel_manager(self, pm):
        self.pm = pm

    def set_reorder_freq(self, freq):
        """Set the reorder frequency in number of iterations.
        """
        self.reorder_freq = freq

    def barrier(self):
        if self.comm:
            self.comm.barrier()

    def solve(self, show_progress=True):
        """ Solve the system

        Notes
        -----
        Pre-stepping functions are those that need to be called before
        the integrator is called.

        Similarly, post step functions are those that are called after
        the stepping within the integrator.
        """
        if self.in_parallel:
            show = False
        else:
            show = show_progress
        bar = ProgressBar(self.t, self.tf, show=show)
        self._epsilon = EPSILON*self.tf

        # Initial solution
        self.dump_output()
        self.barrier()  # everybody waits for this to complete

        reorder_freq = self.reorder_freq
        if reorder_freq > 0:
            self.reorder_particles()

        # Compute the accelerations once for the predictor corrector
        # integrator to work correctly at the first time step.
        self.integrator.initial_acceleration(self.t, self.dt)

        # Now get a suitable adaptive (if requested) and damped timestep to
        # integrate with.
        self.dt = self._get_timestep()

        while (self.tf - self.t) > self._epsilon and \
                (self.count < self.max_steps):

            # perform any pre step functions
            for callback in self.pre_step_callbacks:
                callback(self)

            if self.rank == 0:
                logger.debug(
                    "Iteration=%d, time=%f, timestep=%f" %
                    (self.count, self.t, self.dt)
                )
            # perform the integration and update the time.
            # print('Solver Iteration', self.count, self.dt, self.t)
            self.integrator.step(self.t, self.dt)

            # perform any post step functions
            for callback in self.post_step_callbacks:
                callback(self)

            # update time and iteration counters if successfully integrated
            self.t += self.dt
            self.count += 1
            self._epsilon = EPSILON*self.tf*self.count

            # Compute the next timestep.
            self.dt = self._get_timestep()

            # Note: this may adjust dt to land at a desired time.
            self._dump_output_if_needed()

            # update progress bar
            bar.update(self.t)

            # update the time for all arrays
            self.update_particle_time()

            if reorder_freq > 0 and (self.count % reorder_freq == 0):
                self.reorder_particles()

            if self.execute_commands is not None:
                if self.count % self.command_interval == 0:
                    self.execute_commands(self)

        # close the progress bar
        bar.finish()

        # final output save
        self.dump_output()

    def update_particle_time(self):
        for array in self.particles:
            array.set_time(self.t)

    def dump_output(self):
        """Dump the simulation results to file

        The arrays used for printing are determined by the particle array's
        `output_property_arrays` data attribute.  For debugging it is
        sometimes nice to have all the arrays (including accelerations)
        saved.  This can be chosen using the command line option
        `--detailed-output`.

        Output data format:

        A single file named as: <fname>_<iteration_count>.npz

        The data is saved as a Python dictionary with two keys:

        `solver_data` : Solver meta data like time, dt and iteration number

        `arrays` : A dictionary keyed on particle array names and with
        particle properties as value.

        Example:

        You can load the data output by PySPH like so:

        >>> from pysph.solver.utils import load
        >>> data = load('output_directory/filename_x_xxx.npz')
        >>> solver_data = data['solver_data']
        >>> arrays = data['arrays']
        >>> fluid = arrays['fluid']
        >>> ...

        In the above example, it is assumed that the output file contained
        an array named fluid.

        """
        if self.disable_output:
            return

        if self.rank == 0:
            msg = 'Writing output at time %g, iteration %d, dt = %g' % (
                self.t, self.count, self.dt)
            logger.info(msg)

        fname = os.path.join(self.output_directory,
                             self.fname + '_' + str(self.count))
        comm = None
        if self.parallel_output_mode == "collected" and self.in_parallel:
            comm = self.comm

        dump(fname, self.particles, self._get_solver_data(),
             detailed_output=self.detailed_output,
             only_real=self.output_only_real, mpi_comm=comm,
             compress=self.compress_output)

    def load_output(self, count):
        """Load particle data from dumped output file.

        Parameters
        ----------

        count : str
            The iteration count from which to load the data. If count is
            '?' then the list of available data files is returned, else
            the corresponding data file is used.

        Notes
        -----

        Data is loaded from the :py:attr:`output_directory` using the same
        format as stored by the :py:meth:`dump_output` method.  Proper
        functioning requires that all the relevant properties of arrays be
        dumped.

        """
        # get the list of available files
        available_files = [i.rsplit('_', 1)[1][:-4]
                           for i in os.listdir(self.output_directory)
                           if i.startswith(self.fname) and i.endswith('.npz')]

        if count == '?':
            return sorted(set(available_files), key=int)

        else:
            if count not in available_files:
                msg = "File with iteration count `%s` does not exist" % count
                msg += "\nValid iteration counts are %s" % (
                    sorted(set(available_files), key=int))
                raise IOError(msg)

        array_names = [pa.name for pa in self.particles]

        # load the output file
        data = load(os.path.join(self.output_directory,
                                 self.fname + '_' + str(count) + '.npz'))

        arrays = [data["arrays"][i] for i in array_names]

        # set the Particle's arrays
        self.particles = arrays

        solver_data = data['solver_data']

        self.t = float(solver_data['t'])
        self.dt = float(solver_data['dt'])
        self.count = int(solver_data['count'])

    def get_options(self, arg_parser):
        """ Implement this to add additional options for the application """
        pass

    def setup_solver(self, options=None):
        """ Implement the basic solvers here

        All subclasses of Solver may implement this function to add the
        necessary operations for the problem at hand.

        Parameters
        ----------

        options : dict
            options set by the user using commandline (there is no
            guarantee of existence of any key)
        """
        pass

    ##########################################################################
    # Non-public interface.
    ##########################################################################
    def _compute_timestep(self):
        undamped_dt = self._get_undamped_timestep()
        if self.adaptive_timestep:
            # locally stable time step
            dt = self.integrator.compute_time_step(undamped_dt, self.cfl)
            # set the globally stable time step across all processors
            if self.in_parallel:
                if dt is None:
                    # For some reason this processor does not have an
                    # adaptive timestep constraint so we set it to a large
                    # number so the timestep is determined by the other
                    # processors.
                    dt = 1e20
                dt = self.pm.update_time_steps(dt)
            else:
                if dt is None:
                    dt = undamped_dt
        else:
            dt = undamped_dt
        return dt

    def _damp_timestep(self, dt):
        """Damp the timestep initially to prevent transient errors at
        startup.

        This basically damps the initial timesteps by the factor

            0.5*(sin(pi*(-0.5 + count/n_damp)) + 1)

        where n_damp is the number of iterations to damp the timestep for
        and count is the number of iterations.
        """
        n_damp = self.n_damp
        if self.count < n_damp and n_damp > 0:
            iter_fraction = (self.count + 1)/float(n_damp)
            fac = 0.5*(numpy.sin(numpy.pi*(-0.5 + iter_fraction)) + 1.0)
            self._damping_factor = fac
        else:
            self._damping_factor = 1.0
        return dt*self._damping_factor

    def _dump_output_if_needed(self):
        """Dump output if needed while solve is running.

        This is called by `solve`.

        Warning
        -------

        This will adjust `dt` if the user has asked for output at a
        non-integral multiple of dt.

        """
        if abs(self.t - self.tf) < self._epsilon:
            return

        # dump output if the iteration number is a multiple of the printing
        # frequency.
        dump = self.count % self.pfreq == 0

        # Consider the other cases if user has requested output at a
        # specified time.
        output_at_times = self.output_at_times
        dt = self.dt

        # adjust dt to land on specific output times or dump output if we
        # have reached a desired time.
        if len(output_at_times) > 0:
            tdiff = output_at_times - self.t
            if numpy.any(numpy.abs(tdiff) < self._epsilon):
                dump = True

            # Our next step may exceed a required timestep so we adjust the
            # timestep.
            timestep_too_big = (tdiff > 0.0) & (tdiff < dt)
            if numpy.any(timestep_too_big):
                index = numpy.where(timestep_too_big)[0]
                output_time = output_at_times[index]
                if abs(output_time - self.t) > self._epsilon:
                    # It sometimes happens that the current time is just
                    # shy of the requested output time which results in a
                    # ridiculously small dt so we skip that case.

                    # Compute the new time-step to fall on the specified
                    # output time instant and save the previous dt value.
                    self._prev_dt = dt
                    self.dt = float(output_time - self.t)

        if dump:
            self.dump_output()
            self.barrier()

    def _get_solver_data(self):
        if self._prev_dt is not None:
            dt = self._prev_dt/self._damping_factor
        else:
            dt = self._get_undamped_timestep()
        return {'dt': dt, 't': self.t, 'count': self.count}

    def _get_timestep(self):
        if abs(self.tf - self.t) < self._epsilon:
            # We have reached the end, so no need to adjust the timestep
            # anymore.
            return self.dt
        if self._prev_dt is not None and \
                abs(self._prev_dt - self.dt) > self._epsilon:
            # if the _prev_dt was set then we need to use it as the current
            # dt was set to print at an intermediate time.
            self.dt = self._prev_dt
            self._prev_dt = None
        dt = self._compute_timestep()
        dt = self._damp_timestep(dt)
        # adjust dt to land exactly on final time
        if (self.t + dt) > (self.tf - self._epsilon):
            dt = self.tf - self.t
        return dt

    def _get_undamped_timestep(self):
        return self.dt/self._damping_factor

    def _post_stage_callback(self, time, dt, stage):
        for callback in self.post_stage_callbacks:
            callback(time, dt, stage)


# ======================================================================
# pysph/solver/solver_interfaces.py
# ======================================================================

import threading
import os
import socket

try:
    from SimpleXMLRPCServer import (SimpleXMLRPCServer,
                                    SimpleXMLRPCRequestHandler)
    from SimpleHTTPServer import SimpleHTTPRequestHandler
except ImportError:
    # Python 3.x
    from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
    from http.server import SimpleHTTPRequestHandler

from multiprocessing.managers import BaseManager, BaseProxy


def get_authkey_bytes(authkey):
    if isinstance(authkey, bytes):
        return authkey
    else:
        return authkey.encode('utf-8')


class MultiprocessingInterface(BaseManager):
    """ A multiprocessing interface to the solver controller

    This object exports a controller instance proxy over the
    multiprocessing interface.
Control actions can be performed by connecting to the interface and calling methods on the controller proxy instance """ def __init__(self, address=None, authkey=None, try_next_port=False): authkey = get_authkey_bytes(authkey) BaseManager.__init__(self, address, authkey) self.authkey = authkey self.try_next_port = try_next_port def get_controller(self): return self.controller def start(self, controller): self.controller = controller self.register('get_controller', self.get_controller) if not self.try_next_port: self.get_server().serve_forever() host, port = self.address while self.try_next_port: try: BaseManager.__init__(self, (host, port), self.authkey) self.get_server().serve_forever() self.try_next_port = False except socket.error as e: try_next_port = False import errno if e.errno == errno.EADDRINUSE: port += 1 else: raise class MultiprocessingClient(BaseManager): """ A client for the multiprocessing interface Override the run() method to do appropriate actions on the proxy instance of the controller object or add an interface using the add_interface methods similar to the Controller.add_interface method """ def __init__(self, address=None, authkey=None, serializer='pickle', start=True): authkey = get_authkey_bytes(authkey) BaseManager.__init__(self, address, authkey, serializer) if start: self.start() def start(self, connect=True): self.interfaces = [] # to work around a python caching bug # http://stackoverflow.com/questions/3649458/broken-pipe-when-using-python-multiprocessing-managers-basemanager-syncmanager if self.address in BaseProxy._address_to_local: del BaseProxy._address_to_local[self.address][0].connection self.register('get_controller') if connect: self.connect() self.controller = self.get_controller() self.run(self.controller) @staticmethod def is_available(address): try: socket.create_connection(address, 1).close() return True except socket.error: return False def run(self, controller): pass def add_interface(self, callable): """ This makes it 
act as substitute for the actual command_manager """ thr = threading.Thread(target=callable, args=(self.controller,)) thr.daemon = True thr.start() return thr class CrossDomainXMLRPCRequestHandler(SimpleXMLRPCRequestHandler, SimpleHTTPRequestHandler): """ SimpleXMLRPCRequestHandler subclass which attempts to do CORS CORS is Cross-Origin-Resource-Sharing (http://www.w3.org/TR/cors/) which enables xml-rpc calls from a different domain than the xml-rpc server (such requests are otherwise denied) """ def do_OPTIONS(self): """ Implement the CORS pre-flighted access for resources """ self.send_response(200) self.send_header("Access-Control-Allow-Origin", "*") self.send_header("Access-Control-Allow-METHODS", "POST,GET,OPTIONS") # self.send_header("Access-Control-Max-Age", "60") self.send_header("Content-length", "0") self.end_headers() def do_GET(self): """ Handle http requests to serve html/image files only """ print(self.path, self.translate_path(self.path)) permitted_extensions = ['.html', '.png', '.svg', '.jpg', '.js'] if not os.path.splitext(self.path)[1] in permitted_extensions: self.send_error(404, 'File Not Found/Allowed') else: SimpleHTTPRequestHandler.do_GET(self) def end_headers(self): """ End response header with adding Access-Control-Allow-Origin This is done to enable CORS request from all clients """ self.send_header("Access-Control-Allow-Origin", "*") SimpleXMLRPCRequestHandler.end_headers(self) class XMLRPCInterface(SimpleXMLRPCServer): """ An XML-RPC interface to the solver controller Currently cannot work with objects which cannot be marshalled (which is basically most custom classes, most importantly ParticleArray and numpy arrays) """ def __init__(self, addr, requestHandler=CrossDomainXMLRPCRequestHandler, logRequests=True, allow_none=True, encoding=None, bind_and_activate=True): SimpleXMLRPCServer.__init__(self, addr, requestHandler, logRequests, allow_none, encoding, bind_and_activate) def start(self, controller): self.register_instance(controller, 
allow_dotted_names=False) self.register_introspection_functions() self.serve_forever() class CommandlineInterface(object): """ command-line interface to the solver controller """ def start(self, controller): while True: try: try: inp = raw_input('pysph[%d]>>> ' % controller.get('count')) except NameError: inp = input('pysph[%d]>>> ' % controller.get('count')) cmd = inp.strip().split() try: cmd, args = cmd[0], cmd[1:] except Exception as e: print('Invalid command') self.help() continue args2 = [] for arg in args: try: arg = eval(arg) except: pass finally: args2.append(arg) if cmd == 'p' or cmd == 'pause': controller.pause_on_next() elif cmd == 'c' or cmd == 'cont': controller.cont() elif cmd == 'g' or cmd == 'get': print(controller.get(args[0])) elif cmd == 's' or cmd == 'set': print(controller.set(args[0], args2[1])) elif cmd == 'q' or cmd == 'quit': break else: print(getattr(controller, cmd)(*args2)) except Exception as e: self.help() print(e) def help(self): print('''Valid commands are: p | pause c | cont g | get s | set q | quit -- quit commandline interface (solver keeps running)''') pysph-master/pysph/solver/tests/000077500000000000000000000000001356347341600173455ustar00rootroot00000000000000pysph-master/pysph/solver/tests/__init__.py000066400000000000000000000000001356347341600214440ustar00rootroot00000000000000pysph-master/pysph/solver/tests/test_application.py000066400000000000000000000057461356347341600232750ustar00rootroot00000000000000# Author: Anshuman Kumar try: # This is for Python-2.6.x from unittest2 import TestCase except ImportError: from unittest import TestCase try: from unittest import mock except ImportError: import mock import os import shutil import sys from tempfile import mkdtemp from pysph.solver.application import Application from pysph.solver.solver import Solver class MockApp(Application): @mock.patch('pysph.solver.application.in_parallel', return_value=False) def __init__(self, mock_in_parallel, *args, **kw): super(MockApp, 
self).__init__(*args, **kw)

    def add_user_options(self, group):
        group.add_argument(
            "--testarg", action="store", type=float, dest="testarg",
            default=int(10.0), help="Test Argument"
        )

    def consume_user_options(self):
        self.testarg = self.options.testarg

    def create_particles(self):
        return []

    def create_equations(self):
        return []

    def create_solver(self):
        solver = Solver()
        solver.particles = []
        solver.solve = mock.Mock()
        solver.setup = mock.Mock()
        return solver

    def create_nnps(self):
        nnps = mock.Mock()
        return nnps


class TestApplication(TestCase):
    def setUp(self):
        self.output_dir = mkdtemp()
        self.app = MockApp(output_dir=self.output_dir)

    def tearDown(self):
        if sys.platform.startswith('win'):
            try:
                shutil.rmtree(self.output_dir)
            except WindowsError:
                pass
        else:
            shutil.rmtree(self.output_dir)

    def test_user_options_when_args_are_not_passed(self):
        # Given
        app = self.app

        # When
        args = []
        app.run(args)

        # Then
        self.assertEqual(app.comm, None)
        expected = 10.0
        error_message = "Expected %f, got %f" % (expected, app.testarg)
        self.assertEqual(expected, app.testarg, error_message)

    def test_user_options_when_args_are_passed(self):
        # Given
        app = self.app

        # When
        args = ['--testarg', '20']
        app.run(args)

        # Then
        expected = 20.0
        error_message = "Expected %f, got %f" % (expected, app.testarg)
        self.assertEqual(expected, app.testarg, error_message)

    def test_output_dir_when_moved_and_read_info_called(self):
        # Given
        app = self.app
        args = ['-d', app.output_dir]
        app.run(args)
        copy_root = mkdtemp()
        copy_dir = os.path.join(copy_root, 'new')
        shutil.copytree(app.output_dir, copy_dir)
        self.addCleanup(shutil.rmtree, copy_root)
        orig_fname = app.fname

        # When
        app = MockApp()
        app.read_info(copy_dir)

        # Then
        realpath = os.path.realpath
        assert realpath(app.output_dir) != realpath(self.output_dir)
        assert realpath(app.output_dir) == realpath(copy_dir)
        assert app.fname == orig_fname

pysph-master/pysph/solver/tests/test_solver.py

try:
    # This is for Python-2.6.x
    from unittest2 import TestCase, main
except ImportError:
    from unittest import TestCase, main

try:
    from unittest import mock
except ImportError:
    import mock

import numpy as np
import numpy.testing as npt

from pysph.solver.solver import Solver


class TestSolver(TestCase):
    def setUp(self):
        patcher = mock.patch(
            'pysph.sph.acceleration_eval.AccelerationEval', spec=True
        )
        AccelerationEval = patcher.start()
        self.a_eval = AccelerationEval()
        self.addCleanup(patcher.stop)
        patcher = mock.patch(
            'pysph.sph.integrator.PECIntegrator', spec=True
        )
        PECIntegrator = patcher.start()
        self.integrator = PECIntegrator()
        self.addCleanup(patcher.stop)

    def test_solver_dumps_output_given_output_at_times(self):
        # Given
        dt = 0.1
        self.integrator.compute_time_step.return_value = dt
        tf = 10.05
        pfreq = 5
        output_at_times = [0.3, 0.35]
        solver = Solver(
            integrator=self.integrator, tf=tf, dt=dt,
            output_at_times=output_at_times
        )
        solver.set_print_freq(pfreq)
        solver.acceleration_evals = [self.a_eval]
        solver.particles = []

        # When
        record = []
        record_dt = []

        def _mock_dump_output():
            # Record the time at which the solver dumped anything
            record.append(solver.t)
            # This smells but ...
            sd = solver._get_solver_data()
            record_dt.append(sd['dt'])

        solver.dump_output = mock.Mock(side_effect=_mock_dump_output)
        solver.solve(show_progress=False)

        # Then
        expected = np.asarray(
            [0.0, 0.3, 0.35] + np.arange(0.45, 10.1, 0.5).tolist() + [10.05]
        )
        error_message = "Expected %s, got %s" % (expected, record)
        self.assertEqual(len(expected), len(record), error_message)
        self.assertTrue(
            np.max(np.abs(expected - record)) < 1e-12, error_message
        )
        self.assertEqual(101, solver.count)
        # The final timestep should not be a tiny one due to roundoff.
        self.assertTrue(solver.dt > 0.1*0.25)
        npt.assert_array_almost_equal(
            [0.1]*len(record_dt), record_dt, decimal=12
        )

    def test_solver_honors_set_time_step(self):
        # Given
        dt = 0.1
        tf = 1.0
        pfreq = 1
        solver = Solver(
            integrator=self.integrator, tf=tf, dt=dt,
            adaptive_timestep=False
        )
        solver.set_print_freq(pfreq)
        solver.acceleration_evals = [self.a_eval]
        solver.particles = []
        record = []

        def _mock_dump_output():
            # Record the time at which the solver dumped anything
            record.append(solver.t)

        solver.dump_output = mock.Mock(side_effect=_mock_dump_output)

        # When
        solver.set_time_step(0.2)
        solver.solve(show_progress=False)

        # Then
        expected = np.asarray([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
        error_message = "Expected %s, got %s" % (expected, record)
        self.assertEqual(len(expected), len(record), error_message)
        self.assertTrue(
            np.max(np.abs(expected - record)) < 1e-12, error_message
        )


if __name__ == '__main__':
    main()

pysph-master/pysph/solver/tests/test_solver_utils.py

import numpy as np
import shutil
import os
from os.path import join
from tempfile import mkdtemp

from pysph import has_h5py

try:
    # This is for Python-2.6.x
    from unittest2 import TestCase, main, skipUnless
except ImportError:
    from unittest import TestCase, main, skipUnless

from pysph.base.utils import get_particle_array, get_particle_array_wcsph
from pysph.solver.utils import dump, load, dump_v1, get_files


class TestGetFiles(TestCase):
    def setUp(self):
        self.root = mkdtemp()
        self.fname = 'dam_break_2d'
        self.dirname = join(self.root, self.fname + '_output')
        os.mkdir(self.dirname)
        self.files = [
            join(
                self.dirname, self.fname + '_' + str(i) + '.npz'
            ) for i in range(11)
        ]
        for name in self.files:
            with open(name, 'w') as fp:
                fp.write('')

    def test_get_files(self):
        self.assertEqual(get_files(self.dirname), self.files)
        self.assertEqual(get_files(self.dirname, fname=self.fname),
                         self.files)
        self.assertEqual(
            get_files(
                self.dirname, fname=self.fname,
                endswith=('npz', 'hdf5')
            ),
            self.files
        )

    def tearDown(self):
        shutil.rmtree(self.root)


class TestOutputNumpy(TestCase):
    def setUp(self):
        self.root = mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.root)

    def _get_filename(self, fname):
        return join(self.root, fname) + '.npz'

    def test_dump_and_load_works_by_default(self):
        x = np.linspace(0, 1.0, 10)
        y = x*2.0
        dt = 1.0
        pa = get_particle_array(name='fluid', x=x, y=y)
        fname = self._get_filename('simple')
        dump(fname, [pa], solver_data={'dt': dt})
        data = load(fname)
        solver_data = data['solver_data']
        arrays = data['arrays']
        pa1 = arrays['fluid']
        self.assertListEqual(list(solver_data.keys()), ['dt'])
        self.assertListEqual(list(sorted(pa.properties.keys())),
                             list(sorted(pa1.properties.keys())))
        self.assertTrue(np.allclose(pa.x, pa1.x, atol=1e-14))
        self.assertTrue(np.allclose(pa.y, pa1.y, atol=1e-14))

    def test_dump_and_load_works_with_compress(self):
        x = np.linspace(0, 1.0, 10)
        y = x*2.0
        dt = 1.0
        pa = get_particle_array(name='fluid', x=x, y=y)
        fname = self._get_filename('simple')
        dump(fname, [pa], solver_data={'dt': dt})
        fnamez = self._get_filename('simplez')
        dump(fnamez, [pa], solver_data={'dt': dt}, compress=True)
        # Check that the file size is indeed smaller
        self.assertTrue(os.stat(fnamez).st_size < os.stat(fname).st_size)
        data = load(fnamez)
        solver_data = data['solver_data']
        arrays = data['arrays']
        pa1 = arrays['fluid']
        self.assertListEqual(list(solver_data.keys()), ['dt'])
        self.assertListEqual(list(sorted(pa.properties.keys())),
                             list(sorted(pa1.properties.keys())))
        self.assertTrue(np.allclose(pa.x, pa1.x, atol=1e-14))
        self.assertTrue(np.allclose(pa.y, pa1.y, atol=1e-14))

    def test_dump_and_load_with_partial_data_dump(self):
        x = np.linspace(0, 1.0, 10)
        y = x*2.0
        pa = get_particle_array_wcsph(name='fluid', x=x, y=y)
        pa.set_output_arrays(['x', 'y'])
        fname = self._get_filename('simple')
        dump(fname, [pa], solver_data={})
        data = load(fname)
        arrays = data['arrays']
        pa1 = arrays['fluid']
        self.assertListEqual(list(sorted(pa.properties.keys())),
                             list(sorted(pa1.properties.keys())))
        self.assertTrue(np.allclose(pa.x, pa1.x, atol=1e-14))
        self.assertTrue(np.allclose(pa.y, pa1.y, atol=1e-14))

    def test_dump_and_load_with_constants(self):
        x = np.linspace(0, 1.0, 10)
        y = x*2.0
        pa = get_particle_array_wcsph(name='fluid', x=x, y=y,
                                      constants={'c1': 1.0,
                                                 'c2': [2.0, 3.0]})
        pa.add_property('A', data=2.0, stride=2)
        pa.set_output_arrays(['x', 'y', 'A'])
        fname = self._get_filename('simple')
        dump(fname, [pa], solver_data={})
        data = load(fname)
        arrays = data['arrays']
        pa1 = arrays['fluid']
        self.assertListEqual(list(sorted(pa.properties.keys())),
                             list(sorted(pa1.properties.keys())))
        self.assertListEqual(list(sorted(pa.constants.keys())),
                             list(sorted(pa1.constants.keys())))
        self.assertTrue(np.allclose(pa.x, pa1.x, atol=1e-14))
        self.assertTrue(np.allclose(pa.y, pa1.y, atol=1e-14))
        self.assertTrue(np.allclose(pa.A, pa1.A, atol=1e-14))
        self.assertTrue(np.allclose(pa.c1, pa1.c1, atol=1e-14))
        self.assertTrue(np.allclose(pa.c2, pa1.c2, atol=1e-14))

    def test_that_output_array_information_is_saved(self):
        # Given
        x = np.linspace(0, 1.0, 10)
        y = x*2.0
        pa = get_particle_array(name='fluid', x=x, y=y, u=3*x)

        # When
        output_arrays = ['x', 'y', 'u']
        pa.set_output_arrays(output_arrays)
        fname = self._get_filename('simple')
        dump(fname, [pa], solver_data={})
        data = load(fname)
        pa1 = data['arrays']['fluid']

        # Then.
        self.assertEqual(set(pa.output_property_arrays),
                         set(output_arrays))
        self.assertEqual(set(pa1.output_property_arrays),
                         set(output_arrays))


class TestOutputHdf5(TestOutputNumpy):
    @skipUnless(has_h5py(), "h5py module is not present")
    def setUp(self):
        super(TestOutputHdf5, self).setUp()

    def _get_filename(self, fname):
        return join(self.root, fname) + '.hdf5'


class TestOutputNumpyV1(TestCase):
    def setUp(self):
        self.root = mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.root)

    def _get_filename(self, fname):
        return join(self.root, fname) + '.npz'

    def test_load_works_with_dump_version1(self):
        x = np.linspace(0, 1.0, 10)
        y = x*2.0
        pa = get_particle_array(name='fluid', x=x, y=y)
        fname = self._get_filename('simple')
        dump_v1(fname, [pa], solver_data={})
        data = load(fname)
        arrays = data['arrays']
        pa1 = arrays['fluid']
        self.assertListEqual(list(sorted(pa.properties.keys())),
                             list(sorted(pa1.properties.keys())))
        self.assertTrue(np.allclose(pa.x, pa1.x, atol=1e-14))
        self.assertTrue(np.allclose(pa.y, pa1.y, atol=1e-14))


if __name__ == '__main__':
    main()

pysph-master/pysph/solver/tools.py

class Tool(object):
    """A tool is typically an object that can be used to perform a specific
    task on the solver's pre_step/post_step or post_stage callbacks.  This
    can be used for a variety of things.  For example, one could save a
    plot, print debug statistics or perform remeshing etc.

    To create a new tool, simply subclass this class and overload any of
    its desired methods.
    """

    def pre_step(self, solver):
        """If overloaded, this is called automatically before each
        integrator step.  The method is passed the solver instance.
        """
        pass

    def post_stage(self, current_time, dt, stage):
        """If overloaded, this is called automatically after each integrator
        stage, i.e. if the integrator is a two stage integrator it will be
        called after the first and second stages.

        The method is passed (current_time, dt, stage).
        See the `Integrator.one_timestep` method for examples of how this
        is called.
        """
        pass

    def post_step(self, solver):
        """If overloaded, this is called automatically after each integrator
        step.  The method is passed the solver instance.
        """
        pass


class SimpleRemesher(Tool):
    """A simple tool to periodically remesh a given array of particles onto
    an initial set of points.
    """

    def __init__(self, app, array_name, props, freq=100, xi=None, yi=None,
                 zi=None, kernel=None, equations=None):
        """Constructor.

        Parameters
        ----------

        app : pysph.solver.application.Application
            The application instance.
        array_name: str
            Name of the particle array that needs to be remeshed.
        props : list(str)
            List of properties to interpolate.
        freq : int
            Frequency of remeshing operation.
        xi, yi, zi : ndarray
            Positions to remesh the properties onto.  If not specified they
            are taken from the particle arrays at the time of construction.
        kernel: any kernel from pysph.base.kernels
        equations: list or None
            Equations to use for the interpolation, passed to the
            interpolator.
        """
        from pysph.solver.utils import get_array_by_name
        self.app = app
        self.particles = app.particles
        self.array = get_array_by_name(self.particles, array_name)
        self.props = props
        if xi is None:
            xi = self.array.x
        if yi is None:
            yi = self.array.y
        if zi is None:
            zi = self.array.z
        self.xi, self.yi, self.zi = xi.copy(), yi.copy(), zi.copy()
        self.freq = freq
        from pysph.tools.interpolator import Interpolator
        if kernel is None:
            kernel = app.solver.kernel
        self.interp = Interpolator(
            self.particles, x=self.xi, y=self.yi, z=self.zi,
            kernel=kernel,
            domain_manager=app.create_domain(),
            equations=equations
        )

    def post_step(self, solver):
        if solver.count % self.freq == 0 and solver.count > 0:
            self.interp.nnps.update()
            data = dict(x=self.xi, y=self.yi, z=self.zi)
            for prop in self.props:
                data[prop] = self.interp.interpolate(prop)
            self.array.set(**data)
            self.interp.nnps.update_domain()


class DensityCorrection(Tool):
    """A tool to reinitialize the density of the fluid particles.
    """

    def __init__(self, app, arr_names, corr='shepard', freq=10,
                 kernel=None):
        """
        Parameters
        ----------

        app : pysph.solver.application.Application
            The application instance.
        arr_names : array
            Names of the particle arrays whose densities need to be
            reinitialized.
        corr : str
            Name of the density reinitialization operation.
            corr='shepard' for using the zeroth order Shepard filter.
        freq : int
            Frequency of reinitialization.
        kernel: any kernel from pysph.base.kernels
        """
        from pysph.solver.utils import get_array_by_name
        self.freq = freq
        self.corr = corr
        self.names = arr_names
        self.count = 1
        self._sph_eval = None
        self.kernel = kernel
        self.dim = app.solver.dim
        self.particles = app.particles
        self.arrs = [get_array_by_name(self.particles, i)
                     for i in self.names]
        options = ['shepard', 'mls2d_1', 'mls3d_1']
        assert self.corr in options, 'corr should be one of %s' % options

    def _get_sph_eval_shepard(self):
        from pysph.sph.wc.density_correction import ShepardFilter
        from pysph.tools.sph_evaluator import SPHEvaluator
        from pysph.sph.equation import Group
        if self._sph_eval is None:
            arrs = self.arrs
            eqns = []
            for arr in arrs:
                name = arr.name
                arr.add_property('rhotmp')
                eqns.append(Group(equations=[
                    ShepardFilter(name, [name])], real=False))
            sph_eval = SPHEvaluator(
                arrays=arrs, equations=eqns, dim=self.dim,
                kernel=self.kernel(dim=self.dim))
            return sph_eval
        else:
            return self._sph_eval

    def _get_sph_eval_mls2d_1(self):
        from pysph.sph.wc.density_correction import MLSFirstOrder2D
        from pysph.tools.sph_evaluator import SPHEvaluator
        from pysph.sph.equation import Group
        if self._sph_eval is None:
            arrs = self.arrs
            eqns = []
            for arr in arrs:
                name = arr.name
                arr.add_property('rhotmp')
                eqns.append(Group(equations=[
                    MLSFirstOrder2D(name, [name])], real=False))
            sph_eval = SPHEvaluator(
                arrays=arrs, equations=eqns, dim=self.dim,
                kernel=self.kernel(dim=self.dim))
            return sph_eval
        else:
            return self._sph_eval

    def _get_sph_eval_mls3d_1(self):
        from pysph.sph.wc.density_correction import MLSFirstOrder3D
        from pysph.tools.sph_evaluator import SPHEvaluator
        from pysph.sph.equation import Group
        if self._sph_eval is None:
            arrs = self.arrs
            eqns = []
            for arr in arrs:
                name = arr.name
                arr.add_property('rhotmp')
                eqns.append(Group(equations=[
                    MLSFirstOrder3D(name, [name])], real=False))
            sph_eval = SPHEvaluator(
                arrays=arrs, equations=eqns, dim=self.dim,
                kernel=self.kernel(dim=self.dim))
            return sph_eval
        else:
            return self._sph_eval

    def _get_sph_eval(self, corr):
        if corr == 'shepard':
            return self._get_sph_eval_shepard()
        elif corr == 'mls2d_1':
            return self._get_sph_eval_mls2d_1()
        elif corr == 'mls3d_1':
            return self._get_sph_eval_mls3d_1()
        else:
            pass

    def post_step(self, solver):
        if self.freq == 0:
            pass
        elif self.count % self.freq == 0:
            self._sph_eval = self._get_sph_eval(self.corr)
            self._sph_eval.update()
            self._sph_eval.evaluate()
        self.count += 1

pysph-master/pysph/solver/utils.py

"""
Module contains some common functions.
"""

# standard imports
import os
import sys
import time

import numpy

import pysph
from pysph.solver.output import load, dump, output_formats  # noqa: 401
from pysph.solver.output import gather_array_data as _gather_array_data

ASCII_FMT = " 123456789#"
try:
    uni_chr = unichr
except NameError:
    uni_chr = chr
UTF_FMT = u" " + u''.join(map(uni_chr, range(0x258F, 0x2587, -1)))


def _supports_unicode(fp):
    # Taken somewhat from the tqdm package.
    if not hasattr(fp, 'encoding'):
        return False
    else:
        encoding = fp.encoding
        try:
            u'\u2588\u2589'.encode(encoding)
        except UnicodeEncodeError:
            return False
        except Exception:
            try:
                return encoding.lower().startswith('utf-') or \
                    ('U8' == encoding)
            except Exception:
                return False
        else:
            return True


def check_array(x, y):
    """Check if two arrays are equal with an absolute tolerance of
    1e-16."""
    return numpy.allclose(x, y, atol=1e-16, rtol=0)


def get_distributed_particles(pa, comm, cell_size):
    # FIXME: this can be removed once the examples all use Application.
    from pysph.parallel.load_balancer import LoadBalancer
    rank = comm.Get_rank()
    num_procs = comm.Get_size()

    if rank == 0:
        lb = LoadBalancer.distribute_particles(pa, num_procs=num_procs,
                                               block_size=cell_size)
    else:
        lb = None

    particles = comm.scatter(lb, root=0)
    return particles


def get_array_by_name(arrays, name):
    """Given a list of arrays and the name of the desired array, return the
    desired array.
""" for array in arrays: if array.name == name: return array def fmt_time(time): mm, ss = divmod(time, 60) hh, mm = divmod(mm, 60) if hh > 0: s = "%d:%02d:%02d" % (hh, mm, ss) else: s = "%02d:%02.1f" % (mm, ss) return s class ProgressBar(object): def __init__(self, ti, tf, show=True, file=None, ascii=False): if file is None: self.file = sys.stdout self.ti = ti self.tf = tf self.t = 0.0 self.dt = 1.0 self.start = time.time() self.count = 0 self.iter_inc = 1 self.show = show self.ascii = ascii if not ascii and not _supports_unicode(self.file): self.ascii = True if not self.file.isatty(): self.show = False self.display() def _fmt_bar(self, percent, width): chars = ASCII_FMT if self.ascii else UTF_FMT nsyms = len(chars) - 1 tens, ones = divmod(int(percent/100 * width * nsyms), nsyms) end = chars[ones] if ones > 0 else '' return (chars[-1]*tens + end).ljust(width) def _fmt_iters(self, iters): if iters < 1e3: s = '%d' % iters elif iters < 1e6: s = '%.1fk' % (iters/1e3) elif iters < 1e9: s = '%.1fM' % (iters/1e6) return s def display(self): if self.show: elapsed = time.time() - self.start if self.t > 0: eta = (self.tf - self.t)/self.t * elapsed else: eta = 0.0 percent = int(round(self.t/self.tf*100)) bar = self._fmt_bar(percent, 20) secsperit = elapsed/self.count if self.count > 0 else 0 out = ('{percent:3}%|{bar}|' ' {iters}it | {time:.1e}s [{elapsed}<{eta} | ' '{secsperit:.3f}s/it]').format( bar=bar, percent=percent, iters=self._fmt_iters(self.count), time=self.t, elapsed=fmt_time(elapsed), eta=fmt_time(eta), secsperit=secsperit ) self.file.write('\r%s' % out.ljust(70)) self.file.flush() def update(self, t, iter_inc=1): '''Set the current time and update the number of iterations. 
''' self.dt = t - self.t self.iter_inc = iter_inc self.count += iter_inc self.t = t self.display() def finish(self): self.display() if self.show: self.file.write('\n') ############################################################################## # friendly mkdir from http://code.activestate.com/recipes/82465/. ############################################################################## def mkdir(newdir): """works the way a good mkdir should :) - already exists, silently complete - regular file in the way, raise an exception - parent directory(ies) does not exist, make them as well """ if os.path.isdir(newdir): pass elif os.path.isfile(newdir): raise OSError("a file with the same name as the desired " "dir, '%s', already exists." % newdir) else: head, tail = os.path.split(newdir) if head and not os.path.isdir(head): mkdir(head) if tail: try: os.mkdir(newdir) # To prevent race in mpi runs except OSError as e: import errno if e.errno == errno.EEXIST and os.path.isdir(newdir): pass else: raise def get_pysph_root(): return os.path.split(pysph.__file__)[0] def dump_v1(filename, particles, solver_data, detailed_output=False, only_real=True, mpi_comm=None): """Dump the given particles and solver data to the given filename using version 1. This is mainly used only for testing that we can continue to load older versions of the data files. """ all_array_data = {} output_data = {"arrays": all_array_data, "solver_data": solver_data} for array in particles: all_array_data[array.name] = array.get_property_arrays( all=detailed_output, only_real=only_real ) # Gather particle data on root if mpi_comm is not None: all_array_data = _gather_array_data(all_array_data, mpi_comm) output_data['arrays'] = all_array_data if mpi_comm is None or mpi_comm.Get_rank() == 0: numpy.savez(filename, version=1, **output_data) def load_and_concatenate(prefix, nprocs=1, directory=".", count=None): """Load the results from multiple files. 
Given a filename prefix and the number of processors, return a concatenated version of the dictionary returned via load. Parameters ---------- prefix : str A filename prefix for the output file. nprocs : int The number of processors (files) to read directory : str The directory for the files count : int The file iteration count to read. If None, the last available one is read """ if count is None: counts = [i.rsplit('_', 1)[1][:-4] for i in os.listdir(directory) if i.startswith(prefix) and i.endswith('.npz')] counts = sorted([int(i) for i in counts]) count = counts[-1] arrays_by_rank = {} for rank in range(nprocs): fname = os.path.join( directory, prefix + '_' + str(rank) + '_' + str(count) + '.npz' ) data = load(fname) arrays_by_rank[rank] = data["arrays"] arrays = _concatenate_arrays(arrays_by_rank, nprocs) data["arrays"] = arrays return data def _concatenate_arrays(arrays_by_rank, nprocs): """Concatenate arrays into one single particle array. """ if nprocs <= 0: return 0 array_names = arrays_by_rank[0].keys() first_processors_arrays = arrays_by_rank[0] if nprocs > 1: ret = {} for array_name in array_names: first_array = first_processors_arrays[array_name] for rank in range(1, nprocs): other_processors_arrays = arrays_by_rank[rank] other_array = other_processors_arrays[array_name] # append the other array to the first array first_array.append_parray(other_array) # remove the non local particles first_array.remove_tagged_particles(1) ret[array_name] = first_array else: ret = arrays_by_rank[0] return ret def get_files(dirname=None, fname=None, endswith=output_formats): """Get all solution files in a given directory, `dirname`. Parameters ---------- dirname: str Name of directory. fname: str An initial part of the filename, if not specified use the first part of the dirname. endswith: str The extension of the file to load. 
""" if dirname is None: return [] path = os.path.abspath(dirname) files = os.listdir(path) if fname is None: fname = os.path.split(path)[1].split('_output')[0] # get all the output files in the directory files = [f for f in files if f.startswith(fname) and f.endswith(endswith)] files = [os.path.join(path, f) for f in files] # sort the files def _key_func(arg): a = os.path.splitext(arg)[0] return int(a[a.rfind('_') + 1:]) files.sort(key=_key_func) return files def iter_output(files, *arrays): """Given an iterable of the solution files, this loads the files, and yields the solver data and the requested arrays. If arrays is not supplied, it returns a dictionary of the arrays. Parameters ---------- files : iterable Iterates over the list of desired files *arrays : strings Optional series of array names of arrays to return. Examples -------- >>> files = get_files('elliptical_drop_output') >>> for solver_data, arrays in iter_output(files): ... print(solver_data['t'], arrays.keys()) >>> files = get_files('elliptical_drop_output') >>> for solver_data, fluid in iter_output(files, 'fluid'): ... print(solver_data['t'], fluid.name) """ for file in files: data = load(file) solver_data = data['solver_data'] if len(arrays) == 0: yield solver_data, data['arrays'] else: _arrays = [data['arrays'][x] for x in arrays] yield [solver_data] + _arrays def _sort_key(arg): a = os.path.splitext(arg)[0] return int(a[a.rfind('_') + 1:]) def remove_irrelevant_files(files): """Remove any npz files that are not output files. That is, the file should not end with a '_number.npz'. This allows users to dump other .npz of .hdf5 files in the output while post-processing without breaking. """ result = [] for f in files: try: _sort_key(f) except ValueError: pass else: result.append(f) return result pysph-master/pysph/solver/vtk_output.py000066400000000000000000000141551356347341600210070ustar00rootroot00000000000000""" Dumps VTK output files. It takes a hdf or npz file as an input and output vtu file. 
""" from pysph import has_tvtk, has_pyvisfile from pysph.solver.output import Output, load, output_formats from pysph.solver.utils import remove_irrelevant_files import numpy as np import argparse import sys import os class VTKOutput(Output): def __init__(self, scalars=None, **vectors): self.set_output_scalar(scalars) self.set_output_vector(**vectors) super(VTKOutput, self).__init__(True) def set_output_vector(self, **vectors): """ Set the vector to dump in VTK output Parameter ---------- vectors: Vectors to dump Example V=['u', 'v', 'z'] """ self.vectors = {} for name, vector in vectors.items(): assert (len(vector) is 3) self.vectors[name] = vector def set_output_scalar(self, scalars=None): """ Set the scalars to dump in VTK output Parameter --------- scalar_array: list The set of properties to dump """ self.scalars = scalars def _get_scalars(self, arrays): if self.scalars is None: properties = list(arrays.keys()) else: properties = self.scalars scalars = [] for prop_name in properties: scalars.append((prop_name, arrays[prop_name])) return scalars def _get_vectors(self, arrays): vectors = [] for prop_name, prop_list in self.vectors.items(): vec = np.array([arrays[prop_list[0]], arrays[prop_list[1]], arrays[prop_list[2]]]) data = (prop_name, vec) vectors.append(data) return vectors def _dump(self, filename): for ptype, pdata in self.all_array_data.items(): self._setup_data(pdata) try: fname, seq = filename.rsplit('_', 1) self._dump_arrays(fname + '_' + ptype + '_' + seq) except ValueError: self._dump_arrays(filename + '_' + ptype) def _setup_data(self, arrays): self.numPoints = arrays['x'].size self.points = np.array([arrays['x'], arrays['y'], arrays['z']]) self.data = [] self.data.extend(self._get_scalars(arrays)) self.data.extend(self._get_vectors(arrays)) class PyVisFileOutput(VTKOutput): def _dump_arrays(self, filename): from pyvisfile.vtk import (UnstructuredGrid, DataArray, AppendedDataXMLGenerator, VTK_VERTEX) n = self.numPoints da = DataArray("points", 
self.points) grid = UnstructuredGrid((n, da), cells=np.arange(n), cell_types=np.asarray([VTK_VERTEX] * n)) for name, field in self.data: da = DataArray(name, field) grid.add_pointdata(da) with open(filename + '.vtu', "w") as f: AppendedDataXMLGenerator(None)(grid).write(f) class TVTKOutput(VTKOutput): def _dump_arrays(self, filename): from tvtk.api import tvtk n = self.numPoints cells = np.arange(n) cells.shape = (n, 1) cell_type = tvtk.Vertex().cell_type ug = tvtk.UnstructuredGrid(points=self.points.transpose()) ug.set_cells(cell_type, cells) from mayavi.core.dataset_manager import DatasetManager dsm = DatasetManager(dataset=ug) for name, field in self.data: dsm.add_array(field.transpose(), name) dsm.activate(name) from tvtk.api import write_data write_data(ug, filename) def dump_vtk(filename, particles, scalars=None, **vectors): """ Parameter ---------- filename: str Filename to dump to particles: sequence(ParticleArray) Sequence if particles arrays to dump scalars: list list of scalars to dump. 
vectors: Vectors to dump Example V=['u', 'v', 'z'] """ if has_pyvisfile(): output = PyVisFileOutput(scalars, **vectors) elif has_tvtk(): output = TVTKOutput(scalars, **vectors) else: msg = 'TVTK and pyvisfile Not present' raise ImportError(msg) output.dump(filename, particles, {}) def run(options): for fname in options.inputfile: if os.path.isdir(fname): files = [os.path.join(fname, file) for file in os.listdir(fname) if file.endswith(output_formats)] files = remove_irrelevant_files(files) options.inputfile.extend(files) continue data = load(fname) particles = [] for ptype, pdata in data['arrays'].items(): particles.append(pdata) filename = os.path.splitext(fname)[0] if options.outdir is not None: filename = options.outdir + os.path.split(filename)[1] dump_vtk(filename, particles, scalars=options.scalars, velocity=['u', 'v', 'w']) def main(argv=None): if argv is None: argv = sys.argv[1:] parser = argparse.ArgumentParser( prog='dump_vtk', description=__doc__, add_help=False ) parser.add_argument( "-h", "--help", action="store_true", default=False, dest="help", help="show this help message and exit" ) parser.add_argument( "-s", "--scalars", metavar="scalars", type=str, default=None, help="scalar variables to dump in VTK output, provide a " + "comma-separated list, for example: -s rho,p,m" ) parser.add_argument( "-d", "--outdir", metavar="outdir", type=str, default=None, help="Directory to output VTK files" ) parser.add_argument( "inputfile", type=str, nargs='+', help=" list of input files or/and directories (hdf5 or npz format)" ) if len(argv) > 0 and argv[0] in ['-h', '--help']: parser.print_help() sys.exit() options, extra = parser.parse_known_args(argv) if options.scalars is not None: options.scalars = options.scalars.split(',') run(options) if __name__ == '__main__': main() 
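The numeric sort key used by `get_files` and `remove_irrelevant_files` in `utils.py` above deserves a standalone illustration: output files are named `<prefix>_<count>.npz`, and plain lexicographic sorting puts `_10` before `_2`. A minimal stdlib-only sketch (the file names here are hypothetical):

```python
import os


def sort_key(fname):
    # Extract the trailing iteration count from names like 'case_10.npz',
    # mirroring the _key_func/_sort_key helpers above.
    base = os.path.splitext(fname)[0]
    return int(base[base.rfind('_') + 1:])


files = ['case_10.npz', 'case_2.npz', 'case_1.npz']

# Lexicographic order interleaves the counts:
print(sorted(files))                # ['case_1.npz', 'case_10.npz', 'case_2.npz']

# Sorting on the parsed integer suffix restores iteration order:
print(sorted(files, key=sort_key))  # ['case_1.npz', 'case_2.npz', 'case_10.npz']
```

A file without a numeric suffix makes `int()` raise `ValueError`, which is exactly what `remove_irrelevant_files` catches to filter out non-output `.npz` files.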
pysph-master/pysph/sph/000077500000000000000000000000001356347341600154635ustar00rootroot00000000000000pysph-master/pysph/sph/__init__.py000066400000000000000000000000001356347341600175620ustar00rootroot00000000000000pysph-master/pysph/sph/acceleration_eval.py000066400000000000000000000216351356347341600215040ustar00rootroot00000000000000from collections import defaultdict try: from collections import OrderedDict except ImportError: from ordereddict import OrderedDict from compyle.config import get_config from pysph.sph.equation import ( CUDAGroup, CythonGroup, Group, MultiStageEquations, OpenCLGroup, get_arrays_used_in_equation) ############################################################################### def group_equations(equations): """Checks the given equations and ensures the following: - Raises an error if the user mixes Groups and Equations. - If only equations are given as a list, return a single group with all these equations. """ only_groups = [x for x in equations if isinstance(x, Group)] if len(only_groups) > 0 and len(only_groups) != len(equations): raise ValueError('All elements must be Groups if you use groups.') if len(only_groups) == 0: return [Group(equations)] else: return equations ############################################################################### def check_equation_array_properties(equation, particle_arrays): """Given an equation and the particle arrays, check if the particle arrays have the necessary properties. 
""" p_arrays = dict((x.name, x) for x in particle_arrays) _src, _dest = get_arrays_used_in_equation(equation) if equation.dest not in p_arrays: msg = "ERROR: Equation {eq_name} has invalid dest: '{dest}'".format( eq_name=equation.name, dest=equation.dest ) raise RuntimeError(msg) if not equation.no_source: for src in equation.sources: if src not in p_arrays: msg = "ERROR: Equation {eq_name} has invalid "\ "source: '{src}'".format(eq_name=equation.name, src=src) raise RuntimeError(msg) eq_src = set([x[2:] for x in _src]) eq_dest = set([x[2:] for x in _dest]) def _check_array(array, eq_props, errors): """Updates the `errors` with any errors. """ props = set(list(array.properties.keys()) + list(array.constants.keys())) if not eq_props < props: errors[array.name].update(eq_props - props) errors = defaultdict(set) _check_array(p_arrays[equation.dest], eq_dest, errors) if equation.sources is not None: for src in equation.sources: _check_array(p_arrays[src], eq_src, errors) if len(errors) > 0: msg = ("ERROR: Missing array properties for equation: %s\n" % equation.name) for name, missing in errors.items(): msg += "Array '%s' missing properties %s.\n" % (name, missing) print(msg) raise RuntimeError(msg) def make_acceleration_evals(particle_arrays, equations, kernel, mode='serial', backend=None): '''Returns a list of acceleration evaluators. If a MultiStageEquations object is given the resulting list will have multiple evaluators else it will have a single one. ''' if isinstance(equations, MultiStageEquations): groups = equations.groups else: groups = [equations] return [ AccelerationEval(particle_arrays, group, kernel, mode, backend) for group in groups ] ############################################################################### class MegaGroup(object): """A mega-group refactors actual equation Groups into a more organized form as described below. 
They inherit all properties of the Group so these can be used while generating code but delegate the tasks to real groups underneath that are assembled from the original group. MegaGroups are organized as: {destination: (eqs_with_no_source, sources, all_eqs)} eqs_with_no_source: Group([equations]) all SPH Equations with no src. sources are {source: Group([equations...])} all_eqs is a Group of all equations having this destination. This is what is stored in the `data` attribute. Note that the order of the equations in all_eqs, eqs_with_no_source and sources should honor the order in which the user defines them in the original group. """ def __init__(self, group, group_cls): self._orig_group = group self.Group = group_cls self._copy_props(group) self.data = self._make_data(group) def get_converged_condition(self): return self._orig_group.get_converged_condition() def _copy_props(self, group): for key in ('real', 'update_nnps', 'iterate', 'pre', 'post', 'max_iterations', 'min_iterations', 'has_subgroups'): setattr(self, key, getattr(group, key)) def _make_data(self, group): equations = group.equations if group.has_subgroups: return [MegaGroup(g, self.Group) for g in equations] dest_list = [] for equation in equations: dest = equation.dest if dest not in dest_list: dest_list.append(dest) dests = OrderedDict() for dest in dest_list: sources = defaultdict(list) eqs_with_no_source = [] all_equations = [] for equation in equations: if equation.dest != dest: continue if equation not in all_equations: all_equations.append(equation) if equation.no_source: eqs_with_no_source.append(equation) else: for src in equation.sources: sources[src].append(equation) for src in sources: eqs = sources[src] sources[src] = self.Group(eqs) dests[dest] = (self.Group(eqs_with_no_source), sources, self.Group(all_equations)) return dests ############################################################################### class AccelerationEval(object): def __init__(self, particle_arrays, equations, 
kernel, mode='serial', backend=None): """ Parameters ---------- particle_arrays: list(ParticleArray): list of particle arrays to use. equations: list: A list of equations/groups. kernel: The kernel to use. mode: str: One of 'serial', 'mpi'. backend: str: indicates the backend to use. one of ('opencl', 'cython', 'cuda', '', None) """ assert backend in ('opencl', 'cython', 'cuda', '', None) self.backend = self._get_backend(backend) self.particle_arrays = particle_arrays self.equation_groups = group_equations(equations) self.kernel = kernel self.nnps = None self.mode = mode if self.backend == 'cython': self.Group = CythonGroup elif self.backend == 'opencl': self.Group = OpenCLGroup elif self.backend == 'cuda': self.Group = CUDAGroup all_equations = [] for group in self.equation_groups: if group.has_subgroups: for g in group.equations: all_equations.extend(g.equations) else: all_equations.extend(group.equations) self.all_group = self.Group(equations=all_equations) for equation in all_equations: check_equation_array_properties(equation, particle_arrays) self.mega_groups = [MegaGroup(g, self.Group) for g in self.equation_groups] self.c_acceleration_eval = None ########################################################################## # Private interface. ########################################################################## def _get_backend(self, backend): if not backend: cfg = get_config() if cfg.use_opencl: backend = 'opencl' elif cfg.use_cuda: backend = 'cuda' else: backend = 'cython' return backend ########################################################################## # Public interface. ########################################################################## def compute(self, t, dt): """Compute the accelerations given the current time, t, and the timestep, dt. """ self.c_acceleration_eval.compute(t, dt) def set_compiled_object(self, c_acceleration_eval): """Set the high-performance compiled object to call internally. 
""" self.c_acceleration_eval = c_acceleration_eval def set_nnps(self, nnps): self.nnps = nnps self.c_acceleration_eval.set_nnps(nnps) def update_particle_arrays(self, particle_arrays): """Call this to update the particle arrays with new ones. Make sure though that the same properties exist in both or you will get a segfault. """ self.c_acceleration_eval.update_particle_arrays(particle_arrays) pysph-master/pysph/sph/acceleration_eval_cython.mako000066400000000000000000000273571356347341600233760ustar00rootroot00000000000000# Automatically generated, do not edit. #cython: cdivision=True <%def name="indent(text, level=0)" buffered="True"> % for l in text.splitlines(): ${' '*4*level}${l} % endfor <%def name="do_group(helper, group, level=0)" buffered="True"> ####################################################################### ## Call any `pre` functions ####################################################################### % if group.pre: ${indent(helper.get_pre_call(group), 0)} % endif ####################################################################### ## Iterate over destinations in this group. ####################################################################### % for dest, (eqs_with_no_source, sources, all_eqs) in group.data.items(): # --------------------------------------------------------------------- # Destination ${dest}.\ ####################################################################### ## Setup destination array pointers. ####################################################################### dst = self.${dest} ${indent(helper.get_dest_array_setup(dest, eqs_with_no_source, sources, group.real), 0)} dst_array_index = dst.index ####################################################################### ## Call py_initialize for all equations for this destination. 
####################################################################### ${indent(all_eqs.get_py_initialize_code(), 0)} ####################################################################### ## Initialize all equations for this destination. ####################################################################### % if all_eqs.has_initialize(): # Initialization for destination ${dest}. for d_idx in range(NP_DEST): ${indent(all_eqs.get_initialize_code(helper.object.kernel), 1)} % endif ####################################################################### ## Handle all the equations that do not have a source. ####################################################################### % if len(eqs_with_no_source.equations) > 0: % if eqs_with_no_source.has_loop(): # SPH Equations with no sources. for d_idx in range(NP_DEST): ${indent(eqs_with_no_source.get_loop_code(helper.object.kernel), 1)} % endif % endif ####################################################################### ## Iterate over sources. ####################################################################### % for source, eq_group in sources.items(): # -------------------------------------- # Source ${source}.\ ####################################################################### ## Setup source array pointers. ####################################################################### src = self.${source} ${indent(helper.get_src_array_setup(source, eq_group), 0)} src_array_index = src.index % if eq_group.has_initialize_pair(): for d_idx in range(NP_DEST): ${indent(eq_group.get_initialize_pair_code(helper.object.kernel), 1)} % endif % if eq_group.has_loop() or eq_group.has_loop_all(): ####################################################################### ## Iterate over destination particles. 
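## (Parallelism note: with OpenMP enabled, get_parallel_block() emits
## "with nogil, parallel():" and the d_idx loop below becomes a
## `prange`; in serial builds the template aliases `prange = range`
## at the top of this file, so the same generated code simply runs
## single-threaded.)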
####################################################################### nnps.set_context(src_array_index, dst_array_index) ${helper.get_parallel_block()} thread_id = threadid() ${indent(eq_group.get_variable_array_setup(), 1)} for d_idx in ${helper.get_parallel_range("NP_DEST")}: ############################################################### ## Find and iterate over neighbors. ############################################################### nnps.get_nearest_neighbors(d_idx, self.nbrs[thread_id]) NBRS = (self.nbrs[thread_id]).data N_NBRS = (self.nbrs[thread_id]).length % if eq_group.has_loop_all(): ${indent(eq_group.get_loop_all_code(helper.object.kernel), 2)} % endif % if eq_group.has_loop(): for nbr_idx in range(N_NBRS): s_idx = (NBRS[nbr_idx]) ########################################################### ## Iterate over the equations for the same set of neighbors. ########################################################### ${indent(eq_group.get_loop_code(helper.object.kernel), 3)} % endif ## if has_loop % endif ## if eq_group.has_loop() or has_loop_all(): # Source ${source} done. # -------------------------------------- % endfor ################################################################### ## Do any post_loop assignments for the destination. ################################################################### % if all_eqs.has_post_loop(): # Post loop for destination ${dest}. for d_idx in range(NP_DEST): ${indent(all_eqs.get_post_loop_code(helper.object.kernel), 1)} % endif ################################################################### ## Do any reductions for the destination. ################################################################### % if all_eqs.has_reduce(): ${indent(all_eqs.get_reduce_code(), 0)} % endif # Destination ${dest} done. 
# --------------------------------------------------------------------- ####################################################################### ## Update NNPS locally if needed ####################################################################### % if group.update_nnps: # Updating NNPS. nnps.update_domain() nnps.update() % endif % endfor ####################################################################### ## Call any `post` functions ####################################################################### % if group.post: ${indent(helper.get_post_call(group), 0)} % endif from libc.stdio cimport printf from libc.math cimport * from libc.math cimport fabs as abs cimport numpy import numpy from cython import address % if not helper.config.use_openmp: from cython.parallel import threadid prange = range % else: from cython.parallel import parallel, prange, threadid % endif from pysph.base.particle_array cimport ParticleArray from pysph.base.nnps_base cimport NNPS from pysph.base.reduce_array import serial_reduce_array % if helper.object.mode == 'serial': from pysph.base.reduce_array import dummy_reduce_array as parallel_reduce_array % elif helper.object.mode == 'mpi': from pysph.base.reduce_array import mpi_reduce_array as parallel_reduce_array % endif from pysph.base.nnps import get_number_of_threads from cyarray.carray cimport (DoubleArray, FloatArray, IntArray, LongArray, UIntArray, aligned, aligned_free, aligned_malloc) ${helper.get_header()} # ############################################################################# cdef class ParticleArrayWrapper: cdef public int index cdef public ParticleArray array ${indent(helper.get_array_decl_for_wrapper(), 1)} cdef public str name def __init__(self, pa, index): self.index = index self.set_array(pa) cpdef set_array(self, pa): self.array = pa props = set(pa.properties.keys()) props = props.union(['tag', 'pid', 'gid']) for prop in props: setattr(self, prop, pa.get_carray(prop)) for prop in pa.constants.keys(): 
setattr(self, prop, pa.get_carray(prop)) self.name = pa.name cpdef long size(self, bint real=False): return self.array.get_number_of_particles(real) # ############################################################################# cdef class AccelerationEval: cdef public tuple particle_arrays cdef public ParticleArrayWrapper ${helper.get_particle_array_names()} cdef public NNPS nnps cdef public int n_threads cdef public list _nbr_refs cdef void **nbrs # CFL time step conditions cdef public double dt_cfl, dt_force, dt_viscous cdef object groups cdef object all_equations ${indent(helper.get_kernel_defs(), 1)} ${indent(helper.get_equation_defs(), 1)} def __init__(self, kernel, equations, particle_arrays, groups): self.particle_arrays = tuple(particle_arrays) self.groups = groups self.n_threads = get_number_of_threads() cdef int i for i, pa in enumerate(particle_arrays): name = pa.name setattr(self, name, ParticleArrayWrapper(pa, i)) self.nbrs = aligned_malloc(sizeof(void*)*self.n_threads) cdef UIntArray _arr self._nbr_refs = [] for i in range(self.n_threads): _arr = UIntArray() _arr.reserve(1024) self.nbrs[i] = _arr self._nbr_refs.append(_arr) ${indent(helper.get_kernel_init(), 2)} ${indent(helper.get_equation_init(), 2)} all_equations = {} for equation in equations: all_equations[equation.var_name] = equation self.all_equations = all_equations def __dealloc__(self): aligned_free(self.nbrs) def set_nnps(self, NNPS nnps): self.nnps = nnps def update_particle_arrays(self, particle_arrays): for pa in particle_arrays: name = pa.name getattr(self, name).set_array(pa) cpdef compute(self, double t, double dt): cdef long nbr_idx, NP_SRC, NP_DEST cdef long s_idx, d_idx cdef int thread_id, N_NBRS cdef unsigned int* NBRS cdef NNPS nnps = self.nnps cdef ParticleArrayWrapper src, dst cdef int max_iterations, min_iterations, _iteration_count ####################################################################### ## Declare all the arrays. 
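## (Illustrative: for a particle array with a double property `rho`,
## the declarations emitted here include pointers such as
## `cdef double* d_rho` and `cdef double* s_rho`, which the
## per-destination setup later binds via lines like
## `d_rho = dst.rho.data`.)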
####################################################################### # Arrays.\ ${indent(helper.get_array_declarations(), 2)} ####################################################################### ## Declare any variables. ####################################################################### # Variables.\ cdef int src_array_index, dst_array_index ${indent(helper.get_variable_declarations(), 2)} ####################################################################### ## Iterate over groups: ## Groups are organized as {destination: (eqs_with_no_source, sources, all_eqs)} ## eqs_with_no_source: Group([equations]) all SPH Equations with no source. ## sources are {source: Group([equations...])} ## all_eqs is a Group of all equations having this destination. ####################################################################### % for g_idx, group in enumerate(helper.object.mega_groups): % if len(group.data) > 0: # No equations in this group. # --------------------------------------------------------------------- # Group ${g_idx}. % if group.iterate: max_iterations = ${group.max_iterations} min_iterations = ${group.min_iterations} _iteration_count = 1 while True: % else: if True: % endif % if group.has_subgroups: % for sg_idx, sub_group in enumerate(group.data): # Doing subgroup ${sg_idx} ${indent(do_group(helper, sub_group, 3), 3)} % endfor % else: ${indent(do_group(helper, group, 3), 3)} % endif ####################################################################### ## Break the iteration for the group. ####################################################################### % if group.iterate: # Check for convergence or timeout if (_iteration_count >= min_iterations) and (${group.get_converged_condition()} or (_iteration_count == max_iterations)): _iteration_count = 1 break _iteration_count += 1 % endif # Group ${g_idx} done. 
# --------------------------------------------------------------------- % endif # (if len(group.data) > 0) % endfor pysph-master/pysph/sph/acceleration_eval_cython_helper.py000066400000000000000000000236411356347341600244260ustar00rootroot00000000000000from collections import defaultdict from os.path import dirname, join, expanduser, realpath from mako.template import Template from cyarray import carray from compyle.config import get_config from compyle.cython_generator import (CythonGenerator, KnownType, get_parallel_range) from compyle.ext_module import ExtModule, get_platform_dir ############################################################################### def get_cython_code(obj): """This function looks at the object and gets any additional code to wrap from either the `_cython_code_` method or the `_get_helpers_` method. """ result = [] if hasattr(obj, '_cython_code_'): code = obj._cython_code_() doc = '# From %s' % obj.__class__.__name__ result.extend([doc, code] if len(code) > 0 else []) return result def get_helper_code(helpers): """Given a list of helpers, return the helper code suitably wrapped. """ result = [] result.append('# Helpers') cg = CythonGenerator() for helper in helpers: cg.parse(helper) result.append(cg.get_code()) return result ############################################################################### def get_all_array_names(particle_arrays): """For each type of carray, find the union of the names of all particle array properties/constants along with their array type. Returns a dictionary keyed on the name of the Array class with values being a set of property names for each. Parameters ---------- particle_array : list A list of particle arrays. 
Examples -------- A simple example would be:: >>> x = np.linspace(0, 1, 10) >>> pa = ParticleArray(name='f', x=x) >>> get_all_array_names([pa]) {'DoubleArray': {'x'}, 'IntArray': {'pid', 'tag'}, 'UIntArray': {'gid'}} """ props = defaultdict(set) for array in particle_arrays: for properties in (array.properties, array.constants): for name, arr in properties.items(): a_type = arr.__class__.__name__ props[a_type].add(name) return dict(props) def get_known_types_for_arrays(array_names): """Given all the array names from `get_all_array_names` this creates known types for each of them so that the code generators can use this type information when needed. Note that known type info is generated for both source and destination style arrays. Parameters ---------- array_names: dict A dictionary produced by `get_all_array_names`. Examples -------- A simple example would be:: >>> x = np.linspace(0, 1, 10) >>> pa = ParticleArray(name='f', x=x) >>> pa.remove_property('pid') >>> info = get_all_array_names([pa]) >>> get_known_types_for_arrays(info) {'d_gid': KnownType("unsigned int*"), 'd_tag': KnownType("int*"), 'd_x': KnownType("double*"), 's_gid': KnownType("unsigned int*"), 's_tag': KnownType("int*"), 's_x': KnownType("double*")} """ result = {} for arr_type, arrays in array_names.items(): c_type = getattr(carray, arr_type)().get_c_type() for arr in arrays: known_type = KnownType(c_type + '*') result['s_' + arr] = known_type result['d_' + arr] = known_type return result ############################################################################### class AccelerationEvalCythonHelper(object): def __init__(self, acceleration_eval): self.object = acceleration_eval self.config = get_config() self.all_array_names = get_all_array_names( self.object.particle_arrays ) self.known_types = get_known_types_for_arrays( self.all_array_names ) self._ext_mod = None self._module = None self._compute_group_map() ########################################################################## # Private 
interface. ########################################################################## def _compute_group_map(self): # Given all the groups, create a mapping from the group to an index of # sorts that can be used when adding the pre/post callback code. mapping = {} for g_idx, group in enumerate(self.object.mega_groups): mapping[group] = 'self.groups[%d]' % g_idx if group.has_subgroups: for sg_idx, sub_group in enumerate(group.data): code = 'self.groups[{gid}].data[{sgid}]'.format( gid=g_idx, sgid=sg_idx ) mapping[sub_group] = code self._group_map = mapping ########################################################################## # Public interface. ########################################################################## def get_code(self): path = join(dirname(__file__), 'acceleration_eval_cython.mako') template = Template(filename=path) main = template.render(helper=self) return main def setup_compiled_module(self, module): # Create the compiled module. object = self.object acceleration_eval = module.AccelerationEval( object.kernel, object.all_group.equations, object.particle_arrays, object.mega_groups ) object.set_compiled_object(acceleration_eval) def compile(self, code): # Note, we do not add carray or particle_array as nnps_base would # have been rebuilt anyway if they changed. root = expanduser(join('~', '.pysph', 'source', get_platform_dir())) depends = ["pysph.base.nnps_base"] # Add pysph/base directory to inc_dirs for including spatial_hash.h # for SpatialHashNNPS extra_inc_dirs = [join(dirname(dirname(realpath(__file__))), 'base')] self._ext_mod = ExtModule( code, verbose=False, root=root, depends=depends, extra_inc_dirs=extra_inc_dirs ) self._module = self._ext_mod.load() return self._module ########################################################################## # Mako interface. 
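    # Everything below is called from acceleration_eval_cython.mako
    # while rendering the generated Cython source.  For example
    # (illustrative): for a destination array named 'fluid' with a
    # property 'rho', get_dest_array_setup() returns lines like
    #     NP_DEST = self.fluid.size(real=True)
    #     d_rho = dst.rho.data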
########################################################################## def get_array_decl_for_wrapper(self): array_names = self.all_array_names decl = [] for a_type in sorted(array_names.keys()): props = array_names[a_type] decl.append( 'cdef public {a_type} {attrs}'.format( a_type=a_type, attrs=', '.join(sorted(props)) ) ) return '\n'.join(decl) def get_header(self): object = self.object helpers = [] headers = [] headers.extend(get_cython_code(object.kernel)) if hasattr(object.kernel, '_get_helpers_'): helpers.extend(object.kernel._get_helpers_()) # get headers from the Equations for equation in object.all_group.equations: headers.extend(get_cython_code(equation)) if hasattr(equation, '_get_helpers_'): for helper in equation._get_helpers_(): if helper not in helpers: helpers.append(helper) headers.extend(get_helper_code(helpers)) # Kernel wrappers. cg = CythonGenerator(known_types=self.known_types) cg.parse(object.kernel) headers.append(cg.get_code()) # Equation wrappers. self.known_types['SPH_KERNEL'] = KnownType( object.kernel.__class__.__name__ ) headers.append(object.all_group.get_equation_wrappers( self.known_types )) return '\n'.join(headers) def get_equation_defs(self): return self.object.all_group.get_equation_defs() def get_equation_init(self): return self.object.all_group.get_equation_init() def get_kernel_defs(self): return 'cdef public %s kernel' % ( self.object.kernel.__class__.__name__ ) def get_kernel_init(self): object = self.object return 'self.kernel = %s(**kernel.__dict__)' % ( object.kernel.__class__.__name__ ) def get_variable_declarations(self): group = self.object.all_group ctx = group.context return group.get_variable_declarations(ctx) def get_array_declarations(self): group = self.object.all_group src, dest = group.get_array_names() src.update(dest) return group.get_array_declarations(src, self.known_types) def get_dest_array_setup(self, dest_name, eqs_with_no_source, sources, real): src, dest_arrays = 
eqs_with_no_source.get_array_names() for g in sources.values(): s, d = g.get_array_names() dest_arrays.update(d) lines = ['NP_DEST = self.%s.size(real=%s)' % (dest_name, real)] lines += ['%s = dst.%s.data' % (n, n[2:]) for n in sorted(dest_arrays)] return '\n'.join(lines) def get_src_array_setup(self, src_name, eq_group): src_arrays, dest = eq_group.get_array_names() lines = ['NP_SRC = self.%s.size()' % src_name] lines += ['%s = src.%s.data' % (n, n[2:]) for n in sorted(src_arrays)] return '\n'.join(lines) def get_parallel_block(self): if self.config.use_openmp: return "with nogil, parallel():" else: return "if True: # Placeholder used for OpenMP." def get_parallel_range(self, start, stop=None, step=1): return get_parallel_range(start, stop, step) def get_particle_array_names(self): parrays = [pa.name for pa in self.object.particle_arrays] return ', '.join(parrays) def get_pre_call(self, group): return self._group_map[group] + '.pre()' def get_post_call(self, group): return self._group_map[group] + '.post()' pysph-master/pysph/sph/acceleration_eval_gpu.mako000066400000000000000000000116661356347341600226610ustar00rootroot00000000000000<%def name="do_group(helper, g_idx, sg_idx, group)" buffered="True"> ####################################################################### ## Call any `pre` functions ####################################################################### % if group.pre: <% helper.call_pre(group) %> % endif ####################################################################### ## Iterate over destinations in this group. ####################################################################### % for dest, (eqs_with_no_source, sources, all_eqs) in group.data.items(): // Destination ${dest} ## Call py_initialize if it is defined for the equations. <% helper.call_py_initialize(all_eqs, dest) %> ####################################################################### ## Initialize all equations for this destination. 
####################################################################### % if all_eqs.has_initialize(): // Initialization for destination ${dest} ${helper.get_initialize_kernel(g_idx, sg_idx, group, dest, all_eqs)} % endif ####################################################################### ## Handle all the equations that do not have a source. ####################################################################### % if len(eqs_with_no_source.equations) > 0: % if eqs_with_no_source.has_loop(): // Equations with no sources. ${helper.get_simple_loop_kernel(g_idx, sg_idx, group, dest, eqs_with_no_source)} % endif % endif ####################################################################### ## Iterate over sources. ####################################################################### % for source, eq_group in sources.items(): // Source ${source}. ################################################################### ## Do any pairwise initializations. ################################################################### % if eq_group.has_initialize_pair(): // Initialization for destination ${dest} with source ${source}. ${helper.get_initialize_pair_kernel(g_idx, sg_idx, group, dest, source, eq_group)} % endif ################################################################### ## Do any loop interactions between source and destination. ################################################################### % if eq_group.has_loop() or eq_group.has_loop_all(): ${helper.get_loop_kernel(g_idx, sg_idx, group, dest, source, eq_group)} % endif % endfor ################################################################### ## Do any post_loop assignments for the destination. 
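## (Note: `initialize`, `loop` and `post_loop` become device kernels,
## but `reduce` runs on the host -- call_reduce() schedules a Python
## call to GPUAccelerationEval.do_reduce, which invokes eq.reduce()
## with the destination array and the current t, dt.)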
################################################################### % if all_eqs.has_post_loop(): // post_loop for destination ${dest} ${helper.get_post_loop_kernel(g_idx, sg_idx, group, dest, all_eqs)} % endif ################################################################### ## Do any reductions for the destination. ################################################################### % if all_eqs.has_reduce(): <% helper.call_reduce(all_eqs, dest) %> % endif // Finished destination ${dest}. ####################################################################### ## Update NNPS locally if needed ####################################################################### % if group.update_nnps: <% helper.call_update_nnps(group) %> % endif % endfor ####################################################################### ## Call any `post` functions ####################################################################### % if group.post: <% helper.call_post(group) %> % endif #define abs fabs #define max(x, y) fmax((double)(x), (double)(y)) #define NORM2(X, Y, Z) ((X)*(X) + (Y)*(Y) + (Z)*(Z)) #define MAX(X, Y) ((X) > (Y) ? (X) : (Y)) ${helper.get_header()} ####################################################################### ## Iterate over groups ####################################################################### % for g_idx, group in enumerate(helper.object.mega_groups): % if len(group.data) > 0: // ------------------------------------------------------------------ // Group${g_idx} ####################################################################### ## Start iteration if needed. ####################################################################### % if group.iterate: <% helper.start_iteration(group) %> % endif ####################################################################### ## Handle sub-groups. 
####################################################################### % if group.has_subgroups: % for sg_idx, sub_group in enumerate(group.data): // Subgroup ${sg_idx} ${do_group(helper, g_idx, sg_idx, sub_group)} % endfor ## sg_idx % else: ${do_group(helper, g_idx, -1, group)} % endif ## has_subgroups ####################################################################### ## Stop iteration if needed. ####################################################################### % if group.iterate: <% helper.stop_iteration(group) %> % endif ## % endif ## len(group.data) > 0: // Finished Group${g_idx} // ------------------------------------------------------------------ % endfor ## (for g_idx, group ...) pysph-master/pysph/sph/acceleration_eval_gpu_helper.py000066400000000000000000001062641356347341600237200ustar00rootroot00000000000000'''This helper module orchestrates the generation of OpenCL/CUDA code, compiles it and makes it available for use. Overview ~~~~~~~~~ Look first at sph/tests/test_acceleration_eval.py to see the big picture. The general idea when using AccelerationEval instances is: - Create the particle arrays. - Specify any equations and the SPH kernel. - Construct the AccelerationEval with the particles, equations and kernel. - Compile this with SPHCompiler and hand in an NNPS. - For the GPU all that changes is the backend and the NNPS. So the difference in the CPU version and GPU version is the choice of the backend. The AccelerationEval delegates its actual high-performance work to its `self.c_acceleration_eval` instance. This instance is either compiled with Cython or OpenCL. With Cython this is actually a compiled extension module created with Cython and with OpenCL this is the Python class OpenCLAccelerationEval in this file. This is where the helpers come in. The AccelerationEvalCythonHelper and AccelerationEvalOpenCLHelper have three main methods: - get_code(): returns the code to be compiled. 
- compile(code): compile the code and return the compiled module/opencl Program. - setup_compiled_module(module): sets the AccelerationEval's c_acceleration_eval to an instance based on the helper. The helper basically uses mako templates, code generation via simple string manipulations, and transpilation to generate HPC code automatically from the given particle arrays, equations, and kernel. In this module, an OpenCLAccelerationEval is defined which does the work of calling the compiled opencl kernels. The AccelerationEvalOpenCLHelper generates the OpenCL kernels. The general idea of how we generate OpenCL kernels is quite simple. We transpile pure Python code using `pysph.base.translator` which generates C from a subset of pure Python. - We do not support inheritance but convert classes to simple C-structs and functions which take the struct as the first argument. - Python functions are also transpiled. - Type inference is done using either conventions like s_idx, d_idx, s_x, d_x, WIJ etc. or by type hints given using default arguments. Lists are treated as raw pointers to the contained type. One can also set certain predefined known_types and the code generator will generate suitable code. There are plenty of tests illustrating what is supported in ``pysph.base.tests.test_translator``. - One can also use the ``declare`` function to declare any types in the Python code. This is enough to do what we need. We transpile the kernel, all required equations, and generate suitable kernels. All structs are converted to suitable GPU types and the data from the Python classes is converted into suitably aligned numpy dtypes (using cl.tools.match_dtype_to_c_struct). These are constructed for each class and stored in an _gpu attribute on the Python object. When calling kernels these are passed and pushed/pulled from the GPU. When the user calls AccelerationEval.compute, this in turn calls the c_acceleration_eval's compute method. 
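As a toy illustration of the convention-driven type inference sketched
above (this is NOT the library's implementation -- the real logic lives
in compyle's translator -- and the names below are made up for the
example):

```python
# Hypothetical sketch: map an argument name and its default value to a
# C type string, mimicking the conventions described above (d_idx/s_idx
# are indices, s_*/d_* names and lists become raw pointers, and other
# arguments take their type from the default value).
def infer_c_type(name, default=None):
    if name in ('d_idx', 's_idx'):
        # Index variables are integral by convention.
        return 'long'
    if name.startswith(('s_', 'd_')) or isinstance(default, list):
        # Particle-array accesses become raw device pointers.
        return 'GLOBAL_MEM double*'
    if isinstance(default, float):
        return 'double'
    if isinstance(default, int):
        return 'int'
    return 'double'

print(infer_c_type('d_idx'))      # long
print(infer_c_type('d_rho'))      # GLOBAL_MEM double*
print(infer_c_type('WIJ', 1.0))   # double
```

The real converters additionally handle structs, helpers and address
spaces (see ``add_address_space`` below), but the name/default
conventions are the core idea.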
For OpenCL, this is provided by the OpenCLAccelerationEval class below. While the implementation is a bit complex, the details a bit hairy, the general idea is very simple. ''' from functools import partial import inspect import os import re import sys from textwrap import wrap import numpy as np from mako.template import Template from pysph.base.utils import is_overloaded_method from pysph.base.device_helper import DeviceHelper from pysph.sph.acceleration_nnps_helper import generate_body, \ get_kernel_args_list from compyle.ext_module import get_platform_dir from compyle.config import get_config from compyle.translator import (CStructHelper, CUDAConverter, OpenCLConverter, ocl_detect_type, ocl_detect_pointer_base_type) from .equation import get_predefined_types, KnownType from .acceleration_eval_cython_helper import ( get_all_array_names, get_known_types_for_arrays ) getfullargspec = getattr( inspect, 'getfullargspec', inspect.getargspec ) def get_converter(backend): if backend == 'opencl': Converter = OpenCLConverter elif backend == 'cuda': Converter = CUDAConverter else: raise RuntimeError('Invalid backend: %s' % backend) return Converter def get_kernel_definition(kernel, arg_list): sig = 'KERNEL void\n{kernel}\n({args})'.format( kernel=kernel, args=', '.join(arg_list), ) return '\n'.join(wrap(sig, width=78, subsequent_indent=' ' * 4, break_long_words=False)) def wrap_code(code, indent=' ' * 4): return wrap( code, width=74, initial_indent=indent, subsequent_indent=indent + ' ' * 4, break_long_words=False ) def get_helper_code(helpers, transpiler=None, backend=None): """This function generates any additional code for the given list of helpers. 
""" result = [] if transpiler is None: transpiler = get_converter(backend) doc = '\n// Helpers.\n' result.append(doc) for helper in helpers: result.append(transpiler.parse_function(helper)) return result class DummyQueue(object): def finish(self): pass def get_context(backend): if backend == 'cuda': from compyle.cuda import set_context set_context() from pycuda.autoinit import context return context elif backend == 'opencl': from compyle.opencl import get_context return get_context() else: raise RuntimeError('Unsupported GPU backend %s' % backend) def get_queue(backend): if backend == 'cuda': return DummyQueue() elif backend == 'opencl': from compyle.opencl import get_queue return get_queue() else: raise RuntimeError('Unsupported GPU backend %s' % backend) def profile_kernel(knl, backend): if backend == 'cuda': return knl elif backend == 'opencl': from compyle.opencl import profile_kernel return profile_kernel(knl, knl.function_name) else: raise RuntimeError('Unsupported GPU backend %s' % backend) class GPUAccelerationEval(object): """Does the actual work of performing the evaluation. """ def __init__(self, helper): self.helper = helper self.particle_arrays = helper.object.particle_arrays self.nnps = None self._queue = helper._queue cfg = get_config() self._use_double = cfg.use_double self._use_local_memory = cfg.use_local_memory def _call_kernel(self, info, extra_args): nnps = self.nnps call = info.get('method') args = list(info.get('args')) dest = info['dest'] n = dest.get_number_of_particles(info.get('real', True)) args[1] = (n,) args[3:] = [x() for x in args[3:]] # Argument for NP_MAX extra_args[-1][...] = n - 1 if info.get('loop'): if self._use_local_memory: nnps.set_context(info['src_idx'], info['dst_idx']) nnps_args, gs_ls = self.nnps.get_kernel_args('float') self._queue.finish() args[1] = gs_ls[0] args[2] = gs_ls[1] # No need for the guard variable for the local memory code. 
args = args + extra_args[:-1] + nnps_args call(*args) self._queue.finish() else: nnps.set_context(info['src_idx'], info['dst_idx']) cache = nnps.current_cache cache.get_neighbors_gpu() self._queue.finish() args = args + [ cache._nbr_lengths_gpu.dev.data, cache._start_idx_gpu.dev.data, cache._neighbors_gpu.dev.data ] + extra_args call(*args) else: call(*(args + extra_args)) self._queue.finish() def _sync_from_gpu(self, eq): ary = eq._gpu.get() for i, name in enumerate(ary.dtype.names): setattr(eq, name, ary[0][i]) def _converged(self, equations): for eq in equations: if not (eq.converged() > 0): return False return True def compute(self, t, dt): helper = self.helper dtype = np.float64 if self._use_double else np.float32 extra_args = [np.asarray(t, dtype=dtype), np.asarray(dt, dtype=dtype), np.asarray(0, dtype=np.uint32)] i = 0 iter_count = 0 iter_start = 0 while i < len(helper.calls): info = helper.calls[i] type = info['type'] if type == 'method': method_name = info.get('method') method = getattr(self, method_name) if method_name == 'do_reduce': _args = info.get('args') method(_args[0], _args[1], t, dt) else: method(*info.get('args')) elif type == 'py_initialize': args = info['dest'], t, dt for call in info['calls']: call(*args) elif type == 'pre_post': func = info.get('callable') func(*info.get('args')) elif type == 'kernel': self._call_kernel(info, extra_args) elif type == 'start_iteration': iter_count = 0 iter_start = i elif type == 'stop_iteration': eqs = info['equations'] group = info['group'] iter_count += 1 if ((iter_count >= group.min_iterations) and (iter_count == group.max_iterations or self._converged(eqs))): pass else: i = iter_start i += 1 def set_nnps(self, nnps): self.nnps = nnps def update_particle_arrays(self, arrays): raise NotImplementedError('GPU backend is incomplete') def update_nnps(self): self.nnps.update_domain() self.nnps.update() def do_reduce(self, eqs, dest, t, dt): for eq in eqs: eq.reduce(dest, t, dt) class 
CUDAAccelerationEval(GPUAccelerationEval): def _call_kernel(self, info, extra_args): from pycuda.gpuarray import splay import pycuda.driver as drv nnps = self.nnps call = info.get('method') args = list(info.get('args')) dest = info['dest'] n = dest.get_number_of_particles(info.get('real', True)) # args is actually [queue, None, None, actual_meaningful_args] # we do not need the first 3 args on CUDA. args = [x() for x in args[3:]] # Argument for NP_MAX extra_args[-1][...] = n - 1 gs, ls = splay(n) gs, ls = int(gs[0]), int(ls[0]) num_blocks = (n + ls - 1) // ls #num_blocks = int((gs + ls - 1) / ls) num_tpb = ls if info.get('loop'): if self._use_local_memory: # FIXME: Fix local memory for CUDA nnps.set_context(info['src_idx'], info['dst_idx']) nnps_args, gs_ls = self.nnps.get_kernel_args('float') args[1] = gs_ls[0] args[2] = gs_ls[1] # No need for the guard variable for the local memory code. args = args + extra_args[:-1] + nnps_args call(*args) else: # find block sizes nnps.set_context(info['src_idx'], info['dst_idx']) cache = nnps.current_cache cache.get_neighbors_gpu() args = args + [ cache._nbr_lengths_gpu.dev, cache._start_idx_gpu.dev, cache._neighbors_gpu.dev ] + extra_args event = drv.Event() call(*args, block=(num_tpb, 1, 1), grid=(num_blocks, 1)) event.record() event.synchronize() else: event = drv.Event() call(*(args + extra_args), block=(num_tpb, 1, 1), grid=(num_blocks, 1)) event.record() event.synchronize() def add_address_space(known_types): for v in known_types.values(): if 'GLOBAL_MEM' not in v.type: v.type = 'GLOBAL_MEM ' + v.type def get_equations_with_converged(group): def _get_eqs(g): if g.has_subgroups: res = [] for x in g.equations: res.extend(_get_eqs(x)) return res else: return g.equations eqs = [x for x in _get_eqs(group) if is_overloaded_method(getattr(x, 'converged'))] return eqs def convert_to_float_if_needed(code): use_double = get_config().use_double if not use_double: code = re.sub(r'\bdouble\b', 'float', code) return code class 
AccelerationEvalGPUHelper(object): def __init__(self, acceleration_eval): self.object = acceleration_eval self.backend = acceleration_eval.backend self.all_array_names = get_all_array_names( self.object.particle_arrays ) self.known_types = get_known_types_for_arrays( self.all_array_names ) add_address_space(self.known_types) predefined = dict(get_predefined_types( self.object.all_group.pre_comp )) self.known_types.update(predefined) self.known_types['NBRS'] = KnownType('GLOBAL_MEM unsigned int*') self.data = [] self._array_map = None self._array_index = None self._equations = {} self._cpu_structs = {} self._gpu_structs = {} self.calls = [] self.program = None self._ctx = get_context(self.backend) self._queue = get_queue(self.backend) def _setup_arrays_on_device(self): pas = self.object.particle_arrays array_map = {} array_index = {} for idx, pa in enumerate(pas): if pa.gpu is None: pa.set_device_helper(DeviceHelper(pa, backend=self.backend)) array_map[pa.name] = pa array_index[pa.name] = idx self._array_map = array_map self._array_index = array_index self._setup_structs_on_device() def _setup_structs_on_device(self): if self.backend == 'opencl': import pyopencl as cl import pyopencl.array # noqa: 401 import pyopencl.tools # noqa: 401 gpu = self._gpu_structs cpu = self._cpu_structs for k, v in cpu.items(): if v is None: gpu[k] = v else: g_struct, code = cl.tools.match_dtype_to_c_struct( self._ctx.devices[0], "dummy", v.dtype ) g_v = v.astype(g_struct) gpu[k] = cl.array.to_device(self._queue, g_v) if k in self._equations: self._equations[k]._gpu = gpu[k] else: from pycuda import gpuarray from compyle.cuda import match_dtype_to_c_struct gpu = self._gpu_structs cpu = self._cpu_structs for k, v in cpu.items(): if v is None: gpu[k] = v else: g_struct, code = match_dtype_to_c_struct( None, "junk", v.dtype ) g_v = v.astype(g_struct) gpu[k] = gpuarray.to_gpu(g_v) if k in self._equations: self._equations[k]._gpu = gpu[k] def _get_argument(self, arg, dest, src=None): ary_map 
= self._array_map structs = self._gpu_structs # This is needed for late binding on the device helper's attributes # which may change at each iteration when particles are added/removed. if self.backend == 'opencl': def _get_array(gpu_helper, attr): return getattr(gpu_helper, attr).dev.data else: def _get_array(gpu_helper, attr): return getattr(gpu_helper, attr).dev def _get_struct(obj): return obj if arg.startswith('d_'): return partial(_get_array, ary_map[dest].gpu, arg[2:]) elif arg.startswith('s_'): return partial(_get_array, ary_map[src].gpu, arg[2:]) else: if self.backend == 'opencl': return partial(_get_struct, structs[arg].data) else: return partial(_get_struct, structs[arg]) def _setup_calls(self): calls = [] prg = self.program array_index = self._array_index for item in self.data: type = item.get('type') if type == 'kernel': kernel = item.get('kernel') method = getattr(prg, kernel) method = profile_kernel(method, self.backend) dest = item['dest'] src = item.get('source', dest) args = [self._queue, None, None] for arg in item['args']: args.append(self._get_argument(arg, dest, src)) loop = item['loop'] args.append(self._get_argument('kern', dest, src)) info = dict( method=method, dest=self._array_map[dest], src=self._array_map[src], args=args, loop=loop, src_idx=array_index[src], dst_idx=array_index[dest], type='kernel' ) elif type == 'method': info = dict(item) if info.get('method') == 'do_reduce': args = info.get('args') grp = args[0] args[0] = [x for x in grp.equations if hasattr(x, 'reduce')] args[1] = self._array_map[args[1]] elif type == 'pre_post': info = dict(item) elif type == 'py_initialize': info = dict(item) info['dest'] = self._array_map[item.get('dest')] elif 'iteration' in type: group = item['group'] equations = get_equations_with_converged(group._orig_group) info = dict(type=type, equations=equations, group=group) else: raise RuntimeError('Unknown type %s' % type) calls.append(info) return calls 
########################################################################## # Public interface. ########################################################################## def get_code(self): path = os.path.join(os.path.dirname(__file__), 'acceleration_eval_gpu.mako') template = Template(filename=path) main = template.render(helper=self) if self.backend == 'opencl': from pyopencl._cluda import CLUDA_PREAMBLE elif self.backend == 'cuda': from pycuda._cluda import CLUDA_PREAMBLE double_support = get_config().use_double cluda = Template(CLUDA_PREAMBLE).render(double_support=double_support) main = "\n".join([cluda, main]) return main def setup_compiled_module(self, module): object = self.object self._setup_arrays_on_device() self.calls = self._setup_calls() if self.backend == 'opencl': acceleration_eval = GPUAccelerationEval(self) elif self.backend == 'cuda': acceleration_eval = CUDAAccelerationEval(self) object.set_compiled_object(acceleration_eval) def compile(self, code): if self.backend == 'opencl': ext = '.cl' backend = 'OpenCL' elif self.backend == 'cuda': ext = '.cu' backend = 'CUDA' code = convert_to_float_if_needed(code) path = os.path.expanduser(os.path.join( '~', '.pysph', 'source', get_platform_dir() )) if not os.path.exists(path): os.makedirs(path) fname = os.path.join(path, 'generated' + ext) with open(fname, 'w') as fp: fp.write(code) print("{backend} code written to {fname}".format( backend=backend, fname=fname) ) code = code.encode('ascii') if sys.version_info.major < 3 else code if self.backend == 'opencl': import pyopencl as cl self.program = cl.Program(self._ctx, code).build( options=['-w'] ) elif self.backend == 'cuda': from compyle.cuda import SourceModule self.program = SourceModule(code) return self.program ########################################################################## # Mako interface. 
########################################################################## def get_header(self): object = self.object Converter = get_converter(self.backend) transpiler = Converter(known_types=self.known_types) headers = [] helpers = [] if hasattr(object.kernel, '_get_helpers_'): helpers.extend(object.kernel._get_helpers_()) for equation in object.all_group.equations: if hasattr(equation, '_get_helpers_'): for helper in equation._get_helpers_(): if helper not in helpers: helpers.append(helper) headers.extend(get_helper_code(helpers, transpiler, self.backend)) headers.append(transpiler.parse_instance(object.kernel)) cls_name = object.kernel.__class__.__name__ self.known_types['SPH_KERNEL'] = KnownType( 'GLOBAL_MEM %s*' % cls_name, base_type=cls_name ) headers.append(object.all_group.get_equation_wrappers( self.known_types )) # This is to be done after the above as the equation names are assigned # only at this point. cpu_structs = self._cpu_structs h = CStructHelper(object.kernel) cpu_structs['kern'] = h.get_array() for eq in object.all_group.equations: self._equations[eq.var_name] = eq h.parse(eq) cpu_structs[eq.var_name] = h.get_array() return '\n'.join(headers) def _get_arg_base_types(self, args): base_types = [] for arg in args: base_types.append( ocl_detect_pointer_base_type(arg, self.known_types.get(arg)) ) return base_types def _get_typed_args(self, args): code = [] for arg in args: type = ocl_detect_type(arg, self.known_types.get(arg)) code.append('{type} {arg}'.format( type=type, arg=arg )) return code def _clean_kernel_args(self, args): remove = ('d_idx', 's_idx') for a in remove: if a in args: args.remove(a) def _get_simple_kernel(self, g_idx, sg_idx, group, dest, all_eqs, kind, source=None): assert kind in ('initialize', 'initialize_pair', 'post_loop', 'loop') sub_grp = '' if sg_idx == -1 else 's{idx}'.format(idx=sg_idx) if source is None: kernel = 'g{g_idx}{sub}_{dest}_{kind}'.format( g_idx=g_idx, sub=sub_grp, dest=dest, kind=kind ) else: kernel = 
'g{g_idx}{sg}_{source}_on_{dest}_{kind}'.format( g_idx=g_idx, sg=sub_grp, source=source, dest=dest, kind=kind ) sph_k_name = self.object.kernel.__class__.__name__ code = [ 'int d_idx = GID_0 * LDIM_0 + LID_0;', '/* Guard for padded threads. */', 'if (d_idx > NP_MAX) {return;};', 'GLOBAL_MEM %s* SPH_KERNEL = kern;' % sph_k_name ] all_args, py_args, _calls = self._get_equation_method_calls( all_eqs, kind, indent='' ) code.extend(_calls) s_ary, d_ary = all_eqs.get_array_names() if source is None: # We only need the dest arrays here as these are simple kernels # without a loop so there is no "source". _args = list(d_ary) else: d_ary.update(s_ary) _args = list(d_ary) py_args.extend(_args) all_args.extend(self._get_typed_args(_args)) all_args.extend( ['GLOBAL_MEM {kernel}* kern'.format(kernel=sph_k_name), 'double t', 'double dt', 'unsigned int NP_MAX'] ) body = '\n'.join([' ' * 4 + x for x in code]) self.data.append(dict( kernel=kernel, args=py_args, dest=dest, loop=False, real=group.real, type='kernel' )) sig = get_kernel_definition(kernel, all_args) return ( '{sig}\n{{\n{body}\n}}'.format( sig=sig, body=body ) ) def _get_equation_method_calls(self, eq_group, kind, indent=''): all_args = [] py_args = [] code = [] for eq in eq_group.equations: method = getattr(eq, kind, None) if method is not None: cls = eq.__class__.__name__ arg = 'GLOBAL_MEM {cls}* {name}'.format( cls=cls, name=eq.var_name ) all_args.append(arg) py_args.append(eq.var_name) call_args = list(getfullargspec(method).args) if 'self' in call_args: call_args.remove('self') call_args.insert(0, eq.var_name) code.extend( wrap_code( '{cls}_{kind}({args});'.format( cls=cls, kind=kind, args=', '.join(call_args) ), indent=indent ) ) return all_args, py_args, code def _declare_precomp_vars(self, context): decl = [] names = sorted(context.keys()) for var in names: value = context[var] if isinstance(value, int): declare = 'long ' decl.append('{declare}{var} = {value};'.format( declare=declare, var=var, value=value )) 
elif isinstance(value, float): declare = 'double ' decl.append('{declare}{var} = {value};'.format( declare=declare, var=var, value=value )) elif isinstance(value, (list, tuple)): decl.append( 'double {var}[{size}];'.format( var=var, size=len(value) ) ) return decl def _set_kernel(self, code, kernel): if kernel is not None: name = kernel.__class__.__name__ kern = '%s_kernel(kern, ' % name grad = '%s_gradient(kern, ' % name grad_h = '%s_gradient_h(kern, ' % name deltap = '%s_get_deltap(kern)' % name code = code.replace('DELTAP', deltap).replace('GRADIENT(', grad) return code.replace('KERNEL(', kern).replace('GRADH(', grad_h) else: return code def call_post(self, group): self.data.append(dict(callable=group.post, type='pre_post', args=())) def call_pre(self, group): self.data.append(dict(callable=group.pre, type='pre_post', args=())) def call_py_initialize(self, all_eq_group, dest): calls = [] for eq in all_eq_group.equations: method = getattr(eq, 'py_initialize', None) if method is not None: calls.append(method) if len(calls) > 0: self.data.append( dict(calls=calls, type='py_initialize', dest=dest) ) def call_reduce(self, all_eq_group, dest): self.data.append(dict(method='do_reduce', type='method', args=[all_eq_group, dest])) def call_update_nnps(self, group): self.data.append(dict(method='update_nnps', type='method', args=[])) def get_initialize_kernel(self, g_idx, sg_idx, group, dest, all_eqs): return self._get_simple_kernel( g_idx, sg_idx, group, dest, all_eqs, kind='initialize' ) def get_initialize_pair_kernel(self, g_idx, sg_idx, group, dest, source, eq_group): return self._get_simple_kernel( g_idx, sg_idx, group, dest, eq_group, kind='initialize_pair', source=source ) def get_simple_loop_kernel(self, g_idx, sg_idx, group, dest, all_eqs): return self._get_simple_kernel( g_idx, sg_idx, group, dest, all_eqs, kind='loop' ) def get_post_loop_kernel(self, g_idx, sg_idx, group, dest, all_eqs): return self._get_simple_kernel( g_idx, sg_idx, group, dest, all_eqs, 
kind='post_loop' ) def get_loop_kernel(self, g_idx, sg_idx, group, dest, source, eq_group): if get_config().use_local_memory: return self.get_lmem_loop_kernel(g_idx, sg_idx, group, dest, source, eq_group) kind = 'loop' sub_grp = '' if sg_idx == -1 else 's{idx}'.format(idx=sg_idx) kernel = 'g{g_idx}{sg}_{source}_on_{dest}_loop'.format( g_idx=g_idx, sg=sub_grp, source=source, dest=dest ) sph_k_name = self.object.kernel.__class__.__name__ context = eq_group.context all_args, py_args = [], [] code = self._declare_precomp_vars(context) code.extend([ 'unsigned int d_idx = GID_0 * LDIM_0 + LID_0;', '/* Guard for padded threads. */', 'if (d_idx > NP_MAX) {return;};', 'unsigned int s_idx, i;', 'GLOBAL_MEM %s* SPH_KERNEL = kern;' % sph_k_name, 'unsigned int start = start_idx[d_idx];', 'GLOBAL_MEM unsigned int* NBRS = &(neighbors[start]);', 'int N_NBRS = nbr_length[d_idx];', 'unsigned int end = start + N_NBRS;' ]) if eq_group.has_loop_all(): _all_args, _py_args, _calls = self._get_equation_method_calls( eq_group, kind='loop_all', indent='' ) code.extend(['', '// Calling loop_all of equations.']) code.extend(_calls) code.append('') all_args.extend(_all_args) py_args.extend(_py_args) if eq_group.has_loop(): code.append('// Calling loop of equations.') code.append('for (i=start; i 0: pre.append('') code.extend(pre) _all_args, _py_args, _calls = self._get_equation_method_calls( eq_group, kind, indent=' ' ) code.extend(_calls) for arg, py_arg in zip(_all_args, _py_args): if arg not in all_args: all_args.append(arg) py_args.append(py_arg) code.append('}') s_ary, d_ary = eq_group.get_array_names() s_ary.update(d_ary) _args = list(s_ary) py_args.extend(_args) all_args.extend(self._get_typed_args(_args)) body = '\n'.join([' ' * 4 + x for x in code]) body = self._set_kernel(body, self.object.kernel) all_args.extend( ['GLOBAL_MEM {kernel}* kern'.format(kernel=sph_k_name), 'GLOBAL_MEM unsigned int *nbr_length', 'GLOBAL_MEM unsigned int *start_idx', 'GLOBAL_MEM unsigned int *neighbors', 
'double t', 'double dt', 'unsigned int NP_MAX'] ) self.data.append(dict( kernel=kernel, args=py_args, dest=dest, source=source, loop=True, real=group.real, type='kernel' )) sig = get_kernel_definition(kernel, all_args) return ( '{sig}\n{{\n{body}\n\n}}\n'.format( sig=sig, body=body ) ) def get_lmem_loop_kernel(self, g_idx, sg_idx, group, dest, source, eq_group): kind = 'loop' sub_grp = '' if sg_idx == -1 else 's{idx}'.format(idx=sg_idx) kernel = 'g{g_idx}{sg}_{source}_on_{dest}_loop'.format( g_idx=g_idx, sg=sub_grp, source=source, dest=dest ) sph_k_name = self.object.kernel.__class__.__name__ context = eq_group.context all_args, py_args = [], [] setup_code = self._declare_precomp_vars(context) setup_code.append('GLOBAL_MEM %s* SPH_KERNEL = kern;' % sph_k_name) if eq_group.has_loop_all(): raise NotImplementedError("loop_all not suported with local " "memory") loop_code = [] pre = [] for p, cb in eq_group.precomputed.items(): src = cb.code.strip().splitlines() pre.extend([' ' * 4 + x + ';' for x in src]) if len(pre) > 0: pre.append('') loop_code.extend(pre) _all_args, _py_args, _calls = self._get_equation_method_calls( eq_group, kind, indent=' ' ) loop_code.extend(_calls) for arg, py_arg in zip(_all_args, _py_args): if arg not in all_args: all_args.append(arg) py_args.append(py_arg) s_ary, d_ary = eq_group.get_array_names() source_vars = set(s_ary) source_var_types = self._get_arg_base_types(source_vars) def modify_var_name(x): if x.startswith('s_'): return x + '_global' else: return x s_ary.update(d_ary) _args = list(s_ary) py_args.extend(_args) _args_modified = [modify_var_name(x) for x in _args] all_args.extend(self._get_typed_args(_args_modified)) setup_body = '\n'.join([' ' * 4 + x for x in setup_code]) setup_body = self._set_kernel(setup_body, self.object.kernel) loop_body = '\n'.join([' ' * 4 + x for x in loop_code]) loop_body = self._set_kernel(loop_body, self.object.kernel) all_args.extend( ['GLOBAL_MEM {kernel}* kern'.format(kernel=sph_k_name), 'double t', 
'double dt'] ) all_args.extend(get_kernel_args_list()) self.data.append(dict( kernel=kernel, args=py_args, dest=dest, source=source, loop=True, real=group.real, type='kernel' )) body = generate_body(setup=setup_body, loop=loop_body, vars=source_vars, types=source_var_types, wgs=get_config().wgs) sig = get_kernel_definition(kernel, all_args) return ( '{sig}\n{{\n{body}\n\n}}\n'.format( sig=sig, body=body ) ) def start_iteration(self, group): self.data.append(dict( type='start_iteration', group=group )) def stop_iteration(self, group): self.data.append(dict( type='stop_iteration', group=group, )) pysph-master/pysph/sph/acceleration_nnps_helper.py000066400000000000000000000110561356347341600230660ustar00rootroot00000000000000import sys from mako.template import Template disable_unicode = False if sys.version_info.major > 2 else True NNPS_TEMPLATE = r""" /* * Property owners * octree_dst: cids, unique_cids * octree_src: neighbor_cid_offset, neighbor_cids * self evident: xsrc, ysrc, zsrc, hsrc, xdst, ydst, zdst, hdst, pbounds_src, pbounds_dst, */ long i = get_global_id(0); int lid = get_local_id(0); int _idx = get_group_id(0); // Fetch dst particles ${data_t} _xd, _yd, _zd, _hd; int _cid_dst = _unique_cids[_idx]; uint2 _pbound_here = _pbounds_dst[_cid_dst]; char _svalid = (_pbound_here.s0 + lid < _pbound_here.s1); unsigned int d_idx; if (_svalid) { % if sorted: d_idx = _pbound_here.s0 + lid; % else: d_idx = _pids_dst[_pbound_here.s0 + lid]; % endif _xd = xd[d_idx]; _yd = yd[d_idx]; _zd = zd[d_idx]; _hd = hd[d_idx]; } // Set loop parameters int _cid_src, _pid_src; int _offset_src = _neighbor_cid_offset[_idx]; int _offset_lim = _neighbor_cid_offset[_idx + 1]; uint2 _pbound_here2; local ${data_t} _xs[${wgs}]; local ${data_t} _ys[${wgs}]; local ${data_t} _zs[${wgs}]; local ${data_t} _hs[${wgs}]; % for var, type in zip(vars, types): local ${type} ${var}[${wgs}]; % endfor char _nbrs[${wgs}]; int _nbr_cnt, _m; ${setup} while (_offset_src < _offset_lim) { _cid_src = 
_neighbor_cids[_offset_src]; _pbound_here2 = _pbounds_src[_cid_src]; while (_pbound_here2.s0 < _pbound_here2.s1) { _m = min(_pbound_here2.s1, _pbound_here2.s0 + ${wgs}) - _pbound_here2.s0; // Copy src data if (lid < _m) { % if sorted: _pid_src = _pbound_here2.s0 + lid; % else: _pid_src = _pids_src[_pbound_here2.s0 + lid]; % endif _xs[lid] = xs[_pid_src]; _ys[lid] = ys[_pid_src]; _zs[lid] = zs[_pid_src]; _hs[lid] = hs[_pid_src]; % for var in vars: ${var}[lid] = ${var}_global[_pid_src]; % endfor } barrier(CLK_LOCAL_MEM_FENCE); // Everything this point forward is done independently // by each thread. if (_svalid) { _nbr_cnt = 0; for (int _j=0; _j < _m; _j++) { ${data_t} _dist2 = NORM2(_xs[_j] - _xd, _ys[_j] - _yd, _zs[_j] - _zd); ${data_t} _r2 = MAX(_hs[_j], _hd) * _radius_scale; _r2 *= _r2; if (_dist2 < _r2) { _nbrs[_nbr_cnt++] = _j; } } int _j = 0; while (_j < _nbr_cnt) { int s_idx = _nbrs[_j]; ${loop_code} _j++; } } barrier(CLK_LOCAL_MEM_FENCE); _pbound_here2.s0 += ${wgs}; } // Process next neighboring cell _offset_src++; } """ NNPS_ARGS_TEMPLATE = """ GLOBAL_MEM int *_unique_cids, GLOBAL_MEM int *_pids_src, GLOBAL_MEM int *_pids_dst, GLOBAL_MEM int *_cids, GLOBAL_MEM uint2 *_pbounds_src, GLOBAL_MEM uint2 *_pbounds_dst, %(data_t)s _radius_scale, GLOBAL_MEM int *_neighbor_cid_offset, GLOBAL_MEM int *_neighbor_cids, GLOBAL_MEM %(data_t)s *xd, GLOBAL_MEM %(data_t)s *yd, GLOBAL_MEM %(data_t)s *zd, GLOBAL_MEM %(data_t)s *hd, GLOBAL_MEM %(data_t)s *xs, GLOBAL_MEM %(data_t)s *ys, GLOBAL_MEM %(data_t)s *zs, GLOBAL_MEM %(data_t)s *hs """ def _generate_nnps_code(sorted, wgs, setup, loop, vars, types, data_t='float'): # Note: Properties like the data type and sortedness # need to be fixed throughout the simulation since # currently this function is only called at the start of # the simulation. 
return Template(NNPS_TEMPLATE, disable_unicode=disable_unicode).render( data_t=data_t, sorted=sorted, wgs=wgs, setup=setup, loop_code=loop, vars=vars, types=types ) def generate_body(setup, loop, vars, types, wgs, c_type='float'): return _generate_nnps_code(True, wgs, setup, loop, vars, types, c_type) def get_kernel_args_list(c_type='float'): args = NNPS_ARGS_TEMPLATE % {'data_t': c_type} return args.split(",") pysph-master/pysph/sph/basic_equations.py000066400000000000000000000231461356347341600212140ustar00rootroot00000000000000""" Basic SPH Equations ################### """ from pysph.sph.equation import Equation class SummationDensity(Equation): r"""Good old Summation density: :math:`\rho_a = \sum_b m_b W_{ab}` """ def initialize(self, d_idx, d_rho): d_rho[d_idx] = 0.0 def loop(self, d_idx, d_rho, s_idx, s_m, WIJ): d_rho[d_idx] += s_m[s_idx]*WIJ class BodyForce(Equation): r"""Add a body force to the particles: :math:`\boldsymbol{f} = f_x, f_y, f_z` """ def __init__(self, dest, sources, fx=0.0, fy=0.0, fz=0.0): r""" Parameters ---------- fx : float Body force per unit mass along the x-axis fy : float Body force per unit mass along the y-axis fz : float Body force per unit mass along the z-axis """ self.fx = fx self.fy = fy self.fz = fz super(BodyForce, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] += self.fx d_av[d_idx] += self.fy d_aw[d_idx] += self.fz class VelocityGradient2D(Equation): r""" Compute the SPH evaluation for the velocity gradient tensor in 2D. The expression for the velocity gradient is: :math:`\frac{\partial v^i}{\partial x^j} = \sum_{b}\frac{m_b}{\rho_b}(v_b - v_a)\frac{\partial W_{ab}}{\partial x_a^j}` Notes ----- The tensor properties are stored in the variables v_ij where 'i' refers to the velocity component and 'j' refers to the spatial component. 
Thus v_10 is :math:`\frac{\partial v}{\partial x}` """ def initialize(self, d_idx, d_v00, d_v01, d_v10, d_v11): d_v00[d_idx] = 0.0 d_v01[d_idx] = 0.0 d_v10[d_idx] = 0.0 d_v11[d_idx] = 0.0 def loop(self, d_idx, s_idx, s_m, s_rho, d_v00, d_v01, d_v10, d_v11, DWIJ, VIJ): tmp = s_m[s_idx]/s_rho[s_idx] d_v00[d_idx] += tmp * -VIJ[0] * DWIJ[0] d_v01[d_idx] += tmp * -VIJ[0] * DWIJ[1] d_v10[d_idx] += tmp * -VIJ[1] * DWIJ[0] d_v11[d_idx] += tmp * -VIJ[1] * DWIJ[1] class VelocityGradient3D(Equation): r""" Compute the SPH evaluation for the velocity gradient tensor in 2D. The expression for the velocity gradient is: :math:`\frac{\partial v^i}{\partial x^j} = \sum_{b}\frac{m_b}{\rho_b}(v_b - v_a)\frac{\partial W_{ab}}{\partial x_a^j}` Notes ----- The tensor properties are stored in the variables v_ij where 'i' refers to the velocity component and 'j' refers to the spatial component. Thus v_21 is :math:`\frac{\partial v}{\partial x}` """ def initialize(self, d_idx, d_v00, d_v01, d_v02, d_v10, d_v11, d_v12, d_v20, d_v21, d_v22): d_v00[d_idx] = 0.0 d_v01[d_idx] = 0.0 d_v02[d_idx] = 0.0 d_v10[d_idx] = 0.0 d_v11[d_idx] = 0.0 d_v12[d_idx] = 0.0 d_v20[d_idx] = 0.0 d_v21[d_idx] = 0.0 d_v22[d_idx] = 0.0 def loop(self, d_idx, s_idx, s_m, s_rho, d_v00, d_v01, d_v02, d_v10, d_v11, d_v12, d_v20, d_v21, d_v22, DWIJ, VIJ): tmp = s_m[s_idx]/s_rho[s_idx] d_v00[d_idx] += tmp * -VIJ[0] * DWIJ[0] d_v01[d_idx] += tmp * -VIJ[0] * DWIJ[1] d_v02[d_idx] += tmp * -VIJ[0] * DWIJ[2] d_v10[d_idx] += tmp * -VIJ[1] * DWIJ[0] d_v11[d_idx] += tmp * -VIJ[1] * DWIJ[1] d_v12[d_idx] += tmp * -VIJ[1] * DWIJ[2] d_v20[d_idx] += tmp * -VIJ[2] * DWIJ[0] d_v21[d_idx] += tmp * -VIJ[2] * DWIJ[1] d_v22[d_idx] += tmp * -VIJ[2] * DWIJ[2] class IsothermalEOS(Equation): r""" Compute the pressure using the Isothermal equation of state: :math:`p = p_0 + c_0^2(\rho_0 - \rho)` """ def __init__(self, dest, sources, rho0, c0, p0): r""" Parameters ---------- rho0 : float Reference density of the fluid (:math:`\rho_0`) c0 : float 
Maximum speed of sound expected in the system (:math:`c0`) p0 : float Reference pressure in the system (:math:`p0`) """ self.rho0 = rho0 self.c0 = c0 self.c02 = c0 * c0 self.p0 = p0 super(IsothermalEOS, self).__init__(dest, sources) def loop(self, d_idx, d_rho, d_p): d_p[d_idx] = self.p0 + self.c02 * (d_rho[d_idx] - self.rho0) class ContinuityEquation(Equation): r"""Density rate: :math:`\frac{d\rho_a}{dt} = \sum_b m_b \boldsymbol{v}_{ab}\cdot \nabla_a W_{ab}` """ def initialize(self, d_idx, d_arho): d_arho[d_idx] = 0.0 def loop(self, d_idx, d_arho, s_idx, s_m, DWIJ, VIJ): vijdotdwij = DWIJ[0]*VIJ[0] + DWIJ[1]*VIJ[1] + DWIJ[2]*VIJ[2] d_arho[d_idx] += s_m[s_idx]*vijdotdwij class MonaghanArtificialViscosity(Equation): r"""Classical Monaghan style artificial viscosity [Monaghan2005]_ .. math:: \frac{d\mathbf{v}_{a}}{dt}&=&-\sum_{b}m_{b}\Pi_{ab}\nabla_{a}W_{ab} where .. math:: \Pi_{ab}=\begin{cases}\frac{-\alpha_{\pi}\bar{c}_{ab}\phi_{ab}+ \beta_{\pi}\phi_{ab}^{2}}{\bar{\rho}_{ab}}, & \mathbf{v}_{ab}\cdot \mathbf{r}_{ab}<0\\0, & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}\geq0 \end{cases} with .. math:: \phi_{ab}=\frac{h\mathbf{v}_{ab}\cdot\mathbf{r}_{ab}} {|\mathbf{r}_{ab}|^{2}+\epsilon^{2}}\\ \bar{c}_{ab}&=&\frac{c_{a}+c_{b}}{2}\\ \bar{\rho}_{ab}&=&\frac{\rho_{a}+\rho_{b}}{2} References ---------- .. [Monaghan2005] J. Monaghan, "Smoothed particle hydrodynamics", Reports on Progress in Physics, 68 (2005), pp. 1703-1759. 
""" def __init__(self, dest, sources, alpha=1.0, beta=1.0): r""" Parameters ---------- alpha : float produces a shear and bulk viscosity beta : float used to handle high Mach number shocks """ self.alpha = alpha self.beta = beta super(MonaghanArtificialViscosity, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_rho, d_cs, d_au, d_av, d_aw, s_m, s_rho, s_cs, VIJ, XIJ, HIJ, R2IJ, RHOIJ1, EPS, DWIJ): vijdotxij = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2] piij = 0.0 if vijdotxij < 0: cij = 0.5 * (d_cs[d_idx] + s_cs[s_idx]) muij = (HIJ * vijdotxij)/(R2IJ + EPS) piij = -self.alpha*cij*muij + self.beta*muij*muij piij = piij*RHOIJ1 d_au[d_idx] += -s_m[s_idx] * piij * DWIJ[0] d_av[d_idx] += -s_m[s_idx] * piij * DWIJ[1] d_aw[d_idx] += -s_m[s_idx] * piij * DWIJ[2] class XSPHCorrection(Equation): r"""Position stepping with XSPH correction [Monaghan1992]_ .. math:: \frac{d\mathbf{r}_{a}}{dt}=\mathbf{\hat{v}}_{a}=\mathbf{v}_{a}- \epsilon\sum_{b}m_{b}\frac{\mathbf{v}_{ab}}{\bar{\rho}_{ab}}W_{ab} References ---------- .. [Monaghan1992] J. Monaghan, Smoothed Particle Hydrodynamics, "Annual Review of Astronomy and Astrophysics", 30 (1992), pp. 543-574. """ def __init__(self, dest, sources, eps=0.5): r""" Parameters ---------- eps : float :math:`\epsilon` as in the above equation Notes ----- This equation must be used to advect the particles. XSPH can be turned off by setting the parameter ``eps = 0``. 
""" self.eps = eps super(XSPHCorrection, self).__init__(dest, sources) def initialize(self, d_idx, d_ax, d_ay, d_az): d_ax[d_idx] = 0.0 d_ay[d_idx] = 0.0 d_az[d_idx] = 0.0 def loop(self, s_idx, d_idx, s_m, d_ax, d_ay, d_az, WIJ, RHOIJ1, VIJ): tmp = -self.eps * s_m[s_idx]*WIJ*RHOIJ1 d_ax[d_idx] += tmp * VIJ[0] d_ay[d_idx] += tmp * VIJ[1] d_az[d_idx] += tmp * VIJ[2] def post_loop(self, d_idx, d_ax, d_ay, d_az, d_u, d_v, d_w): d_ax[d_idx] += d_u[d_idx] d_ay[d_idx] += d_v[d_idx] d_az[d_idx] += d_w[d_idx] class XSPHCorrectionForLeapFrog(Equation): r"""The XSPH correction [Monaghan1992]_ alone. This is meant to be used with a leap-frog integrator which already considers the velocity of the particles. It simply computes the correction term and adds that to ``ax, ay, az``. .. math:: \frac{d\mathbf{r}_{a}}{dt}=\mathbf{\hat{v}}_{a}= - \epsilon\sum_{b}m_{b}\frac{\mathbf{v}_{ab}}{\bar{\rho}_{ab}}W_{ab} References ---------- .. [Monaghan1992] J. Monaghan, Smoothed Particle Hydrodynamics, "Annual Review of Astronomy and Astrophysics", 30 (1992), pp. 543-574. """ def __init__(self, dest, sources, eps=0.5): r""" Parameters ---------- eps : float :math:`\epsilon` as in the above equation Notes ----- This equation must be used to advect the particles. XSPH can be turned off by setting the parameter ``eps = 0``. 
""" self.eps = eps super(XSPHCorrectionForLeapFrog, self).__init__(dest, sources) def initialize(self, d_idx, d_ax, d_ay, d_az): d_ax[d_idx] = 0.0 d_ay[d_idx] = 0.0 d_az[d_idx] = 0.0 def loop(self, s_idx, d_idx, s_m, d_ax, d_ay, d_az, WIJ, RHOIJ1, VIJ): tmp = -self.eps * s_m[s_idx]*WIJ*RHOIJ1 d_ax[d_idx] += tmp * VIJ[0] d_ay[d_idx] += tmp * VIJ[1] d_az[d_idx] += tmp * VIJ[2] pysph-master/pysph/sph/bc/000077500000000000000000000000001356347341600160475ustar00rootroot00000000000000pysph-master/pysph/sph/bc/__init__.py000066400000000000000000000000001356347341600201460ustar00rootroot00000000000000pysph-master/pysph/sph/bc/characteristic/000077500000000000000000000000001356347341600210375ustar00rootroot00000000000000pysph-master/pysph/sph/bc/characteristic/__init__.py000066400000000000000000000000001356347341600231360ustar00rootroot00000000000000pysph-master/pysph/sph/bc/characteristic/inlet.py000066400000000000000000000001621356347341600225230ustar00rootroot00000000000000""" Inlet boundary """ from pysph.sph.bc.inlet_outlet_manager import InletBase class Inlet(InletBase): pass pysph-master/pysph/sph/bc/characteristic/outlet.py000066400000000000000000000001661356347341600227300ustar00rootroot00000000000000""" Outlet boundary """ from pysph.sph.bc.inlet_outlet_manager import OutletBase class Outlet(OutletBase): pass pysph-master/pysph/sph/bc/characteristic/simple_inlet_outlet.py000066400000000000000000000163261356347341600255010ustar00rootroot00000000000000"""Method of characteristic proposed by - Lastiwka, Martin, Mihai Basa, and Nathan J. Quinlan. "Permeable and non‐reflecting boundary conditions in SPH." International journal for numerical methods in fluids 61.7 (2009): 709-724. 
""" from pysph.sph.equation import Equation from pysph.sph.integrator import PECIntegrator from pysph.sph.bc.inlet_outlet_manager import InletOutletManager from pysph.sph.wc.edac import EDACScheme import numpy class SimpleInletOutlet(InletOutletManager): def add_io_properties(self, pa, scheme=None): DEFAULT_PROPS = [ 'disp', 'ioid', 'Bp', 'A', 'wij', 'po', 'uho', 'vho', 'who', 'Buh', 'Bvh', 'Bwh', 'x0', 'y0', 'z0', 'uhat', 'vhat', 'what', 'xn', 'yn', 'zn', 'J1', 'J2u'] STRIDE_DATA = { 'A': 16, 'Bu': 4, 'Bv': 4, 'Bw': 4, 'Bp': 4, 'Brho': 4, 'uo': 4, 'vo': 4, 'wo': 4, 'po': 4, 'rhoo': 4, 'Bau': 4, 'Bav': 4, 'Baw': 4, 'auo': 4, 'avo': 4, 'awo': 4, 'Buh': 4, 'Bvh': 4, 'Bwh': 4, 'uho': 4, 'vho': 4, 'who': 4} for prop in DEFAULT_PROPS: if prop in STRIDE_DATA: pa.add_property(prop, stride=STRIDE_DATA[prop]) else: pa.add_property(prop) pa.add_constant('uref', 0.0) pa.add_constant('avgj2u', 0.0) pa.add_constant('avgj1', 0.0) def get_stepper(self, scheme, cls, edactvf=False): from pysph.sph.bc.inlet_outlet_manager import ( InletStep, OutletStep) steppers = {} if (cls == PECIntegrator): if isinstance(scheme, EDACScheme): for inlet in self.inlets: steppers[inlet] = InletStep() for outlet in self.outlets: steppers[outlet] = OutletStep() for g_inlet in self.ghost_inlets: steppers[g_inlet] = InletStep() self.active_stages = [2] return steppers def get_equations(self, scheme=None, summation_density=False, edactvf=False): from pysph.sph.equation import Group from pysph.sph.bc.interpolate import ( UpdateMomentMatrix, EvaluateUhat, EvaluateP, ExtrapolateUhat, ExtrapolateP, CopyUhatFromGhost, CopyPFromGhost) from pysph.sph.bc.inlet_outlet_manager import ( UpdateNormalsAndDisplacements, CopyNormalsandDistances) equations = [] g00 = [] i = 0 for info in self.inletinfo: g00.append(UpdateNormalsAndDisplacements( dest=info.pa_name, sources=None, xn=info.normal[0], yn=info.normal[1], zn=info.normal[2], xo=info.refpoint[0], yo=info.refpoint[1], zo=info.refpoint[2] )) 
g00.append(CopyNormalsandDistances( dest=self.inlet_pairs[info.pa_name], sources=[info.pa_name])) i = i + 1 for fluid in self.fluids: g00.append(EvalauteCharacterisctics( dest=fluid, sources=None, c0=10.0, u0=1.0, v0=0.0, p0=1.0, rho0=1000.0)) equations.append(Group(equations=g00, real=False)) g02 = [] for name in self.ghost_inlets: g02.append(UpdateMomentMatrix( dest=name, sources=self.fluids, dim=self.dim)) equations.append(Group(equations=g02, real=False)) g03 = [] for name in self.ghost_inlets: g03.append(EvaluateUhat(dest=name, sources=self.fluids, dim=self.dim)) g03.append(EvaluateP(dest=name, sources=self.fluids, dim=self.dim)) for name in self.outlets: g03.append(EvalauteNumberdensity(dest=name, sources=self.fluids)) g03.append(ShepardInterpolateCharacteristics(dest=name, sources=self.fluids)) equations.append(Group(equations=g03, real=False)) g04 = [] for name in self.ghost_inlets: g04.append(ExtrapolateUhat(dest=name, sources=None)) g04.append(ExtrapolateP(dest=name, sources=None)) for name in self.outlets: g04.append(EvaluatePropertyfromCharacteristics( dest=name, sources=None, c0=10.0, u0=1.0, v0=0.0, p0=1.0, rho0=1000.0)) equations.append(Group(equations=g04, real=False)) g05 = [] for io in self.inlet_pairs.keys(): g05.append(CopyUhatFromGhost( dest=io, sources=[self.inlet_pairs[io]])) g05.append(CopyPFromGhost( dest=io, sources=[self.inlet_pairs[io]])) equations.append(Group(equations=g05, real=False)) g06 = [] for inlet in self.inletinfo: for eqn in inlet.equations: g06.append(eqn) for outlet in self.outletinfo: for eqn in outlet.equations: g06.append(eqn) equations.append(Group(equations=g06, real=False)) return equations class EvalauteCharacterisctics(Equation): def __init__(self, dest, sources, c0, rho0, u0, p0, v0): self.c0 = c0 self.rho0 = rho0 self.p0 = p0 self.u0 = u0 self.v0 = v0 super(EvalauteCharacterisctics, self).__init__(dest, sources) def initialize(self, d_idx, d_u, d_v, d_p, d_rho, d_J1, d_J2u): a = self.c0 rhoref = self.rho0 uref = 
self.u0 pref = self.p0 d_J1[d_idx] = -a**2 * (d_rho[d_idx] - rhoref) + (d_p[d_idx] - pref) d_J2u[d_idx] = d_rho[d_idx] * a * (d_u[d_idx] - uref) + (d_p[d_idx] - pref) class EvalauteNumberdensity(Equation): def initialize(self, d_idx, d_wij): d_wij[d_idx] = 0.0 def loop(self, d_idx, d_wij, WIJ): d_wij[d_idx] += WIJ class ShepardInterpolateCharacteristics(Equation): def initialize(self, d_idx, d_J1, d_J2u): d_J1[d_idx] = 0.0 d_J2u[d_idx] = 0.0 def loop(self, d_idx, d_J1, d_J2u, s_J1, s_J2u, WIJ, s_idx): d_J1[d_idx] += s_J1[s_idx] * WIJ d_J2u[d_idx] += s_J2u[s_idx] * WIJ def post_loop(self, d_idx, d_J1, d_J2u, d_wij, d_avgj2u, d_avgj1): if d_wij[d_idx] > 1e-14: d_J1[d_idx] /= d_wij[d_idx] d_J2u[d_idx] /= d_wij[d_idx] else: d_J2u[d_idx] = d_avgj2u[0] d_J1[d_idx] = d_avgj1[0] def reduce(self, dst, t, dt): dst.avgj2u[0] = numpy.average(dst.J2u[dst.wij > 0.0001]) dst.avgj1[0] = numpy.average(dst.J1[dst.wij > 0.0001]) class EvaluatePropertyfromCharacteristics(Equation): def __init__(self, dest, sources, c0, rho0, u0, p0, v0): self.c0 = c0 self.rho0 = rho0 self.p0 = p0 self.u0 = u0 self.v0 = v0 super(EvaluatePropertyfromCharacteristics, self).__init__(dest, sources) def initialize(self, d_idx, d_u, d_v, d_p, d_rho, d_J1, d_J2u): j1 = d_J1[d_idx] j2u = d_J2u[d_idx] d_rho[d_idx] = self.rho0 + (-j1 + 0.5 * j2u) / self.c0**2 d_u[d_idx] = self.u0 + (j2u) / (2 * d_rho[d_idx] * self.c0) d_p[d_idx] = self.p0 + 0.5 * (j2u) pysph-master/pysph/sph/bc/donothing/000077500000000000000000000000001356347341600200405ustar00rootroot00000000000000pysph-master/pysph/sph/bc/donothing/__init__.py000066400000000000000000000000001356347341600221370ustar00rootroot00000000000000pysph-master/pysph/sph/bc/donothing/inlet.py000066400000000000000000000001621356347341600215240ustar00rootroot00000000000000""" Inlet boundary """ from pysph.sph.bc.inlet_outlet_manager import InletBase class Inlet(InletBase): pass 
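The characteristic decomposition used by `EvalauteCharacterisctics` and `EvaluatePropertyfromCharacteristics` in `characteristic/simple_inlet_outlet.py` above can be sketched in plain Python. The helper names below are hypothetical (not part of PySPH); the sketch uses the linearized `rho0 * c0` form of J2 and drops the downstream-running invariant (taken as zero at the outlet), so it is not an exact round trip away from the reference state:

```python
def characteristics(rho, u, p, rho0, u0, p0, c0):
    # Upstream-propagating invariants, as in EvalauteCharacterisctics
    # (linearized variant: rho0*c0 instead of the local rho*c0).
    j1 = -c0**2 * (rho - rho0) + (p - p0)
    j2 = rho0 * c0 * (u - u0) + (p - p0)
    return j1, j2


def primitives(j1, j2, rho0, u0, p0, c0):
    # Invert (J1, J2, J3=0) back to primitives, mirroring
    # EvaluatePropertyfromCharacteristics.
    rho = rho0 + (-j1 + 0.5 * j2) / c0**2
    u = u0 + j2 / (2.0 * rho0 * c0)
    p = p0 + 0.5 * j2
    return rho, u, p
```

At the reference state both invariants vanish and the inversion recovers the reference exactly, which is what makes the outlet non-reflecting for the quiescent base flow.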
pysph-master/pysph/sph/bc/donothing/outlet.py
"""
Outlet boundary
"""
from pysph.sph.bc.inlet_outlet_manager import OutletBase


class Outlet(OutletBase):
    pass

pysph-master/pysph/sph/bc/donothing/simple_inlet_outlet.py
"""Do-nothing outlet, first used in SPH by

    - Federico, Ivan, et al. "Simulating 2D open-channel flows through an
      SPH model." European Journal of Mechanics-B/Fluids 34 (2012): 35-46.

"""
from pysph.sph.integrator import PECIntegrator
from pysph.sph.bc.inlet_outlet_manager import InletOutletManager
from pysph.sph.wc.edac import EDACScheme


class SimpleInletOutlet(InletOutletManager):
    def add_io_properties(self, pa, scheme=None):
        DEFAULT_PROPS = [
            'disp', 'ioid', 'Bp', 'A', 'wij', 'po', 'uho', 'vho',
            'who', 'Buh', 'Bvh', 'Bwh', 'x0', 'y0', 'z0', 'uhat', 'vhat',
            'what', 'xn', 'yn', 'zn']
        STRIDE_DATA = {
            'A': 16, 'Bu': 4, 'Bv': 4, 'Bw': 4, 'Bp': 4, 'Brho': 4,
            'uo': 4, 'vo': 4, 'wo': 4, 'po': 4, 'rhoo': 4, 'Bau': 4,
            'Bav': 4, 'Baw': 4, 'auo': 4, 'avo': 4, 'awo': 4, 'Buh': 4,
            'Bvh': 4, 'Bwh': 4, 'uho': 4, 'vho': 4, 'who': 4}
        for prop in DEFAULT_PROPS:
            if prop in STRIDE_DATA:
                pa.add_property(prop, stride=STRIDE_DATA[prop])
            else:
                pa.add_property(prop)
        pa.add_constant('uref', 0.0)

    def get_stepper(self, scheme, cls, edactvf=False):
        from pysph.sph.bc.inlet_outlet_manager import (
            InletStep, OutletStepWithUhat)
        steppers = {}
        if (cls == PECIntegrator):
            if isinstance(scheme, EDACScheme):
                for inlet in self.inlets:
                    steppers[inlet] = InletStep()
                for outlet in self.outlets:
                    steppers[outlet] = OutletStepWithUhat()
                for g_inlet in self.ghost_inlets:
                    steppers[g_inlet] = InletStep()
                self.active_stages = [2]
        return steppers

    def get_equations(self, scheme=None, summation_density=False,
                      edactvf=False):
        from pysph.sph.equation import Group
        from pysph.sph.bc.interpolate import (
            UpdateMomentMatrix, EvaluateUhat, EvaluateP,
            ExtrapolateUhat, ExtrapolateP, CopyUhatFromGhost,
            CopyPFromGhost)
        from pysph.sph.bc.inlet_outlet_manager import (
            UpdateNormalsAndDisplacements, CopyNormalsandDistances)
        equations = []
        g00 = []
        i = 0
        for info in self.inletinfo:
            g00.append(UpdateNormalsAndDisplacements(
                dest=info.pa_name, sources=None, xn=info.normal[0],
                yn=info.normal[1], zn=info.normal[2],
                xo=info.refpoint[0], yo=info.refpoint[1],
                zo=info.refpoint[2]
            ))
            g00.append(CopyNormalsandDistances(
                dest=self.inlet_pairs[info.pa_name], sources=[info.pa_name]))
            i = i + 1
        equations.append(Group(equations=g00, real=False))

        g02 = []
        for name in self.ghost_inlets:
            g02.append(UpdateMomentMatrix(
                dest=name, sources=self.fluids, dim=self.dim))
        equations.append(Group(equations=g02, real=False))

        g03 = []
        for name in self.ghost_inlets:
            g03.append(EvaluateUhat(dest=name, sources=self.fluids,
                                    dim=self.dim))
            g03.append(EvaluateP(dest=name, sources=self.fluids,
                                 dim=self.dim))
        equations.append(Group(equations=g03, real=False))

        g04 = []
        for name in self.ghost_inlets:
            g04.append(ExtrapolateUhat(dest=name, sources=None))
            g04.append(ExtrapolateP(dest=name, sources=None))
        equations.append(Group(equations=g04, real=False))

        g05 = []
        for io in self.inlet_pairs.keys():
            g05.append(CopyUhatFromGhost(
                dest=io, sources=[self.inlet_pairs[io]]))
            g05.append(CopyPFromGhost(
                dest=io, sources=[self.inlet_pairs[io]]))
        equations.append(Group(equations=g05, real=False))

        g06 = []
        for inlet in self.inletinfo:
            for eqn in inlet.equations:
                g06.append(eqn)
        for outlet in self.outletinfo:
            for eqn in outlet.equations:
                g06.append(eqn)
        equations.append(Group(equations=g06, real=False))

        return equations
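The Shepard (normalized kernel) weighting that the `EvalauteNumberdensity` / `ShepardInterpolateCharacteristics` pair performs particle-by-particle above can be sketched for a single destination particle in NumPy. The helper below is hypothetical (not a PySPH API); note the fallback to a plain average when the kernel support is empty, mirroring the `post_loop`/`reduce` pair:

```python
import numpy as np


def shepard_interpolate(f_src, w):
    # w[j] = W(x_i - x_j): kernel weights of the source (fluid) particles
    # seen from one outlet particle; f_src[j] are the source values.
    wij = w.sum()                      # EvalauteNumberdensity accumulation
    if wij > 1e-14:
        return float((f_src * w).sum() / wij)
    # Fallback for poorly supported particles: average over the rest
    # (the reduce step computes this average over well-supported ones).
    return float(f_src.mean())
```

The normalization by the summed weights makes the interpolant exactly reproduce constants, which is why it is preferred over a raw kernel sum near a boundary where the support is one-sided.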
pysph-master/pysph/sph/bc/hybrid/
pysph-master/pysph/sph/bc/hybrid/__init__.py
pysph-master/pysph/sph/bc/hybrid/inlet.py
"""
Inlet boundary
"""
import numpy as np

from pysph.sph.bc.inlet_outlet_manager import InletBase


class Inlet(InletBase):
    def update(self, time, dt, stage):
        dest_pa = self.dest_pa
        inlet_pa = self.inlet_pa
        ghost_pa = self.ghost_pa

        dest_pa.uref[0] = 0.5 * (inlet_pa.uref[0] + dest_pa.uref[0])
        if not self._init:
            self.initialize()
            self._init = True
        if stage in self.active_stages:
            self.io_eval = self._create_io_eval()
            self.io_eval.update()
            self.io_eval.evaluate()

            io_id = inlet_pa.ioid
            cond = (io_id == 0)
            all_idx = np.where(cond)[0]
            inlet_pa.extract_particles(all_idx, dest_pa)

            # Move the transferred particles back to the array beginning.
            inlet_pa.x[all_idx] += self.length * self.xn
            inlet_pa.y[all_idx] += self.length * self.yn
            inlet_pa.z[all_idx] += self.length * self.zn

            if ghost_pa:
                ghost_pa.x[all_idx] -= self.length * self.xn
                ghost_pa.y[all_idx] -= self.length * self.yn
                ghost_pa.z[all_idx] -= self.length * self.zn

            if self.callback is not None:
                self.callback(dest_pa, inlet_pa)

pysph-master/pysph/sph/bc/hybrid/outlet.py
"""
Outlet boundary
"""
from pysph.sph.bc.inlet_outlet_manager import OutletBase


class Outlet(OutletBase):
    pass

pysph-master/pysph/sph/bc/hybrid/simple_inlet_outlet.py
"""Hybrid outlet proposed by

    - Negi, Pawan, Prabhu Ramachandran, and Asmelash Haftu. "An improved
      non-reflecting outlet boundary condition for weakly-compressible
      SPH." arXiv preprint arXiv:1907.04034 (2019).

"""
from pysph.sph.equation import Equation
from pysph.sph.integrator import PECIntegrator
from compyle.api import declare
import numpy

from pysph.sph.wc.edac import EDACScheme
from pysph.sph.bc.inlet_outlet_manager import InletOutletManager


class SimpleInletOutlet(InletOutletManager):
    def add_io_properties(self, pa, scheme=None):
        N = 6
        DEFAULT_PROPS = [
            'disp', 'ioid', 'Bp', 'xn', 'yn', 'zn', 'A', 'wij', 'po',
            'uho', 'vho', 'who', 'Buh', 'Bvh', 'Bwh', 'x0', 'y0', 'z0',
            'uhat', 'vhat', 'what', 'uag', 'vag', 'pag', 'pacu', 'uacu',
            'uta', 'pta', 'Eacu', 'vo', 'uo', 'wo', 'J1', 'J2u']
        STRIDE_DATA = {
            'A': 16, 'Bu': 4, 'Bv': 4, 'Bw': 4, 'Bp': 4, 'Brho': 4,
            'uo': 4, 'vo': 4, 'wo': 4, 'po': 4, 'rhoo': 4, 'Bau': 4,
            'Bav': 4, 'Baw': 4, 'auo': 4, 'avo': 4, 'awo': 4, 'Buh': 4,
            'Bvh': 4, 'Bwh': 4, 'uho': 4, 'vho': 4, 'who': 4, 'Bauh': 4,
            'Bavh': 4, 'Bawh': 4, 'auho': 4, 'avho': 4, 'awho': 4,
            'Baz': 4, 'axo': 4, 'ayo': 4, 'Bay': 4, 'azo': 4, 'Bax': 4,
            'uag': N, 'vag': N, 'pag': N, 'uhag': N, 'vhag': N}
        for prop in DEFAULT_PROPS:
            if prop in STRIDE_DATA:
                pa.add_property(prop, stride=STRIDE_DATA[prop])
            else:
                pa.add_property(prop)
        pa.add_constant('avgj2u', 0.0)
        pa.add_constant('avgj1', 0.0)
        pa.add_constant('uref', 0.0)

    def get_stepper(self, scheme, cls, edactvf=True):
        from pysph.sph.bc.inlet_outlet_manager import InletStep, OutletStep
        steppers = {}
        if (cls == PECIntegrator):
            if isinstance(scheme, EDACScheme):
                for inlet in self.inlets:
                    steppers[inlet] = InletStep()
                for outlet in self.outlets:
                    steppers[outlet] = OutletStep()
                self.active_stages = [2]
        return steppers

    def get_equations(self, scheme=None, summation_density=False,
                      edactvf=True):
        from pysph.sph.equation import Group
        from pysph.sph.bc.interpolate import (
            UpdateMomentMatrix, EvaluateUhat, EvaluateP, ExtrapolateUhat,
            ExtrapolateP, CopyUhatFromGhost, CopyPFromGhost)
        from pysph.sph.bc.inlet_outlet_manager import (
            UpdateNormalsAndDisplacements, CopyNormalsandDistances)
        all_pairs = {**self.inlet_pairs, **self.outlet_pairs}
        umax = []
        for info in self.inletinfo:
            umax.append(info.umax)

        equations = []
        g00 = []
        i = 0
        for info in self.inletinfo:
            g00.append(UpdateNormalsAndDisplacements(
                dest=info.pa_name, sources=None, xn=info.normal[0],
                yn=info.normal[1], zn=info.normal[2],
                xo=info.refpoint[0], yo=info.refpoint[1],
                zo=info.refpoint[2]
            ))
            g00.append(CopyNormalsandDistances(
                dest=all_pairs[info.pa_name], sources=[info.pa_name]))
            i = i + 1
        equations.append(Group(equations=g00, real=False))

        g02 = []
        for name in self.fluids:
            g02.append(CopyTimeValues(dest=name, sources=None,
                                      rho=scheme.rho0, c0=scheme.c0,
                                      u0=min(umax)))
            g02.append(EvalauteCharacterisctics(dest=name, sources=None,
                                                c0=scheme.c0,
                                                rho0=scheme.rho0))
        for name in self.ghost_inlets:
            g02.append(UpdateMomentMatrix(
                dest=name, sources=self.fluids, dim=self.dim))
        equations.append(Group(equations=g02, real=False))

        g02a = []
        for name in self.fluids:
            g02a.append(ComputeTimeAverage(dest=name, sources=None))
        for name in self.outlets:
            g02a.append(EvalauteNumberdensity(dest=name,
                                              sources=self.fluids))
            g02a.append(ShepardInterpolateCharacteristics(
                dest=name, sources=self.fluids))
        equations.append(Group(equations=g02a, real=False))

        g03 = []
        for name in self.ghost_inlets:
            g03.append(EvaluateUhat(dest=name, sources=self.fluids,
                                    dim=self.dim))
            g03.append(EvaluateP(dest=name, sources=self.fluids,
                                 dim=self.dim))
        equations.append(Group(equations=g03, real=False))

        g04 = []
        for name in self.ghost_inlets:
            g04.append(ExtrapolateUhat(dest=name, sources=None))
            g04.append(ExtrapolateP(dest=name, sources=None))
        for name in self.outlets:
            g04.append(EvaluatePropertyfromCharacteristics(
                dest=name, sources=None, c0=scheme.c0, rho0=scheme.rho0))
        equations.append(Group(equations=g04, real=False))

        g05 = []
        for io in self.inlet_pairs.keys():
            g05.append(CopyUhatFromGhost(
                dest=io, sources=[all_pairs[io]]))
            g05.append(CopyPFromGhost(
                dest=io, sources=[all_pairs[io]]))
        equations.append(Group(equations=g05, real=False))

        g07 = []
        for inlet in self.inletinfo:
            for eqn in inlet.equations:
                g07.append(eqn)
        for outlet in self.outletinfo:
            for eqn in outlet.equations:
                g07.append(eqn)
        equations.append(Group(equations=g07, real=False))

        g08 = []
        for name in self.ghost_inlets:
            g08.append(MoveGhostInlet(dest=name, sources=None))
        equations.append(Group(equations=g08, real=False))

        return equations


class MoveGhostInlet(Equation):
    def loop(self, d_idx, d_u, d_x, dt):
        d_x[d_idx] += d_u[d_idx] * dt


class CopyTimeValues(Equation):
    def __init__(self, dest, sources, rho, c0, u0):
        self.rho = rho
        self.c0 = c0
        self.u0 = u0
        self.Imin = 0.5 * rho * u0**2
        super(CopyTimeValues, self).__init__(dest, sources)

    def initialize(self, d_idx, d_u, d_v, d_p, d_uag, d_pag, d_uta,
                   d_pta, d_Eacu, t, d_uref):
        i6, i, N = declare('int', 3)
        N = 6
        i6 = N * d_idx

        # Shift the stored time history back by one sample.
        N -= 1
        for i in range(N):
            d_uag[i6+(N-i)] = d_uag[i6+(N-(i+1))]
            d_pag[i6+(N-i)] = d_pag[i6+(N-(i+1))]

        u0 = d_uref[0]
        fac = 1.0 / (2. * self.rho * self.c0)
        Imin = (0.5 * self.rho * u0**2)**2 * fac
        d_Eacu[d_idx] = d_p[d_idx] * d_p[d_idx] * fac

        if d_Eacu[d_idx] < Imin:
            d_uag[i6] = d_u[d_idx]
            d_pag[i6] = d_p[d_idx]


class ComputeTimeAverage(Equation):
    def initialize(self, d_idx, d_uag, d_pag, d_uta, d_pta):
        i6, i, N = declare('int', 3)
        N = 6
        i6 = N * d_idx

        d_uta[d_idx] = 0.0
        d_pta[d_idx] = 0.0
        for i in range(N):
            d_uta[d_idx] += d_uag[i6+i]
            d_pta[d_idx] += d_pag[i6+i]

        d_uta[d_idx] /= N
        d_pta[d_idx] /= N


class EvalauteCharacterisctics(Equation):
    def __init__(self, dest, sources, c0, rho0):
        self.c0 = c0
        self.rho0 = rho0
        super(EvalauteCharacterisctics, self).__init__(dest, sources)

    def initialize(self, d_idx, d_u, d_p, d_J1, d_J2u, d_uta, d_pta):
        a = self.c0
        uref = d_uta[d_idx]
        pref = d_pta[d_idx]

        d_J1[d_idx] = (d_p[d_idx] - pref)
        d_J2u[d_idx] = self.rho0 * a * (d_u[d_idx] - uref) +\
            (d_p[d_idx] - pref)


class EvalauteNumberdensity(Equation):
    def initialize(self, d_idx, d_wij):
        d_wij[d_idx] = 0.0

    def loop(self, d_idx, d_wij, WIJ):
        d_wij[d_idx] += WIJ


class ShepardInterpolateCharacteristics(Equation):
    def initialize(self, d_idx, d_J1, d_J2u):
        d_J1[d_idx] = 0.0
        d_J2u[d_idx] = 0.0

    def loop(self, d_idx, d_J1, d_J2u, s_J1, s_J2u, WIJ, s_idx):
        d_J1[d_idx] += s_J1[s_idx] * WIJ
        d_J2u[d_idx] += s_J2u[s_idx] * WIJ

    def post_loop(self, d_idx, d_J1, d_J2u, d_wij, d_avgj2u, d_avgj1):
        if d_wij[d_idx] > 1e-14:
            d_J1[d_idx] /= d_wij[d_idx]
            d_J2u[d_idx] /= d_wij[d_idx]
        else:
            d_J2u[d_idx] = d_avgj2u[0]
            d_J1[d_idx] = d_avgj1[0]

    def reduce(self, dst, t, dt):
        dst.avgj2u[0] = numpy.average(dst.J2u[dst.wij > 0.0001])
        dst.avgj1[0] = numpy.average(dst.J1[dst.wij > 0.0001])


class EvaluatePropertyfromCharacteristics(Equation):
    def __init__(self, dest, sources, c0, rho0):
        self.c0 = c0
        self.rho0 = rho0
        super(EvaluatePropertyfromCharacteristics, self).__init__(
            dest, sources)

    def initialize(self, d_idx, d_rho, d_J2u, d_uta, d_pta, d_u, d_p,
                   dt, t):
        if t > 20 * dt:
            j2u = d_J2u[d_idx]
            d_u[d_idx] = d_uta[d_idx] + (j2u) / (2 * self.rho0 * self.c0)
            d_p[d_idx] = d_pta[d_idx] + 0.5 * (j2u)

pysph-master/pysph/sph/bc/inlet_outlet_manager.py
"""
Inlet Outlet Manager
"""
from pysph.sph.equation import Equation
from compyle.api import get_config
from pysph.base.utils import get_particle_array
from pysph.sph.integrator_step import IntegratorStep
import numpy as np
from compyle.api import declare


class InletInfo(object):
    def __init__(self, pa_name, normal, refpoint, has_ghost=True,
                 update_cls=None, equations=None, umax=1.0,
                 props_to_copy=None):
        """Create an object holding the inlet information; all the other
        parameters, which are not passed here, are evaluated by
        `InletOutletManager` once the inlet is created.
        Parameters
        ----------

        pa_name : str
            Name of the inlet
        normal : list
            Components of normal (float)
        refpoint : list
            Point at the fluid-inlet interface (float)
        has_ghost : bool
            if True, the ghost particles will be created
        update_cls : class_name
            the class which is to be used to update the inlet/outlet
        equations : list
            List of equations (optional)
        props_to_copy : array
            properties to copy

        """
        self.pa_name = pa_name
        self.normal = normal
        self.refpoint = refpoint
        self.has_ghost = has_ghost
        self.update_cls = InletBase if update_cls is None else update_cls
        self.length = 0.0
        self.dx = 0.1
        self.umax = umax
        self.equations = [] if equations is None else equations
        self.props_to_copy = props_to_copy


class OutletInfo(InletInfo):
    def __init__(self, pa_name, normal, refpoint, has_ghost=False,
                 update_cls=None, equations=None, umax=1.0,
                 props_to_copy=None):
        """Create object with information of outlet

        The name is kept different for distinction only.
        """
        super(OutletInfo, self).__init__(
            pa_name, normal, refpoint, has_ghost, update_cls,
            equations, umax, props_to_copy)
        self.update_cls = OutletBase if update_cls is None else update_cls


class InletOutletManager(object):
    def __init__(self, fluid_arrays, inletinfo, outletinfo,
                 extraeqns=None):
        """Create the object to manage inlet outlet boundary conditions.
        Most of the variables are evaluated after the scheme and particles
        are created.

        Parameters
        ----------

        fluid_arrays : list
            List of fluid particle array names (str)
        inletinfo : list
            List of inlets (InletInfo)
        outletinfo : list
            List of outlets (OutletInfo)
        extraeqns : dict
            Dict of custom equations

        """
        self.fluids = fluid_arrays
        self.dim = None
        self.kernel = None
        self.inlets = [] if inletinfo is None else\
            [x.pa_name for x in inletinfo]
        self.outlets = [] if outletinfo is None else\
            [x.pa_name for x in outletinfo]
        self.inlet_pairs = {}
        self.outlet_pairs = {}
        self.inletinfo = inletinfo
        self.outletinfo = outletinfo
        self.ghost_inlets = []
        self.ghost_outlets = []
        self.inlet_pairs = {}
        self.outlet_pairs = {}
        self.extraeqns = {} if extraeqns is None else extraeqns
        self.active_stages = []
        self._create_ghost_names()

    def create_ghost(self, pa_arr, inlet=True):
        """Creates ghosts for the given inlet/outlet particles

        Parameters
        ----------

        pa_arr : Particle array
            particle array for which the ghost is required
        inlet : bool
            if True, inlet info will be used for ghost creation
        """
        xref, yref, zref = 0.0, 0.0, 0.0
        xn, yn, zn = 0.0, 0.0, 0.0
        has_ghost = True
        if inlet:
            for info in self.inletinfo:
                if info.pa_name == pa_arr.name:
                    xref = info.refpoint[0]
                    yref = info.refpoint[1]
                    zref = info.refpoint[2]
                    xn = info.normal[0]
                    yn = info.normal[1]
                    zn = info.normal[2]
                    has_ghost = info.has_ghost
                    break

        if not inlet:
            for info in self.outletinfo:
                if info.pa_name == pa_arr.name:
                    xref = info.refpoint[0]
                    yref = info.refpoint[1]
                    zref = info.refpoint[2]
                    xn = info.normal[0]
                    yn = info.normal[1]
                    zn = info.normal[2]
                    has_ghost = info.has_ghost
                    break

        if not has_ghost:
            return None
        x = pa_arr.x
        y = pa_arr.y
        z = pa_arr.z

        # Reflect the particles about the interface plane.
        xij = x - xref
        yij = y - yref
        zij = z - zref

        disp = xij * xn + yij * yn + zij * zn
        x = x - 2 * disp * xn
        y = y - 2 * disp * yn
        z = z - 2 * disp * zn

        m = pa_arr.m
        h = pa_arr.h
        rho = pa_arr.rho
        u = pa_arr.u
        name = ''
        if inlet:
            name = self.inlet_pairs[pa_arr.name]
        else:
            name = self.outlet_pairs[pa_arr.name]

        ghost_pa = get_particle_array(
            name=name, m=m, x=x, y=y, h=h, u=u, p=0.0, rho=rho
        )

        return ghost_pa

    def _create_ghost_names(self):
        '''Creates names for ghosts of both inlets and outlets
        if needed'''
        for inlet in self.inletinfo:
            if inlet.has_ghost:
                name = 'ghost_' + inlet.pa_name
                self.inlet_pairs[inlet.pa_name] = name
                self.ghost_inlets.append(name)
        for outlet in self.outletinfo:
            if outlet.has_ghost:
                name = 'ghost_' + outlet.pa_name
                self.outlet_pairs[outlet.pa_name] = name
                self.ghost_outlets.append(name)

    def update_dx(self, dx):
        """Update the discretisation length

        Parameters
        ----------

        dx : float
            The discretisation length
        """
        all_info = self.inletinfo + self.outletinfo
        for info in all_info:
            info.dx = dx

    def _update_inlet_outlet_info(self, pa):
        """Updates refpoint, length and bound

        Parameters
        ----------

        pa : Particle_array
            Particle array of inlet/outlet
        """
        all_info = self.inletinfo + self.outletinfo
        for info in all_info:
            dx = info.dx
            if (info.pa_name == pa.name):
                x = pa.x
                y = pa.y
                z = pa.z
                xmax, xmin = max(x)+dx/2, min(x)-dx/2
                ymax, ymin = max(y)+dx/2, min(y)-dx/2
                zmax, zmin = max(z)+dx/2, min(z)-dx/2
                xdist = xmax - xmin
                ydist = ymax - ymin
                zdist = zmax - zmin
                xn, yn, zn = info.normal[0], info.normal[1], info.normal[2]
                info.length = abs(xdist*xn+ydist*yn+zdist*zn)

    def add_io_properties(self, pa, scheme=None):
        """Add properties to be used in inlet/outlet equations

        Parameters
        ----------

        pa : particle_array
            Particle array of inlet/outlet
        scheme : pysph.sph.scheme
            The instance of the scheme class
        """
        pass

    def get_io_names(self, ghost=False):
        """Return all the names of inlets and outlets

        Parameters
        ----------

        ghost : bool
            if True, return the names of the ghosts also
        """
        if ghost:
            return (self.inlets + self.outlets +
                    self.ghost_inlets + self.ghost_outlets)
        else:
            return self.inlets + self.outlets

    def get_stepper(self, scheme, integrator, **kw):
        """Returns the steppers for inlet/outlet

        Parameters
        ----------

        scheme : pysph.sph.scheme
            The instance of the scheme class
        integrator : pysph.sph.integrator
            The parent class of the integrator
        **kw : extra arguments
            Extra arguments depending upon the scheme used
        """
        raise NotImplementedError()

    def setup_iom(self, dim, kernel):
        """Set the essential data

        Parameters
        ----------

        dim : int
            dimension of the problem
        kernel : pysph.base.kernel
            the kernel instance
        """
        self.dim = dim
        self.kernel = kernel

    def get_equations(self, scheme, **kw):
        """Returns the equations for inlet/outlet

        Parameters
        ----------

        scheme : pysph.sph.scheme
            The instance of the scheme class
        **kw : extra arguments
            Extra arguments depending upon the scheme used
        """
        return []

    def get_equations_post_compute_acceleration(self):
        """Returns the equations for inlet/outlet used post acceleration
        computation"""
        return []

    def get_inlet_outlet(self, particle_array):
        """Returns a list of `Inlet` and `Outlet` instances which perform
        the conversion of inlet particles to fluid particles and of fluid
        particles to outlet particles.

        Parameters
        ----------

        particle_array : list
            List of all particle_arrays
        """
        result = []
        for inlet in self.inletinfo:
            i_name = inlet.pa_name
            self._update_inlet_outlet_info(particle_array[i_name])
            ghost_pa = None if i_name not in self.inlet_pairs\
                else particle_array[self.inlet_pairs[i_name]]
            for fluid in self.fluids:
                in1 = inlet.update_cls(
                    particle_array[i_name], particle_array[fluid],
                    inlet, self.kernel, self.dim, self.active_stages,
                    ghost_pa=ghost_pa
                )
                result.append(in1)

        for outlet in self.outletinfo:
            o_name = outlet.pa_name
            self._update_inlet_outlet_info(particle_array[o_name])
            ghost_pa = None if o_name not in self.outlet_pairs else\
                particle_array[self.outlet_pairs[o_name]]
            for fluid in self.fluids:
                out1 = outlet.update_cls(
                    particle_array[o_name], particle_array[fluid],
                    outlet, self.kernel, self.dim, self.active_stages,
                    ghost_pa=ghost_pa
                )
                result.append(out1)

        return result


class IOEvaluate(Equation):
    def __init__(self, dest, sources, x, y, z, xn, yn, zn,
                 maxdist=1000.0):
        """Compute ioid for the particles:

           0 : particle is in the fluid
           1 : particle is inside the inlet/outlet
           2 : particle is out of the inlet/outlet

        Parameters
        ----------

        dest : str
            destination particle array name
        sources : list
            List of source particle arrays
        x : float
            x coordinate of interface point
        y : float
            y coordinate of interface point
        z : float
            z coordinate of interface point
        xn : float
            x component of interface outward normal
        yn : float
            y component of interface outward normal
        zn : float
            z component of interface outward normal
        maxdist : float
            Maximum length of inlet/outlet
        """
        self.x = x
        self.y = y
        self.z = z
        self.xn = xn
        self.yn = yn
        self.zn = zn
        self.maxdist = maxdist

        super(IOEvaluate, self).__init__(dest, sources)

    def initialize(self, d_ioid, d_idx):
        d_ioid[d_idx] = 1

    def loop(self, d_idx, d_x, d_y, d_z, d_ioid, d_disp):
        delx = d_x[d_idx] - self.x
        dely = d_y[d_idx] - self.y
        delz = d_z[d_idx] - self.z

        d_disp[d_idx] = delx * self.xn + dely * self.yn + delz * self.zn
        if ((d_disp[d_idx] > 0.000001) and
                (d_disp[d_idx] - self.maxdist < 0.000001)):
            d_ioid[d_idx] = 1
        elif (d_disp[d_idx] - self.maxdist > 0.000001):
            d_ioid[d_idx] = 2
        else:
            d_ioid[d_idx] = 0


class UpdateNormalsAndDisplacements(Equation):
    def __init__(self, dest, sources, xn, yn, zn, xo, yo, zo):
        """Update the normal and the perpendicular distance from the
        interface for the inlet/outlet particles

        Parameters
        ----------

        dest : str
            destination particle array name
        sources : list
            List of source particle arrays
        xn : float
            x component of interface outward normal
        yn : float
            y component of interface outward normal
        zn : float
            z component of interface outward normal
        xo : float
            x coordinate of interface point
        yo : float
            y coordinate of interface point
        zo : float
            z coordinate of interface point
        """
        self.xn = xn
        self.yn = yn
        self.zn = zn
        self.xo = xo
        self.yo = yo
        self.zo = zo

        super(UpdateNormalsAndDisplacements, self).__init__(dest, sources)

    def loop(self, d_idx, d_xn, d_yn, d_zn, d_x, d_y, d_z, d_disp):
        d_xn[d_idx] = self.xn
        d_yn[d_idx] = self.yn
        d_zn[d_idx] = self.zn

        xij = declare('matrix(3)')
        xij[0] = d_x[d_idx] - self.xo
        xij[1] = d_y[d_idx] - self.yo
        xij[2] = d_z[d_idx] - self.zo

        # Note: the z term uses d_zn (the original multiplied xij[2] by
        # d_yn, which appears to be a typo).
        d_disp[d_idx] = abs(xij[0]*d_xn[d_idx] + xij[1]*d_yn[d_idx] +
                            xij[2]*d_zn[d_idx])


class CopyNormalsandDistances(Equation):
    """Copy normals and distances from the inlet/outlet particles to the
    ghosts"""

    def initialize_pair(self, d_idx, d_xn, d_yn, d_zn, s_xn, s_yn, s_zn,
                        d_disp, s_disp):
        d_xn[d_idx] = s_xn[d_idx]
        d_yn[d_idx] = s_yn[d_idx]
        d_zn[d_idx] = s_zn[d_idx]
        d_disp[d_idx] = s_disp[d_idx]


class InletStep(IntegratorStep):
    def initialize(self, d_x0, d_idx, d_x):
        d_x0[d_idx] = d_x[d_idx]

    def stage1(self, d_idx, d_x, d_x0, d_u, dt):
        dtb2 = 0.5 * dt
        d_x[d_idx] = d_x0[d_idx] + dtb2*d_u[d_idx]

    def stage2(self, d_idx, d_x, d_x0, d_u, dt):
        d_x[d_idx] = d_x0[d_idx] + dt*d_u[d_idx]


class OutletStepWithUhat(IntegratorStep):
    def initialize(self, d_x0, d_idx, d_x):
        d_x0[d_idx] = d_x[d_idx]

    def stage1(self, d_idx, d_x, d_x0, d_uhat, dt):
        dtb2 = 0.5 * dt
        d_x[d_idx] = d_x0[d_idx] + dtb2*d_uhat[d_idx]

    def stage2(self, d_idx, d_x, d_x0, d_uhat, dt):
        d_x[d_idx] = d_x0[d_idx] + dt*d_uhat[d_idx]


class OutletStep(InletStep):
    pass


class InletBase(object):
    def __init__(self, inlet_pa, dest_pa, inletinfo, kernel, dim,
                 active_stages=[1], callback=None, ghost_pa=None):
        """An API to add/delete particles when moving between inlet-fluid

        Parameters
        ----------

        inlet_pa : particle_array
            particle array for the inlet
        dest_pa : particle_array
            particle_array of the fluid
        inletinfo : InletInfo instance
            contains information of the inlet
        kernel : Kernel instance
            Kernel to be used for computations
        dim : int
            dimension of the problem
        active_stages : list
            stages of the integrator at which the update should be active
        callback : function
            callback after the update function
        ghost_pa : particle_array
            particle_array of the ghost inlet
        """
        self.inlet_pa = inlet_pa
        self.dest_pa = dest_pa
        self.ghost_pa = ghost_pa
        self.callback = callback
        self.dim = dim
        self.kernel = kernel
        self.inletinfo = inletinfo
        self.x = self.y = self.z = 0.0
        self.xn = self.yn = self.zn = 0.0
        self.length = 0.0
        self.dx = 0.0
        self.active_stages = active_stages
        self.io_eval = None
self._init = False cfg = get_config() self.gpu = cfg.use_opencl or cfg.use_cuda def initialize(self): """Function to initialize the class variables after evaluation in SimpleInletOutlet class""" inletinfo = self.inletinfo self.x = inletinfo.refpoint[0] self.y = inletinfo.refpoint[1] self.z = inletinfo.refpoint[2] self.xn = inletinfo.normal[0] self.yn = inletinfo.normal[1] self.zn = inletinfo.normal[2] self.length = inletinfo.length self.dx = inletinfo.dx def _create_io_eval(self): """Evaluator to assign ioid to particles leaving a domain""" if self.io_eval is None: from pysph.sph.equation import Group from pysph.tools.sph_evaluator import SPHEvaluator i_name = self.inlet_pa.name f_name = self.dest_pa.name eqns = [] eqns.append(Group(equations=[ IOEvaluate( i_name, [], x=self.x, y=self.y, z=self.z, xn=self.xn, yn=self.yn, zn=self.zn, maxdist=self.length)], real=False, update_nnps=False)) eqns.append(Group(equations=[ IOEvaluate( f_name, [], x=self.x, y=self.y, z=self.z, xn=self.xn, yn=self.yn, zn=self.zn)], real=False, update_nnps=False)) if self.gpu: from pysph.base.gpu_nnps import ZOrderGPUNNPS as NNPS else: from pysph.base.nnps import LinkedListNNPS as NNPS arrays = [self.inlet_pa] + [self.dest_pa] io_eval = SPHEvaluator( arrays=arrays, equations=eqns, dim=self.dim, kernel=self.kernel, nnps_factory=NNPS) return io_eval else: return self.io_eval def update(self, time, dt, stage): """ Update function called after each stage""" if not self._init: self.initialize() self._init = True if stage in self.active_stages: dest_pa = self.dest_pa inlet_pa = self.inlet_pa ghost_pa = self.ghost_pa self.io_eval = self._create_io_eval() self.io_eval.update() self.io_eval.evaluate() if self.gpu: inlet_pa.gpu.pull(*'ioid x y z'.split()) ghost_pa.gpu.pull(*'x y z'.split()) dest_pa.gpu.pull('ioid') io_id = inlet_pa.ioid cond = (io_id == 0) all_idx = np.where(cond)[0] inlet_pa.extract_particles(all_idx, dest_pa) # moving the moved particles back to the array beginning. 
            inlet_pa.x[all_idx] += self.length * self.xn
            inlet_pa.y[all_idx] += self.length * self.yn
            inlet_pa.z[all_idx] += self.length * self.zn

            if ghost_pa:
                ghost_pa.x[all_idx] -= self.length * self.xn
                ghost_pa.y[all_idx] -= self.length * self.yn
                ghost_pa.z[all_idx] -= self.length * self.zn

            if self.callback is not None:
                self.callback(dest_pa, inlet_pa)


class OutletBase(object):
    def __init__(self, outlet_pa, source_pa, outletinfo, kernel, dim,
                 active_stages=[1], callback=None, ghost_pa=None):
        """An API to add/delete particles when moving between fluid and outlet

        Parameters
        ----------

        outlet_pa : particle_array
            particle array for the outlet
        source_pa : particle_array
            particle array of the fluid
        ghost_pa : particle_array
            particle array of the outlet ghost
        outletinfo : OutletInfo instance
            contains information of the outlet
        kernel : Kernel instance
            kernel to be used for computations
        dim : int
            dimension of the problem
        active_stages : list
            stages of the integrator at which the update should be active
        callback : function
            callback after the update function
        """
        self.outlet_pa = outlet_pa
        self.source_pa = source_pa
        self.ghost_pa = ghost_pa
        self.dim = dim
        self.kernel = kernel
        self.outletinfo = outletinfo
        self.x = self.y = self.z = 0.0
        self.xn = self.yn = self.zn = 0.0
        self.length = 0.0
        self.callback = callback
        self.active_stages = active_stages
        self.io_eval = None
        self._init = False
        self.props_to_copy = None
        cfg = get_config()
        self.gpu = cfg.use_opencl or cfg.use_cuda

    def initialize(self):
        """Initialize the class variables after evaluation in the
        SimpleInletOutlet class"""
        outletinfo = self.outletinfo
        self.x = outletinfo.refpoint[0]
        self.y = outletinfo.refpoint[1]
        self.z = outletinfo.refpoint[2]
        self.xn = outletinfo.normal[0]
        self.yn = outletinfo.normal[1]
        self.zn = outletinfo.normal[2]
        self.length = outletinfo.length
        self.props_to_copy = outletinfo.props_to_copy

    def _create_io_eval(self):
        """Evaluator to assign ioid to particles leaving the domain"""
        if self.io_eval is None:
            from pysph.sph.equation import Group
            from pysph.tools.sph_evaluator import SPHEvaluator
            o_name = self.outlet_pa.name
            f_name = self.source_pa.name
            eqns = []
            if self.gpu:
                from pysph.base.gpu_nnps import ZOrderGPUNNPS as NNPS
            else:
                from pysph.base.nnps import LinkedListNNPS as NNPS

            eqns.append(Group(equations=[
                IOEvaluate(
                    o_name, [], x=self.x, y=self.y, z=self.z,
                    xn=self.xn, yn=self.yn, zn=self.zn,
                    maxdist=self.length)], real=False, update_nnps=False))

            eqns.append(Group(equations=[
                IOEvaluate(
                    f_name, [], x=self.x, y=self.y, z=self.z,
                    xn=self.xn, yn=self.yn, zn=self.zn)],
                real=False, update_nnps=False))

            arrays = [self.outlet_pa] + [self.source_pa]
            io_eval = SPHEvaluator(
                arrays=arrays, equations=eqns, dim=self.dim,
                kernel=self.kernel, nnps_factory=NNPS)
            return io_eval
        else:
            return self.io_eval

    def update(self, time, dt, stage):
        """Update function called after each stage"""
        if not self._init:
            self.initialize()
            self._init = True
        if stage in self.active_stages:
            props_to_copy = self.props_to_copy
            outlet_pa = self.outlet_pa
            source_pa = self.source_pa

            self.io_eval = self._create_io_eval()
            self.io_eval.update()
            self.io_eval.evaluate()

            # adding particles to the destination array.
            if self.gpu:
                source_pa.gpu.pull('ioid')
            io_id = source_pa.ioid
            cond = (io_id == 1)
            all_idx = np.where(cond)[0]
            source_pa.extract_particles(
                all_idx, dest_array=outlet_pa, props=props_to_copy)
            source_pa.remove_particles(all_idx)

            if self.gpu:
                outlet_pa.gpu.pull('ioid')
            io_id = outlet_pa.ioid
            cond = (io_id == 2)
            all_idx = np.where(cond)[0]
            outlet_pa.remove_particles(all_idx)

            if self.callback is not None:
                self.callback(source_pa, outlet_pa)

pysph-master/pysph/sph/bc/interpolate.py

# This file is generated from interpolate.py.mako
# Do not edit this file
from pysph.sph.equation import Equation
from compyle.api import declare
from pysph.sph.wc.linalg import gj_solve, augmented_matrix


class EvaluateUhat(Equation):
    def _get_helpers_(self):
        return [gj_solve, augmented_matrix]

    def __init__(self, dest, sources, dim=1):
        self.dim = dim
        super(EvaluateUhat, self).__init__(dest, sources)

    def initialize(self, d_idx, d_uho, d_Buh, d_vho, d_Bvh, d_who, d_Bwh):
        i = declare('int')
        for i in range(3):
            d_uho[4*d_idx+i] = 0.0
            d_Buh[4*d_idx+i] = 0.0
            d_vho[4*d_idx+i] = 0.0
            d_Bvh[4*d_idx+i] = 0.0
            d_who[4*d_idx+i] = 0.0
            d_Bwh[4*d_idx+i] = 0.0

    def loop(self, d_idx, d_h, s_h, s_x, s_y, s_z, d_x, d_y, d_z, s_rho,
             s_m, s_idx, XIJ, DWIJ, WIJ, s_uhat, d_Buh, s_vhat, d_Bvh,
             s_what, d_Bwh):
        Vj = s_m[s_idx] / s_rho[s_idx]
        uhj = s_uhat[s_idx]
        vhj = s_vhat[s_idx]
        whj = s_what[s_idx]
        i4 = declare('int')
        i4 = 4*d_idx

        d_Buh[i4+0] += uhj * WIJ * Vj
        d_Buh[i4+1] += uhj * DWIJ[0] * Vj
        d_Buh[i4+2] += uhj * DWIJ[1] * Vj
        d_Buh[i4+3] += uhj * DWIJ[2] * Vj
        d_Bvh[i4+0] += vhj * WIJ * Vj
        d_Bvh[i4+1] += vhj * DWIJ[0] * Vj
        d_Bvh[i4+2] += vhj * DWIJ[1] * Vj
        d_Bvh[i4+3] += vhj * DWIJ[2] * Vj
        d_Bwh[i4+0] += whj * WIJ * Vj
        d_Bwh[i4+1] += whj * DWIJ[0] * Vj
        d_Bwh[i4+2] += whj * DWIJ[1] * Vj
        d_Bwh[i4+3] += whj * DWIJ[2] * Vj

    def post_loop(self, d_idx, d_A, d_uho, d_Buh, d_vho, d_Bvh, d_who,
                  d_Bwh):
        a_mat = declare('matrix(16)')
        aug_mat = declare('matrix(20)')
        b_uh = declare('matrix(4)')
        res_uh = declare('matrix(4)')
        b_vh = declare('matrix(4)')
        res_vh = declare('matrix(4)')
        b_wh = declare('matrix(4)')
        res_wh = declare('matrix(4)')
        i, n, i16, i4 = declare('int', 4)
        i16 = 16*d_idx
        i4 = 4*d_idx
        for i in range(16):
            a_mat[i] = d_A[i16+i]
        for i in range(20):
            aug_mat[i] = 0.0
        for i in range(4):
            b_uh[i] = d_Buh[i4+i]
            res_uh[i] = 0.0
            b_vh[i] = d_Bvh[i4+i]
            res_vh[i] = 0.0
            b_wh[i] = d_Bwh[i4+i]
            res_wh[i] = 0.0

        n = self.dim + 1
        augmented_matrix(a_mat, b_uh, n, 1, 4, aug_mat)
        gj_solve(aug_mat, n, 1, res_uh)
        for i in range(4):
            d_uho[i4+i] = res_uh[i]
        augmented_matrix(a_mat, b_vh, n, 1, 4, aug_mat)
        gj_solve(aug_mat, n, 1, res_vh)
        for i in range(4):
            d_vho[i4+i] = res_vh[i]
        augmented_matrix(a_mat, b_wh, n, 1, 4, aug_mat)
        gj_solve(aug_mat, n, 1, res_wh)
        for i in range(4):
            d_who[i4+i] = res_wh[i]


class ExtrapolateUhat(Equation):
    def initialize(self, d_idx, d_uhat, d_vhat, d_what):
        d_uhat[d_idx] = 0.0
        d_vhat[d_idx] = 0.0
        d_what[d_idx] = 0.0

    def loop(self, d_idx, d_uhat, d_uho, d_vhat, d_vho, d_what, d_who,
             d_disp, d_xn, d_yn, d_zn):
        delx = 2 * d_disp[d_idx] * d_xn[d_idx]
        dely = 2 * d_disp[d_idx] * d_yn[d_idx]
        delz = 2 * d_disp[d_idx] * d_zn[d_idx]
        d_uhat[d_idx] = -1. * (
            d_uho[4*d_idx+0] - delx * d_uho[4*d_idx+1] -
            dely * d_uho[4*d_idx+2] - delz * d_uho[4*d_idx+3]
        )
        d_vhat[d_idx] = (
            d_vho[4*d_idx+0] - delx * d_vho[4*d_idx+1] -
            dely * d_vho[4*d_idx+2] - delz * d_vho[4*d_idx+3]
        )
        d_what[d_idx] = (
            d_who[4*d_idx+0] - delx * d_who[4*d_idx+1] -
            dely * d_who[4*d_idx+2] - delz * d_who[4*d_idx+3]
        )


class CopyUhatFromGhost(Equation):
    def initialize_pair(self, d_idx, d_uhat, s_uhat, d_vhat, s_vhat,
                        d_what, s_what):
        d_uhat[d_idx] = -1.0 * s_uhat[d_idx]
        d_vhat[d_idx] = s_vhat[d_idx]
        d_what[d_idx] = s_what[d_idx]


class EvaluateU(Equation):
    def _get_helpers_(self):
        return [gj_solve, augmented_matrix]

    def __init__(self, dest, sources, dim=1):
        self.dim = dim
        super(EvaluateU, self).__init__(dest, sources)

    def initialize(self, d_idx, d_uo, d_Bu, d_vo, d_Bv, d_wo, d_Bw):
        i = declare('int')
        for i in range(3):
            d_uo[4*d_idx+i] = 0.0
            d_Bu[4*d_idx+i] = 0.0
            d_vo[4*d_idx+i] = 0.0
            d_Bv[4*d_idx+i] = 0.0
            d_wo[4*d_idx+i] = 0.0
            d_Bw[4*d_idx+i] = 0.0

    def loop(self, d_idx, d_h, s_h, s_x, s_y, s_z, d_x, d_y, d_z, s_rho,
             s_m, s_idx, XIJ, DWIJ, WIJ, s_u, d_Bu, s_v, d_Bv, s_w, d_Bw):
        Vj = s_m[s_idx] / s_rho[s_idx]
        uj = s_u[s_idx]
        vj = s_v[s_idx]
        wj = s_w[s_idx]
        i4 = declare('int')
        i4 = 4*d_idx

        d_Bu[i4+0] += uj * WIJ * Vj
        d_Bu[i4+1] += uj * DWIJ[0] * Vj
        d_Bu[i4+2] += uj * DWIJ[1] * Vj
        d_Bu[i4+3] += uj * DWIJ[2] * Vj
        d_Bv[i4+0] += vj * WIJ * Vj
        d_Bv[i4+1] += vj * DWIJ[0] * Vj
        d_Bv[i4+2] += vj * DWIJ[1] * Vj
        d_Bv[i4+3] += vj * DWIJ[2] * Vj
        d_Bw[i4+0] += wj * WIJ * Vj
        d_Bw[i4+1] += wj * DWIJ[0] * Vj
        d_Bw[i4+2] += wj * DWIJ[1] * Vj
        d_Bw[i4+3] += wj * DWIJ[2] * Vj

    def post_loop(self, d_idx, d_A, d_uo, d_Bu, d_vo, d_Bv, d_wo, d_Bw):
        a_mat = declare('matrix(16)')
        aug_mat = declare('matrix(20)')
        b_u = declare('matrix(4)')
        res_u = declare('matrix(4)')
        b_v = declare('matrix(4)')
        res_v = declare('matrix(4)')
        b_w = declare('matrix(4)')
        res_w = declare('matrix(4)')
        i, n, i16, i4 = declare('int', 4)
        i16 = 16*d_idx
        i4 = 4*d_idx
        for i in range(16):
            a_mat[i] = d_A[i16+i]
        for i in range(20):
            aug_mat[i] = 0.0
        for i in range(4):
            b_u[i] = d_Bu[i4+i]
            res_u[i] = 0.0
            b_v[i] = d_Bv[i4+i]
            res_v[i] = 0.0
            b_w[i] = d_Bw[i4+i]
            res_w[i] = 0.0

        n = self.dim + 1
        augmented_matrix(a_mat, b_u, n, 1, 4, aug_mat)
        gj_solve(aug_mat, n, 1, res_u)
        for i in range(4):
            d_uo[i4+i] = res_u[i]
        augmented_matrix(a_mat, b_v, n, 1, 4, aug_mat)
        gj_solve(aug_mat, n, 1, res_v)
        for i in range(4):
            d_vo[i4+i] = res_v[i]
        augmented_matrix(a_mat, b_w, n, 1, 4, aug_mat)
        gj_solve(aug_mat, n, 1, res_w)
        for i in range(4):
            d_wo[i4+i] = res_w[i]


class ExtrapolateU(Equation):
    def initialize(self, d_idx, d_u, d_v, d_w):
        d_u[d_idx] = 0.0
        d_v[d_idx] = 0.0
        d_w[d_idx] = 0.0

    def loop(self, d_idx, d_u, d_uo, d_v, d_vo, d_w, d_wo, d_disp, d_xn,
             d_yn, d_zn):
        delx = 2 * d_disp[d_idx] * d_xn[d_idx]
        dely = 2 * d_disp[d_idx] * d_yn[d_idx]
        delz = 2 * d_disp[d_idx] * d_zn[d_idx]
        d_u[d_idx] = -1. * (
            d_uo[4*d_idx+0] - delx * d_uo[4*d_idx+1] -
            dely * d_uo[4*d_idx+2] - delz * d_uo[4*d_idx+3]
        )
        d_v[d_idx] = (
            d_vo[4*d_idx+0] - delx * d_vo[4*d_idx+1] -
            dely * d_vo[4*d_idx+2] - delz * d_vo[4*d_idx+3]
        )
        d_w[d_idx] = (
            d_wo[4*d_idx+0] - delx * d_wo[4*d_idx+1] -
            dely * d_wo[4*d_idx+2] - delz * d_wo[4*d_idx+3]
        )


class CopyUFromGhost(Equation):
    def initialize_pair(self, d_idx, d_u, s_u, d_v, s_v, d_w, s_w):
        d_u[d_idx] = -1.0 * s_u[d_idx]
        d_v[d_idx] = s_v[d_idx]
        d_w[d_idx] = s_w[d_idx]


class EvaluateP(Equation):
    def _get_helpers_(self):
        return [gj_solve, augmented_matrix]

    def __init__(self, dest, sources, dim=1):
        self.dim = dim
        super(EvaluateP, self).__init__(dest, sources)

    def initialize(self, d_idx, d_po, d_Bp):
        i = declare('int')
        for i in range(3):
            d_po[4*d_idx+i] = 0.0
            d_Bp[4*d_idx+i] = 0.0

    def loop(self, d_idx, d_h, s_h, s_x, s_y, s_z, d_x, d_y, d_z, s_rho,
             s_m, s_idx, XIJ, DWIJ, WIJ, s_p, d_Bp):
        Vj = s_m[s_idx] / s_rho[s_idx]
        pj = s_p[s_idx]
        i4 = declare('int')
        i4 = 4*d_idx

        d_Bp[i4+0] += pj * WIJ * Vj
        d_Bp[i4+1] += pj * DWIJ[0] * Vj
        d_Bp[i4+2] += pj * DWIJ[1] * Vj
        d_Bp[i4+3] += pj * DWIJ[2] * Vj

    def post_loop(self, d_idx, d_A, d_po, d_Bp):
        a_mat = declare('matrix(16)')
        aug_mat = declare('matrix(20)')
        b_p = declare('matrix(4)')
        res_p = declare('matrix(4)')
        i, n, i16, i4 = declare('int', 4)
        i16 = 16*d_idx
        i4 = 4*d_idx
        for i in range(16):
            a_mat[i] = d_A[i16+i]
        for i in range(20):
            aug_mat[i] = 0.0
        for i in range(4):
            b_p[i] = d_Bp[i4+i]
            res_p[i] = 0.0

        n = self.dim + 1
        augmented_matrix(a_mat, b_p, n, 1, 4, aug_mat)
        gj_solve(aug_mat, n, 1, res_p)
        for i in range(4):
            d_po[i4+i] = res_p[i]


class ExtrapolateP(Equation):
    def initialize(self, d_idx, d_p):
        d_p[d_idx] = 0.0

    def loop(self, d_idx, d_p, d_po, d_disp, d_xn, d_yn, d_zn):
        delx = 2 * d_disp[d_idx] * d_xn[d_idx]
        dely = 2 * d_disp[d_idx] * d_yn[d_idx]
        delz = 2 * d_disp[d_idx] * d_zn[d_idx]
        d_p[d_idx] = (
            d_po[4*d_idx+0] - delx * d_po[4*d_idx+1] -
            dely * d_po[4*d_idx+2] - delz * d_po[4*d_idx+3]
        )


class CopyPFromGhost(Equation):
    def initialize_pair(self, d_idx, d_p, s_p):
        d_p[d_idx] = s_p[d_idx]


class UpdateMomentMatrix(Equation):
    def __init__(self, dest, sources, dim=1):
        self.dim = dim
        super(UpdateMomentMatrix, self).__init__(dest, sources)

    def initialize(self, d_idx, d_A):
        i, j = declare('int', 2)
        for i in range(4):
            for j in range(4):
                d_A[16*d_idx + j+4*i] = 0.0

    def loop(self, d_idx, s_idx, d_h, s_h, s_x, s_y, s_z, d_x, d_y, d_z,
             s_rho, s_m, d_A, XIJ, WIJ, DWIJ):
        Vj = s_m[s_idx] / s_rho[s_idx]
        i16 = declare('int')
        i16 = 16*d_idx

        d_A[i16+0] += WIJ * Vj
        d_A[i16+1] += -XIJ[0] * WIJ * Vj
        d_A[i16+2] += -XIJ[1] * WIJ * Vj
        d_A[i16+3] += -XIJ[2] * WIJ * Vj

        d_A[i16+4] += DWIJ[0] * Vj
        d_A[i16+8] += DWIJ[1] * Vj
        d_A[i16+12] += DWIJ[2] * Vj

        d_A[i16+5] += -XIJ[0] * DWIJ[0] * Vj
        d_A[i16+6] += -XIJ[1] * DWIJ[0] * Vj
        d_A[i16+7] += -XIJ[2] * DWIJ[0] * Vj
        d_A[i16+9] += -XIJ[0] * DWIJ[1] * Vj
        d_A[i16+10] += -XIJ[1] * DWIJ[1] * Vj
        d_A[i16+11] += -XIJ[2] * DWIJ[1] * Vj
        d_A[i16+13] += -XIJ[0] * DWIJ[2] * Vj
        d_A[i16+14] += -XIJ[1] * DWIJ[2] * Vj
        d_A[i16+15] += -XIJ[2] * DWIJ[2] * Vj
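The `Evaluate*`/`Extrapolate*` classes above all follow the same pattern: accumulate a moment matrix `A` and a right-hand side `B` over the fluid neighbours, then solve `A c = b` per ghost particle so that `c` holds the field value and its gradient at the ghost position. A minimal NumPy sketch of this first-order moment solve in 1D (illustrative only, not part of PySPH; the points, kernel weights and field below are made up) shows that it reproduces a linear field exactly, which is what makes the subsequent linear extrapolation in `ExtrapolateU` accurate:

```python
import numpy as np

# Made-up 1D neighbour positions, ghost position and a linear field.
x = np.linspace(0.0, 1.0, 11)       # fluid neighbour positions
xg = 0.35                           # ghost-particle position
f = 2.0 + 3.0 * x                   # linear field u(x) = 2 + 3x
h = 0.3

# Any smooth positive weight and its gradient w.r.t. the ghost position
# (stand-ins for WIJ and DWIJ).
w = np.exp(-((x - xg) / h) ** 2)
dw = 2.0 * (x - xg) / h ** 2 * w

V = np.full_like(x, 0.1)            # particle volumes m_j / rho_j
xij = xg - x                        # XIJ = destination - source

# 2x2 moment matrix and RHS, mirroring UpdateMomentMatrix / EvaluateU in 1D:
# rows are the kernel and kernel-gradient moment conditions.
A = np.array([[np.sum(w * V),  np.sum(-xij * w * V)],
              [np.sum(dw * V), np.sum(-xij * dw * V)]])
b = np.array([np.sum(f * w * V), np.sum(f * dw * V)])

# c[0] approximates u(xg); c[1] approximates du/dx at xg.
c, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For the linear field above, `c` recovers `u(xg) = 3.05` and `du/dx = 3` to round-off, independent of the particular weights used.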
pysph-master/pysph/sph/bc/interpolate.py.mako000066400000000000000000000145741356347341600217100ustar00rootroot00000000000000<% vec_name = [ 'Uhat', 'U', 'P' ] u_vec = [ 'u', 'v', 'w' ] x_vec = [ 'x', 'y', 'z' ] %> ## **************************** ## ******* imports *********** ## **************************** # This is file is generated from interpolate.py.mako # Do not edit this file from pysph.sph.equation import Equation from compyle.api import declare from pysph.sph.wc.linalg import gj_solve, augmented_matrix\ %for var in vec_name: ## **************************** ## ******* setup *********** ## **************************** <%hat = '' ; h = '' ; acc = '' ; factor = '' ; vec_var = () ; vec_x = x_vec;%> % if (var == 'Uhat') or (var == 'Auhat'): <% hat = 'hat'; h = 'h' %> % endif % if (var == 'Uhat')or (var == 'U'): <% vec_var = u_vec; factor = '-1. * ' %> % endif % if (var == 'P'): <% vec_var = ['p'] %> % endif ## **************************** ## * function Evaluate ******** ## **************************** <%array5 = ''%>\ %for u in vec_var: <%array5=array5+', d_'+acc+u+h+'o'%>\ <%array5=array5+', d_B'+acc+u+h%>\ % endfor <%array6 = ''%>\ %for u in vec_var: %if not(var == 'Rho'): <%array6=array6+', s_'+acc+u+hat%>\ %endif <%array6=array6+', d_B'+acc+u+h%>\ % endfor <%array7 = ''%>\ %for u in vec_var: <%array7=array7+', d_'+acc+u+h+'o'%>\ <%array7=array7+', d_B'+acc+u+h%>\ % endfor class Evaluate${var}(Equation): def _get_helpers_(self): return [gj_solve, augmented_matrix] def __init__(self, dest, sources, dim=1): self.dim = dim super(Evaluate${var}, self).__init__(dest, sources) def initialize(self, d_idx${array5}): i = declare('int') for i in range(3): %for u in vec_var: d_${acc}${u}${h}o[4*d_idx+i] = 0.0 d_B${acc}${u}${h}[4*d_idx+i] = 0.0 % endfor def loop(self, d_idx, d_h, s_h, s_x, s_y, s_z, d_x, d_y, d_z, s_rho, s_m, s_idx, XIJ, DWIJ, WIJ${array6}): Vj = s_m[s_idx] / s_rho[s_idx] %for u in vec_var: ${acc}${u}${h}j = s_${acc}${u}${hat}[s_idx] %endfor i4 = 
declare('int') i4 = 4*d_idx %for u in vec_var: d_B${acc}${u}${h}[i4+0] += ${acc}${u}${h}j * WIJ * Vj d_B${acc}${u}${h}[i4+1] += ${acc}${u}${h}j * DWIJ[0] * Vj d_B${acc}${u}${h}[i4+2] += ${acc}${u}${h}j * DWIJ[1] * Vj d_B${acc}${u}${h}[i4+3] += ${acc}${u}${h}j * DWIJ[2] * Vj % endfor def post_loop(self, d_idx, d_A${array7}): a_mat = declare('matrix(16)') aug_mat = declare('matrix(20)') %for u in vec_var: b_${acc}${u}${h} = declare('matrix(4)') res_${acc}${u}${h} = declare('matrix(4)') %endfor i, n, i16, i4 = declare('int', 4) i16 = 16*d_idx i4 = 4*d_idx for i in range(16): a_mat[i] = d_A[i16+i] for i in range(20): aug_mat[i] = 0.0 for i in range(4): %for u in vec_var: b_${acc}${u}${h}[i] = d_B${acc}${u}${h}[i4+i] res_${acc}${u}${h}[i] = 0.0 %endfor n = self.dim + 1 %for u in vec_var: augmented_matrix(a_mat, b_${acc}${u}${h}, n, 1, 4, aug_mat) gj_solve(aug_mat, n, 1, res_${acc}${u}${h}) for i in range(4): d_${acc}${u}${h}o[i4+i] = res_${acc}${u}${h}[i] % endfor ## **************************** ## * function Extrpolate ******** ## **************************** <%array8 = ''%>\ %for u in vec_var: <%array8=array8+', d_'+acc+u+hat%>\ % endfor <%array9 = ''%>\ %for u in vec_var: <%array9=array9+', d_'+acc+u+hat%>\ <%array9=array9+', d_'+acc+u+h+'o'%>\ % endfor <%array10 = ''%>\ %for x in vec_x: <%array10=array10+', d_'+x+'n'%>\ % endfor class Extrapolate${var}(Equation): def initialize(self, d_idx${array8}): %for u in vec_var: d_${acc}${u}${hat}[d_idx] = 0.0 % endfor def loop(self, d_idx${array9}, d_disp${array10}): %for x in vec_x: del${x} = 2 * d_disp[d_idx] * d_${x}n[d_idx] % endfor %for u in vec_var: %if (loop.index > 0): <% factor = '' %> % endif d_${acc}${u}${hat}[d_idx] = ${factor}( d_${acc}${u}${h}o[4*d_idx+0] %for x in vec_x: %if ((u=='u') or (u=='x')) and (acc==''): - del${x} * d_${acc}${u}${h}o[4*d_idx+${loop.index + 1}] %else: - del${x} * d_${acc}${u}${h}o[4*d_idx+${loop.index + 1}] %endif %endfor ) %endfor ## **************************** ## * function 
CopyfromGhost *** ## **************************** <%array11 = ''%>\ %for u in vec_var: <%array11=array11+', d_'+acc+u+hat%>\ <%array11=array11+', s_'+acc+u+hat%>\ % endfor <%array12 = ''%>\ %for u in vec_var: <%array12=array12 +'+ s_'+acc+u+hat+'[d_idx] * d_'+x_vec[loop.index]+'n[d_idx]'%>\ % endfor <% factor = ' -1.0 *' %>\ class Copy${var}FromGhost(Equation): def initialize_pair(self, d_idx${array11}): %if not ((var == 'P') or (var == 'Rho')): %for u in vec_var: %if (loop.index > 0): <% factor = '' %> % endif d_${acc}${u}${hat}[d_idx] =${factor} s_${acc}${u}${hat}[d_idx] %endfor % else: %for u in vec_var: d_${u}[d_idx] = s_${u}[d_idx] % endfor % endif % endfor class UpdateMomentMatrix(Equation): def __init__(self, dest, sources, dim=1): self.dim = dim super(UpdateMomentMatrix, self).__init__(dest, sources) def initialize(self, d_idx, d_A): i, j = declare('int', 2) for i in range(4): for j in range(4): d_A[16*d_idx + j+4*i] = 0.0 def loop(self, d_idx, s_idx, d_h, s_h, s_x, s_y, s_z, d_x, d_y, d_z, s_rho, s_m, d_A, XIJ, WIJ, DWIJ): Vj = s_m[s_idx] / s_rho[s_idx] i16 = declare('int') i16 = 16*d_idx d_A[i16+0] += WIJ * Vj d_A[i16+1] += -XIJ[0] * WIJ * Vj d_A[i16+2] += -XIJ[1] * WIJ * Vj d_A[i16+3] += -XIJ[2] * WIJ * Vj d_A[i16+4] += DWIJ[0] * Vj d_A[i16+8] += DWIJ[1] * Vj d_A[i16+12] += DWIJ[2] * Vj d_A[i16+5] += -XIJ[0] * DWIJ[0] * Vj d_A[i16+6] += -XIJ[1] * DWIJ[0] * Vj d_A[i16+7] += -XIJ[2] * DWIJ[0] * Vj d_A[i16+9] += -XIJ[0] * DWIJ[1] * Vj d_A[i16+10] += -XIJ[1] * DWIJ[1] * Vj d_A[i16+11] += -XIJ[2] * DWIJ[1] * Vj d_A[i16+13] += -XIJ[0] * DWIJ[2] * Vj d_A[i16+14] += -XIJ[1] * DWIJ[2] * Vj d_A[i16+15] += -XIJ[2] * DWIJ[2] * Vj 
pysph-master/pysph/sph/bc/mirror/__init__.py

pysph-master/pysph/sph/bc/mirror/inlet.py

"""
Inlet boundary
"""
from pysph.sph.bc.inlet_outlet_manager import InletBase


class Inlet(InletBase):
    pass

pysph-master/pysph/sph/bc/mirror/outlet.py

"""
Outlet boundary
"""
from pysph.sph.bc.inlet_outlet_manager import OutletBase
import numpy as np


class Outlet(OutletBase):
    def _get_ghost_xyz(self, x, y, z):
        xij = x - self.x
        yij = y - self.y
        zij = z - self.z

        disp = xij * self.xn + yij * self.yn + zij * self.zn

        x = x - 2 * disp * self.xn
        y = y - 2 * disp * self.yn
        z = z - 2 * disp * self.zn
        return x, y, z

    def update(self, time, dt, stage):
        if not self._init:
            self.initialize()
            self._init = True
        if stage in self.active_stages:
            props_to_copy = self.props_to_copy
            outlet_pa = self.outlet_pa
            source_pa = self.source_pa
            ghost_pa = self.ghost_pa

            self.io_eval = self._create_io_eval()
            self.io_eval.update()
            self.io_eval.evaluate()

            # adding particles to the destination array.
            io_id = source_pa.ioid
            cond = (io_id == 1)
            all_idx = np.where(cond)[0]
            pa_add = source_pa.extract_particles(
                all_idx, props=props_to_copy)
            outlet_pa.add_particles(**pa_add.get_property_arrays())

            if ghost_pa:
                if len(all_idx) > 0:
                    x, y, z = self._get_ghost_xyz(
                        pa_add.x, pa_add.y, pa_add.z)
                    pa_add.x = x
                    pa_add.y = y
                    pa_add.z = z
                    pa_add.u = -1. * pa_add.u
                    ghost_pa.add_particles(**pa_add.get_property_arrays())

            source_pa.remove_particles(all_idx)

            io_id = outlet_pa.ioid
            cond = (io_id == 2)
            all_idx = np.where(cond)[0]
            outlet_pa.remove_particles(all_idx)
            if ghost_pa:
                ghost_pa.remove_particles(all_idx)

            if self.callback is not None:
                self.callback(source_pa, outlet_pa)

pysph-master/pysph/sph/bc/mirror/simple_inlet_outlet.py

"""Mirroring outlet

Original paper by

    - Tafuni, Angelantonio, et al. "A versatile algorithm for the treatment
      of open boundary conditions in smoothed particle hydrodynamics gpu
      models." Computer Methods in Applied Mechanics and Engineering 342
      (2018): 604-624.

The implementation here is the modification of this by

    - Negi, Pawan, Prabhu Ramachandran, and Asmelash Haftu. "An improved
      non-reflecting outlet boundary condition for weakly-compressible SPH."
      arXiv preprint arXiv:1907.04034 (2019).
"""
from pysph.sph.integrator import PECIntegrator
from pysph.sph.wc.edac import EDACScheme
from pysph.sph.bc.inlet_outlet_manager import InletOutletManager


class SimpleInletOutlet(InletOutletManager):
    def add_io_properties(self, pa, scheme=None):
        DEFAULT_PROPS = [
            'disp', 'ioid', 'Bu', 'Bv', 'Bw', 'Bp', 'xn', 'yn', 'zn',
            'A', 'wij', 'uo', 'vo', 'wo', 'po', 'uho', 'vho', 'who',
            'Buh', 'Bvh', 'Bwh', 'x0', 'y0', 'z0', 'uhat', 'vhat', 'what']
        STRIDE_DATA = {
            'A': 16, 'Bu': 4, 'Bv': 4, 'Bw': 4, 'Bp': 4, 'Brho': 4,
            'uo': 4, 'vo': 4, 'wo': 4, 'po': 4, 'rhoo': 4, 'Buh': 4,
            'Bvh': 4, 'Bwh': 4, 'uho': 4, 'vho': 4, 'who': 4}
        for prop in DEFAULT_PROPS:
            if prop in STRIDE_DATA:
                pa.add_property(prop, stride=STRIDE_DATA[prop])
            else:
                pa.add_property(prop)
        pa.add_constant('uref', 0.0)

    def get_stepper(self, scheme, cls, edactvf=True):
        from pysph.sph.bc.inlet_outlet_manager import (
            InletStep, OutletStepWithUhat)
        steppers = {}
        if (cls == PECIntegrator):
            if isinstance(scheme, EDACScheme):
                for inlet in self.inlets:
                    steppers[inlet] = InletStep()
                for outlet in self.outlets:
                    steppers[outlet] = OutletStepWithUhat()
                for g_inlet in self.ghost_inlets:
                    steppers[g_inlet] = InletStep()
                for g_outlet in self.ghost_outlets:
                    steppers[g_outlet] = OutletStepWithUhat()
                self.active_stages = [2]

        return steppers

    def get_equations(self, scheme=None, summation_density=False,
                      edactvf=True):
        from pysph.sph.equation import Group
        from pysph.sph.bc.interpolate import (
            UpdateMomentMatrix, EvaluateUhat, EvaluateP, EvaluateU,
            ExtrapolateUhat, ExtrapolateP, ExtrapolateU, CopyUhatFromGhost,
            CopyPFromGhost, CopyUFromGhost)
        from pysph.sph.bc.inlet_outlet_manager import (
            UpdateNormalsAndDisplacements, CopyNormalsandDistances)
        all_ghosts = self.ghost_inlets + self.ghost_outlets
        all_info = self.inletinfo + self.outletinfo
        all_pairs = {**self.inlet_pairs, **self.outlet_pairs}
        equations = []
        g00 = []
        i = 0
        for info in all_info:
            g00.append(UpdateNormalsAndDisplacements(
                dest=info.pa_name, sources=None, xn=info.normal[0],
                yn=info.normal[1], zn=info.normal[2],
                xo=info.refpoint[0], yo=info.refpoint[1],
                zo=info.refpoint[2]))
            g00.append(CopyNormalsandDistances(
                dest=all_pairs[info.pa_name], sources=[info.pa_name]))
            i = i + 1

        equations.append(Group(equations=g00, real=False))

        g02 = []
        for name in all_ghosts:
            g02.append(UpdateMomentMatrix(
                dest=name, sources=self.fluids, dim=self.dim))

        equations.append(Group(equations=g02, real=False))

        g03 = []
        for name in all_ghosts:
            g03.append(EvaluateUhat(dest=name, sources=self.fluids,
                                    dim=self.dim))
            g03.append(EvaluateP(dest=name, sources=self.fluids,
                                 dim=self.dim))
        for name in self.ghost_outlets:
            g03.append(EvaluateU(dest=name, sources=self.fluids,
                                 dim=self.dim))

        equations.append(Group(equations=g03, real=False))

        g04 = []
        for name in all_ghosts:
            g04.append(ExtrapolateUhat(dest=name, sources=None))
            g04.append(ExtrapolateP(dest=name, sources=None))
        for name in self.ghost_outlets:
            g04.append(ExtrapolateU(dest=name, sources=None))

        equations.append(Group(equations=g04, real=False))

        g05 = []
        for io in all_pairs.keys():
            g05.append(CopyUhatFromGhost(
                dest=io, sources=[all_pairs[io]]))
            g05.append(CopyPFromGhost(
                dest=io, sources=[all_pairs[io]]))
        for io in self.outlet_pairs.keys():
            g05.append(CopyUFromGhost(
                dest=io, sources=[all_pairs[io]]))

        equations.append(Group(equations=g05, real=False))

        g06 = []
        for inlet in self.inletinfo:
            for eqn in inlet.equations:
                g06.append(eqn)
        for outlet in self.outletinfo:
            for eqn in outlet.equations:
                g06.append(eqn)

        equations.append(Group(equations=g06, real=False))

        return equations

pysph-master/pysph/sph/bc/mod_donothing/__init__.py

pysph-master/pysph/sph/bc/mod_donothing/inlet.py

"""
Inlet boundary
"""
from pysph.sph.bc.inlet_outlet_manager import InletBase


class Inlet(InletBase):
    pass

pysph-master/pysph/sph/bc/mod_donothing/outlet.py

"""
Outlet boundary
"""
from pysph.sph.bc.inlet_outlet_manager import OutletBase


class Outlet(OutletBase):
    pass

pysph-master/pysph/sph/bc/mod_donothing/simple_inlet_outlet.py

"""
Modified do nothing proposed by

    - Negi, Pawan, Prabhu Ramachandran, and Asmelash Haftu. "An improved
      non-reflecting outlet boundary condition for weakly-compressible SPH."
      arXiv preprint arXiv:1907.04034 (2019).
"""
from pysph.sph.equation import Equation
from pysph.sph.integrator import PECIntegrator
import numpy

# local files
from pysph.sph.bc.inlet_outlet_manager import InletOutletManager
from pysph.sph.wc.edac import EDACScheme


class SimpleInletOutlet(InletOutletManager):
    def add_io_properties(self, pa, scheme=None):
        DEFAULT_PROPS = [
            'disp', 'ioid', 'Bp', 'A', 'wij', 'po', 'uho', 'vho',
            'who', 'Buh', 'Bvh', 'Bwh', 'x0', 'y0', 'z0', 'uhat',
            'vhat', 'what', 'xn', 'yn', 'zn']
        STRIDE_DATA = {
            'A': 16, 'Bu': 4, 'Bv': 4, 'Bw': 4, 'Bp': 4, 'Brho': 4,
            'uo': 4, 'vo': 4, 'wo': 4, 'po': 4, 'rhoo': 4, 'Bau': 4,
            'Bav': 4, 'Baw': 4, 'auo': 4, 'avo': 4, 'awo': 4, 'Buh': 4,
            'Bvh': 4, 'Bwh': 4, 'uho': 4, 'vho': 4, 'who': 4}
        for prop in DEFAULT_PROPS:
            if prop in STRIDE_DATA:
                pa.add_property(prop, stride=STRIDE_DATA[prop])
            else:
                pa.add_property(prop)
        pa.add_constant('avguhat', 0.0)
        pa.add_constant('uref', 0.0)

    def get_stepper(self, scheme, cls, edactvf=False):
        from pysph.sph.bc.inlet_outlet_manager import (
            InletStep, OutletStepWithUhat)
        steppers = {}
        if (cls == PECIntegrator):
            if isinstance(scheme, EDACScheme):
                for inlet in self.inlets:
                    steppers[inlet] = InletStep()
                for outlet in self.outlets:
                    steppers[outlet] = OutletStepWithUhat()
                for g_inlet in self.ghost_inlets:
                    steppers[g_inlet] = InletStep()
                self.active_stages = [2]

        return steppers

    def get_equations(self, scheme=None, summation_density=False,
                      edactvf=False):
        from pysph.sph.equation import Group
        from pysph.sph.bc.interpolate import (
            UpdateMomentMatrix, EvaluateUhat, EvaluateP, ExtrapolateUhat,
            ExtrapolateP, CopyUhatFromGhost, CopyPFromGhost)
        from pysph.sph.bc.inlet_outlet_manager import (
            UpdateNormalsAndDisplacements, CopyNormalsandDistances)
        equations = []
        g00 = []
        i = 0
        for info in self.inletinfo:
            g00.append(UpdateNormalsAndDisplacements(
                dest=info.pa_name, sources=None, xn=info.normal[0],
                yn=info.normal[1], zn=info.normal[2],
                xo=info.refpoint[0], yo=info.refpoint[1],
                zo=info.refpoint[2]))
            g00.append(CopyNormalsandDistances(
                dest=self.inlet_pairs[info.pa_name],
                sources=[info.pa_name]))
            i = i + 1

        equations.append(Group(equations=g00, real=False))

        g02 = []
        for name in self.ghost_inlets:
            g02.append(UpdateMomentMatrix(
                dest=name, sources=self.fluids, dim=self.dim))

        equations.append(Group(equations=g02, real=False))

        g03 = []
        for name in self.ghost_inlets:
            g03.append(EvaluateUhat(dest=name, sources=self.fluids,
                                    dim=self.dim))
            g03.append(EvaluateP(dest=name, sources=self.fluids,
                                 dim=self.dim))
        for name in self.outlets:
            g03.append(EvalauteNumberdensity(dest=name, sources=self.fluids))
            g03.append(ExtrapolateUfromFluid(dest=name, sources=self.fluids))

        equations.append(Group(equations=g03, real=False))

        g04 = []
        for name in self.ghost_inlets:
            g04.append(ExtrapolateUhat(dest=name, sources=None))
            g04.append(ExtrapolateP(dest=name, sources=None))

        equations.append(Group(equations=g04, real=False))

        g05 = []
        for io in self.inlet_pairs.keys():
            g05.append(CopyUhatFromGhost(
                dest=io, sources=[self.inlet_pairs[io]]))
            g05.append(CopyPFromGhost(
                dest=io, sources=[self.inlet_pairs[io]]))

        equations.append(Group(equations=g05, real=False))

        g06 = []
        for inlet in self.inletinfo:
            for eqn in inlet.equations:
                g06.append(eqn)
        for outlet in self.outletinfo:
            for eqn in outlet.equations:
                g06.append(eqn)

        equations.append(Group(equations=g06, real=False))

        return equations


class EvalauteNumberdensity(Equation):
    def initialize(self, d_idx, d_wij):
        d_wij[d_idx] = 0.0

    def loop(self, d_idx, d_wij, WIJ):
        d_wij[d_idx] += WIJ


class ExtrapolateUfromFluid(Equation):
    def initialize(self, d_idx, d_uhat):
        d_uhat[d_idx] = 0.0

    def loop(self, d_idx, s_idx, WIJ, s_u, d_uhat):
        d_uhat[d_idx] += s_u[s_idx] * WIJ

    def post_loop(self, d_idx, d_wij, d_uhat, d_avguhat):
        if d_wij[d_idx] > 1e-14:
            d_uhat[d_idx] /= d_wij[d_idx]
        else:
            d_uhat[d_idx] = d_avguhat[0]

    def reduce(self, dst, t, dt):
        dst.avguhat[0] = numpy.average(dst.uhat[dst.wij > 0.0001])
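The `ExtrapolateUfromFluid`/`reduce` pair above is Shepard interpolation with an averaged fallback: each outlet particle normalises its accumulated `sum(u_j * W_ij)` by the summed weights `wij`, and particles with essentially no fluid support fall back to the average over the well-supported ones. A small NumPy sketch of the same idea (illustrative only, with made-up values; not the PySPH implementation itself):

```python
import numpy as np

# Per-outlet-particle summed kernel weights and accumulated sum(u_j * W_ij);
# the first particle has no fluid support at all (made-up values).
wij = np.array([0.0, 0.4, 0.6, 1.2])
uw = np.array([0.0, 0.8, 1.2, 3.6])

# Shepard estimate where there is kernel support ...
supported = wij > 1e-14
u = np.zeros_like(uw)
u[supported] = uw[supported] / wij[supported]

# ... and fall back to the average over well-supported particles elsewhere,
# mirroring the reduce() step above.
avg = np.average(u[wij > 1e-4])
u[~supported] = avg
```

Here the supported particles get `u = [2, 2, 3]` and the unsupported one receives their average, `7/3`.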
pysph-master/pysph/sph/bc/tests/__init__.py

pysph-master/pysph/sph/bc/tests/test_simple_inlet_outlet.py

"""Tests for the simple_inlet_outlet.

Copyright (c) 2015, Prabhu Ramachandran
License: BSD
"""

import numpy as np
try:
    # This is for Python-2.6.x
    import unittest2 as unittest
except ImportError:
    import unittest

from pysph.base.utils import get_particle_array
from pysph.base.kernels import QuinticSpline
from pysph.sph.bc.inlet_outlet_manager import (
    InletInfo, OutletInfo, InletBase, OutletBase)


class TestSimpleInlet1D(unittest.TestCase):
    def setUp(self):
        dx = 0.1
        x = np.arange(-5*dx, 0.0, dx)
        m = np.ones_like(x)
        h = np.ones_like(x)*dx*1.5
        p = np.ones_like(x)*5.0
        self.inlet_pa = get_particle_array(name='inlet', x=x, m=m, h=h, p=p)
        # Empty particle array.
        self.dest_pa = get_particle_array(name='fluid')
        props = ['ioid', 'disp']
        for p in props:
            for pa_arr in [self.dest_pa, self.inlet_pa]:
                pa_arr.add_property(p)
        self.dx = dx
        self.kernel = QuinticSpline(dim=1)
        self.inletinfo = InletInfo('inlet', normal=[-1., 0., 0.],
                                   refpoint=[-dx/2, 0., 0.])
        self.inletinfo.length = 0.5

    def test_update_creates_particles_in_destination(self):
        # Given
        inlet = InletBase(
            self.inlet_pa, self.dest_pa, self.inletinfo, dim=1,
            kernel=self.kernel)

        # Two rows of particles should move out.
        self.inlet_pa.x += 0.12

        # When
        inlet.update(time=0.0, dt=0.0, stage=1)

        # Then
        x = self.inlet_pa.x
        p = self.inlet_pa.p
        h = self.inlet_pa.h
        self.assertEqual(len(x), 5)
        x_expect = (-np.arange(5, 0, -1)*self.dx + 0.12)
        x_expect[x_expect > 0.0] -= 0.5
        self.assertTrue(np.allclose(list(x), list(x_expect)))
        self.assertTrue(np.allclose(p, np.ones_like(x)*5, atol=1e-14))
        self.assertTrue(
            np.allclose(h, np.ones_like(x)*self.dx*1.5, atol=1e-14)
        )

        # The destination particle array should now have one particle.
        x = self.dest_pa.x
        p = self.dest_pa.p
        h = self.dest_pa.h
        self.assertEqual(self.dest_pa.get_number_of_particles(), 1)
        x_expect = (-np.arange(1, 0, -1)*self.dx + 0.12)
        self.assertTrue(np.allclose(list(x), list(x_expect)))
        self.assertTrue(np.allclose(p, np.ones_like(x)*5, atol=1e-14))
        self.assertTrue(
            np.allclose(h, np.ones_like(x)*self.dx*1.5, atol=1e-14)
        )

    def test_particles_should_update_in_given_stage(self):
        # Given
        inlet = InletBase(
            self.inlet_pa, self.dest_pa, self.inletinfo, dim=1,
            kernel=self.kernel)

        # Two rows of particles should move out.
        self.inlet_pa.x += 0.15
        inlet.active_stages = [1]

        # When
        inlet.update(time=0.0, dt=0.0, stage=2)

        # Then
        x = self.inlet_pa.x
        p = self.inlet_pa.p
        h = self.inlet_pa.h
        self.assertEqual(len(x), 5)
        x_expect = (-np.arange(5, 0, -1)*self.dx + 0.15)
        self.assertTrue(np.allclose(list(x), list(x_expect)))
        self.assertTrue(np.allclose(p, np.ones_like(x)*5, atol=1e-14))
        self.assertTrue(
            np.allclose(h, np.ones_like(x)*self.dx*1.5, atol=1e-14)
        )

        # The destination particle array should not have particles.
        self.assertEqual(self.dest_pa.get_number_of_particles(), 0)

    def test_inlet_calls_callback(self):
        # Given
        calls = []

        def _callback(d, i):
            calls.append((d, i))

        inlet = InletBase(
            self.inlet_pa, self.dest_pa, self.inletinfo, dim=1.0,
            kernel=self.kernel, callback=_callback)

        # When
        self.inlet_pa.x += 0.5
        inlet.update(time=0.0, dt=0.0, stage=1)

        # Then
        self.assertEqual(len(calls), 1)
        d_pa, i_pa = calls[0]
        self.assertEqual(d_pa, self.dest_pa)
        self.assertEqual(i_pa, self.inlet_pa)


class TestSimpleOutlet1D(unittest.TestCase):
    def setUp(self):
        dx = 0.1
        x = np.arange(-5*dx, 0.0, dx)
        m = np.ones_like(x)
        h = np.ones_like(x)*dx*1.5
        p = np.ones_like(x)*5.0
        self.source_pa = get_particle_array(name='fluid', x=x, m=m, h=h, p=p)
        # Empty particle array.
        self.outlet_pa = get_particle_array(name='outlet')
        props = ['ioid', 'disp']
        for p in props:
            for pa_arr in [self.source_pa, self.outlet_pa]:
                pa_arr.add_property(p)
        self.dx = dx
        self.kernel = QuinticSpline(dim=1)
        self.outletinfo = OutletInfo(
            'outlet', normal=[1., 0., 0.], refpoint=[-dx/2, 0., 0.],
            props_to_copy=self.source_pa.get_lb_props())
        self.outletinfo.length = 0.5

    def test_outlet_absorbs_particles_from_source(self):
        # Given
        outlet = OutletBase(
            self.outlet_pa, self.source_pa, self.outletinfo, dim=1,
            kernel=self.kernel)

        # Two rows of particles should move out.
        self.source_pa.x += 0.12

        # When
        outlet.update(time=0.0, dt=0.0, stage=1)

        # Then
        x = self.source_pa.x
        p = self.source_pa.p
        h = self.source_pa.h
        print(x)
        self.assertEqual(len(x), 4)
        x_expect = (-np.arange(5, 1, -1)*self.dx + 0.12)
        x_expect[x_expect > 0.0] -= 0.5
        self.assertTrue(np.allclose(list(x), list(x_expect)))
        self.assertTrue(np.allclose(p, np.ones_like(x)*5, atol=1e-14))
        self.assertTrue(
            np.allclose(h, np.ones_like(x)*self.dx*1.5, atol=1e-14)
        )

        # The outlet particle array should now have one particle.
        x = self.outlet_pa.x
        p = self.outlet_pa.p
        h = self.outlet_pa.h
        self.assertEqual(self.outlet_pa.get_number_of_particles(), 1)
        x_expect = (-np.arange(1, 0, -1)*self.dx + 0.12)
        self.assertTrue(np.allclose(list(x), list(x_expect)))
        self.assertTrue(np.allclose(p, np.ones_like(x)*5, atol=1e-14))
        self.assertTrue(
            np.allclose(h, np.ones_like(x)*self.dx*1.5, atol=1e-14)
        )

    def test_particles_should_update_in_given_stage(self):
        # Given
        outlet = OutletBase(
            self.outlet_pa, self.source_pa, self.outletinfo, dim=1,
            kernel=self.kernel)

        # Two rows of particles should move out.
        self.source_pa.x += 0.15
        outlet.active_stages = [1]

        # When
        outlet.update(time=0.0, dt=0.0, stage=2)

        # Then
        x = self.source_pa.x
        p = self.source_pa.p
        h = self.source_pa.h
        self.assertEqual(len(x), 5)
        x_expect = (-np.arange(5, 0, -1)*self.dx + 0.15)
        self.assertTrue(np.allclose(list(x), list(x_expect)))
        self.assertTrue(np.allclose(p, np.ones_like(x)*5, atol=1e-14))
        self.assertTrue(
            np.allclose(h, np.ones_like(x)*self.dx*1.5, atol=1e-14)
        )

        # The outlet particle array should not have particles.
        self.assertEqual(self.outlet_pa.get_number_of_particles(), 0)

    def test_outlet_deletes_particles(self):
        # Given
        outlet = OutletBase(
            self.outlet_pa, self.source_pa, self.outletinfo, dim=1,
            kernel=self.kernel)

        # Two rows of particles should move out.
        self.source_pa.x += 0.5

        # When
        outlet.update(time=0.0, dt=0.0, stage=1)

        # Then
        self.assertEqual(self.source_pa.get_number_of_particles(), 0)
        self.assertEqual(self.outlet_pa.get_number_of_particles(), 5)

        # When
        self.outlet_pa.x += 0.12
        outlet.update(time=0.0, dt=0.0, stage=1)

        # The outlet particle array should delete one particle.
        self.assertEqual(self.outlet_pa.get_number_of_particles(), 4)

    def test_outlet_calls_callback(self):
        # Given
        calls = []

        def _callback(s, o):
            calls.append((s, o))

        outlet = OutletBase(
            self.outlet_pa, self.source_pa, self.outletinfo, dim=1.0,
            kernel=self.kernel, callback=_callback)

        # When
        self.source_pa.x += 0.5
        outlet.update(time=0.0, dt=0.0, stage=1)

        # Then
        self.assertEqual(len(calls), 1)
        s_pa, o_pa = calls[0]
        self.assertEqual(o_pa, self.outlet_pa)
        self.assertEqual(s_pa, self.source_pa)


if __name__ == '__main__':
    unittest.main()


pysph-master/pysph/sph/boundary_equations.py

"""
SPH Boundary Equations
######################
"""

from pysph.sph.equation import Equation


def wendland_quintic(rij=1.0, h=1.0):
    q = rij/h
    q1 = 2.0 - q
    val = 0.0
    if q < 2.0:
        val = (1 + 2.5*q + 2*q*q)*q1*q1*q1*q1*q1
    return val


class MonaghanBoundaryForce(Equation):
    def __init__(self, dest, sources, deltap):
        self.deltap = deltap
        super(MonaghanBoundaryForce, self).__init__(dest, sources)

    def loop(self, d_idx, s_idx, s_m, s_rho, d_m, d_cs, s_cs, d_h,
             s_tx, s_ty, s_tz, s_nx, s_ny, s_nz,
             d_au, d_av, d_aw, XIJ):
        norm = declare('matrix((3,))')
        tang = declare('matrix((3,))')

        ma = d_m[d_idx]
        mb = s_m[s_idx]

        # particle sound speed
        cs = d_cs[d_idx]

        # boundary normals
        norm[0] = s_nx[s_idx]
        norm[1] = s_ny[s_idx]
        norm[2] = s_nz[s_idx]

        # boundary tangents
        tang[0] = s_tx[s_idx]
        tang[1] = s_ty[s_idx]
        tang[2] = s_tz[s_idx]

        # x and y projections
        x = XIJ[0]*tang[0] + XIJ[1]*tang[1] + XIJ[2]*tang[2]
        y = XIJ[0]*norm[0] + XIJ[1]*norm[1] + XIJ[2]*norm[2]

        # compute the force
        force = 0.0
        q = y/d_h[d_idx]

        xabs = fabs(x)

        if (0 <= xabs) and (xabs <= self.deltap):
            beta = 0.02 * cs * cs/y
            tforce = 1.0 - xabs/self.deltap

            if (0 < q) and (q <= 2.0/3.0):
                nforce = 2.0/3.0
            elif (2.0/3.0 < q) and (q <= 1.0):
                nforce = 2*q*(1.0 - 0.75*q)
            elif (1.0 < q) and (q <= 2.0):
                nforce = 0.5 * (2-q)*(2-q)
            else:
                nforce = 0.0

            force = (mb/(ma+mb)) * nforce * tforce * beta
else: force = 0.0 # boundary force accelerations d_au[d_idx] += force * norm[0] d_av[d_idx] += force * norm[1] d_aw[d_idx] += force * norm[2] class MonaghanKajtarBoundaryForce(Equation): def __init__(self, dest, sources, K=None, beta=None, h=None): self.K = K self.beta = beta self.h = h if None in [K, beta, h]: raise ValueError("Invalid parameter values") super(MonaghanKajtarBoundaryForce,self).__init__(dest,sources) def _get_helpers_(self): return [wendland_quintic] def loop(self, d_idx, s_idx, d_m, s_m, d_au, d_av, d_aw, RIJ, R2IJ, XIJ): ma = d_m[d_idx] mb = s_m[s_idx] w = wendland_quintic(RIJ, self.h) force = self.K/self.beta * w/R2IJ * 2*mb/(ma + mb) d_au[d_idx] += force * XIJ[0] d_av[d_idx] += force * XIJ[1] d_aw[d_idx] += force * XIJ[2] pysph-master/pysph/sph/equation.py000066400000000000000000000760771356347341600177030ustar00rootroot00000000000000"""Defines the basic Equation and all of its support machinery including the Group class. """ # System library imports. import ast from collections import defaultdict try: from collections import OrderedDict except ImportError: from ordereddict import OrderedDict import re from copy import deepcopy import inspect import itertools import numpy from textwrap import dedent from compyle.api import (CythonGenerator, KnownType, OpenCLConverter, get_symbols) from compyle.translator import CUDAConverter from compyle.config import get_config getfullargspec = getattr( inspect, 'getfullargspec', inspect.getargspec ) def camel_to_underscore(name): """Given a CamelCase name convert it to a name with underscores, i.e. camel_case. 
""" # From stackoverflow: :P # http://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-camel-case s1 = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name) return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower() def indent(text, prefix=' '): """Prepend prefix to every line in the text""" return ''.join(prefix + line for line in text.splitlines(True)) ############################################################################## # `Context` class. ############################################################################## class Context(dict): """Based on the Bunch receipe by Alex Martelli from Active State's recipes. A convenience class used to specify a context in which a code block will execute. Example ------- Basic usage:: >>> c = Context(a=1, b=2) >>> c.a 1 >>> c.x = 'a' >>> c.x 'a' >>> c.keys() ['a', 'x', 'b'] """ def __getattr__(self, key): try: return self.__getitem__(key) except KeyError: raise AttributeError('Context has no attribute %s' % key) def __setattr__(self, key, value): self[key] = value def get_array_names(symbols): """Given a set of symbols, return a set of source array names and a set of destination array names. """ src_arrays = set(x for x in symbols if x.startswith('s_') and x != 's_idx') dest_arrays = set(x for x in symbols if x.startswith('d_') and x != 'd_idx') return src_arrays, dest_arrays ############################################################################## # `BasicCodeBlock` class. ############################################################################## class BasicCodeBlock(object): """Encapsulates a string of code and the context in which it executes. It also performs some simple analysis of the code that proves handy. """ ########################################################################## # `object` interface. ########################################################################## def __init__(self, code, **kwargs): """Constructor. Parameters ---------- code : str: source code. 
kwargs : values which define the context of the code. """ self.setup(code, **kwargs) def __call__(self, **kwargs): """A simplistic test for the code that runs the code in the setup context with any additional arguments passed set in the context. Note that this will make a deepcopy of the context to prevent any changes to the original context. It returns a dictionary. """ context = deepcopy(dict(self.context)) if kwargs: context.update(kwargs) bytecode = compile(self.code, '', 'exec') glb = globals() exec(bytecode, glb, context) return Context(**context) ########################################################################## # Private interface. ########################################################################## def _setup_context(self): context = self.context symbols = self.symbols for index in ('s_idx', 'd_idx'): if index in symbols and index not in context: context[index] = 0 for a_name in itertools.chain(self.src_arrays, self.dest_arrays): if a_name not in context: context[a_name] = numpy.zeros(2, dtype=float) def _setup_code(self, code): """Perform analysis of the code and store the information in various attributes. """ code = dedent(code) self.code = code self.ast_tree = ast.parse(code) self.symbols = get_symbols(self.ast_tree) symbols = self.symbols self.src_arrays, self.dest_arrays = get_array_names(symbols) self._setup_context() ########################################################################## # Public interface. ########################################################################## def setup(self, code, **kwargs): """Setup the code and context with the given arguments. Parameters ---------- code : str: source code. kwargs : values which define the context of the code. """ self.context = Context(**kwargs) if code is not None: self._setup_code(code) ############################################################################## # Convenient precomputed symbols and their code. 
############################################################################## def precomputed_symbols(): """Return a collection of predefined symbols that can be used in equations. """ c = Context() c.HIJ = BasicCodeBlock(code="HIJ = 0.5*(d_h[d_idx] + s_h[s_idx])", HIJ=0.0) c.EPS = BasicCodeBlock(code="EPS = 0.01*HIJ*HIJ", EPS=0.0) c.RHOIJ = BasicCodeBlock(code="RHOIJ = 0.5*(d_rho[d_idx] + s_rho[s_idx])", RHOIJ=0.0) c.RHOIJ1 = BasicCodeBlock(code="RHOIJ1 = 1.0/RHOIJ", RHOIJ1=0.0) c.XIJ = BasicCodeBlock( code=dedent( """ XIJ[0] = d_x[d_idx] - s_x[s_idx] XIJ[1] = d_y[d_idx] - s_y[s_idx] XIJ[2] = d_z[d_idx] - s_z[s_idx] """ ), XIJ=[0.0, 0.0, 0.0] ) c.VIJ = BasicCodeBlock( code=dedent( """ VIJ[0] = d_u[d_idx] - s_u[s_idx] VIJ[1] = d_v[d_idx] - s_v[s_idx] VIJ[2] = d_w[d_idx] - s_w[s_idx] """ ), VIJ=[0.0, 0.0, 0.0] ) c.R2IJ = BasicCodeBlock( code=dedent( """ R2IJ = XIJ[0]*XIJ[0] + XIJ[1]*XIJ[1] + XIJ[2]*XIJ[2] """ ), R2IJ=0.0 ) c.RIJ = BasicCodeBlock(code="RIJ = sqrt(R2IJ)", RIJ=0.0) c.WIJ = BasicCodeBlock( code="WIJ = KERNEL(XIJ, RIJ, HIJ)", WIJ=0.0 ) # wdeltap for tensile instability correction c.WDP = BasicCodeBlock( code="WDP = KERNEL(XIJ, DELTAP*HIJ, HIJ)", WDP=0.0 ) c.WI = BasicCodeBlock( code="WI = KERNEL(XIJ, RIJ, d_h[d_idx])", WI=0.0 ) c.WJ = BasicCodeBlock( code="WJ = KERNEL(XIJ, RIJ, s_h[s_idx])", WJ=0.0 ) c.WDASHI = BasicCodeBlock( code="WDASHI = DWDQ(RIJ, d_h[d_idx])", WDASHI=0.0 ) c.WDASHJ = BasicCodeBlock( code="WDASHJ = DWDQ(RIJ, s_h[s_idx])", WDASHJ=0.0 ) c.WDASHIJ = BasicCodeBlock( code="WDASHIJ = DWDQ(RIJ, HIJ)", WDASHIJ=0.0 ) c.DWIJ = BasicCodeBlock( code="GRADIENT(XIJ, RIJ, HIJ, DWIJ)", DWIJ=[0.0, 0.0, 0.0] ) c.DWI = BasicCodeBlock( code="GRADIENT(XIJ, RIJ, d_h[d_idx], DWI)", DWI=[0.0, 0.0, 0.0] ) c.DWJ = BasicCodeBlock( code="GRADIENT(XIJ, RIJ, s_h[s_idx], DWJ)", DWJ=[0.0, 0.0, 0.0] ) c.GHI = BasicCodeBlock( code="GHI = GRADH(XIJ, RIJ, d_h[d_idx])", GHI=0.0 ) c.GHJ = BasicCodeBlock( code="GHJ = GRADH(XIJ, RIJ, s_h[s_idx])", GHJ=0.0 ) c.GHIJ = 
BasicCodeBlock(code="GHIJ = GRADH(XIJ, RIJ, HIJ)", GHIJ=0.0) return c def sort_precomputed(precomputed, all_pre_comp): """Sorts the precomputed equations in the given dictionary as per the dependencies of the symbols and returns an ordered dict. Note that this will not deal with finding any precomputed symbols that are dependent on other precomputed symbols. It only sorts them in the right order. """ weights = dict((x, None) for x in precomputed) pre_comp = all_pre_comp # Find the dependent pre-computed symbols for each in the precomputed. depends = dict((x, None) for x in precomputed) for pre, cb in precomputed.items(): depends[pre] = [x for x in cb.symbols if x in pre_comp and x != pre] # The basic algorithm is to assign weights to each of the precomputed # symbols based on the maximum weight of the dependencies of the # precomputed symbols. This way, those with no dependencies have weight # zero and those with more have heigher weights. The `levels` dict stores # a list of precomputed symbols for each weight. These are then stored # in an ordered dict in the order of the weights to produce the output. levels = defaultdict(list) pre_comp_names = list(precomputed.keys()) while pre_comp_names: for name in pre_comp_names[:]: wts = [weights[x] for x in depends[name]] if len(wts) == 0: weights[name] = 0 levels[0].append(name) pre_comp_names.remove(name) elif None in wts: continue else: level = max(wts) + 1 weights[name] = level levels[level].append(name) pre_comp_names.remove(name) result = OrderedDict() for level in range(len(levels)): for name in sorted(levels[level]): result[name] = pre_comp[name] return result def get_predefined_types(precomp): """Return a dictionary that can be used by a CythonGenerator for the precomputed symbols. 
""" result = {'dt': 0.0, 't': 0.0, 'dst': KnownType('object'), 'NBRS': KnownType('unsigned int*'), 'N_NBRS': KnownType('int'), 'src': KnownType('ParticleArrayWrapper')} for sym, value in precomp.items(): result[sym] = value.context[sym] return result def get_arrays_used_in_equation(equation): """Return two sets, the source and destination arrays used by the equation. """ src_arrays = set() dest_arrays = set() methods = ( 'initialize', 'initialize_pair', 'loop', 'loop_all', 'post_loop' ) for meth_name in methods: meth = getattr(equation, meth_name, None) if meth is not None: args = getfullargspec(meth).args s, d = get_array_names(args) src_arrays.update(s) dest_arrays.update(d) return src_arrays, dest_arrays def get_init_args(obj, method, ignore=None): """Return the arguments for the method given, typically an __init__. """ ignore = ignore if ignore is not None else [] spec = getfullargspec(method) keys = [k for k in spec.args[1:] if k not in ignore and k in obj.__dict__] args = ['%s=%r' % (k, getattr(obj, k)) for k in keys] return args ############################################################################## # `Equation` class. ############################################################################## class Equation(object): ########################################################################## # `object` interface. ########################################################################## def __init__(self, dest, sources): r""" Parameters ---------- dest : str name of the destination particle array sources : list of str or None names of the source particle arrays """ self.dest = dest if sources is not None and len(sources) > 0: self.sources = sources else: self.sources = None # Does the equation require neighbors or not. self.no_source = self.sources is None self.name = self.__class__.__name__ # The name of the variable used in the compiled AccelerationEval # instance. 
        self.var_name = ''

    def __repr__(self):
        name = self.__class__.__name__
        args = get_init_args(self, self.__init__, [])
        return '%s(%s)' % (name, ', '.join(args))

    def converged(self):
        """Return > 0 to indicate converged iterations and < 0 otherwise.
        """
        return 1.0

    def _pull(self, *args):
        """Pull attributes from the GPU if needed.

        The GPU reduce and converged methods run on the host and not on the
        device and this is useful to call there.  This is not useful on the
        CPU as this does not matter which is why this is a private method.
        """
        if hasattr(self, '_gpu'):
            ary = self._gpu.get()
            if len(args) == 0:
                args = ary.dtype.names
            for arg in args:
                setattr(self, arg, ary[arg][0])


###############################################################################
# `Group` class.
###############################################################################
class Group(object):
    """A group of equations.

    This class provides some support for the code generation for the
    collection of equations.
    """

    pre_comp = precomputed_symbols()

    def __init__(self, equations, real=True, update_nnps=False, iterate=False,
                 max_iterations=1, min_iterations=0, pre=None, post=None):
        """Constructor.

        Parameters
        ----------

        equations: list
            a list of equation objects.

        real: bool
            specifies if only non-remote/non-ghost particles should be
            operated on.

        update_nnps: bool
            specifies if the neighbors should be re-computed locally after
            this group.

        iterate: bool
            specifies if the group should continue iterating until each
            equation's "converged()" method returns with a positive value.

        max_iterations: int
            specifies the maximum number of times this group should be
            iterated.

        min_iterations: int
            specifies the minimum number of times this group should be
            iterated.

        pre: callable
            A callable which is passed no arguments that is called before
            anything in the group is executed.

        post: callable
            A callable which is passed no arguments that is called after
            the group is completed.
Notes ----- When running simulations in parallel, one should typically run the summation density over all particles (both local and remote) in each processor. This is because we must update the pressure/density of the remote neighbors in the current processor. Otherwise the results can be incorrect with the remote particles having an older density. This is also the case for the TaitEOS. In these cases the group that computes the equation should set real to False. """ self.real = real self.update_nnps = update_nnps # iterative groups self.iterate = iterate self.max_iterations = max_iterations self.min_iterations = min_iterations self.pre = pre self.post = post only_groups = [x for x in equations if isinstance(x, Group)] if (len(only_groups) > 0) and (len(only_groups) != len(equations)): raise ValueError( 'All elements must be Groups if you use sub groups.' ) # This group has only sub-groups. self.has_subgroups = len(only_groups) > 0 self.equations = equations self.src_arrays = self.dest_arrays = None self.update() ########################################################################## # Non-public interface. ########################################################################## def __repr__(self): cls = self.__class__.__name__ eqs = ', \n'.join(repr(eq) for eq in self.equations) ignore = ['equations'] kws = ', '.join(get_init_args(self, self.__init__, ignore)) return '%s(equations=[\n%s\n],\n %s)' % ( cls, indent(eqs), kws ) def _has_code(self, kind='loop'): assert kind in ('initialize', 'initialize_pair', 'loop', 'loop_all', 'post_loop', 'reduce') for equation in self.equations: if hasattr(equation, kind): return True def _setup_precomputed(self): """Get the precomputed symbols for this group of equations. """ # Calculate the precomputed symbols for this equation. 
all_args = set() for equation in self.equations: if hasattr(equation, 'loop'): args = getfullargspec(equation.loop).args all_args.update(args) all_args.discard('self') pre = self.pre_comp precomputed = dict((s, pre[s]) for s in all_args if s in pre) # Now find the precomputed symbols in the pre-computed symbols. done = False found_precomp = set(precomputed.keys()) while not done: done = True all_new = set() for sym in found_precomp: code_block = pre[sym] new = set([s for s in code_block.symbols if s in pre and s not in precomputed]) all_new.update(new) if len(all_new) > 0: done = False for s in all_new: precomputed[s] = pre[s] found_precomp = all_new self.precomputed = sort_precomputed(precomputed, pre) # Update the context. context = self.context for p, cb in self.precomputed.items(): context[p] = cb.context[p] ########################################################################## # Public interface. ########################################################################## def update(self): self.context = Context() if not self.has_subgroups: self._setup_precomputed() def get_array_names(self, recompute=False): """Returns two sets of array names, the first being source_arrays and the second being destination array names. 
""" if not recompute and self.src_arrays is not None: return set(self.src_arrays), set(self.dest_arrays) src_arrays = set() dest_arrays = set() for equation in self.equations: s, d = get_arrays_used_in_equation(equation) src_arrays.update(s) dest_arrays.update(d) for cb in self.precomputed.values(): src_arrays.update(cb.src_arrays) dest_arrays.update(cb.dest_arrays) self.src_arrays = src_arrays self.dest_arrays = dest_arrays return src_arrays, dest_arrays def get_converged_condition(self): if self.has_subgroups: code = [g.get_converged_condition() for g in self.equations] return ' & '.join(code) else: code = [] for equation in self.equations: code.append('(self.%s.converged() > 0)' % equation.var_name) # Note, we use '&' because we want to call converged on all # equations and not be short-circuited by the first one that # returns False. return ' & '.join(code) def get_variable_names(self): # First get all the contexts and find the names. all_vars = set() for cb in self.precomputed.values(): all_vars.update(cb.symbols) # Filter out all arrays. filtered_vars = [x for x in all_vars if not x.startswith(('s_', 'd_'))] # Filter other things. ignore = ['KERNEL', 'GRADIENT', 's_idx', 'd_idx'] # Math functions. import math ignore += [x for x in dir(math) if not x.startswith('_') and callable(getattr(math, x))] try: ignore.remove('gamma') ignore.remove('lgamma') except ValueError: # Older Python's don't have gamma/lgamma. 
pass filtered_vars = [x for x in filtered_vars if x not in ignore] return filtered_vars def has_initialize(self): return self._has_code('initialize') def has_initialize_pair(self): return self._has_code('initialize_pair') def has_loop(self): return self._has_code('loop') def has_loop_all(self): return self._has_code('loop_all') def has_post_loop(self): return self._has_code('post_loop') def has_reduce(self): return self._has_code('reduce') class CythonGroup(Group): ########################################################################## # Non-public interface. ########################################################################## def _get_variable_decl(self, context, mode='declare'): decl = [] names = list(context.keys()) names.sort() for var in names: value = context[var] if isinstance(value, int): declare = 'cdef long ' if mode == 'declare' else '' decl.append('{declare}{var} = {value}'.format(declare=declare, var=var, value=value)) elif isinstance(value, float): declare = 'cdef double ' if mode == 'declare' else '' decl.append('{declare}{var} = {value}'.format(declare=declare, var=var, value=value)) elif isinstance(value, (list, tuple)): if mode == 'declare': decl.append( 'cdef DoubleArray _{var} = ' 'DoubleArray(aligned({size}, 8)*self.n_threads)' .format( var=var, size=len(value) ) ) decl.append('cdef double* {var} = _{var}.data' .format(size=len(value), var=var)) else: pass return '\n'.join(decl) def _get_code(self, kernel=None, kind='loop'): assert kind in ('initialize', 'initialize_pair', 'loop', 'loop_all', 'post_loop', 'reduce') # We assume here that precomputed quantities are only relevant # for loops and not post_loops and initialization. 
pre = [] if kind == 'loop': for p, cb in self.precomputed.items(): pre.append(cb.code.strip()) if len(pre) > 0: pre.extend(['', '']) preamble = self._set_kernel('\n'.join(pre), kernel) code = [] for eq in self.equations: meth = getattr(eq, kind, None) if meth is not None: args = getfullargspec(meth).args if 'self' in args: args.remove('self') if 'SPH_KERNEL' in args: args[args.index('SPH_KERNEL')] = 'self.kernel' if kind == 'reduce': args = ['dst.array', 't', 'dt'] call_args = ', '.join(args) c = 'self.{eq_name}.{method}({args})' \ .format(eq_name=eq.var_name, method=kind, args=call_args) code.append(c) if len(code) > 0: code.append('') return preamble + '\n'.join(code) def _set_kernel(self, code, kernel): if kernel is not None: k_func = 'self.kernel.kernel' w_func = 'self.kernel.dwdq' g_func = 'self.kernel.gradient' h_func = 'self.kernel.gradient_h' deltap = 'self.kernel.get_deltap()' code = code.replace('DELTAP', deltap) return code.replace('GRADIENT', g_func).replace( 'KERNEL', k_func ).replace('GRADH', h_func).replace('DWDQ', w_func) else: return code ########################################################################## # Public interface. 
########################################################################## def get_array_declarations(self, names, known_types={}): decl = [] for arr in sorted(names): if arr in known_types: decl.append('cdef {type} {arr}'.format( type=known_types[arr].type, arr=arr )) else: decl.append('cdef double* %s' % arr) return '\n'.join(decl) def get_variable_declarations(self, context): return self._get_variable_decl(context, mode='declare') def get_variable_array_setup(self): names = list(self.context.keys()) names.sort() code = [] for var in names: value = self.context[var] if isinstance(value, (list, tuple)): code.append( '{var} = &_{var}.data[thread_id*aligned({size}, 8)]' .format(size=len(value), var=var) ) return '\n'.join(code) def get_initialize_code(self, kernel=None): return self._get_code(kernel, kind='initialize') def get_initialize_pair_code(self, kernel=None): return self._get_code(kernel, kind='initialize_pair') def get_loop_code(self, kernel=None): return self._get_code(kernel, kind='loop') def get_loop_all_code(self, kernel=None): return self._get_code(kernel, kind='loop_all') def get_post_loop_code(self, kernel=None): return self._get_code(kernel, kind='post_loop') def get_py_initialize_code(self): lines = [] for i, equation in enumerate(self.equations): if hasattr(equation, 'py_initialize'): code = ('self.all_equations["{name}"].py_initialize' '(dst.array, t, dt)').format(name=equation.var_name) lines.append(code) return '\n'.join(lines) def get_reduce_code(self): return self._get_code(kernel=None, kind='reduce') def get_equation_wrappers(self, known_types={}): classes = defaultdict(lambda: 0) eqs = {} for equation in self.equations: cls = equation.__class__.__name__ n = classes[cls] equation.var_name = '%s%d' % ( camel_to_underscore(equation.name), n ) classes[cls] += 1 eqs[cls] = equation wrappers = [] predefined = dict(get_predefined_types(self.pre_comp)) predefined.update(known_types) code_gen = CythonGenerator(known_types=predefined) for cls in 
sorted(classes.keys()): code_gen.parse(eqs[cls]) wrappers.append(code_gen.get_code()) return '\n'.join(wrappers) def get_equation_defs(self): lines = [] for equation in self.equations: code = 'cdef public {cls} {name}'.format(cls=equation.name, name=equation.var_name) lines.append(code) return '\n'.join(lines) def get_equation_init(self): lines = [] for i, equation in enumerate(self.equations): code = 'self.{name} = {cls}(**equations[{idx}].__dict__)' \ .format(name=equation.var_name, cls=equation.name, idx=i) lines.append(code) return '\n'.join(lines) class OpenCLGroup(Group): _Converter_Class = OpenCLConverter # #### Private interface ##### def _update_for_local_memory(self, predefined, eqs): modified_classes = [] loop_ann = predefined.copy() for k in loop_ann.keys(): if 's_' in k: # TODO: Make each argument have their own KnownType # right from the start new_type = loop_ann[k].type.replace( 'GLOBAL_MEM', 'LOCAL_MEM' ).replace('__global', 'LOCAL_MEM') loop_ann[k] = KnownType(new_type) for eq in eqs.values(): cls = eq.__class__ loop = getattr(cls, 'loop', None) if loop is not None: self._set_loop_annotation(loop, loop_ann) modified_classes.append(cls) return modified_classes def _set_loop_annotation(self, func, value): try: func.__annotations__ = value except AttributeError: func.im_func.__annotations__ = value ########################################################################## # Public interface. 
########################################################################## def get_equation_wrappers(self, known_types={}): classes = defaultdict(lambda: 0) eqs = {} for equation in self.equations: cls = equation.__class__.__name__ n = classes[cls] equation.var_name = '%s%d' % ( camel_to_underscore(equation.name), n ) classes[cls] += 1 eqs[cls] = equation wrappers = [] predefined = dict(get_predefined_types(self.pre_comp)) predefined.update(known_types) predefined['NBRS'] = KnownType('GLOBAL_MEM unsigned int*') use_local_memory = get_config().use_local_memory modified_classes = [] if use_local_memory: modified_classes = self._update_for_local_memory(predefined, eqs) code_gen = self._Converter_Class(known_types=predefined) ignore = ['reduce', 'converged'] for cls in sorted(classes.keys()): src = code_gen.parse_instance(eqs[cls], ignore_methods=ignore) wrappers.append(src) if use_local_memory: # Remove the added annotations for cls in modified_classes: self._set_loop_annotation(cls.loop, {}) return '\n'.join(wrappers) class CUDAGroup(OpenCLGroup): _Converter_Class = CUDAConverter class MultiStageEquations(object): '''A class that allows a user to specify different equations for different stages. The object doesn't do much, except contain the different collections of equations. ''' def __init__(self, groups): ''' Parameters ---------- groups: list/tuple A list/tuple of list of groups/equations, one for each stage. 
        '''
        assert type(groups) in (list, tuple)
        self.groups = groups

    def __repr__(self):
        name = self.__class__.__name__
        groups = [', \n'.join(str(stg_grps) for stg_grps in stg)
                  for stg in self.groups]
        kw = indent('\n], [\n'.join(groups))
        s = '%s(groups=[\n[\n%s\n ]\n])' % (
            name, kw,
        )
        return s

    def __len__(self):
        return len(self.groups)


pysph-master/pysph/sph/gas_dynamics/__init__.py

pysph-master/pysph/sph/gas_dynamics/basic.py

"""Basic equations for Gas-dynamics"""

from compyle.api import declare
from pysph.base.reduce_array import serial_reduce_array, parallel_reduce_array
from pysph.sph.equation import Equation
from math import sqrt, exp, log
from pysph.base.particle_array import get_ghost_tag

import numpy

GHOST_TAG = get_ghost_tag()


class ScaleSmoothingLength(Equation):
    def __init__(self, dest, sources, factor=2.0):
        super(ScaleSmoothingLength, self).__init__(dest, sources)
        self.factor = factor

    def loop(self, d_idx, d_h):
        d_h[d_idx] = d_h[d_idx] * self.factor


class UpdateSmoothingLengthFromVolume(Equation):
    def __init__(self, dest, sources, dim, k=1.2):
        super(UpdateSmoothingLengthFromVolume, self).__init__(dest, sources)
        self.k = k
        self.dim1 = 1./dim

    def loop(self, d_idx, d_m, d_rho, d_h):
        d_h[d_idx] = self.k * pow(d_m[d_idx]/d_rho[d_idx], self.dim1)


class SummationDensityADKE(Equation):
    """
    References
    ----------
    ..
A comparison of SPH schemes for the compressible Euler equations, 2014, Journal of Computational Physics, 256, pp 308 -- 333 (http://dx.doi.org/10.1016/j.jcp.2013.08.060) """ def __init__(self, dest, sources, k=1.0, eps=0.0): self.k = k self.eps = eps super(SummationDensityADKE, self).__init__(dest, sources) def initialize(self, d_idx, d_arho, d_rho, d_h, d_h0): d_rho[d_idx] = 0.0 d_arho[d_idx] = 0.0 d_h[d_idx] = d_h0[d_idx] def loop(self, d_idx, d_rho, d_arho, s_idx, s_m, VIJ, DWI, WIJ): d_rho[d_idx] += s_m[s_idx]*WIJ mj = s_m[s_idx] vijdotdwij = VIJ[0]*DWI[0] + VIJ[1]*DWI[1] + VIJ[2]*DWI[2] # density accelerations d_arho[d_idx] += mj * vijdotdwij def post_loop(self, d_idx, d_rho, d_arho, d_div, d_logrho): d_div[d_idx] = -d_arho[d_idx]/d_rho[d_idx] d_arho[d_idx] = 0 d_logrho[d_idx] = log(d_rho[d_idx]) def reduce(self, dst, t, dt): n = len(dst.x) tmp_sum_logrho = serial_reduce_array(dst.logrho, 'sum') sum_logrho = parallel_reduce_array(tmp_sum_logrho, 'sum') g = exp(sum_logrho/n) lamda = declare('object') lamda = self.k*numpy.power(g/dst.rho, self.eps) dst.h[:] = lamda*dst.h0 class SummationDensity(Equation): def __init__(self, dest, sources, dim, density_iterations=False, iterate_only_once=False, k=1.2, htol=1e-6): r"""Summation density with iterative solution of the smoothing lengths. Parameters: density_iterations : bint Flag to indicate density iterations are required. iterate_only_once : bint Flag to indicate if only one iteration is required k : double Kernel scaling factor htol : double Iteration tolerance """ self.density_iterations = density_iterations self.iterate_only_once = iterate_only_once self.dim = dim self.k = k self.htol = htol # by default, we set the equation_has_converged attribute to True. 
If # density_iterations is set to True, we will have at least one # iteration to determine the new smoothing lengths since the # 'converged' property of the particles is intialized to False self.equation_has_converged = 1 super(SummationDensity, self).__init__(dest, sources) def initialize(self, d_idx, d_rho, d_div, d_grhox, d_grhoy, d_grhoz, d_arho, d_dwdh): d_rho[d_idx] = 0.0 d_div[d_idx] = 0.0 d_grhox[d_idx] = 0.0 d_grhoy[d_idx] = 0.0 d_grhoz[d_idx] = 0.0 d_arho[d_idx] = 0.0 d_dwdh[d_idx] = 0.0 # set the converged attribute for the Equation to True. Within # the post-loop, if any particle hasn't converged, this is set # to False. The Group can therefore iterate till convergence. self.equation_has_converged = 1 def loop(self, d_idx, s_idx, d_rho, d_grhox, d_grhoy, d_grhoz, d_arho, d_dwdh, s_m, d_converged, VIJ, WI, DWI, GHI): mj = s_m[s_idx] vijdotdwij = VIJ[0]*DWI[0] + VIJ[1]*DWI[1] + VIJ[2]*DWI[2] # density d_rho[d_idx] += mj * WI # density accelerations d_arho[d_idx] += mj * vijdotdwij # gradient of density d_grhox[d_idx] += mj * DWI[0] d_grhoy[d_idx] += mj * DWI[1] d_grhoz[d_idx] += mj * DWI[2] # gradient of kernel w.r.t h d_dwdh[d_idx] += mj * GHI def post_loop(self, d_idx, d_arho, d_rho, d_div, d_omega, d_dwdh, d_h0, d_h, d_m, d_ah, d_converged): # iteratively find smoothing length consistent with the if self.density_iterations: if not (d_converged[d_idx] == 1): # current mass and smoothing length. The initial # smoothing length h0 for this particle must be set # outside the Group (that is, in the integrator) mi = d_m[d_idx] hi = d_h[d_idx] hi0 = d_h0[d_idx] # density from the mass, smoothing length and kernel # scale factor rhoi = mi/(hi/self.k)**self.dim dhdrhoi = -hi/(self.dim*d_rho[d_idx]) dwdhi = d_dwdh[d_idx] omegai = 1.0 - dhdrhoi*dwdhi # correct omegai if omegai < 0: omegai = 1.0 # kernel multiplier. These are the multiplicative # pre-factors, or the "grah-h" terms in the # equations. 
                # Remember that the equations use 1/omega.
                gradhi = 1.0/omegai
                d_omega[d_idx] = gradhi

                # the non-linear function and its derivative
                func = rhoi - d_rho[d_idx]
                dfdh = omegai/dhdrhoi

                # Newton-Raphson estimate for the new h
                hnew = hi - func/dfdh

                # Nanny control for h
                if (hnew > 1.2 * hi):
                    hnew = 1.2 * hi
                elif (hnew < 0.8 * hi):
                    hnew = 0.8 * hi

                # overwrite if gone awry
                if ((hnew <= 1e-6) or (gradhi < 1e-6)):
                    hnew = self.k * (mi/d_rho[d_idx])**(1./self.dim)

                # check for convergence
                diff = abs(hnew - hi)/hi0

                if not ((diff < self.htol) and (omegai > 0) or
                        self.iterate_only_once):
                    # this particle hasn't converged. This means the
                    # entire group must be repeated until this fellow
                    # has converged, or till the maximum iteration has
                    # been reached.
                    self.equation_has_converged = -1

                    # set particle properties for the next
                    # iteration. For the 'converged' array, a value of
                    # 0 indicates the particle hasn't converged
                    d_h[d_idx] = hnew
                    d_converged[d_idx] = 0
                else:
                    d_arho[d_idx] *= d_omega[d_idx]
                    d_ah[d_idx] = d_arho[d_idx] * dhdrhoi
                    d_converged[d_idx] = 1

        # compute the divergence of velocity
        d_div[d_idx] = -d_arho[d_idx]/d_rho[d_idx]

    def converged(self):
        return self.equation_has_converged


class IdealGasEOS(Equation):
    def __init__(self, dest, sources, gamma):
        self.gamma = gamma
        self.gamma1 = gamma - 1.0
        super(IdealGasEOS, self).__init__(dest, sources)

    def loop(self, d_idx, d_p, d_rho, d_e, d_cs):
        d_p[d_idx] = self.gamma1 * d_rho[d_idx] * d_e[d_idx]
        d_cs[d_idx] = sqrt(self.gamma * d_p[d_idx]/d_rho[d_idx])


class Monaghan92Accelerations(Equation):
    def __init__(self, dest, sources, alpha=1.0, beta=2.0):
        self.alpha = alpha
        self.beta = beta
        super(Monaghan92Accelerations, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw, d_ae):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0
        d_ae[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_rho, s_rho, d_p, s_p, d_cs, s_cs,
             d_au, d_av, d_aw, d_ae, s_m,
             VIJ, DWIJ, XIJ, EPS, HIJ, R2IJ, RHOIJ1):
        rhoi2 = d_rho[d_idx] * d_rho[d_idx]
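As a quick aside, the per-particle relation that `IdealGasEOS.loop` evaluates is simply p = (gamma - 1) * rho * e with cs = sqrt(gamma * p / rho). This is a minimal standalone sketch (not part of the PySPH sources; `ideal_gas_eos` and the sample values are illustrative only):

```python
from math import sqrt

def ideal_gas_eos(rho, e, gamma=1.4):
    """Pressure and sound speed of an ideal gas from density and
    specific internal energy, as in IdealGasEOS.loop."""
    p = (gamma - 1.0) * rho * e
    cs = sqrt(gamma * p / rho)
    return p, cs

# sample values chosen only for the check: gamma = 1.4, rho = 1, e = 2.5
p, cs = ideal_gas_eos(rho=1.0, e=2.5, gamma=1.4)
# p = 0.4 * 1.0 * 2.5 = 1.0 and cs = sqrt(1.4)
```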
        rhoj2 = s_rho[s_idx] * s_rho[s_idx]

        tmpi = d_p[d_idx]/rhoi2
        tmpj = s_p[s_idx]/rhoj2

        vijdotxij = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2]

        piij = 0.0
        if vijdotxij < 0:
            muij = HIJ*vijdotxij/(R2IJ + EPS)
            cij = 0.5 * (d_cs[d_idx] + s_cs[s_idx])
            piij = -self.alpha*cij*muij + self.beta*muij*muij
            piij *= RHOIJ1

        d_au[d_idx] += -s_m[s_idx] * (tmpi + tmpj + piij) * DWIJ[0]
        d_av[d_idx] += -s_m[s_idx] * (tmpi + tmpj + piij) * DWIJ[1]
        d_aw[d_idx] += -s_m[s_idx] * (tmpi + tmpj + piij) * DWIJ[2]

        vijdotdwij = VIJ[0]*DWIJ[0] + VIJ[1]*DWIJ[1] + VIJ[2]*DWIJ[2]
        d_ae[d_idx] += 0.5 * s_m[s_idx] * (tmpi + tmpj + piij) * vijdotdwij


class ADKEAccelerations(Equation):
    """
    Reference
    ---------
    .. A comparison of SPH schemes for the compressible Euler equations,
       2014, Journal of Computational Physics, 256, pp 308 -- 333
       (http://dx.doi.org/10.1016/j.jcp.2013.08.060)

    """
    def __init__(self, dest, sources, alpha, beta, g1, g2, k, eps):
        self.alpha = alpha
        self.beta = beta
        self.g1 = g1
        self.g2 = g2
        self.k = k
        self.eps = eps
        super(ADKEAccelerations, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw, d_ae):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0
        d_ae[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_au, d_av, d_aw, d_ae, d_p, s_p,
             d_rho, s_rho, d_m, s_m, d_cs, s_cs, s_e, d_e, s_h, d_h,
             s_div, d_div, DWIJ, HIJ, XIJ, VIJ, R2IJ, EPS, RHOIJ, RHOIJ1):
        # particle pressures
        p_i = d_p[d_idx]
        pj = s_p[s_idx]

        # p_i/rhoi**2
        rhoi2 = d_rho[d_idx]*d_rho[d_idx]
        pibrhoi2 = p_i/rhoi2

        # pj/rhoj**2
        rhoj2 = s_rho[s_idx]*s_rho[s_idx]
        pjbrhoj2 = pj/rhoj2

        # mass of the source particle
        mj = s_m[s_idx]

        # averaged sound speed
        ci = d_cs[d_idx]
        cj = s_cs[s_idx]
        cij = 0.5 * (ci + cj)

        hi = d_h[d_idx]
        hj = s_h[s_idx]

        divi = d_div[d_idx]
        divj = s_div[s_idx]

        ei = d_e[d_idx]
        ej = s_e[s_idx]

        # thermal conduction
        Hi = self.g1 * hi * ci + self.g2 * hi * hi*(abs(divi) - divi)
        Hj = self.g1 * hj * cj + self.g2 * hj * hj*(abs(divj) - divj)
        Hij = (Hi + Hj)*(ei - ej)/(RHOIJ*(R2IJ + EPS))

        xijdotvij = XIJ[0]*VIJ[0] + XIJ[1]*VIJ[1] + XIJ[2]*VIJ[2]

        # artificial viscosity
        piij = 0.0
        if xijdotvij < 0:
            muij = HIJ*xijdotvij/(R2IJ + EPS)
            piij = muij * (self.beta*muij - self.alpha*cij)*RHOIJ1

        tmpv = pibrhoi2 + pjbrhoj2 + piij
        d_au[d_idx] += -mj*tmpv * DWIJ[0]
        d_av[d_idx] += -mj*tmpv * DWIJ[1]
        d_aw[d_idx] += -mj*tmpv * DWIJ[2]

        vijdotdwij = VIJ[0]*DWIJ[0] + VIJ[1]*DWIJ[1] + VIJ[2]*DWIJ[2]
        xijdotdwij = XIJ[0]*DWIJ[0] + XIJ[1]*DWIJ[1] + XIJ[2]*DWIJ[2]

        d_ae[d_idx] += 0.5*mj*(tmpv*vijdotdwij + 2*xijdotdwij*Hij)


class MPMAccelerations(Equation):
    def __init__(self, dest, sources, beta=2.0,
                 update_alpha1=False, update_alpha2=False,
                 alpha1_min=0.1, alpha2_min=0.1, sigma=0.1):
        self.beta = beta
        self.sigma = sigma

        self.update_alpha1 = update_alpha1
        self.update_alpha2 = update_alpha2
        self.alpha1_min = alpha1_min
        self.alpha2_min = alpha2_min

        super(MPMAccelerations, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw, d_ae, d_am,
                   d_aalpha1, d_aalpha2, d_del2e, d_dt_cfl):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0
        d_ae[d_idx] = 0.0

        d_aalpha1[d_idx] = 0.0
        d_aalpha2[d_idx] = 0.0

        d_del2e[d_idx] = 0.0
        d_dt_cfl[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_m, s_m, d_p, s_p, d_cs, s_cs,
             d_e, s_e, d_rho, s_rho, d_au, d_av, d_aw, d_ae,
             d_omega, s_omega, XIJ, VIJ, DWI, DWJ, DWIJ, HIJ,
             d_del2e, d_alpha1, s_alpha1, d_alpha2, s_alpha2,
             EPS, RIJ, R2IJ, RHOIJ, d_dt_cfl):
        # particle pressures
        p_i = d_p[d_idx]
        pj = s_p[s_idx]

        # p_i/rhoi**2
        rhoi2 = d_rho[d_idx]*d_rho[d_idx]
        pibrhoi2 = p_i/rhoi2

        # pj/rhoj**2
        rhoj2 = s_rho[s_idx]*s_rho[s_idx]
        pjbrhoj2 = pj/rhoj2

        # mass of the source particle
        mj = s_m[s_idx]

        # averaged sound speed
        ci = d_cs[d_idx]
        cj = s_cs[s_idx]
        cij = 0.5 * (ci + cj)

        # normalized interaction vector
        if RIJ < 1e-8:
            XIJ[0] = 0.0
            XIJ[1] = 0.0
            XIJ[2] = 0.0
        else:
            XIJ[0] /= RIJ
            XIJ[1] /= RIJ
            XIJ[2] /= RIJ

        # v_{ij} \cdot r_{ij} or vijdotxij
        dot = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2]

        # scalar part of
the kernel gradient DWIJ Fij = XIJ[0]*DWIJ[0] + XIJ[1]*DWIJ[1] + XIJ[2]*DWIJ[2] # signal velocities pdiff = abs(p_i - pj) vsig1 = 0.5 * max(2*cij - self.beta*dot, 0.0) vsig2 = sqrt(pdiff/RHOIJ) # compute the Courant-limited time step factor. d_dt_cfl[d_idx] = max(d_dt_cfl[d_idx], cij + self.beta * dot) # Artificial viscosity if dot <= 0.0: # viscosity alpha1 = 0.5 * (d_alpha1[d_idx] + s_alpha1[s_idx]) tmpv = mj/RHOIJ * alpha1 * vsig1 * dot d_au[d_idx] += tmpv * DWIJ[0] d_av[d_idx] += tmpv * DWIJ[1] d_aw[d_idx] += tmpv * DWIJ[2] # viscous contribution to the thermal energy d_ae[d_idx] += -0.5*mj/RHOIJ*alpha1*vsig1*dot*dot*Fij # grad-h correction terms. These will be set to 1.0 by the # integrator and thus can be used safely. omegai = d_omega[d_idx] omegaj = s_omega[s_idx] # gradient terms d_au[d_idx] += -mj*(pibrhoi2*omegai*DWI[0] + pjbrhoj2*omegaj*DWJ[0]) d_av[d_idx] += -mj*(pibrhoi2*omegai*DWI[1] + pjbrhoj2*omegaj*DWJ[1]) d_aw[d_idx] += -mj*(pibrhoi2*omegai*DWI[2] + pjbrhoj2*omegaj*DWJ[2]) # accelerations for the thermal energy vijdotdwi = VIJ[0]*DWI[0] + VIJ[1]*DWI[1] + VIJ[2]*DWI[2] d_ae[d_idx] += mj * pibrhoi2 * omegai * vijdotdwi # thermal conduction alpha2 = 0.5 * (d_alpha2[d_idx] + s_alpha2[s_idx]) eij = d_e[d_idx] - s_e[s_idx] d_ae[d_idx] += mj/RHOIJ * alpha2 * vsig2 * eij * Fij # Laplacian of thermal energy d_del2e[d_idx] += mj/s_rho[s_idx] * eij/(RIJ + EPS) * Fij def post_loop(self, d_idx, d_h, d_cs, d_alpha1, d_aalpha1, d_div, d_del2e, d_e, d_alpha2, d_aalpha2): hi = d_h[d_idx] tau = hi/(self.sigma*d_cs[d_idx]) if self.update_alpha1: S1 = max(-d_div[d_idx], 0.0) d_aalpha1[d_idx] = (self.alpha1_min - d_alpha1[d_idx])/tau + S1 if self.update_alpha2: S2 = 0.01 * d_h[d_idx] * abs(d_del2e[d_idx])/sqrt(d_e[d_idx]) d_aalpha2[d_idx] = (self.alpha2_min - d_alpha2[d_idx])/tau + S2 class MPMUpdateGhostProps(Equation): def __init__(self, dest, sources=None, dim=2): super(MPMUpdateGhostProps, self).__init__(dest, sources) self.dim = dim assert GHOST_TAG == 2 def 
    def initialize(self, d_idx, d_orig_idx, d_p, d_cs, d_tag):
        idx = declare('int')
        if d_tag[d_idx] == 2:
            idx = d_orig_idx[d_idx]
            d_p[d_idx] = d_p[idx]
            d_cs[d_idx] = d_cs[idx]


class ADKEUpdateGhostProps(Equation):
    def __init__(self, dest, sources=None, dim=2):
        super(ADKEUpdateGhostProps, self).__init__(dest, sources)
        self.dim = dim
        assert GHOST_TAG == 2

    def initialize(self, d_idx, d_orig_idx, d_p, d_cs, d_tag, d_rho):
        idx = declare('int')
        if d_tag[d_idx] == 2:
            idx = d_orig_idx[d_idx]
            d_p[d_idx] = d_p[idx]
            d_cs[d_idx] = d_cs[idx]
            d_rho[d_idx] = d_rho[idx]

pysph-master/pysph/sph/gas_dynamics/boundary_equations.py

"""Boundary equations for gas dynamics."""

from pysph.sph.equation import Equation


class WallBoundary(Equation):
    def initialize(self, d_idx, d_p, d_rho, d_e, d_m, d_cs, d_div, d_h,
                   d_htmp, d_h0, d_u, d_v, d_w, d_wij):
        d_p[d_idx] = 0.0
        d_u[d_idx] = 0.0
        d_v[d_idx] = 0.0
        d_w[d_idx] = 0.0
        d_m[d_idx] = 0.0
        d_rho[d_idx] = 0.0
        d_e[d_idx] = 0.0
        d_cs[d_idx] = 0.0
        d_div[d_idx] = 0.0
        d_wij[d_idx] = 0.0
        d_h[d_idx] = d_h0[d_idx]
        d_htmp[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_p, d_rho, d_e, d_m, d_cs, d_div, d_h,
             d_u, d_v, d_w, d_wij, d_htmp, s_p, s_rho, s_e, s_m, s_cs,
             s_h, s_div, s_u, s_v, s_w, WI):
        d_wij[d_idx] += WI
        d_p[d_idx] += s_p[s_idx]*WI
        d_u[d_idx] -= s_u[s_idx]*WI
        d_v[d_idx] -= s_v[s_idx]*WI
        d_w[d_idx] -= s_w[s_idx]*WI
        d_m[d_idx] += s_m[s_idx]*WI
        d_rho[d_idx] += s_rho[s_idx]*WI
        d_e[d_idx] += s_e[s_idx]*WI
        d_cs[d_idx] += s_cs[s_idx]*WI
        d_div[d_idx] += s_div[s_idx]*WI
        d_htmp[d_idx] += s_h[s_idx]*WI

    def post_loop(self, d_idx, d_p, d_rho, d_e, d_m, d_cs, d_div, d_h,
                  d_u, d_v, d_w, d_wij, d_htmp):
        if d_wij[d_idx] > 1e-30:
            d_p[d_idx] = d_p[d_idx]/d_wij[d_idx]
            d_u[d_idx] = d_u[d_idx]/d_wij[d_idx]
            d_v[d_idx] = d_v[d_idx]/d_wij[d_idx]
            d_w[d_idx] = d_w[d_idx]/d_wij[d_idx]
            d_m[d_idx] = d_m[d_idx]/d_wij[d_idx]
            d_rho[d_idx] = d_rho[d_idx]/d_wij[d_idx]
            d_e[d_idx] = d_e[d_idx]/d_wij[d_idx]
            d_cs[d_idx] = d_cs[d_idx]/d_wij[d_idx]
            d_div[d_idx] = d_div[d_idx]/d_wij[d_idx]
            d_h[d_idx] = d_htmp[d_idx]/d_wij[d_idx]

pysph-master/pysph/sph/gas_dynamics/gsph.py

from math import exp

from compyle.api import declare
from pysph.sph.equation import Equation
from pysph.sph.gas_dynamics.riemann_solver import (HELPERS, riemann_solve,
                                                   printf)
from pysph.base.particle_array import get_ghost_tag

# Constants
GHOST_TAG = get_ghost_tag()

# Riemann solver types
NonDiffusive = 0
VanLeer = 1
Exact = 2
HLLC = 3
Ducowicz = 4
HLLE = 5
Roe = 6
LLXF = 7
HLLCBall = 8
HLLBall = 9
HLLSY = 10

# GSPH interpolation types
Delta = 0
Linear = 1
Cubic = 2


def sgn(x=0.0):
    return (x > 0) - (x < 0)


def monotonicity_min(_x1=0.0, _x2=0.0, _x3=0.0):
    x1, x2, x3, _min = declare('double', 4)
    x1 = 2.0 * abs(_x1)
    x2 = abs(_x2)
    x3 = 2.0 * abs(_x3)

    sx1 = sgn(_x1)
    sx2 = sgn(_x2)
    sx3 = sgn(_x3)

    if (sx1 != sx2) or (sx2 != sx3):
        return 0.0
    else:
        if x2 < x1:
            if x3 < x2:
                _min = x3
            else:
                _min = x2
        else:
            if x3 < x1:
                _min = x3
            else:
                _min = x1
        return sx1 * _min


class GSPHGradients(Equation):
    def initialize(self, d_idx, d_px, d_py, d_pz, d_ux, d_uy, d_uz,
                   d_vx, d_vy, d_vz, d_wx, d_wy, d_wz):
        d_px[d_idx] = 0.0
        d_py[d_idx] = 0.0
        d_pz[d_idx] = 0.0
        d_ux[d_idx] = 0.0
        d_uy[d_idx] = 0.0
        d_uz[d_idx] = 0.0
        d_vx[d_idx] = 0.0
        d_vy[d_idx] = 0.0
        d_vz[d_idx] = 0.0
        d_wx[d_idx] = 0.0
        d_wy[d_idx] = 0.0
        d_wz[d_idx] = 0.0

    def loop(self, d_idx, d_px, d_py, d_pz, d_ux, d_uy, d_uz,
             d_vx, d_vy, d_vz, d_wx, d_wy, d_wz, d_p, d_u, d_v, d_w,
             s_idx, s_p, s_u, s_v, s_w, s_rho, s_m, DWI):
        rj1 = 1.0/s_rho[s_idx]

        pji = s_p[s_idx] - d_p[d_idx]
        uji = s_u[s_idx] - d_u[d_idx]
        vji = s_v[s_idx] - d_v[d_idx]
        wji = s_w[s_idx] - d_w[d_idx]

        tmp = rj1*s_m[s_idx]*pji
        d_px[d_idx] += tmp*DWI[0]
        d_py[d_idx] += tmp*DWI[1]
        d_pz[d_idx] += tmp*DWI[2]

        tmp = rj1*s_m[s_idx]*uji
        d_ux[d_idx] += tmp*DWI[0]
        d_uy[d_idx] += tmp*DWI[1]
        d_uz[d_idx] += tmp*DWI[2]

        tmp = rj1*s_m[s_idx]*vji
        d_vx[d_idx] += tmp*DWI[0]
        d_vy[d_idx] += tmp*DWI[1]
        d_vz[d_idx] += tmp*DWI[2]

        tmp = rj1*s_m[s_idx]*wji
        d_wx[d_idx] += tmp*DWI[0]
        d_wy[d_idx] += tmp*DWI[1]
        d_wz[d_idx] += tmp*DWI[2]


class GSPHUpdateGhostProps(Equation):
    """Copy the GSPH gradients and other properties required for GSPH
    from the real particles to the ghost particles.

    """
    def __init__(self, dest, sources=None):
        super(GSPHUpdateGhostProps, self).__init__(dest, sources)
        assert GHOST_TAG == 2

    def initialize(self, d_idx, d_tag, d_orig_idx, d_px, d_py, d_pz,
                   d_ux, d_uy, d_uz, d_vx, d_vy, d_vz, d_wx, d_wy, d_wz,
                   d_grhox, d_grhoy, d_grhoz, d_dwdh, d_rho, d_div,
                   d_p, d_cs):
        idx = declare('int')
        if d_tag[d_idx] == 2:
            idx = d_orig_idx[d_idx]

            # copy pressure gradients
            d_px[d_idx] = d_px[idx]
            d_py[d_idx] = d_py[idx]
            d_pz[d_idx] = d_pz[idx]

            # copy u gradients
            d_ux[d_idx] = d_ux[idx]
            d_uy[d_idx] = d_uy[idx]
            d_uz[d_idx] = d_uz[idx]

            # copy v gradients
            d_vx[d_idx] = d_vx[idx]
            d_vy[d_idx] = d_vy[idx]
            d_vz[d_idx] = d_vz[idx]

            # copy w gradients
            d_wx[d_idx] = d_wx[idx]
            d_wy[d_idx] = d_wy[idx]
            d_wz[d_idx] = d_wz[idx]

            # copy density gradients
            d_grhox[d_idx] = d_grhox[idx]
            d_grhoy[d_idx] = d_grhoy[idx]
            d_grhoz[d_idx] = d_grhoz[idx]

            # other miscellaneous properties
            d_dwdh[d_idx] = d_dwdh[idx]
            d_rho[d_idx] = d_rho[idx]
            d_div[d_idx] = d_div[idx]
            d_p[d_idx] = d_p[idx]
            d_cs[d_idx] = d_cs[idx]


class GSPHAcceleration(Equation):
    """Class to implement the GSPH accelerations.

    We implement Inutsuka's original GSPH algorithm (I02), defined in
    'Reformulation of Smoothed Particle Hydrodynamics with Riemann
    Solver', (2002), JCP, 179, 238--267, and Iwasaki and Inutsuka's
    version (specifically the monotonicity constraint) described in
    'Smoothed particle magnetohydrodynamics with a Riemann solver and
    the method of characteristics', 2011, MNRAS, referred to as IwIn.

    Additional details about the algorithm are also described by
    Murante et al.
""" def __init__(self, dest, sources, g1=0.0, g2=0.0, monotonicity=0, rsolver=Exact, interpolation=Linear, interface_zero=True, hybrid=False, blend_alpha=5.0, tf=1.0, gamma=1.4, niter=20, tol=1e-6): """ Parameters ---------- g1, g2 : double ADKE style thermal conduction parameters rsolver: int Riemann solver to use. See pysph.sph.gas_dynamics.gsph for valid options. interpolation: int Kind of interpolation for the specific volume integrals. monotonicity : int Type of monotonicity algorithm to use: 0 : First order GSPH 1 : I02 algorithm 2 : IwIn algorithm interface_zero : bool Set Interface position s^*_{ij} = 0 for the Riemann problem. hybrid, blend_alpha : bool, double Hybrid scheme and blending alpha value tf: float Final time of simulation for using in blending. gamma: float Gamma for Equation of state. niter: int Max number of iterations for iterative Riemann solvers. tol: double Tolerance for iterative Riemann solvers. """ self.gamma = gamma self.niter = niter self.tol = tol self.g1 = g1 self.g2 = g2 self.monotonicity = monotonicity self.interpolation = interpolation self.rsolver = rsolver # Interface position for data reconstruction self.sstar = 0.0 if (g1 == 0 and g2 == 0): self.thermal_conduction = 0 else: self.thermal_conduction = 1 self.interface_zero = interface_zero self.hybrid = hybrid self.blend_alpha = blend_alpha self.tf = tf super(GSPHAcceleration, self).__init__(dest, sources) def _get_helpers_(self): return HELPERS + [monotonicity_min, sgn] def initialize(self, d_idx, d_au, d_av, d_aw, d_ae): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 d_ae[d_idx] = 0.0 def loop(self, d_idx, d_m, d_h, d_rho, d_cs, d_div, d_p, d_e, d_grhox, d_grhoy, d_grhoz, d_u, d_v, d_w, d_px, d_py, d_pz, d_ux, d_uy, d_uz, d_vx, d_vy, d_vz, d_wx, d_wy, d_wz, d_au, d_av, d_aw, d_ae, s_idx, s_rho, s_m, s_h, s_cs, s_div, s_p, s_e, s_grhox, s_grhoy, s_grhoz, s_u, s_v, s_w, s_px, s_py, s_pz, s_ux, s_uy, s_uz, s_vx, s_vy, s_vz, s_wx, s_wy, s_wz, XIJ, DWIJ, DWI, DWJ, RIJ, 
RHOIJ, EPS, dt, t): blending_factor = exp(-self.blend_alpha*t/self.tf) g1 = self.g1 g2 = self.g2 hi = d_h[d_idx] hj = s_h[s_idx] eij = declare('matrix(3)') if RIJ < 1e-14: eij[0] = 0.0 eij[1] = 0.0 eij[2] = 0.0 sij = 1.0/(RIJ + EPS) else: eij[0] = XIJ[0]/RIJ eij[1] = XIJ[1]/RIJ eij[2] = XIJ[2]/RIJ sij = 1.0/RIJ vl = s_u[s_idx]*eij[0] + s_v[s_idx]*eij[1] + s_w[s_idx]*eij[2] vr = d_u[d_idx]*eij[0] + d_v[d_idx]*eij[1] + d_w[d_idx]*eij[2] Hi = g1*hi*d_cs[d_idx] + g2*hi*hi*(abs(d_div[d_idx]) - d_div[d_idx]) grhoi_dot_eij = (d_grhox[d_idx]*eij[0] + d_grhoy[d_idx]*eij[1] + d_grhoz[d_idx]*eij[2]) grhoj_dot_eij = (s_grhox[s_idx]*eij[0] + s_grhoy[s_idx]*eij[1] + s_grhoz[s_idx]*eij[2]) sp_vol = declare('matrix(3)') self.interpolate(hi, hj, d_rho[d_idx], s_rho[s_idx], RIJ, grhoi_dot_eij, grhoj_dot_eij, sp_vol) vij_i = sp_vol[0] vij_j = sp_vol[1] sstar = sp_vol[2] # Gradients in the local coordinate system rsi = (d_grhox[d_idx]*eij[0] + d_grhoy[d_idx]*eij[1] + d_grhoz[d_idx]*eij[2]) psi = (d_px[d_idx]*eij[0] + d_py[d_idx]*eij[1] + d_pz[d_idx]*eij[2]) vsi = (eij[0]*eij[0]*d_ux[d_idx] + eij[0]*eij[1]*(d_uy[d_idx] + d_vx[d_idx]) + eij[0]*eij[2]*(d_uz[d_idx] + d_wx[d_idx]) + eij[1]*eij[1]*d_vy[d_idx] + eij[1]*eij[2]*(d_vz[d_idx] + d_wy[d_idx]) + eij[2]*eij[2]*d_wz[d_idx]) rsj = (s_grhox[s_idx]*eij[0] + s_grhoy[s_idx]*eij[1] + s_grhoz[s_idx]*eij[2]) psj = (s_px[s_idx]*eij[0] + s_py[s_idx]*eij[1] + s_pz[s_idx]*eij[2]) vsj = (eij[0]*eij[0]*s_ux[s_idx] + eij[0]*eij[1]*(s_uy[s_idx] + s_vx[s_idx]) + eij[0]*eij[2]*(s_uz[s_idx] + s_wx[s_idx]) + eij[1]*eij[1]*s_vy[s_idx] + eij[1]*eij[2]*(s_vz[s_idx] + s_wy[s_idx]) + eij[2]*eij[2]*s_wz[s_idx]) csi = d_cs[d_idx] csj = s_cs[s_idx] rhoi = d_rho[d_idx] rhoj = s_rho[s_idx] pi = d_p[d_idx] pj = s_p[s_idx] if self.monotonicity == 0: # First order scheme rsi = 0.0 rsj = 0.0 psi = 0.0 psj = 0.0 vsi = 0.0 vsj = 0.0 if self.monotonicity == 1: # I02 algorithm if (vsi * vsj) < 0: vsi = 0. vsj = 0. 
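The IwIn monotonicity constraint above is a minmod-type slope limiter: the three candidate slopes are clipped to zero when they disagree in sign, and otherwise the one of smallest magnitude (with the pairwise differences weighted by 2) wins. A standalone sketch of `monotonicity_min`, restated here outside the compyle/Equation machinery purely for illustration:

```python
def monotonicity_min(x1, x2, x3):
    """Minmod-style limiter used by the IwIn GSPH reconstruction:
    zero if the arguments differ in sign, otherwise the common sign
    times min(2*|x1|, |x2|, 2*|x3|)."""
    sign = lambda v: (v > 0) - (v < 0)
    s1, s2, s3 = sign(x1), sign(x2), sign(x3)
    if s1 != s2 or s2 != s3:
        return 0.0
    return s1 * min(2.0 * abs(x1), abs(x2), 2.0 * abs(x3))

# same-sign slopes: the smallest magnitude wins
print(monotonicity_min(0.5, 0.4, 0.3))   # 0.4
# disagreeing signs are clipped to zero (limits spurious extrema)
print(monotonicity_min(-0.5, 0.4, 0.3))  # 0.0
```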
# default to first order near a shock if (min(csi, csj) < 3.0*(vl - vr)): rsi = 0. rsj = 0. psi = 0. psj = 0. vsi = 0. vsj = 0. if self.monotonicity == 2 and RIJ > 1e-14: # IwIn algorithm qijr = rhoi - rhoj qijp = pi - pj qiju = vr - vl delr = rsi * RIJ delrp = 2 * delr - qijr delp = psi * RIJ delpp = 2 * delp - qijp delv = vsi * RIJ delvp = 2 * delv - qiju # corrected values for i rsi = monotonicity_min(qijr, delr, delrp)/RIJ psi = monotonicity_min(qijp, delp, delpp)/RIJ vsi = monotonicity_min(qiju, delv, delvp)/RIJ delr = rsj * RIJ delrp = 2 * delr - qijr delp = psj * RIJ delpp = 2 * delp - qijp delv = vsj * RIJ delvp = 2 * delv - qiju # corrected values for j rsj = monotonicity_min(qijr, delr, delrp)/RIJ psj = monotonicity_min(qijp, delp, delpp)/RIJ vsj = monotonicity_min(qiju, delv, delvp)/RIJ elif self.monotonicity == 2 and RIJ < 1e-14: # IwIn algorithm rsi = 0.0 rsj = 0.0 psi = 0.0 psj = 0.0 vsi = 0.0 vsj = 0.0 # Input to the riemann solver sstar *= 2.0 # left and right density rhol = rhoj + 0.5 * rsj * RIJ * (1.0 - csj*dt*sij + sstar) rhor = rhoi - 0.5 * rsi * RIJ * (1.0 - csi*dt*sij + sstar) # corrected density if rhol < 0: rhol = rhoj if rhor < 0: rhor = rhoi # left and right pressure pl = pj + 0.5 * psj * RIJ * (1.0 - csj*dt*sij + sstar) pr = pi - 0.5 * psi * RIJ * (1.0 - csi*dt*sij + sstar) # corrected pressure if pl < 0: pl = pj if pr < 0: pr = pi # left and right velocity ul = vl + 0.5 * vsj * RIJ * (1.0 - csj*dt*sij + sstar) ur = vr - 0.5 * vsi * RIJ * (1.0 - csi*dt*sij + sstar) # Intermediate state from the Riemann solver result = declare('matrix(2)') riemann_solve( self.rsolver, rhol, rhor, pl, pr, ul, ur, self.gamma, self.niter, self.tol, result ) pstar = result[0] ustar = result[1] # blend of two intermediate states if self.hybrid: riemann_solve( 10, rhoj, rhoi, pl, pr, vl, vr, self.gamma, self.niter, self.tol, result ) pstar2 = result[0] ustar2 = result[1] ustar = ustar + blending_factor * (ustar2 - ustar) pstar = pstar + blending_factor * 
(pstar2 - pstar) # three dimensional velocity (70) vstar = declare('matrix(3)') vstar[0] = ustar*eij[0] vstar[1] = ustar*eij[1] vstar[2] = ustar*eij[2] # velocity accelerations mj = s_m[s_idx] d_au[d_idx] += -mj * pstar * (vij_i * DWI[0] + vij_j * DWJ[0]) d_av[d_idx] += -mj * pstar * (vij_i * DWI[1] + vij_j * DWJ[1]) d_aw[d_idx] += -mj * pstar * (vij_i * DWI[2] + vij_j * DWJ[2]) # contribution to the thermal energy term # (85). The contribution due to \dot{x}^*_i will # be added in the integrator. vstardotdwi = vstar[0]*DWI[0] + vstar[1]*DWI[1] + vstar[2]*DWI[2] vstardotdwj = vstar[0]*DWJ[0] + vstar[1]*DWJ[1] + vstar[2]*DWJ[2] d_ae[d_idx] += -mj * pstar * (vij_i * vstardotdwi + vij_j * vstardotdwj) # artificial thermal conduction terms if self.thermal_conduction: divj = s_div[s_idx] Hj = g1 * hj * csj + g2 * hj*hj * (abs(divj) - divj) Hij = (Hi + Hj) * (d_e[d_idx] - s_e[s_idx]) Hij /= (RHOIJ * (RIJ*RIJ + EPS)) d_ae[d_idx] += mj*Hij*(XIJ[0]*DWIJ[0] + XIJ[1]*DWIJ[1] + XIJ[2]*DWIJ[2]) def interpolate(self, hi=0.0, hj=0.0, rhoi=0.0, rhoj=0.0, sij=0.0, gri_eij=0.0, grj_eij=0.0, result=[0.0, 0.0, 0.0]): """Interpolation for the specific volume integrals in GSPH. Parameters: ----------- hi, hj : double Particle smoothing length (scale?) at i (right) and j (left) rhoi, rhoj : double Particle densities at i (right) and j (left) sij : double Particle separation in the local coordinate system (si - sj) gri_eij, grj_eij : double Gradient of density at the particles in the global coordinate system dot product with eij Notes: ------ The interpolation scheme determines the form of the 'specific volume' contributions Vij^2 in the GSPH equations. The simplest Delta or point interpolation uses Vi^2 = 1./rho_i^2. The more involved linear or cubic spline interpolations are defined in the GSPH references. Most recent papers on GSPH typically assume the interface to be located midway between the particles. In the local coordinate system, this corresponds to sij = 0. 
    From I02, it seems the definition is only really valid for the
    constant smoothing length case. Set interface_zero to False if you
    want to include this term.

    """
        Vi = 1./rhoi
        Vj = 1./rhoj

        Vip = -1./(rhoi*rhoi) * gri_eij
        Vjp = -1./(rhoj*rhoj) * grj_eij

        aij, bij, cij, dij, hij, vij, vij_i2, vij_j2 = declare('double', 8)

        hij = 0.5 * (hi + hj)
        sstar = self.sstar

        # simplest delta or point interpolation
        if self.interpolation == 0:
            vij_i2 = 1./(rhoi * rhoi)
            vij_j2 = 1./(rhoj * rhoj)

        # linear interpolation
        elif self.interpolation == 1:
            # avoid singularities
            if sij < 1e-8:
                cij = 0.0
            else:
                cij = (Vi - Vj)/sij

            dij = 0.5 * (Vi + Vj)

            vij_i2 = 0.25 * hi * hi * cij * cij + dij * dij
            vij_j2 = 0.25 * hj * hj * cij * cij + dij * dij

            # approximate value for the interface location when using
            # variable smoothing lengths
            if not self.interface_zero:
                vij = 0.5 * (vij_i2 + vij_j2)
                sstar = 0.5 * hij*hij * cij*dij/vij

        # cubic spline interpolation
        elif self.interpolation == 2:
            if sij < 1e-8:
                aij = bij = cij = 0.0
                dij = 0.5 * (Vi + Vj)
            else:
                aij = -2.0 * (Vi - Vj)/(sij*sij*sij) + (Vip + Vjp)/(sij*sij)
                bij = 0.5 * (Vip - Vjp)/sij
                cij = 1.5 * (Vi - Vj)/sij - 0.25 * (Vip + Vjp)
                dij = 0.5 * (Vi + Vj) - 0.125*(Vip - Vjp)*sij

            hi2 = hi*hi
            hj2 = hj*hj
            hi4 = hi2*hi2
            hj4 = hj2*hj2
            hi6 = hi4*hi2
            hj6 = hj4*hj2

            vij_i2 = ((15.0/64.0)*hi6 * aij*aij +
                      (3.0/16.0) * hi4 * (2*aij*cij + bij*bij) +
                      0.25*hi2*(2*bij*dij + cij*cij) +
                      dij * dij)

            vij_j2 = ((15.0/64.0)*hj6 * aij*aij +
                      (3.0/16.0) * hj4 * (2*aij*cij + bij*bij) +
                      0.25*hj2 * (2*bij*dij + cij*cij) +
                      dij * dij)

            hij2 = hij*hij
            hij4 = hij2*hij2

            if not self.interface_zero:
                vij = 0.5*(vij_i2 + vij_j2)
                sstar = ((15.0/32.0)*hij4*hij2*aij*bij +
                         (3.0/8.0)*hij4*(aij*dij + bij*cij) +
                         0.5*hij2*cij*dij)/vij
        else:
            printf("%s", "Unknown interpolation type")

        result[0] = vij_i2
        result[1] = vij_j2
        result[2] = sstar

pysph-master/pysph/sph/gas_dynamics/riemann_solver.py

"""GSPH functions"""

from math import sqrt

from compyle.api import declare


def printf(s):
    print(s)


def SIGN(x=0.0, y=0.0):
    if y >= 0:
        return abs(x)
    else:
        return -abs(x)


def riemann_solve(method=1, rhol=0.0, rhor=1.0, pl=0.0, pr=1.0,
                  ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6,
                  result=[0.0, 0.0]):
    if method == 0:
        return non_diffusive(rhol, rhor, pl, pr, ul, ur, gamma,
                             niter, tol, result)
    elif method == 1:
        return van_leer(rhol, rhor, pl, pr, ul, ur, gamma,
                        niter, tol, result)
    elif method == 2:
        return exact(rhol, rhor, pl, pr, ul, ur, gamma,
                     niter, tol, result)
    elif method == 3:
        return hllc(rhol, rhor, pl, pr, ul, ur, gamma,
                    niter, tol, result)
    elif method == 4:
        return ducowicz(rhol, rhor, pl, pr, ul, ur, gamma,
                        niter, tol, result)
    elif method == 5:
        return hlle(rhol, rhor, pl, pr, ul, ur, gamma,
                    niter, tol, result)
    elif method == 6:
        return roe(rhol, rhor, pl, pr, ul, ur, gamma,
                   niter, tol, result)
    elif method == 7:
        return llxf(rhol, rhor, pl, pr, ul, ur, gamma,
                    niter, tol, result)
    elif method == 8:
        return hllc_ball(rhol, rhor, pl, pr, ul, ur, gamma,
                         niter, tol, result)
    elif method == 9:
        return hll_ball(rhol, rhor, pl, pr, ul, ur, gamma,
                        niter, tol, result)
    elif method == 10:
        return hllsy(rhol, rhor, pl, pr, ul, ur, gamma,
                     niter, tol, result)


def non_diffusive(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0,
                  gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]):
    result[0] = 0.5*(pl + pr)
    result[1] = 0.5*(ul + ur)
    return 0


def van_leer(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0,
             gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]):
    """Van Leer Riemann solver.

    Parameters
    ----------
    rhol, rhor: double: left and right density.
    pl, pr: double: left and right pressure.
    ul, ur: double: left and right speed.
    gamma: double: Ratio of specific heats.
    niter: int: Max number of iterations to try for iterative schemes.
    tol: double: Error tolerance for convergence.
    result: list: List of length 2.
Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error. """ if ((rhol < 0) or (rhor < 0) or (pl < 0) or (pr < 0)): result[0] = 0.0 result[1] = 0.0 return 1 # local variables gamma1, gamma2 = declare('double', 2) Vl, Vr, cl, cr, zl, zr, wl, wr = declare('double', 8) ustar_l, ustar_r, pstar, pstar_old = declare('double', 4) converged, iteration = declare('int', 2) gamma2 = 1.0 + gamma gamma1 = 0.5 * gamma2/gamma smallp = 1e-25 # initialize variables wl = 0.0 wr = 0.0 zl = 0.0 zr = 0.0 # specific volumes Vl = 1./rhol Vr = 1./rhor # Lagrangean sound speeds cl = sqrt(gamma * pl * rhol) cr = sqrt(gamma * pr * rhor) # Initial guess for pstar pstar = pr - pl - cr * (ur - ul) pstar = pl + pstar * cl/(cl + cr) pstar = max(pstar, smallp) # Now Iterate using NR to obtain the star values iteration = 0 while iteration < niter: pstar_old = pstar wl = 1.0 + gamma1 * (pstar - pl)/pl wl = cl * sqrt(wl) wr = 1.0 + gamma1 * (pstar - pr)/pr wr = cr * sqrt(wr) zl = 4.0 * Vl * wl * wl zl = -zl * wl/(zl - gamma2*(pstar - pl)) zr = 4.0 * Vr * wr * wr zr = zr * wr/(zr - gamma2*(pstar - pr)) ustar_l = ul - (pstar - pl)/wl ustar_r = ur + (pstar - pr)/wr pstar = pstar + (ustar_r - ustar_l)*(zl*zr)/(zr - zl) pstar = max(smallp, pstar) converged = (abs(pstar - pstar_old)/pstar < tol) if converged: break iteration += 1 # calculate the averaged ustar ustar_l = ul - (pstar - pl)/wl ustar_r = ur + (pstar - pr)/wr ustar = 0.5 * (ustar_l + ustar_r) result[0] = pstar result[1] = ustar if converged: return 0 else: return 1 def prefun_exact(p=0.0, dk=0.0, pk=0.0, ck=0.0, g1=0.0, g2=0.0, g4=0.0, g5=0.0, g6=0.0, result=[0.0, 0.0]): """The pressure function. Updates result with f, fd. 
""" pratio, f, fd, ak, bk, qrt = declare('double', 6) if (p <= pk): pratio = p/pk f = g4*ck*(pratio**g1 - 1.0) fd = (1.0/(dk*ck))*pratio**(-g2) else: ak = g5/dk bk = g6*pk qrt = sqrt(ak/(bk+p)) f = (p-pk)*qrt fd = (1.0 - 0.5*(p-pk)/(bk + p))*qrt result[0] = f result[1] = fd def exact(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]): """Exact Riemann solver. Solve the Riemann problem for the Euler equations to determine the intermediate states. Parameters ---------- rhol, rhor: double: left and right density. pl, pr: double: left and right pressure. ul, ur: double: left and right speed. gamma: double: Ratio of specific heats. niter: int: Max number of iterations to try for iterative schemes. tol: double: Error tolerance for convergence. result: list: List of length 2. Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error. 
""" # save derived variables tmp1 = 1.0/(2*gamma) tmp2 = 1.0/(gamma - 1.0) tmp3 = 1.0/(gamma + 1.0) gamma1 = (gamma - 1.0) * tmp1 gamma2 = (gamma + 1.0) * tmp1 gamma3 = 2*gamma * tmp2 gamma4 = 2 * tmp2 gamma5 = 2 * tmp3 gamma6 = tmp3/tmp2 gamma7 = 0.5 * (gamma - 1.0) # sound speed to the left and right based on the Ideal EOS cl, cr = declare('double', 2) cl = sqrt(gamma*pl/rhol) cr = sqrt(gamma*pr/rhor) # Iteration constants i = declare('int') i = 0 # local variables qser, cup, ppv, pmin, pmax, qmax = declare('double', 6) pq, um, ptl, ptr, pm, gel, ger = declare('double', 7) pstar, pold, udifff = declare('double', 3) p, change = declare('double', 2) # initialize variables fl, fr = declare('matrix(2)', 2) p = 0.0 # check the initial data if (gamma4*(cl+cr) <= (ur-ul)): return 1 # compute the guess pressure 'pm' qser = 2.0 cup = 0.25 * (rhol + rhor)*(cl+cr) ppv = 0.5 * (pl + pr) + 0.5*(ul - ur)*cup pmin = min(pl, pr) pmax = max(pl, pr) qmax = pmax/pmin ppv = max(0.0, ppv) if ((qmax <= qser) and (pmin <= ppv) and (ppv <= pmax)): pm = ppv elif (ppv < pmin): pq = (pl/pr)**gamma1 um = (pq*ul/cl + ur/cr + gamma4*(pq - 1.0))/(pq/cl + 1.0/cr) ptl = 1.0 + gamma7 * (ul - um)/cl ptr = 1.0 + gamma7 * (um - ur)/cr pm = 0.5*(pl*ptl**gamma3 + pr*ptr**gamma3) else: gel = sqrt((gamma5/rhol)/(gamma6*pl + ppv)) ger = sqrt((gamma5/rhor)/(gamma6*pr + ppv)) pm = (gel*pl + ger*pr - (ur-ul))/(gel + ger) # the guessed value is pm pstart = pm pold = pstart udifff = ur-ul for i in range(niter): prefun_exact(pold, rhol, pl, cl, gamma1, gamma2, gamma4, gamma5, gamma6, fl) prefun_exact(pold, rhor, pr, cr, gamma1, gamma2, gamma4, gamma5, gamma6, fr) p = pold - (fl[0] + fr[0] + udifff)/(fl[1] + fr[1]) change = 2.0 * abs((p - pold)/(p + pold)) if change <= tol: break pold = p if i == niter - 1: printf("%s", "Divergence in Newton-Raphson Iteration") return 1 # compute the velocity in the star region 'um' um = 0.5 * (ul + ur + fr[0] - fl[0]) result[0] = p result[1] = um return 0 def sample(pm=0.0, 
um=0.0, s=0.0, rhol=1.0, rhor=0.0, pl=1.0, pr=0.0, ul=1.0, ur=0.0, gamma=1.4, result=[0.0, 0.0, 0.0]): """Sample the solution to the Riemann problem. Parameters ---------- pm, um : float Pressure and velocity in the star region as returned by `exact` s : float Sampling point (x/t) rhol, rhor : float Densities on either side of the discontinuity pl, pr : float Pressures on either side of the discontinuity ul, ur : float Velocities on either side of the discontinuity gamma : float Ratio of specific heats result : list(double) (rho, u, p) : Sampled density, velocity and pressure """ tmp1, tmp2, tmp3 = declare('double', 3) g1, g2, g3, g4, g5, g6 = declare('double', 6) cl, cr = declare('double', 2) # save derived variables tmp1 = 1.0/(2*gamma) tmp2 = 1.0/(gamma - 1.0) tmp3 = 1.0/(gamma + 1.0) g1 = (gamma - 1.0) * tmp1 g2 = (gamma + 1.0) * tmp1 g3 = 2*gamma * tmp2 g4 = 2 * tmp2 g5 = 2 * tmp3 g6 = tmp3/tmp2 g7 = 0.5 * (gamma - 1.0) # sound speeds at the left and right data states cl = sqrt(gamma*pl/rhol) cr = sqrt(gamma*pr/rhor) if s <= um: # sampling point lies to the left of the contact discontinuity if (pm <= pl): # left rarefaction # speed of the head of the rarefaction shl = ul - cl if (s <= shl): # sampled point is left state rho = rhol u = ul p = pl else: cml = cl*(pm/pl)**g1 # eqn (4.54) stl = um - cml # eqn (4.55) if (s > stl): # sampled point is Star left state. 
eqn (4.53) rho = rhol*(pm/pl)**(1.0/gamma) u = um p = pm else: # sampled point is inside left fan u = g5 * (cl + g7*ul + s) c = g5 * (cl + g7*(ul - s)) rho = rhol*(c/cl)**g4 p = pl * (c/cl)**g3 else: # pm > pl # left shock pml = pm/pl sl = ul - cl*sqrt(g2*pml + g1) if (s <= sl): # sampled point is left data state rho = rhol u = ul p = pl else: # sampled point is Star left state rho = rhol*(pml + g6)/(pml*g6 + 1.0) u = um p = pm else: # s > um # sampling point lies to the right of the contact discontinuity if (pm > pr): # right shock pmr = pm/pr sr = ur + cr * sqrt(g2*pmr + g1) if (s >= sr): # sampled point is right data state rho = rhor u = ur p = pr else: # sampled point is star right state rho = rhor*(pmr + g6)/(pmr*g6 + 1.0) u = um p = pm else: # right rarefaction shr = ur + cr if (s >= shr): # sampled point is right state rho = rhor u = ur p = pr else: cmr = cr*(pm/pr)**g1 STR = um + cmr if (s <= STR): # sampled point is star right rho = rhor*(pm/pr)**(1.0/gamma) u = um p = pm else: # sampled point is inside right fan u = g5*(-cr + g7*ur + s) c = g5*(cr - g7*(ur - s)) rho = rhor * (c/cr)**g4 p = pr*(c/cr)**g3 result[0] = rho result[1] = u result[2] = p def ducowicz(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]): """Ducowicz approximate Riemann solver for the Euler equations to determine the intermediate states. Parameters ---------- rhol, rhor: double: left and right density. pl, pr: double: left and right pressure. ul, ur: double: left and right speed. gamma: double: Ratio of specific heats. niter: int: Max number of iterations to try for iterative schemes. tol: double: Error tolerance for convergence. result: list: List of length 2. Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error.
""" al, ar = declare('double', 2) # strong shock parameters al = 0.5 * (gamma + 1.0) ar = 0.5 * (gamma + 1.0) csl, csr, umin, umax, plmin, prmin, bl, br = declare('double', 8) a, b, c, d, pstar, ustar, dd = declare('double', 7) # Lagrangian sound speeds csl = sqrt(gamma * pl * rhol) csr = sqrt(gamma * pr * rhor) umin = ur - 0.5 * csr/ar umax = ul + 0.5 * csl/al plmin = pl - 0.25*rhol*csl*csl/al prmin = pr - 0.25*rhor*csr*csr/ar bl = rhol*al br = rhor*ar a = (br-bl) * (prmin-plmin) b = br*umin*umin - bl*umax*umax c = br*umin - bl*umax d = br*bl * (umin-umax)*(umin-umax) # Case A : ustar - umin > 0 dd = sqrt(max(0.0, d-a)) ustar = (b + prmin-plmin)/(c - SIGN(dd, umax-umin)) if (((ustar - umin) >= 0.0) and ((ustar - umax) <= 0)): pstar = 0.5 * (plmin + prmin + br*abs(ustar-umin)*(ustar-umin) - bl*abs(ustar-umax)*(ustar-umax)) pstar = max(pstar, 0.0) result[0] = pstar result[1] = ustar return 0 # Case B: ustar - umin < 0, ustar - umax > 0 dd = sqrt(max(0.0, d+a)) ustar = (b - prmin + plmin)/(c - SIGN(dd, umax-umin)) if (((ustar-umin) <= 0.0) and ((ustar-umax) >= 0.0)): pstar = 0.5 * (plmin+prmin + br*abs(ustar-umin)*(ustar-umin) - bl*abs(ustar-umax)*(ustar-umax)) pstar = max(pstar, 0.0) result[0] = pstar result[1] = ustar return 0 a = (bl+br)*(plmin-prmin) b = bl*umax + br*umin c = 1./(bl + br) # Case C : ustar-umin >0, ustar-umax > 0 dd = sqrt(max(0.0, a-d)) ustar = (b+dd)*c if (((ustar-umin) >= 0) and ((ustar-umax) >= 0.0)): pstar = 0.5 * (plmin+prmin + br*abs(ustar-umin)*(ustar-umin) - bl*abs(ustar-umax)*(ustar-umax)) pstar = max(pstar, 0.0) result[0] = pstar result[1] = ustar return 0 # Case D: ustar - umin < 0, ustar - umax < 0 dd = sqrt(max(0.0, -a - d)) ustar = (b-dd)*c pstar = 0.5 * (plmin+prmin + br*abs(ustar-umin)*(ustar-umin) - bl*abs(ustar-umax)*(ustar-umax)) pstar = max(pstar, 0.0) result[0] = pstar result[1] = ustar return 0 def roe(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]): """Roe's approximate 
Riemann solver for the Euler equations to determine the intermediate states. Parameters ---------- rhol, rhor: double: left and right density. pl, pr: double: left and right pressure. ul, ur: double: left and right speed. gamma: double: Ratio of specific heats. niter: int: Max number of iterations to try for iterative schemes. tol: double: Error tolerance for convergence. result: list: List of length 2. Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error. """ rrhol, rrhor, denominator, plr, vlr, ulr = declare('double', 6) cslr, cslr1 = declare('double', 2) # Roe-averaged pressure and specific volume rrhol = sqrt(rhol) rrhor = sqrt(rhor) denominator = 1./(rrhor + rrhol) plr = (rrhol*pl + rrhor*pr) * denominator vlr = (rrhol/rhol + rrhor/rhor) * denominator ulr = (rrhol*ul + rrhor*ur) * denominator # average sound speed at the interface cslr = sqrt(gamma * plr/vlr) cslr1 = 1./cslr # the intermediate states result[0] = plr - 0.5 * (ur - ul) * cslr result[1] = ulr - 0.5 * (pr - pl) * cslr1 return 0 def llxf(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]): """Lax Friedrichs approximate Riemann solver for the Euler equations to determine the intermediate states. Parameters ---------- rhol, rhor: double: left and right density. pl, pr: double: left and right pressure. ul, ur: double: left and right speed. gamma: double: Ratio of specific heats. niter: int: Max number of iterations to try for iterative schemes. tol: double: Error tolerance for convergence. result: list: List of length 2. Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error. 
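Examples
--------
The LLF star state is a closed-form expression, so it is easy to sanity-check by hand. Below is a self-contained pure-Python sketch of the same formulas (the name ``llxf_star`` is illustrative and not part of PySPH). For a symmetric collision (rhol = rhor = pl = pr = 1, ul = -ur = 1, gamma = 1.4) symmetry forces ustar = 0, and pstar = 1 + sqrt(1.4):

```python
from math import sqrt


def llxf_star(rhol, rhor, pl, pr, ul, ur, gamma=1.4):
    # same formulas as llxf above, written as plain Python
    gamma1 = 1.0/(gamma - 1.0)
    # Lagrangian sound speeds; the larger one sets the dissipation
    cslr = max(sqrt(gamma*pl*rhol), sqrt(gamma*pr*rhor))
    El = pl*gamma1/rhol + 0.5*ul*ul   # specific total energy, left
    Er = pr*gamma1/rhor + 0.5*ur*ur   # specific total energy, right
    pstar = 0.5*(pl + pr - cslr*(ur - ul))
    ustar = 0.5*((pl*ul + pr*ur) - cslr*(Er - El))/pstar
    return pstar, ustar


# symmetric collision: equal states moving toward each other
pstar, ustar = llxf_star(1.0, 1.0, 1.0, 1.0, 1.0, -1.0)
```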
""" gamma1, csl, csr, cslr, el, El, er, Er, pstar = declare('double', 9) gamma1 = 1./(gamma - 1.0) # Lagrangian sound speeds csl = sqrt(gamma * pl * rhol) csr = sqrt(gamma * pr * rhor) cslr = max(csr, csl) # Total energy on either side el = pl*gamma1/rhol El = el + 0.5 * ul*ul er = pr*gamma1/rhor Er = er + 0.5 * ur*ur # the intermediate states # cdef double ustar = 0.5 * ( ul + ur - 1./cslr * (pr - pl) ) pstar = 0.5 * (pl + pr - cslr * (ur - ul)) result[0] = pstar result[1] = 1./pstar * (0.5 * ((pl*ul + pr*ur) - cslr*(Er - El))) return 0 def hllc(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]): """Harten-Lax-van Leer-Contact approximate Riemann solver for the Euler equations to determine the intermediate states. Parameters ---------- rhol, rhor: double: left and right density. pl, pr: double: left and right pressure. ul, ur: double: left and right speed. gamma: double: Ratio of specific heats. niter: int: Max number of iterations to try for iterative schemes. tol: double: Error tolerance for convergence. result: list: List of length 2. Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error. 
""" gamma1, pstar, ustar, mstar, estar = declare('double', 5) gamma1 = 1./(gamma - 1.0) # Roe-averaged interface velocity rrhol = sqrt(rhol) rrhor = sqrt(rhor) ulr = (rrhol*ul + rrhor*ur)/(rrhol + rrhor) # relative velocities at the interface vl = ul - ulr vr = ur - ulr # Sound speeds at the interface csl = sqrt(gamma * pl/rhol) csr = sqrt(gamma * pr/rhor) cslr = (rrhol*csl + rrhor*csr)/(rrhol + rrhor) # wave speed estimates at the interface sl = min(vl - csl, 0 - cslr) sr = max(vr + csr, 0 + cslr) sm = rhor*vr*(sr-vr) - rhol*vl*(sl-vl) + pl - pr sm /= (rhor*(sr-vr) - rhol*(sl-vl)) # phat phat = rhol*(vl - sl)*(vl - sm) + pl # Total energy on either side el = pl*gamma1/rhol El = rhol*(el + 0.5 * ul*ul) er = pr*gamma1/rhor Er = rhor*(er + 0.5 * ur*ur) # Momentum on either side Ml = rhol*ul Mr = rhor*ur # compute the values based on the wave speeds if (sl > 0): pstar = pl ustar = ul elif (sl <= 0.0) and (0.0 < sm): mstar = 1./(sl - sm) * ((sl - vl) * Ml + (phat - pl)) estar = 1./(sl - sm) * ((sl - vl) * El - pl*vl + phat*sm) pstar = sm*mstar + phat ustar = sm*estar + (sm + ulr)*phat ustar /= pstar elif (sm <= 0) and (0 < sr): mstar = 1./(sr - sm) * ((sr - vr) * Mr + (phat - pr)) estar = 1./(sr - sm) * ((sr - vr) * Er - pr*vr + phat*sm) pstar = sm*mstar + phat ustar = sm*estar + (sm + ulr)*phat ustar /= pstar elif (sr < 0): pstar = pr ustar = ur else: printf("%s", "Incorrect wave speeds") return 1 result[0] = pstar result[1] = ustar return 0 def hllc_ball(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]): """Harten-Lax-van Leer-Contact approximate Riemann solver for the Euler equations to determine the intermediate states. Parameters ---------- rhol, rhor: double: left and right density. pl, pr: double: left and right pressure. ul, ur: double: left and right speed. gamma: double: Ratio of specific heats. niter: int: Max number of iterations to try for iterative schemes. tol: double: Error tolerance for convergence. 
result: list: List of length 2. Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error. """ gamma1, csl, csr, cslr, rholr, pstar, ustar = declare('double', 7) Sl, Sr, ql, qr, pstar_l, pstar_r = declare('double', 6) gamma1 = 0.5 * (gamma + 1.0)/gamma # sound speeds in the undisturbed fluid csl = sqrt(gamma * pl/rhol) csr = sqrt(gamma * pr/rhor) cslr = 0.5 * (csl + csr) # averaged density rholr = 0.5 * (rhol + rhor) # provisional intermediate pressure pstar = 0.5 * (pl + pr - rholr*cslr*(ur - ul)) # Wave speed estimates (ustar is taken as the intermediate velocity) ustar = 0.5 * (ul + ur - 1./(rholr*cslr)*(pr - pl)) Hl = pstar/pl Hr = pstar/pr ql = 1.0 if (Hl > 1): ql = sqrt(1 + gamma1*(Hl - 1.0)) qr = 1.0 if (Hr > 1): qr = sqrt(1 + gamma1*(Hr - 1.0)) Sl = ul - csl*ql Sr = ur + csr*qr # compute the intermediate pressure pstar_l = pl + rhol*(ul - Sl)*(ul - ustar) pstar_r = pr + rhor*(ur - Sr)*(ur - ustar) pstar = 0.5 * (pstar_l + pstar_r) # return intermediate states result[0] = pstar result[1] = ustar return 0 def hlle(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]): """Harten-Lax-van Leer-Einfeldt approximate Riemann solver for the Euler equations to determine the intermediate states. Parameters ---------- rhol, rhor: double: left and right density. pl, pr: double: left and right pressure. ul, ur: double: left and right speed. gamma: double: Ratio of specific heats. niter: int: Max number of iterations to try for iterative schemes. tol: double: Error tolerance for convergence. result: list: List of length 2. Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error. 
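Examples
--------
A self-contained pure-Python sketch of the same closed-form star states (the name ``hlle_star`` is illustrative and not part of PySPH). For a symmetric collision (rhol = rhor = pl = pr = 1, ul = -ur = 1, gamma = 1.4) the wave speeds are +/- sqrt(1.4), so ustar = 0 and pstar = 1 + sqrt(1.4):

```python
from math import sqrt


def hlle_star(rhol, rhor, pl, pr, ul, ur, gamma=1.4):
    # same formulas as hlle above, in plain Python
    gamma1 = 1.0/(gamma - 1.0)
    rl, rr = sqrt(rhol), sqrt(rhor)
    csl = sqrt(gamma*pl*rhol)             # Lagrangian sound speeds
    csr = sqrt(gamma*pr*rhor)
    cslr = (rl*csl + rr*csr)/(rl + rr)    # Roe average
    sl = min(ul - csl, -cslr)
    sr = max(ur + csr, cslr)
    smax, smin = max(sl, sr), min(sl, sr)
    El = pl*gamma1/rhol + 0.5*ul*ul       # specific total energies
    Er = pr*gamma1/rhor + 0.5*ur*ur
    pstar = ((smax*pl - smin*pr)/(smax - smin)
             + smax*smin/(smax - smin)*(ur - ul))
    ustar = ((smax*pl*ul - smin*pr*ur)/(smax - smin)
             + smax*smin/(smax - smin)*(Er - El))/pstar
    return pstar, ustar


pstar, ustar = hlle_star(1.0, 1.0, 1.0, 1.0, 1.0, -1.0)
```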
""" gamma1 = 1./(gamma - 1.0) # Roe-averaged interface velocity rrhol = sqrt(rhol) rrhor = sqrt(rhor) # ulr = (rrhol*ul + rrhor*ur)/(rrhol + rrhor) # lagrangian sound speeds csl = sqrt(gamma * pl * rhol) csr = sqrt(gamma * pr * rhor) cslr = (rrhol*csl + rrhor*csr)/(rrhol + rrhor) # wave speed estimates sl = min(ul - csl, -cslr) sr = max(ur + csr, +cslr) smax = max(sl, sr) smin = min(sl, sr) # cdef double smax = max( (ur-ulr) + csr, cslr ) # cdef double smin = max( (ul-ulr) - csl, -cslr ) # Total energy on either side el = pl*gamma1/rhol El = el + 0.5 * ul*ul er = pr*gamma1/rhor Er = er + 0.5 * ur*ur # Momentum on either side # Ml = rhol*ul # Mr = rhor*ur # the intermediate states pstar = ((smax * pl - smin * pr)/(smax - smin) + smax*smin/(smax - smin)*(ur - ul)) ustar = ((smax * pl*ul - smin * pr*ur)/(smax - smin) + smax*smin/(smax - smin)*(Er - El)) result[0] = pstar result[1] = ustar/pstar return 0 def hll_ball(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]): """Harten-Lax-van Leer-Ball approximate Riemann solver for the Euler equations to determine the intermediate states. Parameters ---------- rhol, rhor: double: left and right density. pl, pr: double: left and right pressure. ul, ur: double: left and right speed. gamma: double: Ratio of specific heats. niter: int: Max number of iterations to try for iterative schemes. tol: double: Error tolerance for convergence. result: list: List of length 2. Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error. 
""" # Roe-averaged pressure and specific volume rrhol = sqrt(rhol) rrhor = sqrt(rhor) denominator = 1./(rrhor + rrhol) # sound speeds csl = sqrt(gamma * pl/rhol) csr = sqrt(gamma * pr/rhor) # cslr2 = denominator * (rrhol*csl*csl + rrhor*csr*csr ) eta = 0.5 * (gamma - 1.0) * (rrhor*rrhol) * denominator * denominator betal = abs(ul) betar = abs(ur) # averaged velocity and cs2 ulr = (rrhol*ul + rrhor*ur)/(rrhol*rrhor) cslr2 = (rrhol*csl*csl + rrhor*csr*csr)/(rrhol*rrhor) cslr = sqrt(cslr2 + eta * (betar - betal) * (betar - betal)) # wave speed estimates Sl = min(ulr - cslr, ul - csl) Sr = max(ulr + cslr, ur + csr) # intermediate states # rhostar = 1./(Sr - Sl) * (rhor*(Sr-ur) - rhol*(Sl-ul)) ustar = ((Sr*Sl*(rhor-rhol) + rhol*ul*Sr - rhor*ur*Sl) / (rhol*(ul-Sl) + rhor*(Sr-ur))) # pstar = 0.5 * (pl + pr - rhostar*ustar*( (Sr-ustar) + (Sl-ustar) ) + \ # rhor*ur*(ur - Sr) + rhol*ul*(ul-Sl) ) pstar = (pr*(ustar-Sl) - pl*(ustar-Sr) + rhor*ur*(ustar-Sl)*(ur-Sr) - rhol*ul*(ustar-Sr)*(ul-Sl)) pstar = pstar/(Sr - Sl) result[0] = pstar result[1] = ustar return 0 def hllsy(rhol=0.0, rhor=1.0, pl=0.0, pr=1.0, ul=0.0, ur=1.0, gamma=1.4, niter=20, tol=1e-6, result=[0.0, 0.0]): """HLL Riemann solver defined by Sirotkin and Yoh in 'A Smoothed Particle Hydrodynamics method with approximate Riemann solvers for simulation of strong explosions' (2013), Computers and Fluids. Parameters ---------- rhol, rhor: double: left and right density. pl, pr: double: left and right pressure. ul, ur: double: left and right speed. gamma: double: Ratio of specific heats. niter: int: Max number of iterations to try for iterative schemes. tol: double: Error tolerance for convergence. result: list: List of length 2. Result will contain computed pstar, ustar Returns ------- Returns 0 if the value is computed correctly else 1 if the iterations either did not converge or if there is an error. 
""" gamma1 = 1./(gamma - 1.0) # Roe-averaging factors rrhol = sqrt(rhol) rrhor = sqrt(rhor) denominator = 1./(rrhor + rrhol) # Lagrangian sound speed Eq. (35) in SY13 csl = sqrt(gamma * pl*rhol) csr = sqrt(gamma * pr*rhor) cslr = denominator * (rrhol*csl + rrhor*csr) # weighting factors Eqs. (33 - 35) in SY13 bl = max(csl, cslr) br = max(csr, cslr) wl = br/(bl + br) wr = bl/(bl + br) wlr = bl*br/(bl + br) # Total energy on either side el = pl*gamma1/rhol El = el + 0.5 * ul*ul er = pr*gamma1/rhor Er = er + 0.5 * ur*ur # intermediate states Eq.(32) in SY13 pstar = wl*pl + wr*pr - wlr*(ur - ul) ustar = wl*(pl*ul) + wr*(pr*ur) - wlr*(Er - El) result[0] = pstar result[1] = ustar/pstar return 0 HELPERS = [ SIGN, riemann_solve, non_diffusive, ducowicz, exact, hll_ball, hllc, hllc_ball, hlle, hllsy, llxf, roe, van_leer, prefun_exact ] pysph-master/pysph/sph/iisph.py000066400000000000000000000550251356347341600171600ustar00rootroot00000000000000"""The basic equations for the IISPH formulation of M. Ihmsen, J. Cornelis, B. Solenthaler, C. Horvath, M. Teschner, "Implicit Incompressible SPH," IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 3, pp. 426-435, March 2014. http://dx.doi.org/10.1109/TVCG.2013.105 """ from numpy import sqrt, fabs from compyle.api import declare from pysph.base.particle_array import get_ghost_tag from pysph.sph.equation import Equation from pysph.sph.integrator_step import IntegratorStep from pysph.base.reduce_array import serial_reduce_array, parallel_reduce_array from pysph.sph.scheme import Scheme, add_bool_argument GHOST_TAG = get_ghost_tag() class IISPHStep(IntegratorStep): """A straightforward and simple integrator to be used for IISPH. 
""" def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_uadv, d_vadv, d_wadv, d_au, d_av, d_aw, d_ax, d_ay, d_az, dt): d_u[d_idx] = d_uadv[d_idx] + dt * d_au[d_idx] d_v[d_idx] = d_vadv[d_idx] + dt * d_av[d_idx] d_w[d_idx] = d_wadv[d_idx] + dt * d_aw[d_idx] d_x[d_idx] += dt * d_u[d_idx] d_y[d_idx] += dt * d_v[d_idx] d_z[d_idx] += dt * d_w[d_idx] class NumberDensity(Equation): def initialize(self, d_idx, d_V): d_V[d_idx] = 0.0 def loop(self, d_idx, d_V, WIJ): d_V[d_idx] += WIJ class SummationDensity(Equation): def initialize(self, d_idx, d_rho): d_rho[d_idx] = 0.0 def loop(self, d_idx, d_rho, s_idx, s_m, WIJ): d_rho[d_idx] += s_m[s_idx]*WIJ class SummationDensityBoundary(Equation): def __init__(self, dest, sources, rho0): self.rho0 = rho0 super(SummationDensityBoundary, self).__init__(dest, sources) def loop(self, d_idx, d_rho, s_idx, s_V, WIJ): d_rho[d_idx] += self.rho0/s_V[s_idx]*WIJ class NormalizedSummationDensity(Equation): def initialize(self, d_idx, d_rho, d_rho_adv, d_rho0, d_V): d_rho0[d_idx] = d_rho[d_idx] d_rho[d_idx] = 0.0 d_rho_adv[d_idx] = 0.0 d_V[d_idx] = 0.0 def loop(self, d_idx, d_rho, d_rho_adv, d_V, s_idx, s_m, s_rho0, WIJ): tmp = s_m[s_idx]*WIJ d_rho[d_idx] += tmp d_rho_adv[d_idx] += tmp/s_rho0[s_idx] d_V[d_idx] += WIJ def post_loop(self, d_idx, d_rho, d_rho_adv): d_rho[d_idx] /= d_rho_adv[d_idx] class AdvectionAcceleration(Equation): def __init__(self, dest, sources, gx=0.0, gy=0.0, gz=0.0): self.gx = gx self.gy = gy self.gz = gz super(AdvectionAcceleration, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw, d_uadv, d_vadv, d_wadv): d_au[d_idx] = self.gx d_av[d_idx] = self.gy d_aw[d_idx] = self.gz d_uadv[d_idx] = 0.0 d_vadv[d_idx] = 0.0 d_wadv[d_idx] = 0.0 def post_loop(self, d_idx, d_au, d_av, d_aw, d_uadv, d_vadv, d_wadv, d_u, d_v, d_w, dt=0.0): d_uadv[d_idx] = d_u[d_idx] + dt*d_au[d_idx] d_vadv[d_idx] = d_v[d_idx] + dt*d_av[d_idx] d_wadv[d_idx] = d_w[d_idx] + dt*d_aw[d_idx] class ViscosityAcceleration(Equation): 
def __init__(self, dest, sources, nu): self.nu = nu super(ViscosityAcceleration, self).__init__(dest, sources) def loop(self, d_idx, d_au, d_av, d_aw, s_idx, s_m, EPS, VIJ, XIJ, RHOIJ1, R2IJ, DWIJ): dwijdotxij = DWIJ[0]*XIJ[0] + DWIJ[1]*XIJ[1] + DWIJ[2]*XIJ[2] fac = 2.0*self.nu*s_m[s_idx]*RHOIJ1*dwijdotxij/(R2IJ + EPS) d_au[d_idx] += fac*VIJ[0] d_av[d_idx] += fac*VIJ[1] d_aw[d_idx] += fac*VIJ[2] class ViscosityAccelerationBoundary(Equation): """The acceleration on the fluid due to a boundary. """ def __init__(self, dest, sources, rho0, nu): self.nu = nu self.rho0 = rho0 super(ViscosityAccelerationBoundary, self).__init__(dest, sources) def loop(self, d_idx, d_au, d_av, d_aw, d_rho, s_idx, s_V, EPS, VIJ, XIJ, R2IJ, DWIJ): phi_b = self.rho0/(s_V[s_idx]*d_rho[d_idx]) dwijdotxij = DWIJ[0]*XIJ[0] + DWIJ[1]*XIJ[1] + DWIJ[2]*XIJ[2] fac = 2.0*self.nu*phi_b*dwijdotxij/(R2IJ + EPS) d_au[d_idx] += fac*VIJ[0] d_av[d_idx] += fac*VIJ[1] d_aw[d_idx] += fac*VIJ[2] class ComputeDII(Equation): def initialize(self, d_idx, d_dii0, d_dii1, d_dii2): d_dii0[d_idx] = 0.0 d_dii1[d_idx] = 0.0 d_dii2[d_idx] = 0.0 def loop(self, d_idx, d_rho, d_dii0, d_dii1, d_dii2, s_idx, s_m, DWIJ): rho_1 = 1.0/d_rho[d_idx] fac = -s_m[s_idx]*rho_1*rho_1 d_dii0[d_idx] += fac*DWIJ[0] d_dii1[d_idx] += fac*DWIJ[1] d_dii2[d_idx] += fac*DWIJ[2] class ComputeDIIBoundary(Equation): def __init__(self, dest, sources, rho0): self.rho0 = rho0 super(ComputeDIIBoundary, self).__init__(dest, sources) def loop(self, d_idx, d_dii0, d_dii1, d_dii2, d_rho, s_idx, s_m, s_V, DWIJ): rhoi1 = 1.0/d_rho[d_idx] fac = -rhoi1*rhoi1*self.rho0/s_V[s_idx] d_dii0[d_idx] += fac*DWIJ[0] d_dii1[d_idx] += fac*DWIJ[1] d_dii2[d_idx] += fac*DWIJ[2] class ComputeRhoAdvection(Equation): def initialize(self, d_idx, d_rho_adv, d_rho, d_p0, d_p, d_piter, d_aii): d_rho_adv[d_idx] = d_rho[d_idx] d_p0[d_idx] = d_p[d_idx] d_piter[d_idx] = 0.5*d_p[d_idx] def loop(self, d_idx, d_rho, d_rho_adv, d_uadv, d_vadv, d_wadv, d_u, d_v, d_w, s_idx, s_m, s_uadv, 
s_vadv, s_wadv, DWIJ, dt=0.0): vijdotdwij = (d_uadv[d_idx] - s_uadv[s_idx])*DWIJ[0] + \ (d_vadv[d_idx] - s_vadv[s_idx])*DWIJ[1] + \ (d_wadv[d_idx] - s_wadv[s_idx])*DWIJ[2] d_rho_adv[d_idx] += dt*s_m[s_idx]*vijdotdwij class ComputeRhoBoundary(Equation): def __init__(self, dest, sources, rho0): self.rho0 = rho0 super(ComputeRhoBoundary, self).__init__(dest, sources) def loop(self, d_idx, d_rho, d_rho_adv, d_uadv, d_vadv, d_wadv, s_idx, s_u, s_v, s_w, s_V, WIJ, DWIJ, dt=0.0): phi_b = self.rho0/s_V[s_idx] vijdotdwij = (d_uadv[d_idx] - s_u[s_idx])*DWIJ[0] + \ (d_vadv[d_idx] - s_v[s_idx])*DWIJ[1] + \ (d_wadv[d_idx] - s_w[s_idx])*DWIJ[2] d_rho_adv[d_idx] += dt*phi_b*vijdotdwij class ComputeAII(Equation): def initialize(self, d_idx, d_aii): d_aii[d_idx] = 0.0 def loop(self, d_idx, d_aii, d_dii0, d_dii1, d_dii2, d_m, d_rho, s_idx, s_m, s_rho, DWIJ): rho1 = 1.0/d_rho[d_idx] fac = d_m[d_idx]*rho1*rho1 # The following is m_j (d_ii - d_ji) . DWIJ # DWIJ = -DWJI dijdotdwij = (d_dii0[d_idx] - fac*DWIJ[0])*DWIJ[0] + \ (d_dii1[d_idx] - fac*DWIJ[1])*DWIJ[1] + \ (d_dii2[d_idx] - fac*DWIJ[2])*DWIJ[2] d_aii[d_idx] += s_m[s_idx]*dijdotdwij class ComputeAIIBoundary(Equation): """ This is important and not really discussed in the original IISPH paper. 
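In the notation of the code, the term accumulated here mirrors ComputeAII with the boundary particle's pseudo-mass Phi_b = rho0/V_b taking the place of m_j (the factor dt**2 is applied later, in PressureSolve.post_loop):

```latex
a_{ii} \mathrel{+}= \sum_{b} \Phi_b\,(\mathbf{d}_{ii} - \mathbf{d}_{bi}) \cdot \nabla W_{ib},
\qquad
\mathbf{d}_{bi} = -\frac{m_i}{\rho_i^2}\,\nabla W_{bi}
                = \frac{m_i}{\rho_i^2}\,\nabla W_{ib},
\qquad
\Phi_b = \frac{\rho_0}{V_b}
```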
""" def __init__(self, dest, sources, rho0): self.rho0 = rho0 super(ComputeAIIBoundary, self).__init__(dest, sources) def loop(self, d_idx, d_m, d_aii, d_dii0, d_dii1, d_dii2, d_rho, s_idx, s_m, s_V, DWIJ): phi_b = self.rho0/s_V[s_idx] rho1 = 1.0/d_rho[d_idx] fac = d_m[d_idx]*rho1*rho1 dijdotdwij = ((d_dii0[d_idx] - fac*DWIJ[0])*DWIJ[0] + (d_dii1[d_idx] - fac*DWIJ[1])*DWIJ[1] + (d_dii2[d_idx] - fac*DWIJ[2])*DWIJ[2]) d_aii[d_idx] += phi_b*dijdotdwij class ComputeDIJPJ(Equation): def initialize(self, d_idx, d_dijpj0, d_dijpj1, d_dijpj2): d_dijpj0[d_idx] = 0.0 d_dijpj1[d_idx] = 0.0 d_dijpj2[d_idx] = 0.0 def loop(self, d_idx, d_dijpj0, d_dijpj1, d_dijpj2, s_idx, s_m, s_rho, s_piter, DWIJ): rho1 = 1.0/s_rho[s_idx] fac = -s_m[s_idx]*rho1*rho1*s_piter[s_idx] d_dijpj0[d_idx] += fac*DWIJ[0] d_dijpj1[d_idx] += fac*DWIJ[1] d_dijpj2[d_idx] += fac*DWIJ[2] class UpdateGhostProps(Equation): def __init__(self, dest, sources=None): super(UpdateGhostProps, self).__init__(dest, sources) # We do this to ensure that the ghost tag is indeed 2. # If not the initialize method will never work. 
assert GHOST_TAG == 2 def initialize(self, d_idx, d_tag, d_orig_idx, d_dijpj0, d_dijpj1, d_dijpj2, d_dii0, d_dii1, d_dii2, d_piter): idx = declare('int') if d_tag[d_idx] == 2: idx = d_orig_idx[d_idx] d_dijpj0[d_idx] = d_dijpj0[idx] d_dijpj1[d_idx] = d_dijpj1[idx] d_dijpj2[d_idx] = d_dijpj2[idx] d_dii0[d_idx] = d_dii0[idx] d_dii1[d_idx] = d_dii1[idx] d_dii2[d_idx] = d_dii2[idx] d_piter[d_idx] = d_piter[idx] class PressureSolve(Equation): def __init__(self, dest, sources, rho0, omega=0.5, tolerance=1e-2, debug=False): self.rho0 = rho0 self.omega = omega self.compression = 0.0 self.debug = debug self.tolerance = tolerance super(PressureSolve, self).__init__(dest, sources) def initialize(self, d_idx, d_p, d_compression): d_p[d_idx] = 0.0 d_compression[d_idx] = 0.0 def loop(self, d_idx, d_p, d_piter, d_rho, d_m, d_dijpj0, d_dijpj1, d_dijpj2, s_idx, s_m, s_dii0, s_dii1, s_dii2, s_piter, s_dijpj0, s_dijpj1, s_dijpj2, DWIJ): # Note that a good way to check this is to see that when # d_idx == s_idx the contribution is zero, as is expected. rho1 = 1.0/d_rho[d_idx] fac = d_m[d_idx]*rho1*rho1*d_piter[d_idx] djkpk0 = s_dijpj0[s_idx] - fac*DWIJ[0] djkpk1 = s_dijpj1[s_idx] - fac*DWIJ[1] djkpk2 = s_dijpj2[s_idx] - fac*DWIJ[2] tmp0 = d_dijpj0[d_idx] - s_dii0[s_idx]*s_piter[s_idx] - djkpk0 tmp1 = d_dijpj1[d_idx] - s_dii1[s_idx]*s_piter[s_idx] - djkpk1 tmp2 = d_dijpj2[d_idx] - s_dii2[s_idx]*s_piter[s_idx] - djkpk2 tmpdotdwij = tmp0*DWIJ[0] + tmp1*DWIJ[1] + tmp2*DWIJ[2] # This is corrected in the post_loop. d_p[d_idx] += s_m[s_idx]*tmpdotdwij def post_loop(self, d_idx, d_piter, d_p0, d_p, d_aii, d_rho_adv, d_rho, d_compression, dt=0.0): dt2 = dt*dt # Recall that d_p now has \sum_{j\neq i} a_ij p_j tmp = self.rho0 - d_rho_adv[d_idx] - d_p[d_idx]*dt2 dnr = d_aii[d_idx]*dt2 if fabs(dnr) > 1e-9: # Clamp pressure to positive values. 
p = max((1.0 - self.omega)*d_piter[d_idx] + self.omega/dnr*tmp, 0.0) else: p = 0.0 if p != 0.0: d_compression[d_idx] = fabs(p*dnr - tmp) + self.rho0 else: d_compression[d_idx] = self.rho0 d_piter[d_idx] = p d_p[d_idx] = p def reduce(self, dst, t, dt): dst.tmp_comp[0] = serial_reduce_array(dst.compression > 0.0, 'sum') dst.tmp_comp[1] = serial_reduce_array(dst.compression, 'sum') dst.tmp_comp[:] = parallel_reduce_array(dst.tmp_comp, 'sum') if dst.tmp_comp[0] > 0: avg_rho = dst.tmp_comp[1]/dst.tmp_comp[0] else: avg_rho = self.rho0 self.compression = fabs(avg_rho - self.rho0)/self.rho0 def converged(self): debug = self.debug compression = self.compression if compression > self.tolerance: if debug: print("Not converged:", compression) return -1.0 else: if debug: print("Converged:", compression) return 1.0 class PressureSolveBoundary(Equation): def __init__(self, dest, sources, rho0): self.rho0 = rho0 super(PressureSolveBoundary, self).__init__(dest, sources) def loop(self, d_idx, d_p, d_rho, d_dijpj0, d_dijpj1, d_dijpj2, s_idx, s_V, DWIJ): phi_b = self.rho0/s_V[s_idx] dijdotwij = (d_dijpj0[d_idx]*DWIJ[0] + d_dijpj1[d_idx]*DWIJ[1] + d_dijpj2[d_idx]*DWIJ[2]) d_p[d_idx] += phi_b*dijdotwij class UpdateGhostPressure(Equation): def initialize(self, d_idx, d_tag, d_orig_idx, d_p, d_piter): idx = declare('int') if d_tag[d_idx] == 2: idx = d_orig_idx[d_idx] d_piter[d_idx] = d_piter[idx] d_p[d_idx] = d_p[idx] class PressureForce(Equation): def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, d_rho, d_p, d_au, d_av, d_aw, s_idx, s_m, s_rho, s_p, DWIJ): rhoi1 = 1.0/d_rho[d_idx] rhoj1 = 1.0/s_rho[s_idx] fac = -s_m[s_idx]*(d_p[d_idx]*rhoi1*rhoi1 + s_p[s_idx]*rhoj1*rhoj1) d_au[d_idx] += fac*DWIJ[0] d_av[d_idx] += fac*DWIJ[1] d_aw[d_idx] += fac*DWIJ[2] def post_loop(self, d_idx, d_au, d_av, d_aw, d_uadv, d_vadv, d_wadv, d_dt_cfl, d_dt_force): fac = d_au[d_idx]*d_au[d_idx] + d_av[d_idx]*d_av[d_idx] +\ 
d_aw[d_idx]*d_aw[d_idx] vmag = sqrt(d_uadv[d_idx]*d_uadv[d_idx] + d_vadv[d_idx]*d_vadv[d_idx] + d_wadv[d_idx]*d_wadv[d_idx]) d_dt_cfl[d_idx] = 2.0*vmag d_dt_force[d_idx] = 2.0*fac class PressureForceBoundary(Equation): def __init__(self, dest, sources, rho0): self.rho0 = rho0 super(PressureForceBoundary, self).__init__(dest, sources) def loop(self, d_idx, d_rho, d_au, d_av, d_aw, d_p, s_idx, s_V, DWIJ): rho1 = 1.0/d_rho[d_idx] fac = -d_p[d_idx]*rho1*rho1*self.rho0/s_V[s_idx] d_au[d_idx] += fac*DWIJ[0] d_av[d_idx] += fac*DWIJ[1] d_aw[d_idx] += fac*DWIJ[2] class IISPHScheme(Scheme): def __init__(self, fluids, solids, dim, rho0, nu=0.0, gx=0.0, gy=0.0, gz=0.0, omega=0.5, tolerance=1e-2, debug=False, has_ghosts=False): '''The IISPH scheme Parameters ---------- fluids : list(str) List of names of fluid particle arrays. solids : list(str) List of names of solid particle arrays. dim: int Dimensionality of the problem. rho0 : float Density of fluid. nu : float Kinematic viscosity. gx, gy, gz : float Components of body acceleration (gravity, external forcing etc.) omega : float Relaxation parameter for relaxed-Jacobi iterations. tolerance: float Tolerance for the convergence of pressure iterations as a fraction. debug: bool Produce some debugging output on iterations. has_ghosts: bool The problem has ghost particles so add equations for those. ''' self.fluids = fluids self.solids = solids self.dim = dim self.rho0 = rho0 self.nu = nu self.gx = gx self.gy = gy self.gz = gz self.omega = omega self.tolerance = tolerance self.debug = debug self.has_ghosts = has_ghosts def add_user_options(self, group): group.add_argument( '--omega', action="store", type=float, dest="omega", default=None, help="Relaxation parameter for Jacobi iterations."
) group.add_argument( '--tolerance', action='store', type=float, dest='tolerance', default=None, help='Tolerance for convergence of iterations as a fraction' ) add_bool_argument( group, 'iisph-debug', dest='debug', default=None, help="Produce some debugging output on convergence of iterations." ) def consume_user_options(self, options): vars = ['omega', 'tolerance', 'debug'] data = dict((var, self._smart_getattr(options, var)) for var in vars) self.configure(**data) def configure_solver(self, kernel=None, integrator_cls=None, extra_steppers=None, **kw): """Configure the solver to be generated. This is to be called before `get_solver` is called. Parameters ---------- kernel : Kernel instance. Kernel to use, if none is passed a default one is used. integrator_cls : pysph.sph.integrator.Integrator Integrator class to use, use sensible default if none is passed. extra_steppers : dict Additional integration stepper instances as a dict. **kw : extra arguments Any additional keyword args are passed to the solver instance.
""" from pysph.base.kernels import CubicSpline from pysph.sph.integrator import EulerIntegrator from pysph.solver.solver import Solver if kernel is None: kernel = CubicSpline(dim=self.dim) steppers = {} if extra_steppers is not None: steppers.update(extra_steppers) for fluid in self.fluids: if fluid not in steppers: steppers[fluid] = IISPHStep() cls = integrator_cls if integrator_cls is not None else EulerIntegrator integrator = cls(**steppers) self.solver = Solver( dim=self.dim, integrator=integrator, kernel=kernel, **kw ) def get_equations(self): from pysph.sph.equation import Group has_ghosts = self.has_ghosts equations = [] if self.solids: g1 = Group( equations=[NumberDensity(dest=x, sources=[x]) for x in self.solids] ) equations.append(g1) g2 = Group( equations=[SummationDensity(dest=x, sources=self.fluids) for x in self.fluids], real=False ) equations.append(g2) if self.solids: g3 = Group( equations=[ SummationDensityBoundary( dest=x, sources=self.solids, rho0=self.rho0 ) for x in self.fluids], real=False ) equations.append(g3) eq = [] for fluid in self.fluids: eq.extend([ AdvectionAcceleration( dest=fluid, sources=None, gx=self.gx, gy=self.gy, gz=self.gz ), ComputeDII(dest=fluid, sources=self.fluids) ]) if self.nu > 0.0: eq.append( ViscosityAcceleration( dest=fluid, sources=self.fluids, nu=self.nu ) ) if self.solids: if self.nu > 0.0: eq.append( ViscosityAccelerationBoundary( dest=fluid, sources=self.solids, nu=self.nu, rho0=self.rho0, ) ) eq.append( ComputeDIIBoundary(dest=fluid, sources=self.solids, rho0=self.rho0) ) g4 = Group(equations=eq, real=False) equations.append(g4) eq = [] for fluid in self.fluids: eq.extend([ ComputeRhoAdvection(dest=fluid, sources=self.fluids), ComputeAII(dest=fluid, sources=self.fluids), ]) if self.solids: eq.extend([ ComputeRhoBoundary(dest=fluid, sources=self.solids, rho0=self.rho0), ComputeAIIBoundary(dest=fluid, sources=self.solids, rho0=self.rho0), ]) g5 = Group(equations=eq) equations.append(g5) sg1 = Group( equations=[ 
ComputeDIJPJ(dest=x, sources=self.fluids) for x in self.fluids ] ) eq = [] for fluid in self.fluids: eq.append( PressureSolve(dest=fluid, sources=self.fluids, rho0=self.rho0, tolerance=self.tolerance, debug=self.debug) ) if self.solids: eq.append( PressureSolveBoundary( dest=fluid, sources=self.solids, rho0=self.rho0, ) ) sg2 = Group(equations=eq) if has_ghosts: ghost1 = Group( equations=[UpdateGhostProps(dest=x, sources=None) for x in self.fluids], real=False ) ghost2 = Group( equations=[UpdateGhostPressure(dest=x, sources=None) for x in self.fluids], real=False ) solver_eqs = [sg1, ghost1, sg2, ghost2] else: solver_eqs = [sg1, sg2] g6 = Group( equations=solver_eqs, iterate=True, max_iterations=30, min_iterations=2 ) equations.append(g6) eq = [] for fluid in self.fluids: eq.append( PressureForce(dest=fluid, sources=self.fluids) ) if self.solids: eq.append( PressureForceBoundary( dest=fluid, sources=self.solids, rho0=self.rho0 ) ) g7 = Group(equations=eq) equations.append(g7) return equations def setup_properties(self, particles, clean=True): """Setup the particle arrays so they have the right set of properties for this scheme. Parameters ---------- particles : list List of particle arrays. clean : bool If True, removes any unnecessary properties. """ from pysph.base.utils import get_particle_array_iisph dummy = get_particle_array_iisph() props = set(dummy.properties.keys()) for pa in particles: self._ensure_properties(pa, props, clean) for c, v in dummy.constants.items(): if c not in pa.constants: pa.add_constant(c, v) pa.set_output_arrays(dummy.output_property_arrays) if self.has_ghosts: particle_arrays = dict([(p.name, p) for p in particles]) for fluid in self.fluids: pa = particle_arrays[fluid] pa.add_property('orig_idx', type='int') pysph-master/pysph/sph/integrator.py000066400000000000000000000365741356347341600202320ustar00rootroot00000000000000"""Basic code for the templated integrators. Currently we only support two-step integrators. 
These classes are used to generate the code for the actual integrators from the `sph_eval` module. """ from numpy import sqrt import numpy as np # Local imports. from .integrator_step import IntegratorStep ############################################################################### # `Integrator` class ############################################################################### class Integrator(object): r"""Generic class for multi-step integrators in PySPH for a system of ODES of the form :math:`\frac{dy}{dt} = F(y)`. """ def __init__(self, **kw): """Pass fluid names and suitable `IntegratorStep` instances. For example:: >>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep()) where "fluid" and "solid" are the names of the particle arrays. """ for array_name, integrator_step in kw.items(): if not isinstance(integrator_step, IntegratorStep): msg = ('Stepper %s must be an instance of ' 'IntegratorStep' % (integrator_step)) raise ValueError(msg) self.steppers = kw self.parallel_manager = None self.nnps = None self.acceleration_evals = None # This is set later when the underlying compiled integrator is created # by the SPHCompiler. 
self.c_integrator = None self._has_dt_adapt = None self.fixed_h = False def __repr__(self): name = self.__class__.__name__ s = self.steppers args = ', '.join(['%s=%s' % (k, s[k]) for k in s]) return '%s(%s)' % (name, args) def _get_dt_adapt_factors(self): a_eval = self.acceleration_evals[0] factors = [-1.0, -1.0, -1.0] for pa in a_eval.particle_arrays: prop_names = [] for i, name in enumerate(('dt_cfl', 'dt_force', 'dt_visc')): if name in pa.properties: if pa.gpu: prop_names.append(name) else: max_val = np.max(pa.get(name)) factors[i] = max(factors[i], max_val) if pa.gpu: pa.gpu.update_minmax_cl(prop_names, only_max=True) for i, name in enumerate(('dt_cfl', 'dt_force', 'dt_visc')): if name in pa.properties: max_val = getattr(pa.gpu, name).maximum factors[i] = max(factors[i], max_val) cfl_f, force_f, visc_f = factors return cfl_f, force_f, visc_f def _get_explicit_dt_adapt(self): """Checks if the user is defining a 'dt_adapt' property where the timestep is directly specified. This returns None if no such parameter is found, else it returns the allowed timestep. """ a_eval = self.acceleration_evals[0] if self._has_dt_adapt is None: self._has_dt_adapt = any( 'dt_adapt' in pa.properties for pa in a_eval.particle_arrays ) if self._has_dt_adapt: dt_min = np.inf for pa in a_eval.particle_arrays: if 'dt_adapt' in pa.properties: if pa.gpu is not None: if pa.gpu.get_number_of_particles() > 0: from compyle.array import minimum min_val = minimum(pa.gpu.dt_adapt) else: min_val = np.inf else: if pa.get_number_of_particles() > 0: min_val = np.min(pa.dt_adapt) else: min_val = np.inf dt_min = min(dt_min, min_val) if dt_min > 0.0: return dt_min else: return None else: return None ########################################################################## # Public interface. ########################################################################## def set_acceleration_evals(self, a_evals): '''Set the acceleration evaluators. This must be done before the integrator is used. 
If you are using the SPHCompiler, it automatically calls this method. ''' if isinstance(a_evals, (list, tuple)): self.acceleration_evals = a_evals else: self.acceleration_evals = [a_evals] def set_fixed_h(self, fixed_h): # compute h_minimum once for constant smoothing lengths if fixed_h: self.compute_h_minimum() self.fixed_h = fixed_h def set_nnps(self, nnps): self.nnps = nnps self.c_integrator.set_nnps(nnps) def compute_h_minimum(self): a_eval = self.acceleration_evals[0] hmin = 1.0 for pa in a_eval.particle_arrays: if pa.gpu: h = pa.gpu.get_device_array('h') else: h = pa.get_carray('h') if h.minimum < hmin: hmin = h.minimum self.h_minimum = hmin def compute_time_step(self, dt, cfl): """If there are any adaptive timestep constraints, the appropriate timestep is returned, else None is returned. """ dt_adapt = self._get_explicit_dt_adapt() if dt_adapt is not None: return dt_adapt dt_cfl_fac, dt_force_fac, dt_visc_fac = self._get_dt_adapt_factors() # iterate over particles and find hmin if using variable h if not self.fixed_h: self.compute_h_minimum() hmin = self.h_minimum # default time steps set to some large value dt_cfl = dt_force = dt_viscous = np.inf # stable time step based on courant condition if dt_cfl_fac > 0: dt_cfl = hmin/dt_cfl_fac # stable time step based on force criterion if dt_force_fac > 0: dt_force = sqrt(hmin/sqrt(dt_force_fac)) # stable time step based on viscous condition if dt_visc_fac > 0: dt_viscous = hmin/dt_visc_fac # minimum of all three dt_min = min(dt_cfl, dt_force, dt_viscous) # return the computed time steps. If dt factors aren't # defined, the default dt is returned if dt_min <= 0.0 or np.isinf(dt_min): return None else: return cfl*dt_min def one_timestep(self, t, dt): """User written function that actually does one timestep. This function is used in the high-performance Cython implementation. The assumptions one may make are the following: - t and dt are passed. 
- the following methods are available: - self.initialize() - self.stage1(), self.stage2() etc. depending on the number of stages available. - self.compute_accelerations(index=0, update_nnps=True) - self.do_post_stage(stage_dt, stage_count_from_1) - self.update_domain() Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator. """ self.initialize() # Predict self.stage1() self.update_domain() # Call any post-stage functions. self.do_post_stage(0.5*dt, 1) self.compute_accelerations() # Correct self.stage2() self.update_domain() # Call any post-stage functions. self.do_post_stage(dt, 2) def set_compiled_object(self, c_integrator): """Set the high-performance compiled object to call internally. """ self.c_integrator = c_integrator def set_parallel_manager(self, pm): self.parallel_manager = pm self.c_integrator.set_parallel_manager(pm) def set_post_stage_callback(self, callback): """This callback is called when the particles are moved, i.e one stage of the integration is done. This callback is passed the current time value, the timestep and the stage. The current time value is t + stage_dt, for example this would be 0.5*dt for a two stage predictor corrector integrator. """ self.c_integrator.set_post_stage_callback(callback) def step(self, time, dt): """This function is called by the solver. To implement the integration step please override the ``one_timestep`` method. """ self.c_integrator.step(time, dt) def compute_accelerations(self, index=0, update_nnps=True): if update_nnps: # update NNPS since particles have moved if self.parallel_manager: self.parallel_manager.update() self.nnps.update() # Evaluate c_integrator = self.c_integrator a_eval = self.acceleration_evals[index] a_eval.compute(c_integrator.t, c_integrator.dt) def initial_acceleration(self, t, dt): """Compute the initial accelerations if needed before the iterations start. 
        The default implementation only does this for the first acceleration
        evaluator. So if you have multiple evaluators, you must override this
        method in a subclass.
        """
        self.acceleration_evals[0].compute(t, dt)

    def update_domain(self):
        """Update the domain of the simulation.

        This is to be called when particles move so the ghost particles
        (periodicity, mirror boundary conditions) can be reset. Further, this
        also recalculates the appropriate cell size based on the particle
        kernel radius, `h`. This should be called explicitly when desired but
        usually this is done when the particles are moved or the `h` is
        changed.

        The integrator should explicitly call this when needed in the
        `one_timestep` method.
        """
        self.nnps.update_domain()


###############################################################################
# `EulerIntegrator` class
###############################################################################
class EulerIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.compute_accelerations()
        self.stage1()
        self.update_domain()
        self.do_post_stage(dt, 1)


###############################################################################
# `PECIntegrator` class
###############################################################################
class PECIntegrator(Integrator):
    r"""
    In the Predict-Evaluate-Correct (PEC) mode, the system is advanced using:

    .. math::

        y^{n+\frac{1}{2}} = y^n + \frac{\Delta t}{2}F(y^{n-\frac{1}{2}})
        --> Predict

        F(y^{n+\frac{1}{2}}) --> Evaluate

        y^{n + 1} = y^n + \Delta t F(y^{n+\frac{1}{2}})
    """
    def one_timestep(self, t, dt):
        self.initialize()

        # Predict
        self.stage1()
        self.update_domain()

        # Call any post-stage functions.
        self.do_post_stage(0.5*dt, 1)

        self.compute_accelerations()

        # Correct
        self.stage2()
        self.update_domain()

        # Call any post-stage functions.
        self.do_post_stage(dt, 2)


###############################################################################
# `EPECIntegrator` class
###############################################################################
class EPECIntegrator(Integrator):
    r"""
    Predictor corrector integrators can have two modes of operation.

    In the Evaluate-Predict-Evaluate-Correct (EPEC) mode, the system is
    advanced using:

    .. math::

        F(y^n) --> Evaluate

        y^{n+\frac{1}{2}} = y^n + F(y^n) --> Predict

        F(y^{n+\frac{1}{2}}) --> Evaluate

        y^{n+1} = y^n + \Delta t F(y^{n+\frac{1}{2}}) --> Correct

    Notes:

    The Evaluate stage of the integrator forces a function evaluation.
    Therefore, the PEC mode is much faster but relies on old accelerations for
    the Prediction stage.

    In the EPEC mode, the final corrector can be modified to:

    :math:`y^{n+1} = y^n + \frac{\Delta t}{2}\left( F(y^n) +
    F(y^{n+\frac{1}{2}}) \right)`

    This would require additional storage for the accelerations.
    """
    def one_timestep(self, t, dt):
        self.initialize()

        self.compute_accelerations()

        # Predict
        self.stage1()
        self.update_domain()

        # Call any post-stage functions.
        self.do_post_stage(0.5*dt, 1)

        self.compute_accelerations()

        # Correct
        self.stage2()
        self.update_domain()

        # Call any post-stage functions.
        self.do_post_stage(dt, 2)


###############################################################################
# `TVDRK3Integrator` class
###############################################################################
class TVDRK3Integrator(Integrator):
    r"""
    In the TVD-RK3 integrator, the system is advanced using:
    .. math::

        y^{n + \frac{1}{3}} = y^n + \Delta t F( y^n )

        y^{n + \frac{2}{3}} = \frac{3}{4}y^n + \frac{1}{4}(y^{n + \frac{1}{3}}
            + \Delta t F(y^{n + \frac{1}{3}}))

        y^{n + 1} = \frac{1}{3}y^n + \frac{2}{3}(y^{n + \frac{2}{3}}
            + \Delta t F(y^{n + \frac{2}{3}}))
    """
    def one_timestep(self, t, dt):
        self.initialize()

        # stage 1
        self.compute_accelerations()
        self.stage1()
        self.update_domain()
        self.do_post_stage(1./3*dt, 1)

        # stage 2
        self.compute_accelerations()
        self.stage2()
        self.update_domain()
        self.do_post_stage(2./3*dt, 2)

        # stage 3 and end
        self.compute_accelerations()
        self.stage3()
        self.update_domain()
        self.do_post_stage(dt, 3)


###############################################################################
class LeapFrogIntegrator(PECIntegrator):
    r"""A leap-frog integrator.
    """
    def one_timestep(self, t, dt):
        self.stage1()
        self.update_domain()
        self.do_post_stage(0.5*dt, 1)

        self.compute_accelerations()

        self.stage2()
        self.update_domain()
        self.do_post_stage(dt, 2)


###############################################################################
class PEFRLIntegrator(Integrator):
    r"""A Position-Extended Forest-Ruth-Like integrator [Omeylan2002]_

    References
    ----------

    .. [Omeylan2002] I.M. Omelyan, I.M. Mryglod and R.
       Folk, "Optimized Forest-Ruth- and Suzuki-like algorithms for
       integration of motion in many-body systems", Computer Physics
       Communications 146, 188 (2002). http://arxiv.org/abs/cond-mat/0110585
    """
    def one_timestep(self, t, dt):
        self.stage1()
        self.update_domain()
        self.do_post_stage(0.1786178958448091*dt, 1)

        self.compute_accelerations()
        self.stage2()
        self.update_domain()
        self.do_post_stage(0.1123533131749906*dt, 2)

        self.compute_accelerations()
        self.stage3()
        self.update_domain()
        self.do_post_stage(0.8876466868250094*dt, 3)

        self.compute_accelerations()
        self.stage4()
        self.update_domain()
        self.do_post_stage(0.8213821041551909*dt, 4)

        self.compute_accelerations()
        self.stage5()
        self.update_domain()
        self.do_post_stage(dt, 5)

pysph-master/pysph/sph/integrator_cython.mako
# Automatically generated, do not edit.
#cython: cdivision=True
<%def name="indent(text, level=0)" buffered="True">
% for l in text.splitlines():
${' '*4*level}${l}
% endfor
</%def>

from libc.math cimport *
from cython import address
from pysph.base.nnps_base cimport NNPS

${helper.get_helper_code()}

${helper.get_stepper_code()}

# #############################################################################
cdef class Integrator:
    cdef public ParticleArrayWrapper ${helper.get_particle_array_names()}
    cdef public AccelerationEval acceleration_eval
    cdef public object integrator
    cdef public double dt, t, orig_t
    cdef object _post_stage_callback
    cdef object steppers
    ${indent(helper.get_stepper_defs(), 1)}

    def __init__(self, integrator, acceleration_eval, steppers):
        self.integrator = integrator
        self.acceleration_eval = acceleration_eval
        self.steppers = steppers
        self._post_stage_callback = None
        % for name in sorted(helper.object.steppers.keys()):
        self.${name} = acceleration_eval.${name}
        % endfor
        ${indent(helper.get_stepper_init(), 2)}

    def set_nnps(self, NNPS nnps):
        pass

    def set_parallel_manager(self, object pm):
        pass

    def set_post_stage_callback(self,
object callback): self._post_stage_callback = callback cpdef compute_accelerations(self, int index=0, update_nnps=True): self.integrator.compute_accelerations(index, update_nnps) cpdef update_domain(self): self.integrator.update_domain() cpdef do_post_stage(self, double stage_dt, int stage): """This is called after every stage of the integrator. Internally, this calls any post_stage_callback function that has been given to take suitable action. Parameters ---------- - stage_dt : double: the timestep taken at this stage. - stage : int: the stage completed (starting from 1). """ self.t = self.orig_t + stage_dt if self._post_stage_callback is not None: self._post_stage_callback(self.t, self.dt, stage) cpdef step(self, double t, double dt): """Main step routine. """ self.orig_t = t self.t = t self.dt = dt self.one_timestep(t, dt) cdef one_timestep(self, double t, double dt): ${indent(helper.get_timestep_code(), 2)} % for method in helper.get_stepper_method_wrapper_names(): cdef ${method}(self): cdef long NP_DEST cdef long d_idx cdef ParticleArrayWrapper dst cdef double dt = self.dt cdef double t = self.t ${indent(helper.get_array_declarations(method), 2)} % for dest in sorted(helper.object.steppers.keys()): # --------------------------------------------------------------------- # Destination ${dest}. dst = self.${dest} ## ${indent(helper.get_py_stage_code(dest, method), 2)} ## % if helper.has_stepper_loop(dest, method): # Only iterate over real particles. NP_DEST = dst.size(real=True) ${indent(helper.get_array_setup(dest, method), 2)} for d_idx in range(NP_DEST): ${indent(helper.get_stepper_loop(dest, method), 3)} % endif % endfor % endfor pysph-master/pysph/sph/integrator_cython_helper.py000066400000000000000000000176611356347341600231510ustar00rootroot00000000000000"""Basic code for the templated integrators. Currently we only support two-step integrators. These classes are used to generate the code for the actual integrators from the `sph_eval` module. 
""" import inspect from os.path import join, dirname from textwrap import dedent from mako.template import Template # Local imports. from pysph.sph.equation import get_array_names from .acceleration_eval_cython_helper import get_helper_code from compyle.api import CythonGenerator, get_func_definition getfullargspec = getattr( inspect, 'getfullargspec', inspect.getargspec ) class IntegratorCythonHelper(object): """A helper that generates Cython code for the Integrator class. """ def __init__(self, integrator, acceleration_eval_helper): self.object = integrator self.acceleration_eval_helper = acceleration_eval_helper pas = acceleration_eval_helper.object.particle_arrays self._particle_arrays = dict((x.name, x) for x in pas) if self.object is not None: self._check_integrator_steppers() def get_code(self): if self.object is not None: path = join(dirname(__file__), 'integrator_cython.mako') template = Template(filename=path) return template.render(helper=self) else: return '' def setup_compiled_module(self, module, acceleration_eval): # Create the compiled module. cython_integrator = module.Integrator( self.object, acceleration_eval, self.object.steppers ) # Setup the integrator to use this compiled module. self.object.set_compiled_object(cython_integrator) ########################################################################## # Mako interface. 
########################################################################## def get_particle_array_names(self): return ', '.join(sorted(self.object.steppers.keys())) def get_helper_code(self): helpers = [] for stepper in self.object.steppers.values(): if hasattr(stepper, '_get_helpers_'): for helper in stepper._get_helpers_(): if helper not in helpers: helpers.append(helper) code = get_helper_code(helpers) return '\n'.join(code) def get_stepper_code(self): classes = {} for dest, stepper in self.object.steppers.items(): cls = stepper.__class__.__name__ classes[cls] = stepper known_types = dict(self.acceleration_eval_helper.known_types) known_types.update(dict(t=0.0, dt=0.0)) code_gen = CythonGenerator(known_types=known_types) wrappers = [] for cls in sorted(classes.keys()): code_gen.parse(classes[cls]) wrappers.append(code_gen.get_code()) return '\n'.join(wrappers) def get_stepper_defs(self): lines = [] for dest, stepper in self.object.steppers.items(): cls_name = stepper.__class__.__name__ code = 'cdef public {cls} {name}'.format(cls=cls_name, name=dest+'_stepper') lines.append(code) return '\n'.join(lines) def get_stepper_init(self): lines = [] for dest, stepper in self.object.steppers.items(): cls_name = stepper.__class__.__name__ code = ( 'self.{name} = {cls}(**steppers["{dest}"].__dict__)' .format(name=dest+'_stepper', cls=cls_name, dest=dest) ) lines.append(code) return '\n'.join(lines) def get_args(self, dest, method): stepper = self.object.steppers[dest] meth = getattr(stepper, method, None) if meth is None: return [] else: return getfullargspec(meth).args def get_array_declarations(self, method): arrays = set() for dest in self.object.steppers: s, d = get_array_names(self.get_args(dest, method)) self._check_arrays_for_properties(dest, s | d) arrays.update(s | d) known_types = self.acceleration_eval_helper.known_types decl = [] for arr in sorted(arrays): decl.append('cdef {type} {arr}'.format( type=known_types[arr].type, arr=arr )) return '\n'.join(decl) def 
get_array_setup(self, dest, method): s, d = get_array_names(self.get_args(dest, method)) lines = ['%s = dst.%s.data' % (n, n[2:]) for n in sorted(s | d)] return '\n'.join(lines) def get_stepper_loop(self, dest, method): args = self.get_args(dest, method) if 'self' in args: args.remove('self') call_args = ', '.join(args) c = 'self.{obj}.{method}({args})'.format( obj=dest+'_stepper', method=method, args=call_args ) return c def get_py_stage_code(self, dest, method): stepper = self.object.steppers[dest] method = 'py_' + method if hasattr(stepper, method): return 'self.steppers["{dest}"].{method}(dst.array, t, dt)'.format( dest=dest, method=method ) else: return '' def has_stepper_loop(self, dest, method): return hasattr(self.object.steppers[dest], method) def get_stepper_method_wrapper_names(self): """Returns the names of the methods we should wrap. For a 2 stage method this will return ('initialize', 'stage1', 'stage2') """ methods = set() for stepper in self.object.steppers.values(): stages = [] for x in dir(stepper): if x.startswith('py_stage'): stages.append(x[3:]) elif x.startswith('stage') or x == 'initialize': stages.append(x) methods.update(stages) return list(sorted(methods)) def get_timestep_code(self): method = self.object.one_timestep sourcelines = inspect.getsourcelines(method)[0] defn, lines = get_func_definition(sourcelines) return dedent(''.join(lines)) ########################################################################## # Private interface. ########################################################################## def _check_arrays_for_properties(self, dest, args): """Given a particle array name and a set of arguments used by an integrator stepper method, check if the particle array has the required props. 
""" pa = self._particle_arrays[dest] # Remove the 's_' or 'd_' props = set([x[2:] for x in args]) available_props = set(pa.properties.keys()).union(pa.constants.keys()) if not props.issubset(available_props): diff = props.difference(available_props) integrator_name = self.object.steppers[dest].__class__.__name__ names = ', '.join([x for x in sorted(diff)]) msg = "ERROR: {integrator_name} requires the following "\ "properties:\n\t{names}\n"\ "Please add them to the particle array '{dest}'.".format( integrator_name=integrator_name, names=names, dest=dest ) self._runtime_error(msg) def _check_integrator_steppers(self): for name, stepper in self.object.steppers.items(): if name not in self._particle_arrays: msg = "ERROR: Integrator keyword arguments must correspond "\ "to particle array names.\n"\ "Given keyword: '{name}' not a valid particle array.\n"\ "Valid particle array names: {valid}".format( name=name, valid=sorted(self._particle_arrays.keys()) ) self._runtime_error(msg) def _runtime_error(self, msg): print('*'*70) print(msg) print('*'*70) raise RuntimeError(msg) pysph-master/pysph/sph/integrator_gpu_helper.py000066400000000000000000000254241356347341600224340ustar00rootroot00000000000000from collections import defaultdict import functools import inspect from textwrap import dedent import types from mako.template import Template import numpy as np from compyle.config import get_config from .equation import get_array_names from .integrator_cython_helper import IntegratorCythonHelper from .acceleration_eval_gpu_helper import ( get_kernel_definition, get_converter, profile_kernel, wrap_code, get_helper_code ) class GPUIntegrator(object): """Does the actual work of calling the kernels for integration. 
""" def __init__(self, helper, c_acceleration_eval): self.helper = helper self.acceleration_eval = c_acceleration_eval self.nnps = None self.parallel_manager = None self.integrator = helper.object self._post_stage_callback = None self._use_double = get_config().use_double self._setup_methods() def _setup_methods(self): """This sets up a few methods of this class. This is unfortunately a bit hacky right now and should be cleaned later. It creates the methods for the following: self.one_timestep: this is the same as the integrator's method. self.initialize, self.stage1 ... self.stagen are created based on the number of steppers. """ code = self.helper.get_timestep_code() ns = {} exec(code, ns) self.one_timestep = types.MethodType(ns['one_timestep'], self) for method in self.helper.get_stepper_method_wrapper_names(): setattr(self, method, functools.partial(self._do_stage, method)) def _do_stage(self, method): # Call the appropriate kernels for either initialize/stage computation. call_info = self.helper.calls[method] py_call_info = self.helper.py_calls['py_' + method] dtype = np.float64 if self._use_double else np.float32 extra_args = [np.asarray(self.t, dtype=dtype), np.asarray(self.dt, dtype=dtype), np.asarray(0, dtype=np.uint32)] # Call the py_{method} for each destination. for name, (py_meth, dest) in py_call_info.items(): py_meth(dest, *(extra_args[:-1])) # Call the stage* method for each destination. for name, (call, args, dest) in call_info.items(): n = dest.get_number_of_particles(real=True) args[1] = (n,) # For NP_MAX extra_args[-1][...] = n - 1 # Compute the remaining arguments. 
rest = [x() for x in args[3:]] call(*(args[:3] + rest + extra_args)) def set_nnps(self, nnps): self.nnps = nnps def set_parallel_manager(self, pm): self.parallel_manager = pm def set_post_stage_callback(self, callback): self._post_stage_callback = callback def compute_accelerations(self, index=0, update_nnps=True): self.integrator.compute_accelerations(index, update_nnps) def update_domain(self): self.integrator.update_domain() def do_post_stage(self, stage_dt, stage): """This is called after every stage of the integrator. Internally, this calls any post_stage_callback function that has been given to take suitable action. Parameters ---------- - stage_dt : double: the timestep taken at this stage. - stage : int: the stage completed (starting from 1). """ self.t = self.orig_t + stage_dt if self._post_stage_callback is not None: self._post_stage_callback(self.t, self.dt, stage) def step(self, t, dt): """Main step routine. """ self.orig_t = t self.t = t self.dt = dt self.one_timestep(t, dt) class CUDAIntegrator(GPUIntegrator): """Does the actual work of calling the kernels for integration. """ def _do_stage(self, method): from pycuda.gpuarray import splay import pycuda.driver as drv # Call the appropriate kernels for either initialize/stage computation. call_info = self.helper.calls[method] py_call_info = self.helper.py_calls['py_' + method] dtype = np.float64 if self._use_double else np.float32 extra_args = [np.asarray(self.t, dtype=dtype), np.asarray(self.dt, dtype=dtype)] # Call the py_{method} for each destination. for name, (py_meth, dest) in py_call_info.items(): py_meth(dest, *extra_args) # Call the stage* method for each destination. for name, (call, args, dest) in call_info.items(): n = dest.get_number_of_particles(real=True) gs, ls = splay(n) gs, ls = int(gs[0]), int(ls[0]) num_blocks = (n + ls - 1) // ls num_tpb = ls # Compute the remaining arguments. 
args = [x() for x in args[3:]] call(*(args + extra_args), block=(num_tpb, 1, 1), grid=(num_blocks, 1)) class IntegratorGPUHelper(IntegratorCythonHelper): def __init__(self, integrator, acceleration_eval_helper): super(IntegratorGPUHelper, self).__init__( integrator, acceleration_eval_helper ) self.backend = acceleration_eval_helper.backend self.py_data = defaultdict(dict) self.data = defaultdict(dict) self.py_calls = defaultdict(dict) self.calls = defaultdict(dict) self.program = None def _setup_call_data(self): array_map = self.acceleration_eval_helper._array_map q = self.acceleration_eval_helper._queue calls = self.calls py_calls = self.py_calls steppers = self.object.steppers for method, info in self.py_data.items(): for dest_name in info: py_meth = getattr(steppers[dest_name], method) dest = array_map[dest_name] py_calls[method][dest] = (py_meth, dest) for method, info in self.data.items(): for dest_name, (kernel, args) in info.items(): dest = array_map[dest_name] # Note: This is done to do some late binding. Instead of # just directly storing the dest.gpu.x, we compute it on # the fly as the number of particles and the actual buffer # may change. if self.backend == 'opencl': def _getter(dest_gpu, x): return getattr(dest_gpu, x).dev.data elif self.backend == 'cuda': def _getter(dest_gpu, x): return getattr(dest_gpu, x).dev _args = [ functools.partial(_getter, dest.gpu, x[2:]) for x in args ] all_args = [q, None, None] + _args call = getattr(self.program, kernel) call = profile_kernel(call, self.backend) calls[method][dest] = (call, all_args, dest) def get_code(self): if self.object is not None: tpl = dedent(""" // ------------------------------------------------------------ // Integrator steppers. 
${helper.get_stepper_code()} // ------------------------------------------------------------ % for dest in sorted(helper.object.steppers.keys()): // Steppers for ${dest} % for method in helper.get_stepper_method_wrapper_names(): <% helper.get_py_stage_code(dest, method) %> % if helper.has_stepper_loop(dest, method): ${helper.get_stepper_kernel(dest, method)} % endif % endfor % endfor // ------------------------------------------------------------ """) template = Template(text=tpl) return template.render(helper=self) else: return '' def setup_compiled_module(self, module, acceleration_eval): # Create the compiled module. self.program = module self._setup_call_data() if self.backend == 'opencl': gpu_integrator = GPUIntegrator(self, acceleration_eval) elif self.backend == 'cuda': gpu_integrator = CUDAIntegrator(self, acceleration_eval) # Setup the integrator to use this compiled module. self.object.set_compiled_object(gpu_integrator) def get_py_stage_code(self, dest, method): stepper = self.object.steppers[dest] method = 'py_' + method if hasattr(stepper, method): self.py_data[method][dest] = dest def get_timestep_code(self): method = self.object.one_timestep return dedent(''.join(inspect.getsourcelines(method)[0])) def get_stepper_code(self): classes = {} helpers = [] for stepper in self.object.steppers.values(): cls = stepper.__class__.__name__ classes[cls] = stepper if hasattr(stepper, '_get_helpers_'): for helper in stepper._get_helpers_(): if helper not in helpers: helpers.append(helper) known_types = dict(self.acceleration_eval_helper.known_types) Converter = get_converter(self.acceleration_eval_helper.backend) code_gen = Converter(known_types=known_types) wrappers = get_helper_code(helpers, code_gen, self.backend) for cls in sorted(classes.keys()): wrappers.append(code_gen.parse_instance(classes[cls])) return '\n'.join(wrappers) def get_stepper_kernel(self, dest, method): kernel = '{method}_{dest}'.format(dest=dest, method=method) stepper = 
self.object.steppers.get(dest)
        cls = stepper.__class__.__name__
        args = self.get_args(dest, method)
        if 'self' in args:
            args.remove('self')
        s, d = get_array_names(args)
        all_args = self.acceleration_eval_helper._get_typed_args(
            list(d) + ['t', 'dt']
        )
        all_args.append('unsigned int NP_MAX')
        # All the steppers are essentially empty structs so we just pass 0 as
        # the stepper struct as it is not used at all. This simplifies things
        # as we do not need to generate structs and pass them around.
        code = [
            'int d_idx = GID_0 * LDIM_0 + LID_0;',
            '/* Guard for padded threads. */',
            'if (d_idx > NP_MAX) {return;};'
        ] + wrap_code(
            '{cls}_{method}({args});'.format(
                cls=cls, method=method, args=', '.join(['0'] + args)
            ),
            indent=''
        )
        body = '\n'.join(' '*4 + x for x in code)
        self.data[method][dest] = (kernel, list(d))
        sig = get_kernel_definition(kernel, all_args)
        return (
            '{sig}\n{{\n{body}\n}}\n'.format(
                sig=sig, body=body
            )
        )

pysph-master/pysph/sph/integrator_step.py
"""Integrator steps for different schemes.

Implement as many stages as needed.
"""


###############################################################################
# `IntegratorStep` class
###############################################################################
class IntegratorStep(object):
    """Subclass this and implement the methods ``initialize``, ``stage1`` etc.
    Use the same conventions as the equations.
    """
    def __repr__(self):
        return '%s()' % (self.__class__.__name__)


###############################################################################
# `EulerStep` class
###############################################################################
class EulerStep(IntegratorStep):
    """Fast but inaccurate integrator.
Use this for testing""" def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y, d_z, d_rho, d_arho, dt): d_u[d_idx] += dt*d_au[d_idx] d_v[d_idx] += dt*d_av[d_idx] d_w[d_idx] += dt*d_aw[d_idx] d_x[d_idx] += dt*d_u[d_idx] d_y[d_idx] += dt*d_v[d_idx] d_z[d_idx] += dt*d_w[d_idx] d_rho[d_idx] += dt*d_arho[d_idx] ############################################################################### # `WCSPHStep` class ############################################################################### class WCSPHStep(IntegratorStep): """Standard Predictor Corrector integrator for the WCSPH formulation Use this integrator for WCSPH formulations. In the predictor step, the particles are advanced to `t + dt/2`. The particles are then advanced with the new force computed at this position. This integrator can be used in PEC or EPEC mode. The same integrator can also be used for other problems, for example solid mechanics (see SolidMechStep). """ def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho): d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = d_y[d_idx] d_z0[d_idx] = d_z[d_idx] d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] d_rho0[d_idx] = d_rho[d_idx] def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, dt): dtb2 = 0.5*dt d_u[d_idx] = d_u0[d_idx] + dtb2*d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dtb2*d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dtb2*d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dtb2 * d_ax[d_idx] d_y[d_idx] = d_y0[d_idx] + dtb2 * d_ay[d_idx] d_z[d_idx] = d_z0[d_idx] + dtb2 * d_az[d_idx] # Update densities and smoothing lengths from the accelerations d_rho[d_idx] = d_rho0[d_idx] + dtb2 * d_arho[d_idx] def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, dt): d_u[d_idx] = d_u0[d_idx] + dt*d_au[d_idx] d_v[d_idx] =
d_v0[d_idx] + dt*d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dt*d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dt * d_ax[d_idx] d_y[d_idx] = d_y0[d_idx] + dt * d_ay[d_idx] d_z[d_idx] = d_z0[d_idx] + dt * d_az[d_idx] # Update densities and smoothing lengths from the accelerations d_rho[d_idx] = d_rho0[d_idx] + dt * d_arho[d_idx] ############################################################################### # `WCSPHTVDRK3` Integrator ############################################################################### class WCSPHTVDRK3Step(IntegratorStep): r"""TVD RK3 stepper for WCSPH This integrator requires :math:`2` stages for the storage of the acceleration variables. """ def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho): d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = d_y[d_idx] d_z0[d_idx] = d_z[d_idx] d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] d_rho0[d_idx] = d_rho[d_idx] def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, dt): # update velocities d_u[d_idx] = d_u0[d_idx] + dt * d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dt * d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dt * d_aw[d_idx] # update positions d_x[d_idx] = d_x0[d_idx] + dt * d_ax[d_idx] d_y[d_idx] = d_y0[d_idx] + dt * d_ay[d_idx] d_z[d_idx] = d_z0[d_idx] + dt * d_az[d_idx] # update density d_rho[d_idx] = d_rho0[d_idx] + dt * d_arho[d_idx] def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, dt): # update velocities d_u[d_idx] = 0.75*d_u0[d_idx] + 0.25*( d_u[d_idx] + dt * d_au[d_idx] ) d_v[d_idx] = 0.75*d_v0[d_idx] + 0.25*( d_v[d_idx] + dt * d_av[d_idx] ) d_w[d_idx] = 0.75*d_w0[d_idx] + 0.25*( d_w[d_idx] + dt * d_aw[d_idx] ) # update positions d_x[d_idx] = 0.75*d_x0[d_idx] + 0.25*( d_x[d_idx] + dt * d_ax[d_idx] ) d_y[d_idx] = 0.75*d_y0[d_idx] + 0.25*( 
d_y[d_idx] + dt * d_ay[d_idx] ) d_z[d_idx] = 0.75*d_z0[d_idx] + 0.25*( d_z[d_idx] + dt * d_az[d_idx] ) # Update density d_rho[d_idx] = 0.75*d_rho0[d_idx] + 0.25*( d_rho[d_idx] + dt * d_arho[d_idx] ) def stage3(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, dt): oneby3 = 1./3. twoby3 = 2./3. # update velocities d_u[d_idx] = oneby3*d_u0[d_idx] + twoby3*( d_u[d_idx] + dt * d_au[d_idx] ) d_v[d_idx] = oneby3*d_v0[d_idx] + twoby3*( d_v[d_idx] + dt * d_av[d_idx] ) d_w[d_idx] = oneby3*d_w0[d_idx] + twoby3*( d_w[d_idx] + dt * d_aw[d_idx] ) # update positions d_x[d_idx] = oneby3*d_x0[d_idx] + twoby3*( d_x[d_idx] + dt * d_ax[d_idx] ) d_y[d_idx] = oneby3*d_y0[d_idx] + twoby3*( d_y[d_idx] + dt * d_ay[d_idx] ) d_z[d_idx] = oneby3*d_z0[d_idx] + twoby3*( d_z[d_idx] + dt * d_az[d_idx] ) # Update density d_rho[d_idx] = oneby3*d_rho0[d_idx] + twoby3*( d_rho[d_idx] + dt * d_arho[d_idx] ) ############################################################################### # `SolidMechStep` class ############################################################################### class SolidMechStep(IntegratorStep): """Predictor corrector Integrator for solid mechanics problems""" def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_s000, d_s010, d_s020, d_s110, d_s120, d_s220, d_e0, d_e): d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = d_y[d_idx] d_z0[d_idx] = d_z[d_idx] d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] d_rho0[d_idx] = d_rho[d_idx] d_e0[d_idx] = d_e[d_idx] d_s000[d_idx] = d_s00[d_idx] d_s010[d_idx] = d_s01[d_idx] d_s020[d_idx] = d_s02[d_idx] d_s110[d_idx] = d_s11[d_idx] d_s120[d_idx] = d_s12[d_idx] d_s220[d_idx] = d_s22[d_idx] def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, 
d_e, d_e0, d_ae, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_s000, d_s010, d_s020, d_s110, d_s120, d_s220, d_as00, d_as01, d_as02, d_as11, d_as12, d_as22, dt): dtb2 = 0.5*dt d_u[d_idx] = d_u0[d_idx] + dtb2*d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dtb2*d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dtb2*d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dtb2 * d_ax[d_idx] d_y[d_idx] = d_y0[d_idx] + dtb2 * d_ay[d_idx] d_z[d_idx] = d_z0[d_idx] + dtb2 * d_az[d_idx] # Update densities and smoothing lengths from the accelerations d_rho[d_idx] = d_rho0[d_idx] + dtb2 * d_arho[d_idx] d_e[d_idx] = d_e0[d_idx] + dtb2 * d_ae[d_idx] # update deviatoric stress components d_s00[d_idx] = d_s000[d_idx] + dtb2 * d_as00[d_idx] d_s01[d_idx] = d_s010[d_idx] + dtb2 * d_as01[d_idx] d_s02[d_idx] = d_s020[d_idx] + dtb2 * d_as02[d_idx] d_s11[d_idx] = d_s110[d_idx] + dtb2 * d_as11[d_idx] d_s12[d_idx] = d_s120[d_idx] + dtb2 * d_as12[d_idx] d_s22[d_idx] = d_s220[d_idx] + dtb2 * d_as22[d_idx] def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, d_e, d_ae, d_e0, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_s000, d_s010, d_s020, d_s110, d_s120, d_s220, d_as00, d_as01, d_as02, d_as11, d_as12, d_as22, dt): d_u[d_idx] = d_u0[d_idx] + dt*d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dt*d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dt*d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dt * d_ax[d_idx] d_y[d_idx] = d_y0[d_idx] + dt * d_ay[d_idx] d_z[d_idx] = d_z0[d_idx] + dt * d_az[d_idx] # Update densities and smoothing lengths from the accelerations d_rho[d_idx] = d_rho0[d_idx] + dt * d_arho[d_idx] d_e[d_idx] = d_e0[d_idx] + dt * d_ae[d_idx] # update deviatoric stress components d_s00[d_idx] = d_s000[d_idx] + dt * d_as00[d_idx] d_s01[d_idx] = d_s010[d_idx] + dt * d_as01[d_idx] d_s02[d_idx] = d_s020[d_idx] + dt * d_as02[d_idx] d_s11[d_idx] = d_s110[d_idx] + dt * d_as11[d_idx] d_s12[d_idx] = d_s120[d_idx] + dt * d_as12[d_idx] d_s22[d_idx] = d_s220[d_idx] + 
dt * d_as22[d_idx] ############################################################################### # `TransportVelocityStep` class ############################################################################### class TransportVelocityStep(IntegratorStep): """Integrator defined in 'A transport velocity formulation for smoothed particle hydrodynamics', 2013, JCP, 241, pp 292--307 For a predictor-corrector style of integrator, this integrator should operate only in PEC mode. """ def initialize(self): pass def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_uhat, d_auhat, d_vhat, d_avhat, d_what, d_awhat, d_x, d_y, d_z, dt): dtb2 = 0.5*dt # velocity update eqn (14) d_u[d_idx] += dtb2*d_au[d_idx] d_v[d_idx] += dtb2*d_av[d_idx] d_w[d_idx] += dtb2*d_aw[d_idx] # advection velocity update eqn (15) d_uhat[d_idx] = d_u[d_idx] + dtb2*d_auhat[d_idx] d_vhat[d_idx] = d_v[d_idx] + dtb2*d_avhat[d_idx] d_what[d_idx] = d_w[d_idx] + dtb2*d_awhat[d_idx] # position update eqn (16) d_x[d_idx] += dt*d_uhat[d_idx] d_y[d_idx] += dt*d_vhat[d_idx] d_z[d_idx] += dt*d_what[d_idx] def stage2(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_vmag2, dt): dtb2 = 0.5*dt # corrector update eqn (17) d_u[d_idx] += dtb2*d_au[d_idx] d_v[d_idx] += dtb2*d_av[d_idx] d_w[d_idx] += dtb2*d_aw[d_idx] # magnitude of velocity squared d_vmag2[d_idx] = (d_u[d_idx]*d_u[d_idx] + d_v[d_idx]*d_v[d_idx] + d_w[d_idx]*d_w[d_idx]) ############################################################################### # `AdamiVerletStep` class ############################################################################### class AdamiVerletStep(IntegratorStep): """Verlet time integration described in `A generalized wall boundary condition for smoothed particle hydrodynamics` 2012, JCP, 231, pp 7057--7075 This integrator can operate in either PEC mode or in EPEC mode as described in the paper. 
""" def initialize(self): pass def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y, d_z, dt): dtb2 = 0.5*dt # velocity predictor eqn (14) d_u[d_idx] += dtb2*d_au[d_idx] d_v[d_idx] += dtb2*d_av[d_idx] d_w[d_idx] += dtb2*d_aw[d_idx] # position predictor eqn (15) d_x[d_idx] += dtb2*d_u[d_idx] d_y[d_idx] += dtb2*d_v[d_idx] d_z[d_idx] += dtb2*d_w[d_idx] def stage2(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y, d_z, d_rho, d_arho, d_vmag2, dt): dtb2 = 0.5*dt # position corrector eqn (17) d_x[d_idx] += dtb2*d_u[d_idx] d_y[d_idx] += dtb2*d_v[d_idx] d_z[d_idx] += dtb2*d_w[d_idx] # velocity corrector eqn (18) d_u[d_idx] += dtb2*d_au[d_idx] d_v[d_idx] += dtb2*d_av[d_idx] d_w[d_idx] += dtb2*d_aw[d_idx] # density corrector eqn (16) d_rho[d_idx] += dt * d_arho[d_idx] # magnitude of velocity squared d_vmag2[d_idx] = (d_u[d_idx]*d_u[d_idx] + d_v[d_idx]*d_v[d_idx] + d_w[d_idx]*d_w[d_idx]) ############################################################################### # `GasDFluidStep` class ############################################################################### class GasDFluidStep(IntegratorStep): """Predictor Corrector integrator for Gas-dynamics""" def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_h, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e, d_e0, d_h0, d_converged, d_omega, d_rho, d_rho0, d_alpha1, d_alpha2, d_alpha10, d_alpha20): d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = d_y[d_idx] d_z0[d_idx] = d_z[d_idx] d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] d_e0[d_idx] = d_e[d_idx] d_h0[d_idx] = d_h[d_idx] d_rho0[d_idx] = d_rho[d_idx] # set the converged attribute to 0 at the beginning of a Group d_converged[d_idx] = 0 # likewise, we set the default omega (grad-h) terms to 1 at # the beginning of this Group. 
d_omega[d_idx] = 1.0 d_alpha10[d_idx] = d_alpha1[d_idx] d_alpha20[d_idx] = d_alpha2[d_idx] def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e0, d_e, d_au, d_av, d_aw, d_ae, d_rho, d_rho0, d_arho, d_h, d_h0, d_ah, d_alpha1, d_aalpha1, d_alpha10, d_alpha2, d_aalpha2, d_alpha20, dt): dtb2 = 0.5*dt d_u[d_idx] = d_u0[d_idx] + dtb2 * d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dtb2 * d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dtb2 * d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dtb2 * d_u[d_idx] d_y[d_idx] = d_y0[d_idx] + dtb2 * d_v[d_idx] d_z[d_idx] = d_z0[d_idx] + dtb2 * d_w[d_idx] # update thermal energy d_e[d_idx] = d_e0[d_idx] + dtb2 * d_ae[d_idx] # predict density and smoothing lengths for faster # convergence. NNPS need not be explicitly updated since it # will be called at the end of the predictor stage. d_h[d_idx] = d_h0[d_idx] + dtb2 * d_ah[d_idx] d_rho[d_idx] = d_rho0[d_idx] + dtb2 * d_arho[d_idx] # update viscosity coefficients d_alpha1[d_idx] = d_alpha10[d_idx] + dtb2*d_aalpha1[d_idx] d_alpha2[d_idx] = d_alpha20[d_idx] + dtb2*d_aalpha2[d_idx] def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e0, d_e, d_au, d_av, d_alpha1, d_aalpha1, d_alpha10, d_alpha2, d_aalpha2, d_alpha20, d_aw, d_ae, dt): d_u[d_idx] = d_u0[d_idx] + dt * d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dt * d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dt * d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dt * d_u[d_idx] d_y[d_idx] = d_y0[d_idx] + dt * d_v[d_idx] d_z[d_idx] = d_z0[d_idx] + dt * d_w[d_idx] # Update densities and smoothing lengths from the accelerations d_e[d_idx] = d_e0[d_idx] + dt * d_ae[d_idx] # update viscosity coefficients d_alpha1[d_idx] = d_alpha10[d_idx] + dt*d_aalpha1[d_idx] d_alpha2[d_idx] = d_alpha20[d_idx] + dt*d_aalpha2[d_idx] class GSPHStep(IntegratorStep): def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_e, d_au, d_av, d_aw, d_ae, dt): dtb2 = dt*0.5 ustar = d_u[d_idx] + dtb2*d_au[d_idx] vstar = d_v[d_idx] + 
dtb2*d_av[d_idx] wstar = d_w[d_idx] + dtb2*d_aw[d_idx] d_u[d_idx] += dt*d_au[d_idx] d_v[d_idx] += dt*d_av[d_idx] d_w[d_idx] += dt*d_aw[d_idx] d_e[d_idx] += dt*(d_ae[d_idx] - ustar*d_au[d_idx] - vstar*d_av[d_idx] - wstar*d_aw[d_idx]) d_x[d_idx] += dt*ustar d_y[d_idx] += dt*vstar d_z[d_idx] += dt*wstar class ADKEStep(IntegratorStep): """Predictor Corrector integrator for Gas-dynamics ADKE""" def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e, d_e0, d_rho, d_rho0): d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = d_y[d_idx] d_z0[d_idx] = d_z[d_idx] d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] d_e0[d_idx] = d_e[d_idx] d_rho0[d_idx] = d_rho[d_idx] def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e0, d_e, d_au, d_av, d_aw, d_ae, d_rho, d_rho0, d_arho, dt): dtb2 = 0.5*dt d_u[d_idx] = d_u0[d_idx] + dtb2 * d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dtb2 * d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dtb2 * d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dtb2 * d_u[d_idx] d_y[d_idx] = d_y0[d_idx] + dtb2 * d_v[d_idx] d_z[d_idx] = d_z0[d_idx] + dtb2 * d_w[d_idx] # update thermal energy d_e[d_idx] = d_e0[d_idx] + dtb2 * d_ae[d_idx] def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e0, d_e, d_au, d_av, d_aw, d_ae, dt): d_u[d_idx] = d_u0[d_idx] + dt * d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dt * d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dt * d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dt * d_u[d_idx] d_y[d_idx] = d_y0[d_idx] + dt * d_v[d_idx] d_z[d_idx] = d_z0[d_idx] + dt * d_w[d_idx] # Update densities and smoothing lengths from the accelerations d_e[d_idx] = d_e0[d_idx] + dt * d_ae[d_idx] ############################################################################### # `TwoStageRigidBodyStep` class ############################################################################### class TwoStageRigidBodyStep(IntegratorStep): """Simple rigid-body motion At each 
stage of the integrator, the prescribed velocity and accelerations are incremented by dt/2. Note that the time centered velocity is used for updating the particle positions. This ensures exact motion for a constant acceleration. """ def initialize(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_u, d_v, d_w, d_u0, d_v0, d_w0): d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = d_y[d_idx] d_z0[d_idx] = d_z[d_idx] def stage1(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_u, d_v, d_w, d_u0, d_v0, d_w0, d_au, d_av, d_aw, dt): dtb2 = 0.5*dt d_u[d_idx] = d_u0[d_idx] + dtb2 * d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dtb2 * d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dtb2 * d_aw[d_idx] # positions are updated based on the time centered velocity d_x[d_idx] = d_x0[d_idx] + dtb2 * 0.5 * (d_u[d_idx] + d_u0[d_idx]) d_y[d_idx] = d_y0[d_idx] + dtb2 * 0.5 * (d_v[d_idx] + d_v0[d_idx]) d_z[d_idx] = d_z0[d_idx] + dtb2 * 0.5 * (d_w[d_idx] + d_w0[d_idx]) def stage2(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_u, d_v, d_w, d_u0, d_v0, d_w0, d_au, d_av, d_aw, dt): d_u[d_idx] = d_u0[d_idx] + dt * d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dt * d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dt * d_aw[d_idx] # positions are updated based on the time centered velocity d_x[d_idx] = d_x0[d_idx] + dt * 0.5 * (d_u[d_idx] + d_u0[d_idx]) d_y[d_idx] = d_y0[d_idx] + dt * 0.5 * (d_v[d_idx] + d_v0[d_idx]) d_z[d_idx] = d_z0[d_idx] + dt * 0.5 * (d_w[d_idx] + d_w0[d_idx]) ############################################################################### # `OneStageRigidBodyStep` class ############################################################################### class OneStageRigidBodyStep(IntegratorStep): """Simple one stage rigid-body motion """ def initialize(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_u, d_v, d_w, d_u0, d_v0, d_w0): d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = 
d_y[d_idx] d_z0[d_idx] = d_z[d_idx] def stage1(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_u, d_v, d_w, d_u0, d_v0, d_w0, d_au, d_av, d_aw, dt): pass def stage2(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_u, d_v, d_w, d_u0, d_v0, d_w0, d_au, d_av, d_aw, dt): # update velocities d_u[d_idx] += dt * d_au[d_idx] d_v[d_idx] += dt * d_av[d_idx] d_w[d_idx] += dt * d_aw[d_idx] # update positions using time-centered velocity d_x[d_idx] += dt * 0.5 * (d_u[d_idx] + d_u0[d_idx]) d_y[d_idx] += dt * 0.5 * (d_v[d_idx] + d_v0[d_idx]) d_z[d_idx] += dt * 0.5 * (d_w[d_idx] + d_w0[d_idx]) ############################################################################### # `VerletSymplecticWCSPHStep` class ############################################################################### class VerletSymplecticWCSPHStep(IntegratorStep): """Symplectic second order integrator described in the review paper by Monaghan: J. Monaghan, "Smoothed Particle Hydrodynamics", Reports on Progress in Physics, 2005, 68, pp 1703--1759 [JM05] Notes: This integrator should run in PEC mode since in the first stage, the positions are updated using the current velocity. The accelerations are then computed to advance to the full time step values. This version of the integrator does not update the density. That is, the summation density is used instead of the continuity equation. """ def initialize(self): pass def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, dt): dtb2 = 0.5 * dt # Eq. (5.39) in [JM05] d_x[d_idx] += dtb2 * d_u[d_idx] d_y[d_idx] += dtb2 * d_v[d_idx] d_z[d_idx] += dtb2 * d_w[d_idx] def stage2(self, d_idx, d_x, d_y, d_z, d_ax, d_ay, d_az, d_u, d_v, d_w, d_au, d_av, d_aw, dt): dtb2 = 0.5 * dt # Eq. (5.40) in [JM05] d_u[d_idx] += dt * d_au[d_idx] d_v[d_idx] += dt * d_av[d_idx] d_w[d_idx] += dt * d_aw[d_idx] # Eq.
(5.41) in [JM05] using XSPH velocity correction d_x[d_idx] += dtb2 * d_ax[d_idx] d_y[d_idx] += dtb2 * d_ay[d_idx] d_z[d_idx] += dtb2 * d_az[d_idx] ############################################################################### # `VelocityVerletSymplecticWCSPHStep` class ############################################################################### class VelocityVerletSymplecticWCSPHStep(IntegratorStep): """Another symplectic second order integrator described in the review paper by Monaghan: J. Monaghan, "Smoothed Particle Hydrodynamics", Reports on Progress in Physics, 2005, 68, pp 1703--1759 [JM05] kick--drift--kick form of the verlet integrator """ def initialize(self): pass def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, dt): dtb2 = 0.5 * dt # Eq. (5.51) in [JM05] d_u[d_idx] += dtb2 * d_au[d_idx] d_v[d_idx] += dtb2 * d_av[d_idx] d_w[d_idx] += dtb2 * d_aw[d_idx] def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_au, d_av, d_aw, dt): dtb2 = 0.5 * dt # Eq. (5.52) in [JM05] d_x[d_idx] += dt * d_u[d_idx] d_y[d_idx] += dt * d_v[d_idx] d_z[d_idx] += dt * d_w[d_idx] # Eq. 
(5.53) in [JM05] d_u[d_idx] += dtb2 * d_au[d_idx] d_v[d_idx] += dtb2 * d_av[d_idx] d_w[d_idx] += dtb2 * d_aw[d_idx] ############################################################################### # `InletOutletStep` class ############################################################################### class InletOutletStep(IntegratorStep): """A trivial integrator for the inlet/outlet particles """ def initialize(self): pass def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, dt): dtb2 = 0.5*dt d_x[d_idx] += dtb2 * d_u[d_idx] d_y[d_idx] += dtb2 * d_v[d_idx] d_z[d_idx] += dtb2 * d_w[d_idx] def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, dt): dtb2 = 0.5*dt d_x[d_idx] += dtb2 * d_u[d_idx] d_y[d_idx] += dtb2 * d_v[d_idx] d_z[d_idx] += dtb2 * d_w[d_idx] ############################################################################### class LeapFrogStep(IntegratorStep): r"""Using this stepper with XSPH as implemented in `pysph.base.basic_equations.XSPHCorrection` is not directly possible and requires a nicer implementation where the correction alone is added to ``ax, ay, az``. 
""" def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_ax, d_ay, d_az, dt): d_x[d_idx] += 0.5 * dt * (d_u[d_idx] + d_ax[d_idx]) d_y[d_idx] += 0.5 * dt * (d_v[d_idx] + d_ay[d_idx]) d_z[d_idx] += 0.5 * dt * (d_w[d_idx] + d_az[d_idx]) def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av, d_w, d_aw, d_ax, d_ay, d_az, d_rho, d_arho, d_e, d_ae, dt): d_u[d_idx] += dt * d_au[d_idx] d_v[d_idx] += dt * d_av[d_idx] d_w[d_idx] += dt * d_aw[d_idx] d_rho[d_idx] += dt * d_arho[d_idx] d_e[d_idx] += dt * d_ae[d_idx] d_x[d_idx] += 0.5 * dt * (d_u[d_idx] + d_ax[d_idx]) d_y[d_idx] += 0.5 * dt * (d_v[d_idx] + d_ay[d_idx]) d_z[d_idx] += 0.5 * dt * (d_w[d_idx] + d_az[d_idx]) ############################################################################### class PEFRLStep(IntegratorStep): r"""Using this stepper with XSPH as implemented in `pysph.base.basic_equations.XSPHCorrection` is not directly possible and requires a nicer implementation where the correction alone is added to ``ax, ay, az``. """ def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_ax, d_ay, d_az, dt): xi = 0.1786178958448091 d_x[d_idx] += xi * dt * (d_u[d_idx] + d_ax[d_idx]) d_y[d_idx] += xi * dt * (d_v[d_idx] + d_ay[d_idx]) d_z[d_idx] += xi * dt * (d_w[d_idx] + d_az[d_idx]) def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av, d_w, d_aw, d_ax, d_ay, d_az, d_rho, d_arho, d_e, d_ae, dt=0.0): lamda = -0.2123418310626054 fac = (1. - 2.*lamda) / 2. 
d_u[d_idx] += fac * dt * d_au[d_idx] d_v[d_idx] += fac * dt * d_av[d_idx] d_w[d_idx] += fac * dt * d_aw[d_idx] d_rho[d_idx] += fac * dt * d_arho[d_idx] d_e[d_idx] += fac * dt * d_ae[d_idx] chi = -0.06626458266981849 d_x[d_idx] += chi * dt * (d_u[d_idx] + d_ax[d_idx]) d_y[d_idx] += chi * dt * (d_v[d_idx] + d_ay[d_idx]) d_z[d_idx] += chi * dt * (d_w[d_idx] + d_az[d_idx]) def stage3(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av, d_w, d_aw, d_ax, d_ay, d_az, d_rho, d_arho, d_e, d_ae, dt=0.0): lamda = -0.2123418310626054 d_u[d_idx] += lamda * dt * d_au[d_idx] d_v[d_idx] += lamda * dt * d_av[d_idx] d_w[d_idx] += lamda * dt * d_aw[d_idx] d_rho[d_idx] += lamda * dt * d_arho[d_idx] d_e[d_idx] += lamda * dt * d_ae[d_idx] xi = +0.1786178958448091 chi = -0.06626458266981849 fac = 1. - 2.*(xi + chi) d_x[d_idx] += fac * dt * (d_u[d_idx] + d_ax[d_idx]) d_y[d_idx] += fac * dt * (d_v[d_idx] + d_ay[d_idx]) d_z[d_idx] += fac * dt * (d_w[d_idx] + d_az[d_idx]) def stage4(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av, d_w, d_aw, d_ax, d_ay, d_az, d_rho, d_arho, d_e, d_ae, dt=0.0): lamda = -0.2123418310626054 d_u[d_idx] += lamda * dt * d_au[d_idx] d_v[d_idx] += lamda * dt * d_av[d_idx] d_w[d_idx] += lamda * dt * d_aw[d_idx] d_rho[d_idx] += lamda * dt * d_arho[d_idx] d_e[d_idx] += lamda * dt * d_ae[d_idx] chi = -0.06626458266981849 d_x[d_idx] += chi * dt * (d_u[d_idx] + d_ax[d_idx]) d_y[d_idx] += chi * dt * (d_v[d_idx] + d_ay[d_idx]) d_z[d_idx] += chi * dt * (d_w[d_idx] + d_az[d_idx]) def stage5(self, d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av, d_w, d_aw, d_ax, d_ay, d_az, d_rho, d_arho, d_e, d_ae, dt=0.0): lamda = -0.2123418310626054 fac = (1. - 2.*lamda) / 2. 
d_u[d_idx] += fac * dt * d_au[d_idx] d_v[d_idx] += fac * dt * d_av[d_idx] d_w[d_idx] += fac * dt * d_aw[d_idx] d_rho[d_idx] += fac * dt * d_arho[d_idx] d_e[d_idx] += fac * dt * d_ae[d_idx] xi = +0.1786178958448091 d_x[d_idx] += xi * dt * (d_u[d_idx] + d_ax[d_idx]) d_y[d_idx] += xi * dt * (d_v[d_idx] + d_ay[d_idx]) d_z[d_idx] += xi * dt * (d_w[d_idx] + d_az[d_idx])
pysph-master/pysph/sph/isph/__init__.py
pysph-master/pysph/sph/isph/isph.py
""" Incompressible SPH. The divergence free formulation of ISPH is implemented here. See "An SPH Projection Method", Cummins S.J., Rudman M, JCOMP(1999). Taylor-Green example only works with --shift-freq=1. """ import numpy from compyle.api import declare from pysph.sph.scheme import Scheme, add_bool_argument from pysph.base.utils import get_particle_array from pysph.sph.integrator import Integrator from pysph.sph.integrator_step import IntegratorStep from pysph.sph.equation import Equation, Group, MultiStageEquations def get_particle_array_isph(constants=None, **props): isph_props = [ 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'rho0', 'diag', 'rhs', 'V0', 'V', 'au', 'av', 'aw' ] # No of particles N = len(props['gid']) consts = { 'np': numpy.array([N], dtype=int), } if constants: consts.update(constants) pa = get_particle_array( additional_props=isph_props, constants=consts, **props ) pa.add_property('ctr', type='int') pa.add_property('coeff', stride=100) pa.add_property('col_idx', stride=100, type='int') pa.add_property('row_idx', stride=100, type='int') pa.add_output_arrays(['p']) return pa class ISPHIntegrator(Integrator): def one_timestep(self, t, dt): self.initialize() self.compute_accelerations(0) self.stage1() self.update_domain() self.do_post_stage(0.5*dt, 1)
self.compute_accelerations(1) self.stage2() self.update_domain() self.do_post_stage(dt, 2) def initial_acceleration(self, t, dt): pass class ISPHStep(IntegratorStep): def initialize(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_u, d_v, d_w, d_u0, d_v0, d_w0, dt, d_rho0, d_rho, d_V): d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = d_y[d_idx] d_z0[d_idx] = d_z[d_idx] d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] d_rho0[d_idx] = d_rho[d_idx] def stage1(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_au, d_av, d_aw, d_V0, d_V, dt): d_x[d_idx] += dt*d_u[d_idx] d_y[d_idx] += dt*d_v[d_idx] d_z[d_idx] += dt*d_w[d_idx] d_u[d_idx] += dt*d_au[d_idx] d_v[d_idx] += dt*d_av[d_idx] d_w[d_idx] += dt*d_aw[d_idx] d_V0[d_idx] = d_V[d_idx] def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_u0, d_v0, d_w0, d_x0, d_y0, d_z0, dt, d_au, d_av, d_aw): d_u[d_idx] += dt*d_au[d_idx] d_v[d_idx] += dt*d_av[d_idx] d_w[d_idx] += dt*d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + 0.5*dt * (d_u[d_idx] + d_u0[d_idx]) d_y[d_idx] = d_y0[d_idx] + 0.5*dt * (d_v[d_idx] + d_v0[d_idx]) d_z[d_idx] = d_z0[d_idx] + 0.5*dt * (d_w[d_idx] + d_w0[d_idx]) class MomentumEquationBodyForce(Equation): def __init__(self, dest, sources, gx=0.0, gy=0.0, gz=0.0): self.gx = gx self.gy = gy self.gz = gz super(MomentumEquationBodyForce, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def post_loop(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] += self.gx d_av[d_idx] += self.gy d_aw[d_idx] += self.gz class VelocityDivergence(Equation): def initialize(self, d_idx, d_rhs): d_rhs[d_idx] = 0.0 def loop(self, d_idx, s_idx, s_m, s_rho, d_rhs, dt, VIJ, DWIJ): Vj = s_m[s_idx] / s_rho[s_idx] vdotdwij = VIJ[0]*DWIJ[0] + VIJ[1]*DWIJ[1] + VIJ[2]*DWIJ[2] d_rhs[d_idx] += -Vj * vdotdwij / dt class VelocityDivergenceDFDI(Equation): def initialize(self, d_idx, d_rhs): d_rhs[d_idx] = 0.0 def loop(self, d_idx, s_idx, s_m, s_rho, d_rhs, dt, VIJ, DWIJ): 
Vj = s_m[s_idx] / s_rho[s_idx] vdotdwij = VIJ[0]*DWIJ[0] + VIJ[1]*DWIJ[1] + VIJ[2]*DWIJ[2] d_rhs[d_idx] += -2*Vj * vdotdwij / dt class DensityInvariance(Equation): def __init__(self, dest, sources, rho0): self.rho0 = rho0 super(DensityInvariance, self).__init__(dest, sources) def post_loop(self, d_idx, d_rho, d_rhs, dt): rho0 = self.rho0 d_rhs[d_idx] = (rho0 - d_rho[d_idx]) / (dt*dt*rho0) class DensityInvarianceDFDI(Equation): def post_loop(self, d_idx, d_V, d_V0, d_rhs, dt): V0 = d_V0[d_idx] d_rhs[d_idx] = 2*(V0 - d_V[d_idx]) / (dt*dt*V0) class PressureCoeffMatrix(Equation): def initialize(self, d_idx, d_ctr, d_diag, d_col_idx): # Make only the diagonals zero as the rest are not summed. d_diag[d_idx] = 0.0 d_ctr[d_idx] = 0 # Initialize col_idx to -1 so as to use it in cond while constructing # pressure coeff matrix. i = declare('int') for i in range(100): d_col_idx[d_idx*100 + i] = -1 def loop(self, d_idx, s_idx, s_m, d_rho, s_rho, d_gid, d_coeff, d_ctr, d_col_idx, d_row_idx, d_diag, XIJ, DWIJ, R2IJ, EPS): rhoij = (s_rho[s_idx] + d_rho[d_idx]) rhoij2_1 = 1.0/(rhoij*rhoij) xdotdwij = XIJ[0]*DWIJ[0] + XIJ[1]*DWIJ[1] + XIJ[2]*DWIJ[2] fac = 8.0 * s_m[s_idx] * rhoij2_1 * xdotdwij / (R2IJ + EPS) j, k = declare('int', 3) j = d_gid[s_idx] d_diag[d_idx] += fac k = int(d_ctr[d_idx]) d_coeff[d_idx*100 + k] = -fac d_col_idx[d_idx*100 + k] = j d_row_idx[d_idx*100 + k] = d_idx d_ctr[d_idx] += 1 class PPESolve(Equation): def py_initialize(self, dst, t, dt): import scipy.sparse as sp from scipy.sparse.linalg import bicgstab coeff = declare('object') cond = declare('object') n = declare('int') n = dst.np[0] # Mask all indices which are not used in the construction. cond = (dst.col_idx != -1) coeff = sp.csr_matrix( (dst.coeff[cond], (dst.row_idx[cond], dst.col_idx[cond])), shape=(n, n) ) # Add tiny random noise so the matrix is not singular. 
        cond = abs(dst.rhs) > 1e-9
        dst.diag[cond] -= numpy.random.random(n)[cond]
        coeff += sp.diags(dst.diag)
        # Pseudo-Neumann boundary conditions
        dst.rhs[cond] -= dst.rhs[cond].mean()
        dst.p[:], ec = bicgstab(coeff, dst.rhs, x0=dst.p)
        assert ec == 0, "Not converging!"


class MomentumEquationPressureGradient(Equation):
    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, s_idx, s_m, d_p, s_p, d_rho, s_rho, d_au, d_av,
             d_aw, DWIJ):
        Vj = s_m[s_idx] / s_rho[s_idx]
        pij = (d_p[d_idx] - s_p[s_idx])
        fac = Vj * pij / d_rho[d_idx]

        d_au[d_idx] += fac * DWIJ[0]
        d_av[d_idx] += fac * DWIJ[1]
        d_aw[d_idx] += fac * DWIJ[2]


class MomentumEquationPressureGradientSymmetric(Equation):
    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, s_idx, s_m, d_p, s_p, d_rho, s_rho, d_au, d_av,
             d_aw, DWIJ):
        rhoi2 = d_rho[d_idx]*d_rho[d_idx]
        rhoj2 = s_rho[s_idx]*s_rho[s_idx]
        pij = d_p[d_idx]/rhoi2 + s_p[s_idx]/rhoj2
        fac = -s_m[s_idx] * pij

        d_au[d_idx] += fac * DWIJ[0]
        d_av[d_idx] += fac * DWIJ[1]
        d_aw[d_idx] += fac * DWIJ[2]


class UpdatePosition(Equation):
    def post_loop(self, d_idx, d_au, d_av, d_aw, d_x, d_y, d_z, dt):
        d_x[d_idx] += d_au[d_idx] * dt*dt * 0.5
        d_y[d_idx] += d_av[d_idx] * dt*dt * 0.5
        d_z[d_idx] += d_aw[d_idx] * dt*dt * 0.5


class CheckDensityError(Equation):
    def __init__(self, dest, sources, rho0, tol=0.01):
        self.conv = 0
        self.rho0 = rho0
        self.tol = tol
        self.count = 0
        self.rho_err = 0
        super(CheckDensityError, self).__init__(dest, sources)

    def py_initialize(self, dst, t, dt):
        self.rho_err = numpy.abs(dst.rho - self.rho0).max()
        self.conv = 1 if self.rho_err < self.tol else -1
        self.count += 1

    def converged(self):
        return self.conv


class FreeSurfaceBoundaryCondition(Equation):
    def initialize(self, d_rho, d_rho0, d_rhs, d_diag, d_idx, d_coeff,
                   d_ctr, d_col_idx, d_row_idx):
        i = declare('int')
        if d_rho[d_idx]/d_rho0[d_idx] < 0.98:
            d_rhs[d_idx] = 0.0
            d_diag[d_idx] = 1.0
            d_ctr[d_idx] = 1
            for i in range(100):
                d_coeff[d_idx*100 + i] = 0.0
                d_col_idx[d_idx*100 + i] = -1
                d_row_idx[d_idx*100 + i] = d_idx


class MomentumEquationPressureGradientSymmetricMirror(Equation):
    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, s_idx, s_m, d_p, s_p, d_rho, s_rho, d_au, d_av,
             d_aw, s_rho0, DWIJ, XIJ, RIJ, HIJ, SPH_KERNEL):
        rhoi2 = d_rho[d_idx]*d_rho[d_idx]
        rhoj2 = s_rho[s_idx]*s_rho[s_idx]
        pij = d_p[d_idx]/rhoi2 + s_p[s_idx]/rhoj2
        fac = -s_m[s_idx] * pij
        xij = declare('matrix(3)')
        dwij = declare('matrix(3)')
        if s_rho[s_idx]/s_rho0[s_idx] < 0.98:
            rhoi2 = d_rho[d_idx]*d_rho[d_idx]
            rhoj2 = s_rho[s_idx]*s_rho[s_idx]
            # Shao and Lao mirror condition, Pj = -Pi.
            pij = d_p[d_idx]/rhoi2 - d_p[d_idx]/rhoj2
            fac = -s_m[s_idx] * pij
            xij[0] = XIJ[0]*2
            xij[1] = XIJ[1]*2
            xij[2] = XIJ[2]*2
            SPH_KERNEL.gradient(xij, 2*RIJ, HIJ, dwij)
            d_au[d_idx] += fac * dwij[0]
            d_av[d_idx] += fac * dwij[1]
            d_aw[d_idx] += fac * dwij[2]
        else:
            d_au[d_idx] += fac * DWIJ[0]
            d_av[d_idx] += fac * DWIJ[1]
            d_aw[d_idx] += fac * DWIJ[2]


class ISPHScheme(Scheme):
    def __init__(self, fluids, solids, dim, nu, rho0, c0, alpha, beta=0.0,
                 gx=0.0, gy=0.0, gz=0.0, tolerance=0.01, symmetric=False):
        self.fluids = fluids
        self.solver = None
        self.dim = dim
        self.nu = nu
        self.gx = gx
        self.gy = gy
        self.gz = gz
        self.c0 = c0
        self.alpha = alpha
        self.beta = beta
        self.tolerance = tolerance
        self.rho0 = rho0
        self.symmetric = symmetric

    def add_user_options(self, group):
        group.add_argument(
            '--alpha', action='store', type=float, dest='alpha',
            default=None,
            help='Artificial viscosity.'
        )
        add_bool_argument(
            group, 'symmetric', dest='symmetric', default=None,
            help='Use symmetric form of pressure gradient.'
        )

    def consume_user_options(self, options):
        _vars = ['alpha', 'symmetric']
        data = dict((var, self._smart_getattr(options, var))
                    for var in _vars)
        self.configure(**data)

    def configure_solver(self, kernel=None, integrator_cls=None,
                         extra_steppers=None, **kw):
        import pysph.base.kernels as kern
        if kernel is None:
            kernel = kern.QuinticSpline(dim=self.dim)

        steppers = {}
        if extra_steppers is not None:
            steppers.update(extra_steppers)

        step_cls = ISPHStep
        for fluid in self.fluids:
            if fluid not in steppers:
                steppers[fluid] = step_cls()

        if integrator_cls is not None:
            cls = integrator_cls
        else:
            cls = ISPHIntegrator
        integrator = cls(**steppers)

        from pysph.solver.solver import Solver
        self.solver = Solver(
            dim=self.dim, integrator=integrator, kernel=kernel, **kw
        )

    def _get_viscous_eqns(self):
        from pysph.sph.wc.transport_velocity import (
            MomentumEquationArtificialViscosity
        )
        from pysph.sph.wc.viscosity import LaminarViscosity

        all = self.fluids
        eq, stg = [], []
        for fluid in self.fluids:
            eq.append(
                LaminarViscosity(dest=fluid, sources=self.fluids, nu=self.nu)
            )
            eq.append(
                MomentumEquationArtificialViscosity(
                    dest=fluid, sources=self.fluids, c0=self.c0,
                    alpha=self.alpha
                )
            )
            eq.append(
                MomentumEquationBodyForce(
                    dest=fluid, sources=self.fluids, gx=self.gx,
                    gy=self.gy, gz=self.gz)
            )
        stg.append(Group(equations=eq))
        return stg

    def _get_ppe(self):
        all = self.fluids
        eq2, stg = [], []
        for fluid in self.fluids:
            eq2.append(VelocityDivergence(dest=fluid, sources=all))
            eq2.append(PressureCoeffMatrix(dest=fluid, sources=all))
        stg.append(Group(equations=eq2))

        eq22 = []
        for fluid in self.fluids:
            eq22.append(PPESolve(dest=fluid, sources=all))
        stg.append(Group(equations=eq22))
        return stg

    def get_equations(self):
        all = self.fluids
        all_eqns = []

        # Compute Viscous and Body forces
        stg1 = self._get_viscous_eqns()
        all_eqns.append(stg1)

        stg2 = self._get_ppe()

        # Compute acceleration due to pressure, initialize au/av/aw to 0.
        eq4 = []
        for fluid in self.fluids:
            if self.symmetric:
                eq4.append(
                    MomentumEquationPressureGradientSymmetric(
                        dest=fluid, sources=all
                    )
                )
            else:
                eq4.append(
                    MomentumEquationPressureGradient(dest=fluid, sources=all)
                )
        stg2.append(Group(equations=eq4))
        all_eqns.append(stg2)

        return MultiStageEquations(all_eqns)

    def setup_properties(self, particles, clean=True):
        particle_arrays = dict([(p.name, p) for p in particles])
        dummy = get_particle_array_isph(
            name='junk', gid=particle_arrays['fluid'].gid)
        props = []
        for x, arr in dummy.properties.items():
            tmp = dict(name=x, type=arr.get_c_type())
            if x in dummy.stride:
                tmp.update(stride=dummy.stride[x])
            props.append(tmp)
        constants = [dict(name=x, data=v)
                     for x, v in dummy.constants.items()]
        output_props = dummy.output_property_arrays
        for fluid in self.fluids:
            pa = particle_arrays[fluid]
            self._ensure_properties(pa, props, clean)
            pa.set_output_arrays(output_props)
            for const in constants:
                pa.add_constant(**const)
pysph-master/pysph/sph/isph/sisph.py
"""
Simple iterative Incompressible SPH

See https://arxiv.org/abs/1908.01762 for details
"""
import numpy
from numpy import sqrt

from compyle.api import declare
from pysph.sph.scheme import Scheme, add_bool_argument
from pysph.base.utils import get_particle_array
from pysph.sph.integrator import Integrator
from pysph.sph.integrator_step import IntegratorStep
from pysph.sph.equation import Equation, Group, MultiStageEquations


def get_particle_array_sisph(constants=None, **props):
    sisph_props = [
        'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'rho0', 'diag', 'odiag', 'pk',
        'rhs', 'pdiff', 'wg', 'vf', 'vg', 'ug', 'wij', 'wf', 'uf', 'V',
        'au', 'av', 'aw', 'dt_force', 'dt_cfl', 'vmag', 'auhat', 'avhat',
        'awhat', 'p0', 'uhat', 'vhat', 'what', 'uhat0', 'vhat0', 'what0',
        'pabs'
    ]
    pa = get_particle_array(
        additional_props=sisph_props, constants=constants, **props
    )
    pa.add_constant('iters', [0.0])
    pa.add_constant('pmax', [0.0])
    pa.add_output_arrays(['p', 'V', 'vmag', 'p0'])
    return pa


class SISPHIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.initialize()

        self.compute_accelerations(0)
        self.stage1()
        self.update_domain()
        self.do_post_stage(0.5*dt, 1)

        self.compute_accelerations(1, update_nnps=False)
        self.stage2()
        self.update_domain()
        self.do_post_stage(dt, 2)

    def initial_acceleration(self, t, dt):
        pass


class SISPHStep(IntegratorStep):
    def initialize(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_u, d_v,
                   d_w, d_u0, d_v0, d_w0, dt):
        d_x0[d_idx] = d_x[d_idx]
        d_y0[d_idx] = d_y[d_idx]
        d_z0[d_idx] = d_z[d_idx]

        d_u0[d_idx] = d_u[d_idx]
        d_v0[d_idx] = d_v[d_idx]
        d_w0[d_idx] = d_w[d_idx]

    def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, dt):
        d_u[d_idx] += dt*d_au[d_idx]
        d_v[d_idx] += dt*d_av[d_idx]
        d_w[d_idx] += dt*d_aw[d_idx]

    def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_u0, d_v0, d_w0,
               d_x0, d_y0, d_z0, d_au, d_av, d_aw, d_vmag, d_dt_cfl,
               d_dt_force, dt):
        d_u[d_idx] += dt*d_au[d_idx]
        d_v[d_idx] += dt*d_av[d_idx]
        d_w[d_idx] += dt*d_aw[d_idx]

        d_x[d_idx] = d_x0[d_idx] + 0.5*dt * (d_u[d_idx] + d_u0[d_idx])
        d_y[d_idx] = d_y0[d_idx] + 0.5*dt * (d_v[d_idx] + d_v0[d_idx])
        d_z[d_idx] = d_z0[d_idx] + 0.5*dt * (d_w[d_idx] + d_w0[d_idx])

        d_vmag[d_idx] = sqrt(d_u[d_idx]*d_u[d_idx] + d_v[d_idx]*d_v[d_idx] +
                             d_w[d_idx]*d_w[d_idx])
        d_dt_cfl[d_idx] = 2.0*d_vmag[d_idx]

        au = (d_u[d_idx] - d_u0[d_idx])/dt
        av = (d_v[d_idx] - d_v0[d_idx])/dt
        aw = (d_w[d_idx] - d_w0[d_idx])/dt
        d_dt_force[d_idx] = 4.0*(au*au + av*av + aw*aw)


class SISPHGTVFStep(IntegratorStep):
    def initialize(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_u, d_v,
                   d_w, d_u0, d_v0, d_w0, d_uhat, d_vhat, d_what, d_uhat0,
                   d_vhat0, d_what0):
        d_x0[d_idx] = d_x[d_idx]
        d_y0[d_idx] = d_y[d_idx]
        d_z0[d_idx] = d_z[d_idx]

        d_u0[d_idx] = d_u[d_idx]
        d_v0[d_idx] = d_v[d_idx]
        d_w0[d_idx] = d_w[d_idx]

        d_uhat0[d_idx] = d_uhat[d_idx]
        d_vhat0[d_idx] = d_vhat[d_idx]
        d_what0[d_idx] = d_what[d_idx]

    def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, dt):
        d_u[d_idx] += dt*d_au[d_idx]
        d_v[d_idx] += dt*d_av[d_idx]
        d_w[d_idx] += dt*d_aw[d_idx]

    def stage2(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_x0, d_y0, d_z0,
               d_au, d_av, d_aw, d_uhat, d_vhat, d_what, d_auhat, d_avhat,
               d_awhat, d_uhat0, d_vhat0, d_what0, d_vmag, d_dt_cfl, dt,
               d_u0, d_v0, d_w0, d_dt_force):
        d_u[d_idx] += dt*d_au[d_idx]
        d_v[d_idx] += dt*d_av[d_idx]
        d_w[d_idx] += dt*d_aw[d_idx]

        d_vmag[d_idx] = sqrt(d_u[d_idx]*d_u[d_idx] + d_v[d_idx]*d_v[d_idx] +
                             d_w[d_idx]*d_w[d_idx])
        d_dt_cfl[d_idx] = 2.0*d_vmag[d_idx]

        d_uhat[d_idx] = d_u[d_idx] + dt*d_auhat[d_idx]
        d_vhat[d_idx] = d_v[d_idx] + dt*d_avhat[d_idx]
        d_what[d_idx] = d_w[d_idx] + dt*d_awhat[d_idx]

        d_x[d_idx] = d_x0[d_idx] + 0.5*dt * (d_uhat[d_idx] + d_uhat0[d_idx])
        d_y[d_idx] = d_y0[d_idx] + 0.5*dt * (d_vhat[d_idx] + d_vhat0[d_idx])
        d_z[d_idx] = d_z0[d_idx] + 0.5*dt * (d_what[d_idx] + d_what0[d_idx])

        au = (d_u[d_idx] - d_u0[d_idx])/dt
        av = (d_v[d_idx] - d_v0[d_idx])/dt
        aw = (d_w[d_idx] - d_w0[d_idx])/dt
        d_dt_force[d_idx] = 4*(au*au + av*av + aw*aw)


class MomentumEquationBodyForce(Equation):
    def __init__(self, dest, sources, gx=0.0, gy=0.0, gz=0.0):
        self.gx = gx
        self.gy = gy
        self.gz = gz
        super(MomentumEquationBodyForce, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def post_loop(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] += self.gx
        d_av[d_idx] += self.gy
        d_aw[d_idx] += self.gz


class VelocityDivergence(Equation):
    def initialize(self, d_idx, d_rhs, d_pk, d_p):
        d_rhs[d_idx] = 0.0
        d_pk[d_idx] = d_p[d_idx]

    def loop(self, d_idx, s_idx, s_m, s_rho, d_rhs, dt, VIJ, DWIJ):
        Vj = s_m[s_idx] / s_rho[s_idx]
        vdotdwij = VIJ[0]*DWIJ[0] + VIJ[1]*DWIJ[1] + VIJ[2]*DWIJ[2]
        d_rhs[d_idx] += -Vj * vdotdwij / dt


class VelocityDivergenceSolid(Equation):
    def loop(self, d_idx, s_idx, s_m, s_rho, d_rhs, dt, d_u, d_v, d_w,
             s_ug, s_vg, s_wg, DWIJ):
        Vj = s_m[s_idx] / s_rho[s_idx]
        uij = d_u[d_idx] - s_ug[s_idx]
        vij = d_v[d_idx] - s_vg[s_idx]
        wij = d_w[d_idx] - s_wg[s_idx]
        vdotdwij = uij*DWIJ[0] + vij*DWIJ[1] + wij*DWIJ[2]
        d_rhs[d_idx] += -Vj * vdotdwij / dt


class DensityInvariance(Equation):
    def __init__(self, dest, sources, rho0):
        self.rho0 = rho0
        super(DensityInvariance, self).__init__(dest, sources)

    def post_loop(self, d_idx, d_rho, d_rhs, dt):
        rho0 = self.rho0
        d_rhs[d_idx] = (rho0 - d_rho[d_idx]) / (dt*dt*rho0)


class PressureCoeffMatrixIterative(Equation):
    def initialize(self, d_idx, d_diag, d_odiag):
        d_diag[d_idx] = 0.0
        d_odiag[d_idx] = 0.0

    def loop(self, d_idx, s_idx, s_m, d_rho, s_rho, d_diag, d_odiag, s_pk,
             XIJ, DWIJ, R2IJ, EPS):
        rhoij = (s_rho[s_idx] + d_rho[d_idx])
        rhoij2_1 = 1.0/(d_rho[d_idx]*rhoij)

        xdotdwij = XIJ[0]*DWIJ[0] + XIJ[1]*DWIJ[1] + XIJ[2]*DWIJ[2]

        fac = 4.0 * s_m[s_idx] * rhoij2_1 * xdotdwij / (R2IJ + EPS)
        d_diag[d_idx] += fac
        d_odiag[d_idx] += -fac * s_pk[s_idx]


class PPESolve(Equation):
    def __init__(self, dest, sources, rho0, rho_cutoff=0.8, omega=0.5,
                 tolerance=0.05, max_iterations=1000):
        self.rho0 = rho0
        self.rho_cutoff = rho_cutoff
        self.conv = 0.0
        self.omega = omega
        self.tolerance = tolerance
        self.count = 0.0
        self.max_iterations = max_iterations
        super(PPESolve, self).__init__(dest, sources)

    def post_loop(self, d_idx, d_p, d_pk, d_rhs, d_odiag, d_diag, d_pdiff,
                  d_rho, d_m, d_pabs, d_pmax):
        omega = self.omega
        rho = d_rho[d_idx] / self.rho0
        diag = d_diag[d_idx]
        if abs(diag) < 1e-12 or rho < self.rho_cutoff:
            p = 0.0
        else:
            pnew = (d_rhs[d_idx] - d_odiag[d_idx]) / diag
            p = omega * pnew + (1.0 - omega) * d_pk[d_idx]

        d_pdiff[d_idx] = abs(p - d_pk[d_idx])
        d_pabs[d_idx] = abs(p)
        d_p[d_idx] = p
        d_pk[d_idx] = p
        d_pmax[0] = max(abs(d_pmax[0]), d_p[d_idx])

    def reduce(self, dst, t, dt):
        self.count += 1
        dst.iters[0] = self.count
        if dst.gpu is not None:
            from compyle.array import sum
            n = dst.gpu.p.length
            pdiff = sum(dst.gpu.pdiff, dst.backend) / n
            pmean = sum(dst.gpu.pabs, dst.backend) / n
            conv = pdiff/pmean
            if pmean < 1.0:
                conv = pdiff
            self.conv = 1 if conv < self.tolerance else -1
        else:
            pdiff = dst.pdiff.mean()
            pmean = numpy.abs(dst.p).mean()
            conv = pdiff/pmean
            if pmean < 1.0:
                conv = pdiff
            self.conv = 1 if conv < self.tolerance else -1

    def converged(self):
        if self.conv == 1 and self.count < self.max_iterations:
            self.count = 0
        if self.count > self.max_iterations:
            self.count = 0
            print("Max iterations exceeded")
        return self.conv


class UpdateGhostPressure(Equation):
    def initialize(self, d_idx, d_tag, d_gid, d_p, d_pk):
        idx = declare('int')
        if d_tag[d_idx] == 2:
            idx = d_gid[d_idx]
            d_pk[d_idx] = d_pk[idx]
            d_p[d_idx] = d_p[idx]


class MomentumEquationPressureGradient(Equation):
    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, s_idx, s_m, d_p, s_p, d_rho, s_rho, d_au, d_av,
             d_aw, DWIJ):
        Vj = s_m[s_idx] / s_rho[s_idx]
        pji = (s_p[s_idx] - d_p[d_idx])
        fac = -Vj * pji / d_rho[d_idx]

        d_au[d_idx] += fac * DWIJ[0]
        d_av[d_idx] += fac * DWIJ[1]
        d_aw[d_idx] += fac * DWIJ[2]


class MomentumEquationPressureGradientSymmetric(Equation):
    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, s_idx, s_m, d_p, s_p, d_rho, s_rho, d_au, d_av,
             d_aw, DWIJ):
        rhoi2 = d_rho[d_idx]*d_rho[d_idx]
        rhoj2 = s_rho[s_idx]*s_rho[s_idx]
        pij = d_p[d_idx]/rhoi2 + s_p[s_idx]/rhoj2
        fac = -s_m[s_idx] * pij

        d_au[d_idx] += fac * DWIJ[0]
        d_av[d_idx] += fac * DWIJ[1]
        d_aw[d_idx] += fac * DWIJ[2]


class EvaluateNumberDensity(Equation):
    def initialize(self, d_idx, d_wij):
        d_wij[d_idx] = 0.0

    def loop(self, d_idx, d_wij, WIJ):
        d_wij[d_idx] += WIJ


class VolumeSummationBand(Equation):
    def initialize(self, d_idx, d_rhoband):
        d_rhoband[d_idx] = 0.0

    def loop(self, d_idx, d_rhoband, d_m, WIJ):
        d_rhoband[d_idx] += WIJ * d_m[d_idx]


class SetPressureSolid(Equation):
    def __init__(self, dest, sources, gx=0.0, gy=0.0, gz=0.0,
                 hg_correction=True):
        self.gx = gx
        self.gy = gy
        self.gz = gz
        self.hg_correction = hg_correction
        super(SetPressureSolid, self).__init__(dest, sources)

    def initialize(self, d_idx, d_p):
        d_p[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_p, s_p, s_rho, d_au, d_av, d_aw, WIJ,
             XIJ):
        # numerator of Eq. (27) ax, ay and az are the prescribed wall
        # accelerations which must be defined for the wall boundary
        # particle
        gdotxij = (self.gx - d_au[d_idx])*XIJ[0] + \
            (self.gy - d_av[d_idx])*XIJ[1] + \
            (self.gz - d_aw[d_idx])*XIJ[2]

        d_p[d_idx] += s_p[s_idx]*WIJ + s_rho[s_idx]*gdotxij*WIJ

    def post_loop(self, d_idx, d_wij, d_p, d_rho, d_pk):
        # extrapolated pressure at the ghost particle
        if d_wij[d_idx] > 1e-14:
            d_p[d_idx] /= d_wij[d_idx]

        if self.hg_correction:
            d_p[d_idx] = max(0.0, d_p[d_idx])
        d_pk[d_idx] = d_p[d_idx]


class GTVFAcceleration(Equation):
    def __init__(self, dest, sources, pref, internal_flow=False,
                 use_pref=False):
        self.pref = pref
        assert self.pref is not None, "pref should not be None"
        self.internal = internal_flow
        self.hij_fac = 1 if self.internal else 0.5
        self.use_pref = use_pref
        super(GTVFAcceleration, self).__init__(dest, sources)

    def initialize(self, d_idx, d_auhat, d_avhat, d_awhat, d_p0, d_p,
                   d_pmax):
        d_auhat[d_idx] = 0.0
        d_avhat[d_idx] = 0.0
        d_awhat[d_idx] = 0.0

        if self.internal:
            if self.use_pref:
                d_p0[d_idx] = self.pref
            else:
                pref = 2*d_pmax[0]
                d_p0[d_idx] = pref
        else:
            d_p0[d_idx] = min(10*abs(d_p[d_idx]), self.pref)

    def loop(self, d_p0, s_m, s_idx, d_rho, d_idx, d_auhat, d_avhat,
             d_awhat, XIJ, RIJ, SPH_KERNEL, HIJ):
        rhoi2 = d_rho[d_idx]*d_rho[d_idx]
        tmp = -d_p0[d_idx] * s_m[s_idx]/rhoi2

        dwijhat = declare('matrix(3)')
        SPH_KERNEL.gradient(XIJ, RIJ, self.hij_fac*HIJ, dwijhat)

        d_auhat[d_idx] += tmp * dwijhat[0]
        d_avhat[d_idx] += tmp * dwijhat[1]
        d_awhat[d_idx] += tmp * dwijhat[2]


class SmoothedVelocity(Equation):
    def initialize(self, d_ax, d_ay, d_az, d_idx):
        d_ax[d_idx] = 0.0
        d_ay[d_idx] = 0.0
        d_az[d_idx] = 0.0

    def loop(self, d_ax, d_ay, d_az, d_idx, s_uhat, s_vhat, s_what, s_idx,
             s_m, s_rho, WIJ):
        fac = s_m[s_idx]*WIJ / s_rho[s_idx]
        d_ax[d_idx] += fac*s_uhat[s_idx]
        d_ay[d_idx] += fac*s_vhat[s_idx]
        d_az[d_idx] += fac*s_what[s_idx]


class SolidWallNoSlipBC(Equation):
    def __init__(self, dest, sources, nu):
        self.nu = nu
        super(SolidWallNoSlipBC, self).__init__(dest, sources)

    def loop(self, d_idx, s_idx, d_m, d_rho, s_rho, s_m, d_u, d_v, d_w,
             d_au, d_av, d_aw, s_ug, s_vg, s_wg, DWIJ, R2IJ, EPS, XIJ):
        mj = s_m[s_idx]
        rhoi = d_rho[d_idx]
        rhoj = s_rho[s_idx]
        rhoij1 = 1.0/(rhoi + rhoj)

        Fij = XIJ[0]*DWIJ[0] + XIJ[1]*DWIJ[1] + XIJ[2]*DWIJ[2]
        tmp = mj * 4 * self.nu * rhoij1 * Fij/(R2IJ + EPS)

        d_au[d_idx] += tmp * (d_u[d_idx] - s_ug[s_idx])
        d_av[d_idx] += tmp * (d_v[d_idx] - s_vg[s_idx])
        d_aw[d_idx] += tmp * (d_w[d_idx] - s_wg[s_idx])


class SummationDensity(Equation):
    def initialize(self, d_idx, d_rho):
        d_rho[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_rho, s_m, WIJ):
        d_rho[d_idx] += s_m[s_idx]*WIJ


class SISPHScheme(Scheme):
    def __init__(self, fluids, solids, dim, nu, rho0, c0, alpha=0.0,
                 beta=0.0, gx=0.0, gy=0.0, gz=0.0, tolerance=0.05,
                 omega=0.5, hg_correction=False, has_ghosts=False,
                 pref=None, gtvf=False, symmetric=False, rho_cutoff=0.8,
                 max_iterations=1000, internal_flow=False, use_pref=False):
        self.fluids = fluids
        self.solids = solids
        self.solver = None
        self.dim = dim
        self.nu = nu
        self.gx = gx
        self.gy = gy
        self.gz = gz
        self.c0 = c0
        self.alpha = alpha
        self.beta = beta
        self.rho0 = rho0
        self.rho_cutoff = rho_cutoff
        self.tolerance = tolerance
        self.omega = omega
        self.hg_correction = hg_correction
        self.has_ghosts = has_ghosts
        self.pref = pref
        self.gtvf = gtvf
        self.symmetric = symmetric
        self.max_iterations = max_iterations
        self.internal_flow = internal_flow
        self.use_pref = use_pref

    def add_user_options(self, group):
        group.add_argument(
            "--tol", action="store", dest="tolerance", type=float,
            help="Tolerance for convergence."
        )
        group.add_argument(
            "--omega", action="store", dest="omega", type=float,
            help="Omega for convergence."
        )
        group.add_argument(
            '--alpha', action='store', type=float, dest='alpha',
            default=None,
            help='Artificial viscosity.'
        )
        add_bool_argument(
            group, 'gtvf', dest='gtvf', default=None,
            help='Use GTVF.'
        )
        add_bool_argument(
            group, 'symmetric', dest='symmetric', default=None,
            help='Use symmetric form of pressure gradient.'
        )
        add_bool_argument(
            group, 'internal', dest='internal_flow', default=None,
            help='If the simulation is internal or external.'
        )

    def consume_user_options(self, options):
        _vars = ['tolerance', 'omega', 'alpha', 'gtvf', 'symmetric',
                 'internal_flow']
        data = dict((var, self._smart_getattr(options, var))
                    for var in _vars)
        self.configure(**data)

    def configure_solver(self, kernel=None, integrator_cls=None,
                         extra_steppers=None, **kw):
        import pysph.base.kernels as kern
        if kernel is None:
            kernel = kern.QuinticSpline(dim=self.dim)

        steppers = {}
        if extra_steppers is not None:
            steppers.update(extra_steppers)

        step_cls = SISPHStep
        if self.gtvf:
            step_cls = SISPHGTVFStep
        for fluid in self.fluids:
            if fluid not in steppers:
                steppers[fluid] = step_cls()

        if integrator_cls is not None:
            cls = integrator_cls
        else:
            cls = SISPHIntegrator
        integrator = cls(**steppers)

        from pysph.solver.solver import Solver
        self.solver = Solver(
            dim=self.dim, integrator=integrator, kernel=kernel, **kw
        )

    def _get_velocity_bc(self):
        from pysph.sph.isph.wall_normal import SetWallVelocityNew
        eqs = [SetWallVelocityNew(dest=s, sources=self.fluids)
               for s in self.solids]
        return Group(equations=eqs)

    def _get_pressure_bc(self):
        eqs = []
        all_solids = self.solids
        for solid in all_solids:
            eqs.append(
                EvaluateNumberDensity(dest=solid, sources=self.fluids)
            )
            eqs.append(
                SetPressureSolid(
                    dest=solid, sources=self.fluids, gx=self.gx,
                    gy=self.gy, gz=self.gz,
                    hg_correction=self.hg_correction
                )
            )
        return Group(equations=eqs) if eqs else None

    def _get_normals(self, pa):
        from pysph.tools.sph_evaluator import SPHEvaluator
        from pysph.sph.isph.wall_normal import ComputeNormals, SmoothNormals

        pa.add_property('normal', stride=3)
        pa.add_property('normal_tmp', stride=3)

        name = pa.name

        seval = SPHEvaluator(
            arrays=[pa], equations=[
                Group(equations=[
                    ComputeNormals(dest=name, sources=[name])
                ]),
                Group(equations=[
                    SmoothNormals(dest=name, sources=[name])
                ]),
            ],
            dim=self.dim
        )
        seval.evaluate()

    def _get_viscous_eqns(self):
        from pysph.sph.wc.transport_velocity import (
            MomentumEquationArtificialViscosity)
        from pysph.sph.wc.viscosity import LaminarViscosity
        from pysph.sph.wc.gtvf import MomentumEquationArtificialStress

        all = self.fluids + self.solids
        eq, stg = [], []
        for fluid in self.fluids:
            eq.append(SummationDensity(dest=fluid, sources=all))
        stg.append(Group(equations=eq, real=False))

        eq = []
        for fluid in self.fluids:
            if self.nu > 0.0:
                eq.append(
                    LaminarViscosity(fluid, sources=self.fluids, nu=self.nu)
                )
            if self.alpha > 0.0:
                eq.append(
                    MomentumEquationArtificialViscosity(
                        dest=fluid, sources=self.fluids, c0=self.c0,
                        alpha=self.alpha
                    )
                )
            eq.append(
                MomentumEquationBodyForce(
                    fluid, sources=None, gx=self.gx, gy=self.gy, gz=self.gz)
            )
            if self.gtvf:
                eq.append(
                    MomentumEquationArtificialStress(
                        dest=fluid, sources=self.fluids, dim=self.dim)
                )
            if self.solids and self.nu > 0.0:
                eq.append(
                    SolidWallNoSlipBC(
                        dest=fluid, sources=self.solids, nu=self.nu
                    )
                )
        stg.append(Group(equations=eq))
        return stg

    def _get_ppe(self):
        from pysph.sph.wc.transport_velocity import VolumeSummation
        all = self.fluids + self.solids
        all_solids = self.solids
        eq, stg = [], []
        for fluid in self.fluids:
            eq.append(SummationDensity(dest=fluid, sources=all))
        stg.append(Group(equations=eq, real=False))

        eq2 = []
        for fluid in self.fluids:
            eq2.append(VolumeSummation(dest=fluid, sources=all))
            eq2.append(VelocityDivergence(dest=fluid, sources=self.fluids))
            if self.solids:
                eq2.append(
                    VelocityDivergenceSolid(fluid, sources=self.solids)
                )
        stg.append(Group(equations=eq2))

        solver_eqns = []
        if self.has_ghosts:
            ghost_eqns = Group(
                equations=[UpdateGhostPressure(dest=fluid, sources=None)
                           for fluid in self.fluids],
                real=False
            )
            solver_eqns = [ghost_eqns]

        if all_solids:
            g3 = self._get_pressure_bc()
            solver_eqns.append(g3)

        eq3 = []
        for fluid in self.fluids:
            if not fluid == 'outlet':
                eq3.append(
                    PressureCoeffMatrixIterative(dest=fluid, sources=all)
                )
                eq3.append(
                    PPESolve(
                        dest=fluid, sources=all, rho0=self.rho0,
                        rho_cutoff=self.rho_cutoff,
                        tolerance=self.tolerance, omega=self.omega,
                        max_iterations=self.max_iterations
                    )
                )
        eq3 = Group(equations=eq3)
        solver_eqns.append(eq3)

        stg.append(
            Group(
                equations=solver_eqns, iterate=True,
                max_iterations=self.max_iterations, min_iterations=2
            )
        )

        if self.has_ghosts:
            ghost_eqns = Group(
                equations=[UpdateGhostPressure(dest=fluid, sources=None)
                           for fluid in self.fluids],
                real=False
            )
            stg.append(ghost_eqns)
        return stg

    def get_equations(self):
        all = self.fluids + self.solids
        all_solids = self.solids

        stg1 = []
        if all_solids:
            g0 = self._get_velocity_bc()
            stg1.append(g0)
        stg1.extend(self._get_viscous_eqns())

        stg2 = []
        if all_solids:
            g0 = self._get_velocity_bc()
            stg2.append(g0)
        stg2.extend(self._get_ppe())

        if all_solids:
            g3 = self._get_pressure_bc()
            stg2.append(g3)

        if all_solids:
            g0 = self._get_velocity_bc()
            stg2.append(g0)

        eq4 = []
        for fluid in self.fluids:
            if self.symmetric:
                eq4.append(
                    MomentumEquationPressureGradientSymmetric(fluid, all)
                )
            else:
                eq4.append(
                    MomentumEquationPressureGradient(fluid, sources=all)
                )
            if self.gtvf:
                eq4.append(
                    GTVFAcceleration(dest=fluid, sources=all,
                                     pref=self.pref,
                                     internal_flow=self.internal_flow,
                                     use_pref=self.use_pref)
                )
        stg2.append(Group(equations=eq4))

        return MultiStageEquations([stg1, stg2])

    def setup_properties(self, particles, clean=True):
        particle_arrays = dict([(p.name, p) for p in particles])
        dummy = get_particle_array_sisph(
            name='junk', gid=particle_arrays['fluid'].gid
        )
        props = list(dummy.properties.keys())
        props += [dict(name=x, stride=v) for x, v in dummy.stride.items()]
        constants = [dict(name=x, data=v)
                     for x, v in dummy.constants.items()]
        output_props = dummy.output_property_arrays
        for fluid in self.fluids:
            pa = particle_arrays[fluid]
            self._ensure_properties(pa, props, clean)
            pa.set_output_arrays(output_props)
            for const in constants:
                pa.add_constant(**const)

        solid_props = ['wij', 'ug', 'vg', 'wg', 'uf', 'vf', 'wf', 'pk', 'V']
        all_solids = self.solids
        for solid in all_solids:
            pa = particle_arrays[solid]
            for prop in solid_props:
                pa.add_property(prop)
            self._get_normals(pa)
            pa.add_output_arrays(['p', 'ug', 'vg', 'wg', 'normal'])
pysph-master/pysph/sph/isph/wall_normal.py
from math import sqrt

from compyle.api import declare
from pysph.sph.equation import Equation


class ComputeNormals(Equation):
    r"""Compute normals using a simple approach

    .. math::

       -\frac{m_j}{\rho_j} DW_{ij}

    First compute the normals, then average them and finally normalize
    them.
    """
    def initialize(self, d_idx, d_normal_tmp, d_normal):
        idx = declare('int')
        idx = 3*d_idx
        d_normal_tmp[idx] = 0.0
        d_normal_tmp[idx + 1] = 0.0
        d_normal_tmp[idx + 2] = 0.0
        d_normal[idx] = 0.0
        d_normal[idx + 1] = 0.0
        d_normal[idx + 2] = 0.0

    def loop(self, d_idx, d_normal_tmp, s_idx, s_m, s_rho, DWIJ):
        idx = declare('int')
        idx = 3*d_idx
        fac = -s_m[s_idx]/s_rho[s_idx]
        d_normal_tmp[idx] += fac*DWIJ[0]
        d_normal_tmp[idx + 1] += fac*DWIJ[1]
        d_normal_tmp[idx + 2] += fac*DWIJ[2]

    def post_loop(self, d_idx, d_normal_tmp, d_h):
        idx = declare('int')
        idx = 3*d_idx
        mag = sqrt(d_normal_tmp[idx]**2 + d_normal_tmp[idx + 1]**2 +
                   d_normal_tmp[idx + 2]**2)
        if mag > 0.25/d_h[d_idx]:
            d_normal_tmp[idx] /= mag
            d_normal_tmp[idx + 1] /= mag
            d_normal_tmp[idx + 2] /= mag
        else:
            d_normal_tmp[idx] = 0.0
            d_normal_tmp[idx + 1] = 0.0
            d_normal_tmp[idx + 2] = 0.0


class SmoothNormals(Equation):
    def loop(self, d_idx, d_normal, s_normal_tmp, s_idx, s_m, s_rho, WIJ):
        idx = declare('int')
        idx = 3*d_idx
        fac = s_m[s_idx]/s_rho[s_idx]*WIJ
        d_normal[idx] += fac*s_normal_tmp[3*s_idx]
        d_normal[idx + 1] += fac*s_normal_tmp[3*s_idx + 1]
        d_normal[idx + 2] += fac*s_normal_tmp[3*s_idx + 2]

    def post_loop(self, d_idx, d_normal, d_h):
        idx = declare('int')
        idx = 3*d_idx
        mag = sqrt(d_normal[idx]**2 + d_normal[idx + 1]**2 +
                   d_normal[idx + 2]**2)
        if mag > 1e-3:
            d_normal[idx] /= mag
            d_normal[idx + 1] /= mag
            d_normal[idx + 2] /= mag
        else:
            d_normal[idx] = 0.0
            d_normal[idx + 1] = 0.0
            d_normal[idx + 2] = 0.0


class SetWallVelocityNew(Equation):
    r"""Modified SetWall velocity which sets a suitable normal velocity.

    This requires that the destination array has a 3-strided "normal"
    property.
    """
    def initialize(self, d_idx, d_uf, d_vf, d_wf, d_wij):
        d_uf[d_idx] = 0.0
        d_vf[d_idx] = 0.0
        d_wf[d_idx] = 0.0
        d_wij[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_uf, d_vf, d_wf, s_u, s_v, s_w, d_wij,
             XIJ, RIJ, HIJ, SPH_KERNEL):
        wij = SPH_KERNEL.kernel(XIJ, RIJ, 0.5*HIJ)
        d_wij[d_idx] += wij
        d_uf[d_idx] += s_u[s_idx] * wij
        d_vf[d_idx] += s_v[s_idx] * wij
        d_wf[d_idx] += s_w[s_idx] * wij

    def post_loop(self, d_uf, d_vf, d_wf, d_wij, d_idx, d_ug, d_vg, d_wg,
                  d_u, d_v, d_w, d_normal):
        idx = declare('int')
        idx = 3*d_idx
        # calculation is done only for the relevant boundary particles.
        # d_wij (and d_uf) is 0 for particles sufficiently away from the
        # solid-fluid interface
        if d_wij[d_idx] > 1e-12:
            d_uf[d_idx] /= d_wij[d_idx]
            d_vf[d_idx] /= d_wij[d_idx]
            d_wf[d_idx] /= d_wij[d_idx]

        # Dummy velocities at the ghost points using Eq. (23),
        # d_u, d_v, d_w are the prescribed wall velocities.
        d_ug[d_idx] = 2*d_u[d_idx] - d_uf[d_idx]
        d_vg[d_idx] = 2*d_v[d_idx] - d_vf[d_idx]
        d_wg[d_idx] = 2*d_w[d_idx] - d_wf[d_idx]

        vn = (d_ug[d_idx]*d_normal[idx] + d_vg[d_idx]*d_normal[idx+1] +
              d_wg[d_idx]*d_normal[idx+2])
        if vn < 0:
            d_ug[d_idx] -= vn*d_normal[idx]
            d_vg[d_idx] -= vn*d_normal[idx+1]
            d_wg[d_idx] -= vn*d_normal[idx+2]
pysph-master/pysph/sph/misc/
pysph-master/pysph/sph/misc/__init__.py
pysph-master/pysph/sph/misc/advection.py
"""
Functions for advection
#######################
"""

from pysph.sph.equation import Equation
from numpy import cos
from numpy import pi as M_PI


class Advect(Equation):
    def loop(self, d_idx, d_ax, d_ay, d_u, d_v):
        d_ax[d_idx] = d_u[d_idx]
        d_ay[d_idx] = d_v[d_idx]


class MixingVelocityUpdate(Equation):
    def __init__(self, dest, sources, T):
        self.T = T
        super(MixingVelocityUpdate, self).__init__(dest, sources)

    def loop(self, d_idx, d_u, d_v, d_u0, d_v0, t=0.1):
        d_u[d_idx] = cos(M_PI*t/self.T) * d_u0[d_idx]
        d_v[d_idx] = -cos(M_PI*t/self.T) * d_v0[d_idx]
pysph-master/pysph/sph/rigid_body.py
# -*- coding: utf-8 -*-
"""Rigid body related equations.
"""
from pysph.base.reduce_array import parallel_reduce_array
from pysph.sph.equation import Equation
from pysph.sph.integrator_step import IntegratorStep
import numpy as np
import numpy
from math import sqrt


def skew(vec):
    import sympy as S
    x, y, z = vec[0], vec[1], vec[2]
    return S.Matrix([[0, -z, y], [z, 0, -x], [-y, x, 0]])


def get_alpha_dot():
    r"""Use sympy to perform most of the math and use the resulting
    formulae to calculate:

        inv(I) (\tau - w x (I w))
    """
    import sympy as S
    ixx, iyy, izz, ixy, ixz, iyz = S.symbols(
        "ixx, iyy, izz, ixy, ixz, iyz")
    tx, ty, tz = S.symbols("tx, ty, tz")
    wx, wy, wz = S.symbols('wx, wy, wz')
    tau = S.Matrix([tx, ty, tz])
    I = S.Matrix([[ixx, ixy, ixz], [ixy, iyy, iyz], [ixz, iyz, izz]])
    w = S.Matrix([wx, wy, wz])
    Iinv = I.inv()
    Iinv.simplify()
    # inv(I) (\tau - w x (Iw))
    res = Iinv*(tau - w.cross(I*w))
    res.simplify()
    # Now do some awesome sympy magic.
    syms, result = S.cse(res, symbols=S.numbered_symbols('tmp'))
    for lhs, rhs in syms:
        print("%s = %s" % (lhs, rhs))
    for i in range(3):
        print("omega_dot[%d] =" % i, result[0][i])


def get_torque():
    r"""Use sympy to perform some simple math.

        R x F
        C_m x F
        w x r
    """
    import sympy as S
    x, y, z, fx, fy, fz = S.symbols("x, y, z, fx, fy, fz")
    R = S.Matrix([x, y, z])
    F = S.Matrix([fx, fy, fz])
    print("Torque:", R.cross(F))
    cx, cy, cz = S.symbols('cx, cy, cz')
    d = S.Matrix([cx, cy, cz])
    print("c_m x f = ", d.cross(F))
    wx, wy, wz = S.symbols('wx, wy, wz')
    rx, ry, rz = S.symbols('rx, ry, rz')
    w = S.Matrix([wx, wy, wz])
    r = S.Matrix([rx, ry, rz])
    print("w x r = %s" % w.cross(r))


# This is defined to silence editor warnings for the use of declare.
def declare(*args):
    pass


class RigidBodyMoments(Equation):
    def reduce(self, dst, t, dt):
        # FIXME: this will be slow in opencl
        nbody = declare('int')
        i = declare('int')
        base_mi = declare('int')
        base = declare('int')
        nbody = dst.num_body[0]
        if dst.gpu:
            dst.gpu.pull('omega', 'x', 'y', 'z', 'fx', 'fy', 'fz')

        d_mi = declare('object')
        m = declare('object')
        x = declare('object')
        y = declare('object')
        z = declare('object')
        fx = declare('object')
        fy = declare('object')
        fz = declare('object')
        d_mi = dst.mi
        cond = declare('object')
        for i in range(nbody):
            cond = dst.body_id == i
            base = i*16
            m = dst.m[cond]
            x = dst.x[cond]
            y = dst.y[cond]
            z = dst.z[cond]
            # Find the total_mass, center of mass and second moments.
            d_mi[base + 0] = numpy.sum(m)
            d_mi[base + 1] = numpy.sum(m*x)
            d_mi[base + 2] = numpy.sum(m*y)
            d_mi[base + 3] = numpy.sum(m*z)
            # Only do the lower triangle of values moments of inertia.
            d_mi[base + 4] = numpy.sum(m*(y*y + z*z))
            d_mi[base + 5] = numpy.sum(m*(x*x + z*z))
            d_mi[base + 6] = numpy.sum(m*(x*x + y*y))
            d_mi[base + 7] = -numpy.sum(m*x*y)
            d_mi[base + 8] = -numpy.sum(m*x*z)
            d_mi[base + 9] = -numpy.sum(m*y*z)

            # the total force and torque
            fx = dst.fx[cond]
            fy = dst.fy[cond]
            fz = dst.fz[cond]
            d_mi[base + 10] = numpy.sum(fx)
            d_mi[base + 11] = numpy.sum(fy)
            d_mi[base + 12] = numpy.sum(fz)

            # Calculate the torque and reduce it.
            d_mi[base + 13] = numpy.sum(y*fz - z*fy)
            d_mi[base + 14] = numpy.sum(z*fx - x*fz)
            d_mi[base + 15] = numpy.sum(x*fy - y*fx)

        # Reduce the temporary mi values in parallel across processors.
        d_mi[:] = parallel_reduce_array(dst.mi)

        # Set the reduced values.
        for i in range(nbody):
            base_mi = i*16
            base = i*3
            m = d_mi[base_mi + 0]
            dst.total_mass[i] = m
            cx = d_mi[base_mi + 1]/m
            cy = d_mi[base_mi + 2]/m
            cz = d_mi[base_mi + 3]/m
            dst.cm[base + 0] = cx
            dst.cm[base + 1] = cy
            dst.cm[base + 2] = cz

            # The actual moment of inertia about center of mass from
            # parallel axes theorem.
            ixx = d_mi[base_mi + 4] - (cy*cy + cz*cz)*m
            iyy = d_mi[base_mi + 5] - (cx*cx + cz*cz)*m
            izz = d_mi[base_mi + 6] - (cx*cx + cy*cy)*m
            ixy = d_mi[base_mi + 7] + cx*cy*m
            ixz = d_mi[base_mi + 8] + cx*cz*m
            iyz = d_mi[base_mi + 9] + cy*cz*m

            d_mi[base_mi + 0] = ixx
            d_mi[base_mi + 1] = ixy
            d_mi[base_mi + 2] = ixz
            d_mi[base_mi + 3] = ixy
            d_mi[base_mi + 4] = iyy
            d_mi[base_mi + 5] = iyz
            d_mi[base_mi + 6] = ixz
            d_mi[base_mi + 7] = iyz
            d_mi[base_mi + 8] = izz

            fx = d_mi[base_mi + 10]
            fy = d_mi[base_mi + 11]
            fz = d_mi[base_mi + 12]
            dst.force[base + 0] = fx
            dst.force[base + 1] = fy
            dst.force[base + 2] = fz

            # Acceleration of CM.
            dst.ac[base + 0] = fx/m
            dst.ac[base + 1] = fy/m
            dst.ac[base + 2] = fz/m

            # Find torque about the Center of Mass and not origin.
            tx = d_mi[base_mi + 13]
            ty = d_mi[base_mi + 14]
            tz = d_mi[base_mi + 15]
            tx -= cy*fz - cz*fy
            ty -= -cx*fz + cz*fx
            tz -= cx*fy - cy*fx
            dst.torque[base + 0] = tx
            dst.torque[base + 1] = ty
            dst.torque[base + 2] = tz

            wx = dst.omega[base + 0]
            wy = dst.omega[base + 1]
            wz = dst.omega[base + 2]
            # Find omega_dot from: omega_dot = inv(I) (tau - w x (Iw))
            # This was done using the sympy code above.
tmp0 = iyz**2 tmp1 = ixy**2 tmp2 = ixz**2 tmp3 = ixx*iyy tmp4 = ixy*ixz tmp5 = 1./(ixx*tmp0 + iyy*tmp2 - 2*iyz*tmp4 + izz*tmp1 - izz*tmp3) tmp6 = ixy*izz - ixz*iyz tmp7 = ixz*wx + iyz*wy + izz*wz tmp8 = ixx*wx + ixy*wy + ixz*wz tmp9 = tmp7*wx - tmp8*wz + ty tmp10 = ixy*iyz - ixz*iyy tmp11 = ixy*wx + iyy*wy + iyz*wz tmp12 = -tmp11*wx + tmp8*wy + tz tmp13 = tmp11*wz - tmp7*wy + tx tmp14 = ixx*iyz - tmp4 dst.omega_dot[base + 0] = tmp5*(-tmp10*tmp12 - tmp13*(iyy*izz - tmp0) + tmp6*tmp9) dst.omega_dot[base + 1] = tmp5*(tmp12*tmp14 + tmp13*tmp6 - tmp9*(ixx*izz - tmp2)) dst.omega_dot[base + 2] = tmp5*(-tmp10*tmp13 - tmp12*(-tmp1 + tmp3) + tmp14*tmp9) if dst.gpu: dst.gpu.push( 'total_mass', 'mi', 'cm', 'force', 'ac', 'torque', 'omega_dot' ) class RigidBodyMotion(Equation): def initialize(self, d_idx, d_x, d_y, d_z, d_u, d_v, d_w, d_cm, d_vc, d_ac, d_omega, d_body_id): base = declare('int') base = d_body_id[d_idx]*3 wx = d_omega[base + 0] wy = d_omega[base + 1] wz = d_omega[base + 2] rx = d_x[d_idx] - d_cm[base + 0] ry = d_y[d_idx] - d_cm[base + 1] rz = d_z[d_idx] - d_cm[base + 2] d_u[d_idx] = d_vc[base + 0] + wy*rz - wz*ry d_v[d_idx] = d_vc[base + 1] + wz*rx - wx*rz d_w[d_idx] = d_vc[base + 2] + wx*ry - wy*rx class BodyForce(Equation): def __init__(self, dest, sources, gx=0.0, gy=0.0, gz=0.0): self.gx = gx self.gy = gy self.gz = gz super(BodyForce, self).__init__(dest, sources) def initialize(self, d_idx, d_m, d_fx, d_fy, d_fz): d_fx[d_idx] = d_m[d_idx]*self.gx d_fy[d_idx] = d_m[d_idx]*self.gy d_fz[d_idx] = d_m[d_idx]*self.gz class SummationDensityBoundary(Equation): r"""Equation to find the density of the fluid particle due to any boundary or a rigid body :math:`\rho_a = \sum_b {\rho}_fluid V_b W_{ab}` """ def __init__(self, dest, sources, fluid_rho=1000.0): self.fluid_rho = fluid_rho super(SummationDensityBoundary, self).__init__(dest, sources) def loop(self, d_idx, d_rho, s_idx, s_m, s_V, WIJ): d_rho[d_idx] += self.fluid_rho * s_V[s_idx] * WIJ class 
NumberDensity(Equation): def initialize(self, d_idx, d_V): d_V[d_idx] = 0.0 def loop(self, d_idx, d_V, WIJ): d_V[d_idx] += WIJ class SummationDensityRigidBody(Equation): def __init__(self, dest, sources, rho0): self.rho0 = rho0 super(SummationDensityRigidBody, self).__init__(dest, sources) def initialize(self, d_idx, d_rho): d_rho[d_idx] = 0.0 def loop(self, d_idx, d_rho, s_idx, s_V, WIJ): d_rho[d_idx] += self.rho0/s_V[s_idx]*WIJ class ViscosityRigidBody(Equation): """The viscous acceleration on the fluid/solid due to a boundary. Implemented from Akinci et al. http://dx.doi.org/10.1145/2185520.2185558 Use this with the fluid as a destination and body as source. """ def __init__(self, dest, sources, rho0, nu): self.nu = nu self.rho0 = rho0 super(ViscosityRigidBody, self).__init__(dest, sources) def loop(self, d_idx, d_m, d_au, d_av, d_aw, d_rho, s_idx, s_V, s_fx, s_fy, s_fz, EPS, VIJ, XIJ, R2IJ, DWIJ): phi_b = self.rho0/(s_V[s_idx]*d_rho[d_idx]) vijdotxij = min(VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2], 0.0) fac = self.nu*phi_b*vijdotxij/(R2IJ + EPS) ax = fac*DWIJ[0] ay = fac*DWIJ[1] az = fac*DWIJ[2] d_au[d_idx] += ax d_av[d_idx] += ay d_aw[d_idx] += az s_fx[s_idx] += -d_m[d_idx]*ax s_fy[s_idx] += -d_m[d_idx]*ay s_fz[s_idx] += -d_m[d_idx]*az class PressureRigidBody(Equation): """The pressure acceleration on the fluid/solid due to a boundary. Implemented from Akinci et al. http://dx.doi.org/10.1145/2185520.2185558 Use this with the fluid as a destination and body as source. 
""" def __init__(self, dest, sources, rho0): self.rho0 = rho0 super(PressureRigidBody, self).__init__(dest, sources) def loop(self, d_idx, d_m, d_rho, d_au, d_av, d_aw, d_p, s_idx, s_V, s_fx, s_fy, s_fz, DWIJ): rho1 = 1.0/d_rho[d_idx] fac = -d_p[d_idx]*rho1*rho1*self.rho0/s_V[s_idx] ax = fac*DWIJ[0] ay = fac*DWIJ[1] az = fac*DWIJ[2] d_au[d_idx] += ax d_av[d_idx] += ay d_aw[d_idx] += az s_fx[s_idx] += -d_m[d_idx]*ax s_fy[s_idx] += -d_m[d_idx]*ay s_fz[s_idx] += -d_m[d_idx]*az class AkinciRigidFluidCoupling(Equation): """Force between a solid sphere and a SPH fluid particle. This is implemented using Akinci's[1] force and additional force from solid bodies pressure which is implemented by Liu[2] [1]'Versatile Rigid-Fluid Coupling for Incompressible SPH' URL: https://graphics.ethz.ch/~sobarbar/papers/Sol12/Sol12.pdf [2]A 3D Simulation of a Moving Solid in Viscous Free-Surface Flows by Coupling SPH and DEM https://doi.org/10.1155/2017/3174904 Note: Here forces for both the phases are added at once. Please make sure that this force is applied only once for both the particle properties. """ def __init__(self, dest, sources, fluid_rho=1000): super(AkinciRigidFluidCoupling, self).__init__(dest, sources) self.fluid_rho = fluid_rho def loop(self, d_idx, d_m, d_rho, d_au, d_av, d_aw, d_p, s_idx, s_V, s_fx, s_fy, s_fz, DWIJ, s_m, s_p, s_rho): psi = s_V[s_idx] * self.fluid_rho _t1 = 2 * d_p[d_idx] / (d_rho[d_idx]**2) d_au[d_idx] += -psi * _t1 * DWIJ[0] d_av[d_idx] += -psi * _t1 * DWIJ[1] d_aw[d_idx] += -psi * _t1 * DWIJ[2] s_fx[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[0] s_fy[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[1] s_fz[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[2] class LiuFluidForce(Equation): """Force between a solid sphere and a SPH fluid particle. 
This is implemented using Akinci's[1] force and additional force from solid bodies pressure which is implemented by Liu[2] [1]'Versatile Rigid-Fluid Coupling for Incompressible SPH' URL: https://graphics.ethz.ch/~sobarbar/papers/Sol12/Sol12.pdf [2]A 3D Simulation of a Moving Solid in Viscous Free-Surface Flows by Coupling SPH and DEM https://doi.org/10.1155/2017/3174904 Note: Here forces for both the phases are added at once. Please make sure that this force is applied only once for both the particle properties. """ def __init__(self, dest, sources): super(LiuFluidForce, self).__init__(dest, sources) def loop(self, d_idx, d_m, d_rho, d_au, d_av, d_aw, d_p, s_idx, s_V, s_fx, s_fy, s_fz, DWIJ, s_m, s_p, s_rho): _t1 = s_p[s_idx] / (s_rho[s_idx]**2) + d_p[d_idx] / (d_rho[d_idx]**2) d_au[d_idx] += -s_m[s_idx] * _t1 * DWIJ[0] d_av[d_idx] += -s_m[s_idx] * _t1 * DWIJ[1] d_aw[d_idx] += -s_m[s_idx] * _t1 * DWIJ[2] s_fx[s_idx] += d_m[d_idx] * s_m[s_idx] * _t1 * DWIJ[0] s_fy[s_idx] += d_m[d_idx] * s_m[s_idx] * _t1 * DWIJ[1] s_fz[s_idx] += d_m[d_idx] * s_m[s_idx] * _t1 * DWIJ[2] class RigidBodyForceGPUGems(Equation): """This is inspired from http://http.developer.nvidia.com/GPUGems3/gpugems3_ch29.html and BK Mishra's article on DEM http://dx.doi.org/10.1016/S0301-7516(03)00032-2 A review of computer simulation of tumbling mills by the discrete element method: Part I - contact mechanics """ def __init__(self, dest, sources, k=1.0, d=1.0, eta=1.0, kt=1.0): """Note that d is a factor multiplied with the "h" of the particle. 
""" self.k = k self.d = d self.eta = eta self.kt = kt super(RigidBodyForceGPUGems, self).__init__(dest, sources) def loop(self, d_idx, d_fx, d_fy, d_fz, d_h, d_total_mass, XIJ, RIJ, R2IJ, VIJ): vijdotrij = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2] if RIJ > 1e-9: vijdotrij_r2ij = vijdotrij/R2IJ nij_x = XIJ[0]/RIJ nij_y = XIJ[1]/RIJ nij_z = XIJ[2]/RIJ else: vijdotrij_r2ij = 0.0 nij_x = 0.0 nij_y = 0.0 nij_z = 0.0 vijt_x = VIJ[0] - vijdotrij_r2ij*XIJ[0] vijt_y = VIJ[1] - vijdotrij_r2ij*XIJ[1] vijt_z = VIJ[2] - vijdotrij_r2ij*XIJ[2] d = self.d*d_h[d_idx] fac = self.k*d_total_mass[0]/d*max(d - RIJ, 0.0) d_fx[d_idx] += fac*nij_x - self.eta*VIJ[0] - self.kt*vijt_x d_fy[d_idx] += fac*nij_y - self.eta*VIJ[1] - self.kt*vijt_y d_fz[d_idx] += fac*nij_z - self.eta*VIJ[2] - self.kt*vijt_z class RigidBodyCollision(Equation): """Force between two spheres is implemented using DEM contact force law. Refer https://doi.org/10.1016/j.powtec.2011.09.019 for more information. Open-source MFIX-DEM software for gas–solids flows: Part I—Verification studies . """ def __init__(self, dest, sources, kn=1e3, mu=0.5, en=0.8): """Initialise the required coefficients for force calculation. Keyword arguments: kn -- Normal spring stiffness (default 1e3) mu -- friction coefficient (default 0.5) en -- coefficient of restitution (0.8) Given these coefficients, tangential spring stiffness, normal and tangential damping coefficient are calculated by default. """ self.kn = kn self.kt = 2. / 7. 
* kn m_eff = np.pi * 0.5**2 * 1e-6 * 2120 self.gamma_n = -(2 * np.sqrt(kn * m_eff) * np.log(en)) / ( np.sqrt(np.pi**2 + np.log(en)**2)) self.gamma_t = 0.5 * self.gamma_n self.mu = mu super(RigidBodyCollision, self).__init__(dest, sources) def loop(self, d_idx, d_fx, d_fy, d_fz, d_h, d_total_mass, d_rad_s, d_tang_disp_x, d_tang_disp_y, d_tang_disp_z, d_tang_velocity_x, d_tang_velocity_y, d_tang_velocity_z, s_idx, s_rad_s, XIJ, RIJ, R2IJ, VIJ): overlap = 0 if RIJ > 1e-9: overlap = d_rad_s[d_idx] + s_rad_s[s_idx] - RIJ if overlap > 0: # normal vector passing from particle i to j nij_x = -XIJ[0] / RIJ nij_y = -XIJ[1] / RIJ nij_z = -XIJ[2] / RIJ # overlap speed: a scalar vijdotnij = VIJ[0] * nij_x + VIJ[1] * nij_y + VIJ[2] * nij_z # normal velocity vijn_x = vijdotnij * nij_x vijn_y = vijdotnij * nij_y vijn_z = vijdotnij * nij_z # normal force with conservative and dissipation part fn_x = -self.kn * overlap * nij_x - self.gamma_n * vijn_x fn_y = -self.kn * overlap * nij_y - self.gamma_n * vijn_y fn_z = -self.kn * overlap * nij_z - self.gamma_n * vijn_z # ----------------------Tangential force---------------------- # # tangential velocity d_tang_velocity_x[d_idx] = VIJ[0] - vijn_x d_tang_velocity_y[d_idx] = VIJ[1] - vijn_y d_tang_velocity_z[d_idx] = VIJ[2] - vijn_z dtvx = d_tang_velocity_x[d_idx] dtvy = d_tang_velocity_y[d_idx] dtvz = d_tang_velocity_z[d_idx] _tang = sqrt(dtvx*dtvx + dtvy*dtvy + dtvz*dtvz) # tangential unit vector tij_x = 0 tij_y = 0 tij_z = 0 if _tang > 0: tij_x = d_tang_velocity_x[d_idx] / _tang tij_y = d_tang_velocity_y[d_idx] / _tang tij_z = d_tang_velocity_z[d_idx] / _tang # damping force or dissipation ft_x_d = -self.gamma_t * d_tang_velocity_x[d_idx] ft_y_d = -self.gamma_t * d_tang_velocity_y[d_idx] ft_z_d = -self.gamma_t * d_tang_velocity_z[d_idx] # tangential spring force ft_x_s = -self.kt * d_tang_disp_x[d_idx] ft_y_s = -self.kt * d_tang_disp_y[d_idx] ft_z_s = -self.kt * d_tang_disp_z[d_idx] ft_x = ft_x_d + ft_x_s ft_y = ft_y_d + ft_y_s ft_z = 
ft_z_d + ft_z_s # coulomb law ftij = sqrt((ft_x**2) + (ft_y**2) + (ft_z**2)) fnij = sqrt((fn_x**2) + (fn_y**2) + (fn_z**2)) _fnij = self.mu * fnij if _fnij < ftij: ft_x = -_fnij * tij_x ft_y = -_fnij * tij_y ft_z = -_fnij * tij_z d_fx[d_idx] += fn_x + ft_x d_fy[d_idx] += fn_y + ft_y d_fz[d_idx] += fn_z + ft_z else: d_tang_velocity_x[d_idx] = 0 d_tang_velocity_y[d_idx] = 0 d_tang_velocity_z[d_idx] = 0 d_tang_disp_x[d_idx] = 0 d_tang_disp_y[d_idx] = 0 d_tang_disp_z[d_idx] = 0 class RigidBodyWallCollision(Equation): """Force between sphere and a wall is implemented using DEM contact force law. Refer https://doi.org/10.1016/j.powtec.2011.09.019 for more information. Open-source MFIX-DEM software for gas–solids flows: Part I—Verification studies . """ def __init__(self, dest, sources, kn=1e3, mu=0.5, en=0.8): """Initialise the required coefficients for force calculation. Keyword arguments: kn -- Normal spring stiffness (default 1e3) mu -- friction coefficient (default 0.5) en -- coefficient of restitution (0.8) Given these coefficients, tangential spring stiffness, normal and tangential damping coefficient are calculated by default. """ self.kn = kn self.kt = 2. / 7. 
* kn
        m_eff = np.pi * 0.5**2 * 1e-6 * 2120
        self.gamma_n = -(2 * np.sqrt(kn * m_eff) * np.log(en)) / (
            np.sqrt(np.pi**2 + np.log(en)**2))
        self.gamma_t = 0.5 * self.gamma_n
        self.mu = mu
        super(RigidBodyWallCollision, self).__init__(dest, sources)

    def loop(self, d_idx, d_fx, d_fy, d_fz, d_h, d_total_mass, d_rad_s,
             d_tang_disp_x, d_tang_disp_y, d_tang_disp_z,
             d_tang_velocity_x, d_tang_velocity_y, d_tang_velocity_z,
             s_idx, XIJ, RIJ, R2IJ, VIJ, s_nx, s_ny, s_nz):
        # check overlap amount
        overlap = d_rad_s[d_idx] - (XIJ[0] * s_nx[s_idx] +
                                    XIJ[1] * s_ny[s_idx] +
                                    XIJ[2] * s_nz[s_idx])
        if overlap > 0:
            # basic variables: normal vector
            nij_x = -s_nx[s_idx]
            nij_y = -s_ny[s_idx]
            nij_z = -s_nz[s_idx]
            # overlap speed: a scalar
            vijdotnij = VIJ[0] * nij_x + VIJ[1] * nij_y + VIJ[2] * nij_z
            # normal velocity
            vijn_x = vijdotnij * nij_x
            vijn_y = vijdotnij * nij_y
            vijn_z = vijdotnij * nij_z
            # normal force with conservative and dissipation part
            fn_x = -self.kn * overlap * nij_x - self.gamma_n * vijn_x
            fn_y = -self.kn * overlap * nij_y - self.gamma_n * vijn_y
            fn_z = -self.kn * overlap * nij_z - self.gamma_n * vijn_z
            # ----------------------Tangential force---------------------- #
            # tangential velocity
            d_tang_velocity_x[d_idx] = VIJ[0] - vijn_x
            d_tang_velocity_y[d_idx] = VIJ[1] - vijn_y
            d_tang_velocity_z[d_idx] = VIJ[2] - vijn_z
            _tang = (
                (d_tang_velocity_x[d_idx]**2) +
                (d_tang_velocity_y[d_idx]**2) +
                (d_tang_velocity_z[d_idx]**2))**(1. / 2.)
# tangential unit vector
            tij_x = 0
            tij_y = 0
            tij_z = 0
            if _tang > 0:
                tij_x = d_tang_velocity_x[d_idx] / _tang
                tij_y = d_tang_velocity_y[d_idx] / _tang
                tij_z = d_tang_velocity_z[d_idx] / _tang
            # damping force or dissipation
            ft_x_d = -self.gamma_t * d_tang_velocity_x[d_idx]
            ft_y_d = -self.gamma_t * d_tang_velocity_y[d_idx]
            ft_z_d = -self.gamma_t * d_tang_velocity_z[d_idx]
            # tangential spring force
            ft_x_s = -self.kt * d_tang_disp_x[d_idx]
            ft_y_s = -self.kt * d_tang_disp_y[d_idx]
            ft_z_s = -self.kt * d_tang_disp_z[d_idx]
            ft_x = ft_x_d + ft_x_s
            ft_y = ft_y_d + ft_y_s
            ft_z = ft_z_d + ft_z_s
            # coulomb law
            ftij = ((ft_x**2) + (ft_y**2) + (ft_z**2))**(1. / 2.)
            fnij = ((fn_x**2) + (fn_y**2) + (fn_z**2))**(1. / 2.)
            _fnij = self.mu * fnij
            if _fnij < ftij:
                ft_x = -_fnij * tij_x
                ft_y = -_fnij * tij_y
                ft_z = -_fnij * tij_z
            d_fx[d_idx] += fn_x + ft_x
            d_fy[d_idx] += fn_y + ft_y
            d_fz[d_idx] += fn_z + ft_z
        else:
            d_tang_velocity_x[d_idx] = 0
            d_tang_velocity_y[d_idx] = 0
            d_tang_velocity_z[d_idx] = 0
            d_tang_disp_x[d_idx] = 0
            d_tang_disp_y[d_idx] = 0
            d_tang_disp_z[d_idx] = 0


class EulerStepRigidBody(IntegratorStep):
    """Fast but inaccurate integrator.
Use this for testing""" def initialize(self): pass def stage1(self, d_idx, d_u, d_v, d_w, d_x, d_y, d_z, d_omega, d_omega_dot, d_vc, d_ac, d_num_body, dt=0.0): _i = declare('int') _j = declare('int') base = declare('int') if d_idx == 0: for _i in range(d_num_body[0]): base = 3*_i for _j in range(3): d_vc[base + _j] += d_ac[base + _j]*dt d_omega[base + _j] += d_omega_dot[base + _j]*dt d_x[d_idx] += dt*d_u[d_idx] d_y[d_idx] += dt*d_v[d_idx] d_z[d_idx] += dt*d_w[d_idx] class RK2StepRigidBody(IntegratorStep): def initialize(self, d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_omega, d_omega0, d_vc, d_vc0, d_num_body): _i = declare('int') _j = declare('int') base = declare('int') if d_idx == 0: for _i in range(d_num_body[0]): base = 3*_i for _j in range(3): d_vc0[base + _j] = d_vc[base + _j] d_omega0[base + _j] = d_omega[base + _j] d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = d_y[d_idx] d_z0[d_idx] = d_z[d_idx] def stage1(self, d_idx, d_u, d_v, d_w, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_omega, d_omega_dot, d_vc, d_ac, d_omega0, d_vc0, d_num_body, dt=0.0): dtb2 = 0.5*dt _i = declare('int') j = declare('int') base = declare('int') if d_idx == 0: for _i in range(d_num_body[0]): base = 3*_i for j in range(3): d_vc[base + j] = d_vc0[base + j] + d_ac[base + j]*dtb2 d_omega[base + j] = (d_omega0[base + j] + d_omega_dot[base + j]*dtb2) d_x[d_idx] = d_x0[d_idx] + dtb2*d_u[d_idx] d_y[d_idx] = d_y0[d_idx] + dtb2*d_v[d_idx] d_z[d_idx] = d_z0[d_idx] + dtb2*d_w[d_idx] def stage2(self, d_idx, d_u, d_v, d_w, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_omega, d_omega_dot, d_vc, d_ac, d_omega0, d_vc0, d_num_body, dt=0.0): _i = declare('int') j = declare('int') base = declare('int') if d_idx == 0: for _i in range(d_num_body[0]): base = 3*_i for j in range(3): d_vc[base + j] = d_vc0[base + j] + d_ac[base + j]*dt d_omega[base + j] = (d_omega0[base + j] + d_omega_dot[base + j]*dt) d_x[d_idx] = d_x0[d_idx] + dt*d_u[d_idx] d_y[d_idx] = d_y0[d_idx] + dt*d_v[d_idx] d_z[d_idx] = d_z0[d_idx] + dt*d_w[d_idx] 
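The reduction in ``RigidBodyMoments`` above accumulates, per body, the total mass, the center of mass, and the moment-of-inertia tensor, then shifts the inertia tensor to the center of mass with the parallel-axis theorem. The following is a standalone NumPy sketch of that same reduction for a single body; the helper name ``body_moments`` is purely illustrative and is not part of the PySPH API.

```python
import numpy as np


def body_moments(m, x, y, z):
    """Total mass, center of mass and inertia tensor about the center
    of mass for one rigid body made of point masses, mirroring the
    reduction done in RigidBodyMoments.reduce.
    """
    M = m.sum()
    cx, cy, cz = (m*x).sum()/M, (m*y).sum()/M, (m*z).sum()/M
    # Second moments about the origin (lower triangle convention).
    ixx = (m*(y*y + z*z)).sum()
    iyy = (m*(x*x + z*z)).sum()
    izz = (m*(x*x + y*y)).sum()
    ixy = -(m*x*y).sum()
    ixz = -(m*x*z).sum()
    iyz = -(m*y*z).sum()
    # Shift to the center of mass via the parallel-axis theorem.
    ixx -= (cy*cy + cz*cz)*M
    iyy -= (cx*cx + cz*cz)*M
    izz -= (cx*cx + cy*cy)*M
    ixy += cx*cy*M
    ixz += cx*cz*M
    iyz += cy*cz*M
    I = np.array([[ixx, ixy, ixz],
                  [ixy, iyy, iyz],
                  [ixz, iyz, izz]])
    return M, np.array([cx, cy, cz]), I
```

For two unit masses at (1, 0, 0) and (-1, 0, 0) this gives a total mass of 2, a center of mass at the origin and a diagonal inertia tensor diag(0, 2, 2), which is a quick sanity check for the reduction logic.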
pysph-master/pysph/sph/scheme.py
"""Abstract class to define the API for an SPH scheme.

The idea is that one can define a scheme and thereafter one simply
instantiates a suitable scheme, gives it a bunch of particles and runs
the application.
"""


class Scheme(object):
    """An API for an SPH scheme.
    """
    def __init__(self, fluids, solids, dim):
        """
        Parameters
        ----------

        fluids: list
            List of names of fluid particle arrays.
        solids: list
            List of names of solid particle arrays (or boundaries).
        dim: int
            Dimensionality of the problem.
        """
        self.fluids = fluids
        self.solids = solids
        self.dim = dim
        self.solver = None
        self.attributes_changed()

    # Public protocol ###################################################
    def add_user_options(self, group):
        pass

    def attributes_changed(self):
        """Overload this to compute any properties that depend on others.

        This is automatically called when configure is called.
        """
        pass

    def configure(self, **kw):
        """Configure the scheme with given parameters.

        Overload this to do any scheme specific stuff.
        """
        for k, v in kw.items():
            if not hasattr(self, k):
                msg = 'Parameter {param} not defined for {scheme}.'.format(
                    param=k, scheme=self.__class__.__name__
                )
                raise RuntimeError(msg)
            setattr(self, k, v)
        self.attributes_changed()

    def consume_user_options(self, options):
        pass

    def configure_solver(self, kernel=None, integrator_cls=None,
                         extra_steppers=None, **kw):
        """Configure the solver to be generated.

        Parameters
        ----------

        kernel : Kernel instance.
            Kernel to use, if none is passed a default one is used.
        integrator_cls : pysph.sph.integrator.Integrator
            Integrator class to use, use sensible default if none is
            passed.
        extra_steppers : dict
            Additional integration stepper instances as a dict.
        **kw : extra arguments
            Any additional keyword args are passed to the solver instance.
""" raise NotImplementedError() def get_equations(self): raise NotImplementedError() def get_solver(self): return self.solver def setup_properties(self, particles, clean=True): """Setup the particle arrays so they have the right set of properties for this scheme. Parameters ---------- particles : list List of particle arrays. clean : bool If True, removes any unnecessary properties. """ raise NotImplementedError() # Private protocol ################################################### def _ensure_properties(self, pa, desired_props, clean=True): """Given a particle array and a set of properties desired, this removes unnecessary properties (if `clean=True`), and adds the desired properties. Parameters ---------- pa : ParticleArray Desired particle array. desired_props : sequence Desired properties to have in the array, can be a list of strings or dicts with stride info or both. clean : bool Remove undesirable properties. """ all_props = {} for p in desired_props: if isinstance(p, dict): all_props.update({p['name']: p}) elif p not in all_props: all_props.update({p: {'name': p}}) pa_props = set(pa.properties.keys()) if clean: to_remove = pa_props - set(all_props.keys()) for prop in to_remove: pa.remove_property(prop) to_add = set(all_props.keys()) - pa_props for prop in to_add: pa.add_property(**all_props[prop]) def _smart_getattr(self, obj, var): res = getattr(obj, var) if res is None: return getattr(self, var) else: return res class SchemeChooser(Scheme): def __init__(self, default, **schemes): """ Parameters ---------- default: str Name of the default scheme to use. **schemes: kwargs The schemes to choose between. 
""" self.default = default self.schemes = dict(schemes) self.scheme = schemes[default] def add_user_options(self, group): for scheme in self.schemes.values(): scheme.add_user_options(group) choices = list(self.schemes.keys()) group.add_argument( "--scheme", action="store", dest="scheme", default=self.default, choices=choices, help="Specify scheme to use (one of %s)." % choices ) def attributes_changed(self): self.scheme.attributes_changed() def configure(self, **kw): self.scheme.configure(**kw) def consume_user_options(self, options): self.scheme = self.schemes[options.scheme] self.scheme.consume_user_options(options) def configure_solver(self, kernel=None, integrator_cls=None, extra_steppers=None, **kw): self.scheme.configure_solver( kernel=kernel, integrator_cls=integrator_cls, extra_steppers=extra_steppers, **kw ) def get_equations(self): return self.scheme.get_equations() def get_solver(self): return self.scheme.get_solver() def setup_properties(self, particles, clean=True): """Setup the particle arrays so they have the right set of properties for this scheme. Parameters ---------- particles : list List of particle arrays. clean : bool If True, removes any unnecessary properties. """ self.scheme.setup_properties(particles, clean) ############################################################################ def add_bool_argument(group, arg, dest, help, default): group.add_argument( '--%s' % arg, action="store_true", dest=dest, help=help ) neg_help = 'Do not ' + help[0].lower() + help[1:] group.add_argument( '--no-%s' % arg, action="store_false", dest=dest, help=neg_help ) group.set_defaults(**{dest: default}) class WCSPHScheme(Scheme): def __init__(self, fluids, solids, dim, rho0, c0, h0, hdx, gamma=7.0, gx=0.0, gy=0.0, gz=0.0, alpha=0.1, beta=0.0, delta=0.1, nu=0.0, tensile_correction=False, hg_correction=False, update_h=False, delta_sph=False, summation_density=False): """Parameters ---------- fluids: list List of names of fluid particle arrays. 
solids: list List of names of solid particle arrays (or boundaries). dim: int Dimensionality of the problem. rho0: float Reference density. c0: float Reference speed of sound. gamma: float Gamma for the equation of state. h0: float Reference smoothing length. hdx: float Ratio of h/dx. gx, gy, gz: float Body force acceleration components. alpha: float Coefficient for artificial viscosity. beta: float Coefficient for artificial viscosity. delta: float Coefficient used to control the intensity of diffusion of density nu: float Real viscosity of the fluid, defaults to no viscosity. tensile_correction: bool Use tensile correction. hg_correction: bool Use the Hughes-Graham correction. update_h: bool Update the smoothing length as per Ferrari et al. delta_sph: bool Use the delta-SPH correction terms. summation_density: bool Use summation density instead of continuity. References ---------- .. [Hughes2010] J. P. Hughes and D. I. Graham, "Comparison of incompressible and weakly-compressible SPH models for free-surface water flows", Journal of Hydraulic Research, 48 (2010), pp. 105-117. .. [Marrone2011] S. Marrone et al., "delta-SPH model for simulating violent impact flows", Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp 1526--1542. .. [Cherfils2012] J. M. Cherfils et al., "JOSEPHINE: A parallel SPH code for free-surface flows", Computer Physics Communications, 183 (2012), pp 1468--1480. 
""" self.fluids = fluids self.solids = solids self.solver = None self.rho0 = rho0 self.c0 = c0 self.gamma = gamma self.dim = dim self.h0 = h0 self.hdx = hdx self.gx = gx self.gy = gy self.gz = gz self.alpha = alpha self.beta = beta self.delta = delta self.nu = nu self.tensile_correction = tensile_correction self.hg_correction = hg_correction self.update_h = update_h self.delta_sph = delta_sph self.summation_density = summation_density def add_user_options(self, group): group.add_argument( "--alpha", action="store", type=float, dest="alpha", default=None, help="Alpha for the artificial viscosity." ) group.add_argument( "--beta", action="store", type=float, dest="beta", default=None, help="Beta for the artificial viscosity." ) group.add_argument( "--delta", action="store", type=float, dest="delta", default=None, help="Delta for the delta-SPH." ) group.add_argument( "--gamma", action="store", type=float, dest="gamma", default=None, help="Gamma for the state equation." ) add_bool_argument( group, 'tensile-correction', dest='tensile_correction', help="Use tensile instability correction.", default=None ) add_bool_argument( group, "hg-correction", dest="hg_correction", help="Use the Hughes Graham correction.", default=None ) add_bool_argument( group, "update-h", dest="update_h", help="Update the smoothing length as per Ferrari et al.", default=None ) add_bool_argument( group, "delta-sph", dest="delta_sph", help="Use the delta-SPH correction terms.", default=None ) add_bool_argument( group, "summation-density", dest="summation_density", help="Use summation density instead of continuity.", default=None ) def consume_user_options(self, options): vars = ['gamma', 'tensile_correction', 'hg_correction', 'update_h', 'delta_sph', 'alpha', 'beta', 'summation_density'] data = dict((var, self._smart_getattr(options, var)) for var in vars) self.configure(**data) def get_timestep(self, cfl=0.5): return cfl*self.h0/self.c0 def configure_solver(self, kernel=None, integrator_cls=None, 
extra_steppers=None, **kw): from pysph.base.kernels import CubicSpline if kernel is None: kernel = CubicSpline(dim=self.dim) steppers = {} if extra_steppers is not None: steppers.update(extra_steppers) from pysph.sph.integrator import PECIntegrator, TVDRK3Integrator from pysph.sph.integrator_step import WCSPHStep, WCSPHTVDRK3Step cls = integrator_cls if integrator_cls is not None else PECIntegrator step_cls = WCSPHTVDRK3Step if cls is TVDRK3Integrator else WCSPHStep for name in self.fluids + self.solids: if name not in steppers: steppers[name] = step_cls() integrator = cls(**steppers) from pysph.solver.solver import Solver if 'dt' not in kw: kw['dt'] = self.get_timestep() self.solver = Solver( dim=self.dim, integrator=integrator, kernel=kernel, **kw ) def get_equations(self): from pysph.sph.equation import Group from pysph.sph.wc.basic import ( MomentumEquation, TaitEOS, TaitEOSHGCorrection, UpdateSmoothingLengthFerrari ) from pysph.sph.wc.basic import (ContinuityEquationDeltaSPH, ContinuityEquationDeltaSPHPreStep, MomentumEquationDeltaSPH) from pysph.sph.basic_equations import \ (ContinuityEquation, SummationDensity, XSPHCorrection) from pysph.sph.wc.viscosity import (LaminarViscosity, LaminarViscosityDeltaSPH) from pysph.sph.wc.kernel_correction import (GradientCorrectionPreStep, GradientCorrection) equations = [] g1 = [] all = self.fluids + self.solids if self.summation_density: g0 = [] for name in self.fluids: g0.append(SummationDensity(dest=name, sources=all)) equations.append(Group(equations=g0, real=False)) for name in self.fluids: g1.append(TaitEOS( dest=name, sources=None, rho0=self.rho0, c0=self.c0, gamma=self.gamma )) # This correction applies only to solids. 
for name in self.solids: if self.hg_correction: g1.append(TaitEOSHGCorrection( dest=name, sources=None, rho0=self.rho0, c0=self.c0, gamma=self.gamma )) else: g1.append(TaitEOS( dest=name, sources=None, rho0=self.rho0, c0=self.c0, gamma=self.gamma )) equations.append(Group(equations=g1, real=False)) if self.delta_sph and not self.summation_density: eq2_pre = [] for name in self.fluids: eq2_pre.append( GradientCorrectionPreStep(dest=name, sources=[name], dim=self.dim) ) equations.append(Group(equations=eq2_pre, real=False)) eq2 = [] for name in self.fluids: eq2.extend([ GradientCorrection(dest=name, sources=[name]), ContinuityEquationDeltaSPHPreStep( dest=name, sources=[name] )]) equations.append(Group(equations=eq2)) g2 = [] for name in self.solids: g2.append(ContinuityEquation(dest=name, sources=self.fluids)) for name in self.fluids: if not self.summation_density: g2.append( ContinuityEquation(dest=name, sources=all) ) if self.delta_sph and not self.summation_density: g2.append( ContinuityEquationDeltaSPH( dest=name, sources=[name], c0=self.c0, delta=self.delta )) # This is required since MomentumEquation (ME) adds artificial # viscosity (AV), so make alpha 0.0 for ME and enable delta sph AV. 
alpha = 0.0 if self.delta_sph else self.alpha
            g2.append(
                MomentumEquation(
                    dest=name, sources=all, c0=self.c0,
                    alpha=alpha, beta=self.beta,
                    gx=self.gx, gy=self.gy, gz=self.gz,
                    tensile_correction=self.tensile_correction
                ))
            if self.delta_sph:
                g2.append(
                    MomentumEquationDeltaSPH(
                        dest=name, sources=[name], rho0=self.rho0,
                        c0=self.c0, alpha=self.alpha
                    ))
            g2.append(XSPHCorrection(dest=name, sources=[name]))

            if abs(self.nu) > 1e-14:
                if self.delta_sph:
                    eq = LaminarViscosityDeltaSPH(
                        dest=name, sources=all, dim=self.dim,
                        rho0=self.rho0, nu=self.nu
                    )
                else:
                    eq = LaminarViscosity(dest=name, sources=all, nu=self.nu)
                g2.insert(-1, eq)
        equations.append(Group(equations=g2))

        if self.update_h:
            g3 = [
                UpdateSmoothingLengthFerrari(
                    dest=x, sources=None, dim=self.dim, hdx=self.hdx
                ) for x in self.fluids
            ]
            equations.append(Group(equations=g3, real=False))

        return equations

    def setup_properties(self, particles, clean=True):
        from pysph.base.utils import get_particle_array_wcsph
        dummy = get_particle_array_wcsph(name='junk')
        props = list(dummy.properties.keys())
        output_props = ['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'm', 'h',
                        'pid', 'gid', 'tag', 'p']
        if self.delta_sph:
            delta_sph_props = [
                {'name': 'm_mat', 'stride': 9},
                {'name': 'gradrho', 'stride': 3},
            ]
            props += delta_sph_props
        for pa in particles:
            self._ensure_properties(pa, props, clean)
            pa.set_output_arrays(output_props)


class TVFScheme(Scheme):
    def __init__(self, fluids, solids, dim, rho0, c0, nu, p0, pb, h0,
                 gx=0.0, gy=0.0, gz=0.0, alpha=0.0, tdamp=0.0):
        self.fluids = fluids
        self.solids = solids
        self.solver = None
        self.rho0 = rho0
        self.c0 = c0
        self.pb = pb
        self.p0 = p0
        self.nu = nu
        self.dim = dim
        self.h0 = h0
        self.gx = gx
        self.gy = gy
        self.gz = gz
        self.alpha = alpha
        self.tdamp = tdamp

    def add_user_options(self, group):
        group.add_argument(
            "--alpha", action="store", type=float, dest="alpha",
            default=None,
            help="Alpha for the artificial viscosity."
) group.add_argument( "--tdamp", action="store", type=float, dest="tdamp", default=None, help="Time for which the accelerations are damped." ) def consume_user_options(self, options): vars = ['alpha', 'tdamp'] data = dict((var, self._smart_getattr(options, var)) for var in vars) self.configure(**data) def get_timestep(self, cfl=0.25): dt_cfl = cfl * self.h0/self.c0 if self.nu > 1e-12: dt_viscous = 0.125 * self.h0**2/self.nu else: dt_viscous = 1.0 dt_force = 1.0 return min(dt_cfl, dt_viscous, dt_force) def configure_solver(self, kernel=None, integrator_cls=None, extra_steppers=None, **kw): """Configure the solver to be generated. Parameters ---------- kernel : Kernel instance. Kernel to use, if none is passed a default one is used. integrator_cls : pysph.sph.integrator.Integrator Integrator class to use, use sensible default if none is passed. extra_steppers : dict Additional integration stepper instances as a dict. **kw : extra arguments Any additional keyword args are passed to the solver instance. 
""" from pysph.base.kernels import QuinticSpline from pysph.sph.integrator_step import TransportVelocityStep from pysph.sph.integrator import PECIntegrator if kernel is None: kernel = QuinticSpline(dim=self.dim) steppers = {} if extra_steppers is not None: steppers.update(extra_steppers) step_cls = TransportVelocityStep for fluid in self.fluids: if fluid not in steppers: steppers[fluid] = step_cls() cls = integrator_cls if integrator_cls is not None else PECIntegrator integrator = cls(**steppers) from pysph.solver.solver import Solver self.solver = Solver( dim=self.dim, integrator=integrator, kernel=kernel, **kw ) def get_equations(self): from pysph.sph.equation import Group from pysph.sph.wc.transport_velocity import ( SummationDensity, StateEquation, MomentumEquationPressureGradient, MomentumEquationArtificialViscosity, MomentumEquationViscosity, MomentumEquationArtificialStress, SolidWallPressureBC, SolidWallNoSlipBC, SetWallVelocity ) equations = [] all = self.fluids + self.solids g1 = [] for fluid in self.fluids: g1.append(SummationDensity(dest=fluid, sources=all)) equations.append(Group(equations=g1, real=False)) g2 = [] for fluid in self.fluids: g2.append(StateEquation( dest=fluid, sources=None, p0=self.p0, rho0=self.rho0, b=1.0 )) for solid in self.solids: g2.append(SetWallVelocity(dest=solid, sources=self.fluids)) if len(g2) > 0: equations.append(Group(equations=g2, real=False)) g3 = [] for solid in self.solids: g3.append(SolidWallPressureBC( dest=solid, sources=self.fluids, b=1.0, rho0=self.rho0, p0=self.p0, gx=self.gx, gy=self.gy, gz=self.gz )) if len(g3) > 0: equations.append(Group(equations=g3, real=False)) g4 = [] for fluid in self.fluids: g4.append( MomentumEquationPressureGradient( dest=fluid, sources=all, pb=self.pb, gx=self.gx, gy=self.gy, gz=self.gz, tdamp=self.tdamp ) ) if self.alpha > 0.0: g4.append( MomentumEquationArtificialViscosity( dest=fluid, sources=all, c0=self.c0, alpha=self.alpha ) ) if self.nu > 0.0: g4.append( 
                    MomentumEquationViscosity(
                        dest=fluid, sources=self.fluids, nu=self.nu
                    )
                )
                if len(self.solids) > 0:
                    g4.append(
                        SolidWallNoSlipBC(
                            dest=fluid, sources=self.solids, nu=self.nu
                        )
                    )
            g4.append(
                MomentumEquationArtificialStress(
                    dest=fluid, sources=self.fluids)
            )

        equations.append(Group(equations=g4))
        return equations

    def setup_properties(self, particles, clean=True):
        from pysph.base.utils import get_particle_array_tvf_fluid, \
            get_particle_array_tvf_solid
        particle_arrays = dict([(p.name, p) for p in particles])
        dummy = get_particle_array_tvf_fluid(name='junk')
        props = list(dummy.properties.keys())
        output_props = dummy.output_property_arrays
        for fluid in self.fluids:
            pa = particle_arrays[fluid]
            self._ensure_properties(pa, props, clean)
            pa.set_output_arrays(output_props)

        dummy = get_particle_array_tvf_solid(name='junk')
        props = list(dummy.properties.keys())
        output_props = dummy.output_property_arrays
        for solid in self.solids:
            pa = particle_arrays[solid]
            self._ensure_properties(pa, props, clean)
            pa.set_output_arrays(output_props)


class AdamiHuAdamsScheme(TVFScheme):
    """This is a scheme similar to that in the paper:

    Adami, S., Hu, X., Adams, N. A generalized wall boundary condition for
    smoothed particle hydrodynamics. Journal of Computational Physics
    2012;231(21):7057-7075.

    The major difference is in how the equations are integrated. The paper
    has a different scheme that does not quite fit in with how things are
    done in PySPH readily, so we simply use the WCSPHStep, which works well.
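    The reference pressure ``B`` computed in ``attributes_changed`` below is
    the usual Tait coefficient, B = c0**2 * rho0 / gamma. A minimal
    standalone sketch of the corresponding Tait equation of state (the
    default argument values here are illustrative, not PySPH defaults):

```python
def tait_pressure(rho, rho0=1000.0, c0=10.0, gamma=7.0):
    # B = c0**2 * rho0 / gamma, exactly as in attributes_changed().
    B = c0 * c0 * rho0 / gamma
    # Tait EOS: p = B * ((rho/rho0)**gamma - 1)
    return B * ((rho / rho0) ** gamma - 1.0)
```

    At the reference density the pressure vanishes, and a 1% compression
    produces a positive pressure of a few percent of B.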
""" def __init__(self, fluids, solids, dim, rho0, c0, nu, h0, gx=0.0, gy=0.0, gz=0.0, p0=0.0, gamma=7.0, tdamp=0.0, alpha=0.0): self.fluids = fluids self.solids = solids self.solver = None self.rho0 = rho0 self.c0 = c0 self.h0 = h0 self.p0 = p0 self.nu = nu self.dim = dim self.gx = gx self.gy = gy self.gz = gz self.alpha = alpha self.gamma = float(gamma) self.tdamp = tdamp self.attributes_changed() def add_user_options(self, group): super(AdamiHuAdamsScheme, self).add_user_options(group) group.add_argument( "--gamma", action="store", type=float, dest="gamma", default=None, help="Gamma for the state equation." ) def attributes_changed(self): self.B = self.c0*self.c0*self.rho0/self.gamma def consume_user_options(self, options): vars = ['alpha', 'tdamp', 'gamma'] data = dict((var, self._smart_getattr(options, var)) for var in vars) self.configure(**data) def configure_solver(self, kernel=None, integrator_cls=None, extra_steppers=None, **kw): """Configure the solver to be generated. Parameters ---------- kernel : Kernel instance. Kernel to use, if none is passed a default one is used. integrator_cls : pysph.sph.integrator.Integrator Integrator class to use, use sensible default if none is passed. extra_steppers : dict Additional integration stepper instances as a dict. **kw : extra arguments Any additional keyword args are passed to the solver instance. 
""" from pysph.base.kernels import QuinticSpline from pysph.sph.integrator_step import WCSPHStep from pysph.sph.integrator import PECIntegrator if kernel is None: kernel = QuinticSpline(dim=self.dim) steppers = {} if extra_steppers is not None: steppers.update(extra_steppers) step_cls = WCSPHStep for fluid in self.fluids: if fluid not in steppers: steppers[fluid] = step_cls() cls = integrator_cls if integrator_cls is not None else PECIntegrator integrator = cls(**steppers) from pysph.solver.solver import Solver self.solver = Solver( dim=self.dim, integrator=integrator, kernel=kernel, **kw ) def get_equations(self): from pysph.sph.equation import Group from pysph.sph.wc.basic import TaitEOS from pysph.sph.basic_equations import XSPHCorrection from pysph.sph.wc.transport_velocity import ( ContinuityEquation, ContinuitySolid, MomentumEquationPressureGradient, MomentumEquationViscosity, MomentumEquationArtificialViscosity, SolidWallPressureBC, SolidWallNoSlipBC, SetWallVelocity, VolumeSummation ) equations = [] all = self.fluids + self.solids g2 = [] for fluid in self.fluids: g2.append(VolumeSummation(dest=fluid, sources=all)) g2.append(TaitEOS( dest=fluid, sources=None, rho0=self.rho0, c0=self.c0, gamma=self.gamma, p0=self.p0 )) for solid in self.solids: g2.append(VolumeSummation(dest=solid, sources=all)) g2.append(SetWallVelocity(dest=solid, sources=self.fluids)) equations.append(Group(equations=g2, real=False)) g3 = [] for solid in self.solids: g3.append(SolidWallPressureBC( dest=solid, sources=self.fluids, b=1.0, rho0=self.rho0, p0=self.B, gx=self.gx, gy=self.gy, gz=self.gz )) equations.append(Group(equations=g3, real=False)) g4 = [] for fluid in self.fluids: g4.append( ContinuityEquation(dest=fluid, sources=self.fluids) ) if self.solids: g4.append( ContinuitySolid(dest=fluid, sources=self.solids) ) g4.append( MomentumEquationPressureGradient( dest=fluid, sources=all, pb=0.0, gx=self.gx, gy=self.gy, gz=self.gz, tdamp=self.tdamp ) ) if self.alpha > 0.0: g4.append( 
                    MomentumEquationArtificialViscosity(
                        dest=fluid, sources=all, c0=self.c0,
                        alpha=self.alpha
                    )
                )
            if self.nu > 0.0:
                g4.append(
                    MomentumEquationViscosity(
                        dest=fluid, sources=self.fluids, nu=self.nu
                    )
                )
                if len(self.solids) > 0:
                    g4.append(
                        SolidWallNoSlipBC(
                            dest=fluid, sources=self.solids, nu=self.nu
                        )
                    )
            g4.append(XSPHCorrection(dest=fluid, sources=[fluid]))

        equations.append(Group(equations=g4))
        return equations

    def setup_properties(self, particles, clean=True):
        super(AdamiHuAdamsScheme, self).setup_properties(particles, clean)
        particle_arrays = dict([(p.name, p) for p in particles])
        props = ['cs', 'arho', 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0',
                 'ax', 'ay', 'az']
        for fluid in self.fluids:
            pa = particle_arrays[fluid]
            for prop in props:
                pa.add_property(prop)


class GasDScheme(Scheme):
    def __init__(self, fluids, solids, dim, gamma, kernel_factor,
                 alpha1=1.0, alpha2=0.1, beta=2.0, adaptive_h_scheme='mpm',
                 update_alpha1=False, update_alpha2=False,
                 max_density_iterations=250,
                 density_iteration_tolerance=1e-3, has_ghosts=False):
        """
        Parameters
        ----------

        fluids: list
            List of names of fluid particle arrays.
        solids: list
            List of names of solid particle arrays (or boundaries), currently
            not supported
        dim: int
            Dimensionality of the problem.
        gamma: float
            Gamma for Equation of state.
        kernel_factor: float
            Kernel scaling factor.
        alpha1: float
            Artificial viscosity parameter.
        alpha2: float
            Artificial viscosity parameter.
        beta: float
            Artificial viscosity parameter.
        adaptive_h_scheme: str
            Adaptive h scheme to use. One of ['mpm', 'gsph']
        update_alpha1: bool
            Update the alpha1 parameter dynamically.
        update_alpha2: bool
            Update the alpha2 parameter dynamically.
        max_density_iterations: int
            Maximum number of iterations to run for one density step
        density_iteration_tolerance: float
            Maximum difference allowed in two successive density iterations
        has_ghosts: bool
            if ghost particles (either mirror or periodic) is used
        """
        self.fluids = fluids
        self.solids = solids
        self.dim = dim
        self.solver = None
        self.gamma = gamma
        self.alpha1 = alpha1
        self.alpha2 = alpha2
        self.update_alpha1 = update_alpha1
        self.update_alpha2 = update_alpha2
        self.beta = beta
        self.kernel_factor = kernel_factor
        self.adaptive_h_scheme = adaptive_h_scheme
        self.density_iteration_tolerance = density_iteration_tolerance
        self.max_density_iterations = max_density_iterations
        self.has_ghosts = has_ghosts

    def add_user_options(self, group):
        choices = ['gsph', 'mpm']
        group.add_argument(
            "--adaptive-h", action="store", dest="adaptive_h_scheme",
            default=None, choices=choices,
            help="Specify scheme for adaptive smoothing lengths %s" % choices
        )
        group.add_argument(
            "--alpha1", action="store", type=float, dest="alpha1",
            default=None,
            help="Alpha1 for the artificial viscosity."
        )
        group.add_argument(
            "--beta", action="store", type=float, dest="beta",
            default=None,
            help="Beta for the artificial viscosity."
        )
        group.add_argument(
            "--alpha2", action="store", type=float, dest="alpha2",
            default=None,
            help="Alpha2 for artificial viscosity"
        )
        group.add_argument(
            "--gamma", action="store", type=float, dest="gamma",
            default=None,
            help="Gamma for the state equation."
        )
        add_bool_argument(
            group, "update-alpha1", dest="update_alpha1",
            help="Update the alpha1 parameter.",
            default=None
        )
        add_bool_argument(
            group, "update-alpha2", dest="update_alpha2",
            help="Update the alpha2 parameter.",
            default=None
        )

    def consume_user_options(self, options):
        vars = ['gamma', 'alpha2', 'alpha1', 'beta', 'update_alpha1',
                'update_alpha2', 'adaptive_h_scheme']
        data = dict((var, self._smart_getattr(options, var))
                    for var in vars)
        self.configure(**data)

    def configure_solver(self, kernel=None, integrator_cls=None,
                         extra_steppers=None, **kw):
        """Configure the solver to be generated.

        Parameters
        ----------

        kernel : Kernel instance.
            Kernel to use, if none is passed a default one is used.
        integrator_cls : pysph.sph.integrator.Integrator
            Integrator class to use, use sensible default if none is
            passed.
        extra_steppers : dict
            Additional integration stepper instances as a dict.
        **kw : extra arguments
            Any additional keyword args are passed to the solver instance.
        """
        from pysph.base.kernels import Gaussian
        if kernel is None:
            kernel = Gaussian(dim=self.dim)

        steppers = {}
        if extra_steppers is not None:
            steppers.update(extra_steppers)

        from pysph.sph.integrator import PECIntegrator
        from pysph.sph.integrator_step import GasDFluidStep

        cls = integrator_cls if integrator_cls is not None else PECIntegrator
        step_cls = GasDFluidStep
        for name in self.fluids:
            if name not in steppers:
                steppers[name] = step_cls()

        integrator = cls(**steppers)

        from pysph.solver.solver import Solver
        self.solver = Solver(
            dim=self.dim, integrator=integrator, kernel=kernel, **kw
        )

    def get_equations(self):
        from pysph.sph.equation import Group
        from pysph.sph.gas_dynamics.basic import (
            ScaleSmoothingLength, UpdateSmoothingLengthFromVolume,
            SummationDensity, IdealGasEOS, MPMAccelerations,
            MPMUpdateGhostProps
        )
        from pysph.sph.gas_dynamics.boundary_equations import WallBoundary

        equations = []
        # Find the optimal 'h'
        if self.adaptive_h_scheme == 'mpm':
            g1 = []
            for fluid in self.fluids:
                g1.append(
                    SummationDensity(
                        dest=fluid, sources=self.fluids, k=self.kernel_factor,
                        density_iterations=True, dim=self.dim,
                        htol=self.density_iteration_tolerance
                    )
                )

            equations.append(Group(
                equations=g1, update_nnps=True, iterate=True,
                max_iterations=self.max_density_iterations
            ))

        elif self.adaptive_h_scheme == 'gsph':
            group = []
            for fluid in self.fluids:
                group.append(
                    ScaleSmoothingLength(dest=fluid, sources=None, factor=2.0)
                )
            equations.append(Group(equations=group, update_nnps=True))

            group = []
            for fluid in self.fluids:
                group.append(
                    SummationDensity(
                        dest=fluid, sources=self.fluids, dim=self.dim
                    )
                )
            equations.append(Group(equations=group, update_nnps=False))

            group = []
            for fluid in self.fluids:
                group.append(
                    UpdateSmoothingLengthFromVolume(
                        dest=fluid, sources=None, k=self.kernel_factor,
                        dim=self.dim
                    )
                )
            equations.append(Group(equations=group, update_nnps=True))

            group = []
            for fluid in self.fluids:
                group.append(
                    SummationDensity(
                        dest=fluid, sources=self.fluids, dim=self.dim
                    )
                )
            equations.append(Group(equations=group, update_nnps=False))
        # Done with finding the optimal 'h'

        g2 = []
        for fluid in self.fluids:
            g2.append(IdealGasEOS(dest=fluid, sources=None, gamma=self.gamma))
        equations.append(Group(equations=g2))

        g3 = []
        for solid in self.solids:
            g3.append(WallBoundary(solid, sources=self.fluids))
        equations.append(Group(equations=g3))

        if self.has_ghosts:
            gh = []
            for fluid in self.fluids:
                gh.append(
                    MPMUpdateGhostProps(dest=fluid, sources=None)
                )
            equations.append(Group(equations=gh, real=False))

        g4 = []
        for fluid in self.fluids:
            g4.append(MPMAccelerations(
                dest=fluid, sources=self.fluids + self.solids,
                alpha1_min=self.alpha1, alpha2_min=self.alpha2,
                beta=self.beta, update_alpha1=self.update_alpha1,
                update_alpha2=self.update_alpha2
            ))
        equations.append(Group(equations=g4))

        return equations

    def setup_properties(self, particles, clean=True):
        from pysph.base.utils import get_particle_array_gasd
        import numpy
        particle_arrays = dict([(p.name, p) for p in particles])
        dummy = get_particle_array_gasd(name='junk')
        props = list(dummy.properties.keys())
        output_props = dummy.output_property_arrays
        for fluid in self.fluids:
            pa = particle_arrays[fluid]
            self._ensure_properties(pa, props, clean)
            pa.add_property('orig_idx', type='int')
            nfp = pa.get_number_of_particles()
            pa.orig_idx[:] = numpy.arange(nfp)
            pa.set_output_arrays(output_props)

        solid_props = set(props) | set('div cs wij htmp'.split(' '))
        for solid in self.solids:
            pa = particle_arrays[solid]
            self._ensure_properties(pa, solid_props, clean)
            pa.set_output_arrays(output_props)


class GSPHScheme(Scheme):
    def __init__(self, fluids, solids, dim, gamma, kernel_factor,
                 g1=0.0, g2=0.0, rsolver=2, interpolation=1, monotonicity=1,
                 interface_zero=True, hybrid=False, blend_alpha=5.0,
                 tf=1.0, niter=20, tol=1e-6, has_ghosts=False):
        """
        Parameters
        ----------

        fluids: list
            List of names of fluid particle arrays.
        solids: list
            List of names of solid particle arrays (or boundaries), currently
            not supported
        dim: int
            Dimensionality of the problem.
        gamma: float
            Gamma for Equation of state.
        kernel_factor: float
            Kernel scaling factor.
        g1, g2 : double
            ADKE style thermal conduction parameters
        rsolver: int
            Riemann solver to use. See pysph.sph.gas_dynamics.gsph for
            valid options.
        interpolation: int
            Kind of interpolation for the specific volume integrals.
        monotonicity : int
            Type of monotonicity algorithm to use:
            0 : First order GSPH
            1 : I02 algorithm
            2 : IwIn algorithm
        interface_zero : bool
            Set Interface position s^*_{ij} = 0 for the Riemann problem.
        hybrid, blend_alpha : bool, double
            Hybrid scheme and blending alpha value
        tf: double
            Final time used for blending.
        niter: int
            Max number of iterations for iterative Riemann solvers.
        tol: double
            Tolerance for iterative Riemann solvers.
        has_ghosts: bool
            if ghost particles (either mirror or periodic) is used
        """
        self.fluids = fluids
        self.solids = solids
        self.dim = dim
        self.solver = None
        self.gamma = gamma
        self.kernel_factor = kernel_factor
        self.g1 = g1
        self.g2 = g2
        self.rsolver = rsolver
        self.interpolation = interpolation
        self.monotonicity = monotonicity
        self.interface_zero = interface_zero
        self.hybrid = hybrid
        self.blend_alpha = blend_alpha
        self.tf = tf
        self.niter = niter
        self.tol = tol
        self.has_ghosts = has_ghosts

    def add_user_options(self, group):
        group.add_argument(
            "--rsolver", action="store", type=int, dest="rsolver",
            default=None,
            help="Riemann solver to use."
        )
        group.add_argument(
            "--interpolation", action="store", type=int, dest="interpolation",
            default=None,
            help="Interpolation algorithm to use."
        )
        group.add_argument(
            "--monotonicity", action="store", type=int, dest="monotonicity",
            default=None,
            help="Monotonicity algorithm to use."
        )
        group.add_argument(
            "--g1", action="store", type=float, dest="g1",
            default=None,
            help="ADKE style thermal conduction parameter."
        )
        group.add_argument(
            "--g2", action="store", type=float, dest="g2",
            default=None,
            help="ADKE style thermal conduction parameter."
        )
        group.add_argument(
            "--gamma", action="store", type=float, dest="gamma",
            default=None,
            help="Gamma for the state equation."
        )
        group.add_argument(
            "--blend-alpha", action="store", type=float, dest="blend_alpha",
            default=None,
            help="Blending factor for hybrid scheme."
        )
        add_bool_argument(
            group, "interface-zero", dest="interface_zero",
            help="Set interface position to zero for Riemann problem.",
            default=None
        )
        add_bool_argument(
            group, "hybrid", dest="hybrid",
            help="Use the hybrid scheme.",
            default=None
        )

    def consume_user_options(self, options):
        vars = ['gamma', 'g1', 'g2', 'rsolver', 'interpolation',
                'monotonicity', 'interface_zero', 'hybrid', 'blend_alpha']
        data = dict((var, self._smart_getattr(options, var))
                    for var in vars)
        self.configure(**data)

    def configure_solver(self, kernel=None, integrator_cls=None,
                         extra_steppers=None, **kw):
        """Configure the solver to be generated.

        Parameters
        ----------

        kernel : Kernel instance.
            Kernel to use, if none is passed a default one is used.
        integrator_cls : pysph.sph.integrator.Integrator
            Integrator class to use, use sensible default if none is
            passed.
        extra_steppers : dict
            Additional integration stepper instances as a dict.
        **kw : extra arguments
            Any additional keyword args are passed to the solver instance.
        """
        from pysph.base.kernels import Gaussian
        if kernel is None:
            kernel = Gaussian(dim=self.dim)

        steppers = {}
        if extra_steppers is not None:
            steppers.update(extra_steppers)

        from pysph.sph.integrator import EulerIntegrator
        from pysph.sph.integrator_step import GSPHStep

        cls = integrator_cls if integrator_cls is not None else EulerIntegrator
        step_cls = GSPHStep
        for name in self.fluids:
            if name not in steppers:
                steppers[name] = step_cls()

        integrator = cls(**steppers)

        from pysph.solver.solver import Solver
        self.solver = Solver(
            dim=self.dim, integrator=integrator, kernel=kernel, **kw
        )
        if 'tf' in kw:
            self.tf = kw['tf']

    def get_equations(self):
        from pysph.sph.equation import Group
        from pysph.sph.gas_dynamics.basic import (
            ScaleSmoothingLength, UpdateSmoothingLengthFromVolume,
            SummationDensity, IdealGasEOS
        )
        from pysph.sph.gas_dynamics.boundary_equations import WallBoundary
        from pysph.sph.gas_dynamics.gsph import (
            GSPHGradients, GSPHAcceleration, GSPHUpdateGhostProps
        )

        equations = []
        # Find the optimal 'h'
        group = []
        for fluid in self.fluids:
            group.append(
                ScaleSmoothingLength(dest=fluid, sources=None, factor=2.0)
            )
        equations.append(Group(equations=group, update_nnps=True))

        group = []
        for solid in self.solids:
            group.append(WallBoundary(solid, sources=self.fluids))
        equations.append(Group(equations=group))

        all_pa = self.fluids + self.solids
        group = []
        for fluid in self.fluids:
            group.append(
                SummationDensity(
                    dest=fluid, sources=all_pa, dim=self.dim
                )
            )
        equations.append(Group(equations=group, update_nnps=False))

        group = []
        for solid in self.solids:
            group.append(WallBoundary(solid, sources=self.fluids))
        equations.append(Group(equations=group))

        group = []
        for fluid in self.fluids:
            group.append(
                UpdateSmoothingLengthFromVolume(
                    dest=fluid, sources=None, k=self.kernel_factor,
                    dim=self.dim
                )
            )
        equations.append(Group(equations=group, update_nnps=True))

        group = []
        for fluid in self.fluids:
            group.append(
                SummationDensity(
                    dest=fluid, sources=all_pa, dim=self.dim
                )
            )
        equations.append(Group(equations=group, update_nnps=False))
        # Done with finding the optimal 'h'

        group = []
        for fluid in self.fluids:
            group.append(IdealGasEOS(dest=fluid, sources=None,
                                     gamma=self.gamma))
        equations.append(Group(equations=group))

        group = []
        for solid in self.solids:
            group.append(WallBoundary(solid, sources=self.fluids))
        equations.append(Group(equations=group))

        g2 = []
        for fluid in self.fluids:
            g2.append(GSPHGradients(dest=fluid, sources=all_pa))

        equations.append(Group(equations=g2))

        if self.has_ghosts:
            g3 = []
            for fluid in self.fluids:
                g3.append(GSPHUpdateGhostProps(dest=fluid, sources=None))
            equations.append(Group(
                equations=g3, update_nnps=False, real=False
            ))

        g4 = []
        for fluid in self.fluids:
            g4.append(GSPHAcceleration(
                dest=fluid, sources=all_pa, g1=self.g1, g2=self.g2,
                monotonicity=self.monotonicity,
                rsolver=self.rsolver, interpolation=self.interpolation,
                interface_zero=self.interface_zero,
                hybrid=self.hybrid, blend_alpha=self.blend_alpha,
                gamma=self.gamma, niter=self.niter,
                tol=self.tol
            ))
        equations.append(Group(equations=g4))
        return equations

    def setup_properties(self, particles, clean=True):
        from pysph.base.utils import get_particle_array_gasd
        import numpy
        particle_arrays = dict([(p.name, p) for p in particles])
        dummy = get_particle_array_gasd(name='junk')
        props = (list(dummy.properties.keys()) +
                 'px py pz ux uy uz vx vy vz wx wy wz'.split())
        output_props = dummy.output_property_arrays
        for fluid in self.fluids:
            pa = particle_arrays[fluid]
            self._ensure_properties(pa, props, clean)
            pa.add_property('orig_idx', type='int')
            nfp = pa.get_number_of_particles()
            pa.orig_idx[:] = numpy.arange(nfp)
            pa.set_output_arrays(output_props)

        solid_props = set(props) | set(('wij', 'htmp'))
        for solid in self.solids:
            pa = particle_arrays[solid]
            self._ensure_properties(pa, solid_props, clean)
            pa.set_output_arrays(output_props)


class ADKEScheme(Scheme):
    def __init__(self, fluids, solids, dim, gamma=1.4, alpha=1.0, beta=2.0,
                 k=1.0, eps=0.0, g1=0, g2=0, has_ghosts=False):
        """
        Parameters
        ----------

        fluids: list
            a list with names of fluid particle arrays
        solids: list
            a list with names of solid (or boundary) particle arrays
        dim: int
            dimensionality of the problem
        gamma: double
            Gamma for equation of state
        alpha: double
            artificial viscosity parameter
        beta: double
            artificial viscosity parameter
        k: double
            kernel scaling parameter
        eps: double
            kernel scaling parameter
        g1: double
            artificial heat conduction parameter
        g2: double
            artificial heat conduction parameter
        has_ghosts: bool
            if problem uses ghost particles (periodic or mirror)
        """
        self.fluids = fluids
        self.solids = solids
        self.dim = dim
        self.solver = None
        self.gamma = gamma
        self.alpha = alpha
        self.beta = beta
        self.k = k
        self.eps = eps
        self.g1 = g1
        self.g2 = g2
        self.has_ghosts = has_ghosts

    def get_equations(self):
        from pysph.sph.equation import Group
        from pysph.sph.basic_equations import SummationDensity
        from pysph.sph.gas_dynamics.basic import (
            IdealGasEOS, ADKEAccelerations, SummationDensityADKE,
            ADKEUpdateGhostProps
        )
        from pysph.sph.gas_dynamics.boundary_equations import WallBoundary

        equations = []

        g1 = []
        for solid in self.solids:
            g1.append(WallBoundary(solid, sources=self.fluids))
        equations.append(Group(equations=g1))

        g2 = []
        for fluid in self.fluids:
            g2.append(
                SummationDensityADKE(
                    fluid, sources=self.fluids + self.solids, k=self.k,
                    eps=self.eps
                )
            )
        equations.append(Group(g2, update_nnps=True, iterate=False))

        g3 = []
        for solid in self.solids:
            g3.append(WallBoundary(solid, sources=self.fluids))
        equations.append(Group(equations=g3))

        g4 = []
        for fluid in self.fluids:
            g4.append(SummationDensity(fluid, self.fluids+self.solids))
        equations.append(Group(g4))

        g5 = []
        for solid in self.solids:
            g5.append(WallBoundary(solid, sources=self.fluids))
        equations.append(Group(equations=g5))

        g6 = []
        for elem in self.fluids+self.solids:
            g6.append(IdealGasEOS(elem, sources=None, gamma=self.gamma))
        equations.append(Group(equations=g6))

        if self.has_ghosts:
            gh = []
            for fluid in self.fluids:
                gh.append(
                    ADKEUpdateGhostProps(dest=fluid, sources=None)
                )
            equations.append(Group(equations=gh, real=False))

        g7 = []
        for fluid in self.fluids:
            g7.append(
                ADKEAccelerations(
                    dest=fluid, sources=self.fluids + self.solids,
                    alpha=self.alpha, beta=self.beta, g1=self.g1, g2=self.g2,
                    k=self.k, eps=self.eps
                )
            )
        equations.append(Group(equations=g7))
        return equations

    def configure_solver(self, kernel=None, integrator_cls=None,
                         extra_steppers=None, **kw):
        """Configure the solver to be generated.

        Parameters
        ----------

        kernel : Kernel instance.
            Kernel to use, if none is passed a default one is used.
        integrator_cls : pysph.sph.integrator.Integrator
            Integrator class to use, use sensible default if none is
            passed.
        extra_steppers : dict
            Additional integration stepper instances as a dict.
        **kw : extra arguments
            Any additional keyword args are passed to the solver instance.
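        The default ``Gaussian`` kernel used when none is passed has the
        classical form W(q) proportional to exp(-q**2). A minimal 1D sketch
        for reference (the 1/(sqrt(pi)*h) normalization and the cutoff at
        q = 3 are the standard choices, assumed here rather than taken from
        PySPH's own implementation):

```python
def gaussian_kernel_1d(r, h):
    # W(q) = exp(-q**2) / (sqrt(pi) * h), truncated at q = 3.
    import math
    q = abs(r) / h
    if q >= 3.0:
        return 0.0
    return math.exp(-q * q) / (math.sqrt(math.pi) * h)
```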
""" from pysph.base.kernels import Gaussian if kernel is None: kernel = Gaussian(dim=self.dim) steppers = {} if extra_steppers is not None: steppers.update(extra_steppers) from pysph.sph.integrator import PECIntegrator from pysph.sph.integrator_step import ADKEStep cls = integrator_cls if integrator_cls is not None else PECIntegrator step_cls = ADKEStep for name in self.fluids: if name not in steppers: steppers[name] = step_cls() integrator = cls(**steppers) from pysph.solver.solver import Solver self.solver = Solver( dim=self.dim, integrator=integrator, kernel=kernel, **kw ) def setup_properties(self, particles, clean=True): from pysph.base.utils import get_particle_array import numpy particle_arrays = dict([(p.name, p) for p in particles]) required_props = [ 'x', 'y', 'z', 'u', 'v', 'w', 'rho', 'h', 'm', 'cs', 'p', 'e', 'au', 'av', 'aw', 'arho', 'ae', 'am', 'ah', 'x0', 'y0', 'z0', 'u0', 'v0', 'w0', 'rho0', 'e0', 'h0', 'div', 'h0', 'wij', 'htmp', 'logrho'] dummy = get_particle_array(additional_props=required_props, name='junk') dummy.set_output_arrays( ['x', 'y', 'u', 'v', 'rho', 'm', 'h', 'cs', 'p', 'e', 'au', 'av', 'ae', 'pid', 'gid', 'tag'] ) props = list(dummy.properties.keys()) output_props = dummy.output_property_arrays for solid in self.solids: pa = particle_arrays[solid] self._ensure_properties(pa, props, clean) pa.set_output_arrays(output_props) for fluid in self.fluids: pa = particle_arrays[fluid] self._ensure_properties(pa, props, clean) pa.add_property('orig_idx', type='int') nfp = pa.get_number_of_particles() pa.orig_idx[:] = numpy.arange(nfp) pa.set_output_arrays(output_props) pysph-master/pysph/sph/solid_mech/000077500000000000000000000000001356347341600175715ustar00rootroot00000000000000pysph-master/pysph/sph/solid_mech/__init__.py000066400000000000000000000000001356347341600216700ustar00rootroot00000000000000pysph-master/pysph/sph/solid_mech/basic.py000066400000000000000000000473071356347341600212370ustar00rootroot00000000000000""" Basic Equations 
for Solid Mechanics ################################### References ---------- .. [Gray2001] J. P. Gray et al., "SPH elastic dynamics", Computer Methods in Applied Mechanics and Engineering, 190 (2001), pp 6641 - 6662. """ from pysph.sph.equation import Equation from pysph.sph.scheme import Scheme from textwrap import dedent from pysph.base.utils import get_particle_array import numpy as np def get_bulk_mod(G, nu): ''' Get the bulk modulus from shear modulus and Poisson ratio ''' return 2.0 * G * (1 + nu) / (3 * (1 - 2 * nu)) def get_speed_of_sound(E, nu, rho0): return np.sqrt(E / (3 * (1. - 2 * nu) * rho0)) def get_shear_modulus(E, nu): return E / (2. * (1. + nu)) def get_particle_array_elastic_dynamics(constants=None, **props): """Return a particle array for the Standard SPH formulation of solids. Parameters ---------- constants : dict Dictionary of constants Other Parameters ---------------- props : dict Additional keywords passed are set as the property arrays. See Also -------- get_particle_array """ solids_props = [ 'cs', 'e', 'v00', 'v01', 'v02', 'v10', 'v11', 'v12', 'v20', 'v21', 'v22', 'r00', 'r01', 'r02', 'r11', 'r12', 'r22', 's00', 's01', 's02', 's11', 's12', 's22', 'as00', 'as01', 'as02', 'as11', 'as12', 'as22', 's000', 's010', 's020', 's110', 's120', 's220', 'arho', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'ae', 'rho0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'e0' ] # set wdeltap to -1. Which defaults to no self correction consts = { 'wdeltap': -1., 'n': 4, 'G': 0.0, 'E': 0.0, 'nu': 0.0, 'rho_ref': 1000.0, 'c0_ref': 0.0 } if constants: consts.update(constants) pa = get_particle_array(constants=consts, additional_props=solids_props, **props) # set the shear modulus G pa.G[0] = get_shear_modulus(pa.E[0], pa.nu[0]) # set the speed of sound pa.cs = np.ones_like(pa.x) * get_speed_of_sound(pa.E[0], pa.nu[0], pa.rho_ref[0]) pa.c0_ref[0] = get_speed_of_sound(pa.E[0], pa.nu[0], pa.rho_ref[0]) # default property arrays to save out. 
    pa.set_output_arrays([
        'x', 'y', 'z', 'u', 'v', 'w', 'rho', 'm', 'h', 'pid', 'gid', 'tag',
        'p'
    ])

    return pa


class IsothermalEOS(Equation):
    r"""Compute the pressure using the Isothermal equation of state:

    :math:`p = p_0 + c_0^2(\rho_0 - \rho)`
    """
    def loop(self, d_idx, d_rho, d_p, d_c0_ref, d_rho_ref):
        d_p[d_idx] = d_c0_ref[0] * d_c0_ref[0] * (d_rho[d_idx] -
                                                  d_rho_ref[0])


class MonaghanArtificialStress(Equation):
    r"""**Artificial stress to remove tensile instability**

    The dispersion relations in [Gray2001] are used to determine the
    different components of :math:`R`.

    Angle of rotation for particle :math:`a`

    .. math::

        \tan{2 \theta_a} = \frac{2\sigma_a^{xy}}{\sigma_a^{xx} -
        \sigma_a^{yy}}

    In the rotated frame, the new components of the stress tensor are

    .. math::

        \bar{\sigma}_a^{xx} = \cos^2{\theta_a} \sigma_a^{xx} +
        2\sin{\theta_a}\cos{\theta_a}\sigma_a^{xy} +
        \sin^2{\theta_a}\sigma_a^{yy}\\

        \bar{\sigma}_a^{yy} = \sin^2{\theta_a} \sigma_a^{xx} -
        2\sin{\theta_a}\cos{\theta_a}\sigma_a^{xy} +
        \cos^2{\theta_a}\sigma_a^{yy}

    Components of :math:`R` in the rotated frame:

    .. math::

        \bar{R}_{a}^{xx}=\begin{cases}
        -\epsilon\frac{\bar{\sigma}_{a}^{xx}}{\rho^{2}} &
        \bar{\sigma}_{a}^{xx}>0\\
        0 & \bar{\sigma}_{a}^{xx}\leq0\end{cases}\\

        \bar{R}_{a}^{yy}=\begin{cases}
        -\epsilon\frac{\bar{\sigma}_{a}^{yy}}{\rho^{2}} &
        \bar{\sigma}_{a}^{yy}>0\\
        0 & \bar{\sigma}_{a}^{yy}\leq0\end{cases}

    Components of :math:`R` in the original frame:

    .. math::

        R_a^{xx} = \cos^2{\theta_a} \bar{R}_a^{xx} + \sin^2{\theta_a}
        \bar{R}_a^{yy}\\

        R_a^{yy} = \sin^2{\theta_a} \bar{R}_a^{xx} + \cos^2{\theta_a}
        \bar{R}_a^{yy}\\

        R_a^{xy} = \sin{\theta_a} \cos{\theta_a}\left(\bar{R}_a^{xx} -
        \bar{R}_a^{yy}\right)
    """

    def __init__(self, dest, sources, eps=0.3):
        r"""
        Parameters
        ----------
        eps : float
            constant
        """
        self.eps = eps
        super(MonaghanArtificialStress, self).__init__(dest, sources)

    def _cython_code_(self):
        code = dedent("""
        cimport cython
        from pysph.base.linalg3 cimport eigen_decomposition
        from pysph.base.linalg3 cimport transform_diag_inv
        """)
        return code

    def loop(self, d_idx, d_rho, d_p,
             d_s00, d_s01, d_s02, d_s11, d_s12, d_s22,
             d_r00, d_r01, d_r02, d_r11, d_r12, d_r22):
        r"""Compute the stress terms

        Parameters
        ----------
        d_sxx : DoubleArray
            Stress Tensor Deviatoric components (Symmetric)
        d_rxx : DoubleArray
            Artificial stress components (Symmetric)
        """
        # 1/rho_a^2
        rhoi = d_rho[d_idx]
        rhoi21 = 1. / (rhoi * rhoi)

        ## Matrix and vector declarations ##

        # Matrix of Eigenvectors (columns)
        R = declare('matrix((3,3))')
        # Artificial stress in the original coordinates
        Rab = declare('matrix((3,3))')
        # Stress tensor with pressure.
        S = declare('matrix((3,3))')
        # Eigenvalues
        V = declare('matrix((3,))')
        # Artificial stress in principle direction
        rd = declare('matrix((3,))')

        # get the diagonal terms for the stress tensor adding pressure
        S[0][0] = d_s00[d_idx] - d_p[d_idx]
        S[1][1] = d_s11[d_idx] - d_p[d_idx]
        S[2][2] = d_s22[d_idx] - d_p[d_idx]

        S[1][2] = d_s12[d_idx]
        S[2][1] = d_s12[d_idx]

        S[0][2] = d_s02[d_idx]
        S[2][0] = d_s02[d_idx]

        S[0][1] = d_s01[d_idx]
        S[1][0] = d_s01[d_idx]

        # compute the principle stresses
        eigen_decomposition(S, R, cython.address(V[0]))

        # artificial stress corrections
        if V[0] > 0:
            rd[0] = -self.eps * V[0] * rhoi21
        else:
            rd[0] = 0

        if V[1] > 0:
            rd[1] = -self.eps * V[1] * rhoi21
        else:
            rd[1] = 0

        if V[2] > 0:
            rd[2] = -self.eps * V[2] * rhoi21
        else:
            rd[2] = 0

        # transform artificial stresses in original frame
        transform_diag_inv(cython.address(rd[0]), R, Rab)

        # store the values
        d_r00[d_idx] = Rab[0][0]
        d_r11[d_idx] = Rab[1][1]
        d_r22[d_idx] = Rab[2][2]

        d_r12[d_idx] = Rab[1][2]
        d_r02[d_idx] = Rab[0][2]
        d_r01[d_idx] = Rab[0][1]


class MomentumEquationWithStress(Equation):
    r"""**Momentum Equation with Artificial Stress**

    .. math::

        \frac{D\vec{v_a}^i}{Dt} = \sum_b m_b\left(
        \frac{\sigma_a^{ij}}{\rho_a^2} + \frac{\sigma_b^{ij}}{\rho_b^2} +
        R_{ab}^{ij}f^n\right)\nabla_a W_{ab}

    where

    .. math::

        f_{ab} = \frac{W(r_{ab})}{W(\Delta p)}\\

        R_{ab}^{ij} = R_{a}^{ij} + R_{b}^{ij}
    """
    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_rho, s_rho, s_m, d_p, s_p,
             d_s00, d_s01, d_s02, d_s11, d_s12, d_s22,
             s_s00, s_s01, s_s02, s_s11, s_s12, s_s22,
             d_r00, d_r01, d_r02, d_r11, d_r12, d_r22,
             s_r00, s_r01, s_r02, s_r11, s_r12, s_r22,
             d_au, d_av, d_aw, d_wdeltap, d_n,
             WIJ, DWIJ):
        pa = d_p[d_idx]
        pb = s_p[s_idx]

        rhoa = d_rho[d_idx]
        rhob = s_rho[s_idx]

        rhoa21 = 1. / (rhoa * rhoa)
        rhob21 = 1. / (rhob * rhob)

        s00a = d_s00[d_idx]
        s01a = d_s01[d_idx]
        s02a = d_s02[d_idx]

        s10a = d_s01[d_idx]
        s11a = d_s11[d_idx]
        s12a = d_s12[d_idx]

        s20a = d_s02[d_idx]
        s21a = d_s12[d_idx]
        s22a = d_s22[d_idx]

        s00b = s_s00[s_idx]
        s01b = s_s01[s_idx]
        s02b = s_s02[s_idx]

        s10b = s_s01[s_idx]
        s11b = s_s11[s_idx]
        s12b = s_s12[s_idx]

        s20b = s_s02[s_idx]
        s21b = s_s12[s_idx]
        s22b = s_s22[s_idx]

        r00a = d_r00[d_idx]
        r01a = d_r01[d_idx]
        r02a = d_r02[d_idx]

        r10a = d_r01[d_idx]
        r11a = d_r11[d_idx]
        r12a = d_r12[d_idx]

        r20a = d_r02[d_idx]
        r21a = d_r12[d_idx]
        r22a = d_r22[d_idx]

        r00b = s_r00[s_idx]
        r01b = s_r01[s_idx]
        r02b = s_r02[s_idx]

        r10b = s_r01[s_idx]
        r11b = s_r11[s_idx]
        r12b = s_r12[s_idx]

        r20b = s_r02[s_idx]
        r21b = s_r12[s_idx]
        r22b = s_r22[s_idx]

        # Add pressure to the deviatoric components
        s00a = s00a - pa
        s00b = s00b - pb

        s11a = s11a - pa
        s11b = s11b - pb

        s22a = s22a - pa
        s22b = s22b - pb

        # compute the kernel correction term; if wdeltap is less than
        # zero then no correction is needed
        if d_wdeltap[0] > 0.:
            fab = WIJ / d_wdeltap[0]
            fab = pow(fab, d_n[0])

            art_stress00 = fab * (r00a + r00b)
            art_stress01 = fab * (r01a + r01b)
            art_stress02 = fab * (r02a + r02b)

            art_stress10 = art_stress01
            art_stress11 = fab * (r11a + r11b)
            art_stress12 = fab * (r12a + r12b)

            art_stress20 = art_stress02
            art_stress21 = art_stress12
            art_stress22 = fab * (r22a + r22b)
        else:
            art_stress00 = 0.0
            art_stress01 = 0.0
            art_stress02 = 0.0

            art_stress10 = art_stress01
            art_stress11 = 0.0
            art_stress12 = 0.0

            art_stress20 = art_stress02
            art_stress21 = art_stress12
            art_stress22 = 0.0

        # compute accelerations
        mb = s_m[s_idx]

        d_au[d_idx] += (
            mb * (s00a * rhoa21 + s00b * rhob21 + art_stress00) * DWIJ[0] +
            mb * (s01a * rhoa21 + s01b * rhob21 + art_stress01) * DWIJ[1] +
            mb * (s02a * rhoa21 + s02b * rhob21 + art_stress02) * DWIJ[2])

        d_av[d_idx] += (
            mb * (s10a * rhoa21 + s10b * rhob21 + art_stress10) * DWIJ[0] +
            mb * (s11a * rhoa21 + s11b * rhob21 + art_stress11) * DWIJ[1] +
            mb * (s12a * rhoa21 + s12b * rhob21 + art_stress12) * DWIJ[2])

        d_aw[d_idx] += (
            mb * (s20a * rhoa21 + s20b * rhob21 + art_stress20) * DWIJ[0] +
            mb * (s21a * rhoa21 + s21b * rhob21 + art_stress21) * DWIJ[1] +
            mb * (s22a * rhoa21 + s22b * rhob21 + art_stress22) * DWIJ[2])


class HookesDeviatoricStressRate(Equation):
    r"""**Rate of change of stress**

    .. math::

        \frac{dS^{ij}}{dt} = 2\mu\left(\epsilon^{ij} -
        \frac{1}{3}\delta^{ij}\epsilon^{kk}\right) + S^{ik}\Omega^{jk} +
        \Omega^{ik}S^{kj}

    where

    .. math::

        \epsilon^{ij} = \frac{1}{2}\left(\frac{\partial v^i}{\partial x^j}
        + \frac{\partial v^j}{\partial x^i}\right)\\

        \Omega^{ij} = \frac{1}{2}\left(\frac{\partial v^i}{\partial x^j} -
        \frac{\partial v^j}{\partial x^i}\right)
    """
    def initialize(self, d_idx, d_as00, d_as01, d_as02,
                   d_as11, d_as12, d_as22):
        d_as00[d_idx] = 0.0
        d_as01[d_idx] = 0.0
        d_as02[d_idx] = 0.0
        d_as11[d_idx] = 0.0
        d_as12[d_idx] = 0.0
        d_as22[d_idx] = 0.0

    def loop(self, d_idx, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22,
             d_v00, d_v01, d_v02, d_v10, d_v11, d_v12, d_v20, d_v21, d_v22,
             d_as00, d_as01, d_as02, d_as11, d_as12, d_as22, d_G):

        v00 = d_v00[d_idx]
        v01 = d_v01[d_idx]
        v02 = d_v02[d_idx]

        v10 = d_v10[d_idx]
        v11 = d_v11[d_idx]
        v12 = d_v12[d_idx]

        v20 = d_v20[d_idx]
        v21 = d_v21[d_idx]
        v22 = d_v22[d_idx]

        s00 = d_s00[d_idx]
        s01 = d_s01[d_idx]
        s02 = d_s02[d_idx]

        s10 = d_s01[d_idx]
        s11 = d_s11[d_idx]
        s12 = d_s12[d_idx]

        s20 = d_s02[d_idx]
        s21 = d_s12[d_idx]
        s22 = d_s22[d_idx]

        # strain rate tensor is symmetric
        eps00 = v00
        eps01 = 0.5 * (v01 + v10)
        eps02 = 0.5 * (v02 + v20)

        eps10 = eps01
        eps11 = v11
        eps12 = 0.5 * (v12 + v21)

        eps20 = eps02
        eps21 = eps12
        eps22 = v22

        # rotation tensor is anti-symmetric
        omega00 = 0.0
        omega01 = 0.5 * (v01 - v10)
        omega02 = 0.5 * (v02 - v20)

        omega10 = -omega01
        omega11 = 0.0
        omega12 = 0.5 * (v12 - v21)

        omega20 = -omega02
        omega21 = -omega12
        omega22 = 0.0

        tmp = 2.0 * d_G[0]
        trace = 1.0 / 3.0 * (eps00 + eps11 + eps22)

        # S_00
        d_as00[d_idx] = tmp*(eps00 - trace) + \
            (s00*omega00 + s01*omega01 + s02*omega02) + \
            (s00*omega00 + s10*omega01 + s20*omega02)
        #
S_01 d_as01[d_idx] = tmp*(eps01) + \ ( s00*omega10 + s01*omega11 + s02*omega12) + \ ( s01*omega00 + s11*omega01 + s21*omega02) # S_02 d_as02[d_idx] = tmp*eps02 + \ (s00*omega20 + s01*omega21 + s02*omega22) + \ (s02*omega00 + s12*omega01 + s22*omega02) # S_11 d_as11[d_idx] = tmp*( eps11 - trace ) + \ (s10*omega10 + s11*omega11 + s12*omega12) + \ (s01*omega10 + s11*omega11 + s21*omega12) # S_12 d_as12[d_idx] = tmp*eps12 + \ (s10*omega20 + s11*omega21 + s12*omega22) + \ (s02*omega10 + s12*omega11 + s22*omega12) # S_22 d_as22[d_idx] = tmp*(eps22 - trace) + \ (s20*omega20 + s21*omega21 + s22*omega22) + \ (s02*omega20 + s12*omega21 + s22*omega22) class EnergyEquationWithStress(Equation): def __init__(self, dest, sources, alpha=1.0, beta=1.0, eta=0.01): self.alpha = float(alpha) self.beta = float(beta) self.eta = float(eta) super(EnergyEquationWithStress, self).__init__(dest, sources) def initialize(self, d_idx, d_ae): d_ae[d_idx] = 0.0 def loop(self, d_idx, s_idx, s_m, d_rho, s_rho, d_p, s_p, d_cs, s_cs, d_ae, XIJ, VIJ, DWIJ, HIJ, R2IJ, RHOIJ1): rhoa = d_rho[d_idx] ca = d_cs[d_idx] pa = d_p[d_idx] rhob = s_rho[s_idx] cb = s_cs[s_idx] pb = s_p[s_idx] mb = s_m[s_idx] rhoa2 = 1. / (rhoa * rhoa) rhob2 = 1. 
/ (rhob * rhob) # artificial viscosity vijdotxij = VIJ[0] * XIJ[0] + VIJ[1] * XIJ[1] + VIJ[2] * XIJ[2] piij = 0.0 if vijdotxij < 0: cij = 0.5 * (d_cs[d_idx] + s_cs[s_idx]) muij = (HIJ * vijdotxij) / (R2IJ + self.eta * self.eta * HIJ * HIJ) piij = -self.alpha * cij * muij + self.beta * muij * muij piij = piij * RHOIJ1 vijdotdwij = VIJ[0] * DWIJ[0] + VIJ[1] * DWIJ[1] + VIJ[2] * DWIJ[2] # thermal energy contribution d_ae[d_idx] += 0.5 * mb * (pa * rhoa2 + pb * rhob2 + piij) def post_loop(self, d_idx, d_rho, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_v00, d_v01, d_v02, d_v10, d_v11, d_v12, d_v20, d_v21, d_v22, d_ae): # particle density rhoa = d_rho[d_idx] # deviatoric stress rate (symmetric) s00a = d_s00[d_idx] s01a = d_s01[d_idx] s02a = d_s02[d_idx] s10a = d_s01[d_idx] s11a = d_s11[d_idx] s12a = d_s12[d_idx] s20a = d_s02[d_idx] s21a = d_s12[d_idx] s22a = d_s22[d_idx] # strain rate tensor (symmetric) eps00 = d_v00[d_idx] eps01 = 0.5 * (d_v01[d_idx] + d_v10[d_idx]) eps02 = 0.5 * (d_v02[d_idx] + d_v20[d_idx]) eps10 = eps01 eps11 = d_v11[d_idx] eps12 = 0.5 * (d_v12[d_idx] + d_v21[d_idx]) eps20 = eps02 eps21 = eps12 eps22 = d_v22[d_idx] # energy accelerations #sdoteij = s00a*eps00 + s01a*eps01 + s10a*eps10 + s11a*eps11 sdoteij = (s00a * eps00 + s01a * eps01 + s02a * eps02 + s10a * eps10 + s11a * eps11 + s12a * eps12 + s20a * eps20 + s21a * eps21 + s22a * eps22) d_ae[d_idx] += 1. 
/ rhoa * sdoteij class ElasticSolidsScheme(Scheme): def __init__(self, elastic_solids, solids, dim, artificial_stress_eps=0.3, xsph_eps=0.5, alpha=1.0, beta=1.0): self.elastic_solids = elastic_solids self.solids = solids self.dim = dim self.solver = None self.alpha = alpha self.beta = beta self.xsph_eps = xsph_eps self.artificial_stress_eps = artificial_stress_eps def get_equations(self): from pysph.sph.equation import Group from pysph.sph.basic_equations import ( ContinuityEquation, MonaghanArtificialViscosity, XSPHCorrection, VelocityGradient2D) from pysph.sph.solid_mech.basic import ( IsothermalEOS, MomentumEquationWithStress, HookesDeviatoricStressRate, MonaghanArtificialStress) equations = [] g1 = [] all = self.solids + self.elastic_solids for elastic_solid in self.elastic_solids: g1.append( # p IsothermalEOS(elastic_solid, sources=None)) g1.append( # vi,j : requires properties v00, v01, v10, v11 VelocityGradient2D(dest=elastic_solid, sources=all)) g1.append( # rij : requires properties r00, r01, r02, r11, r12, r22, # s00, s01, s02, s11, s12, s22 MonaghanArtificialStress(dest=elastic_solid, sources=None, eps=self.artificial_stress_eps)) equations.append(Group(equations=g1)) g2 = [] for elastic_solid in self.elastic_solids: g2.append(ContinuityEquation(dest=elastic_solid, sources=all), ) g2.append( # au, av MomentumEquationWithStress(dest=elastic_solid, sources=all), ) g2.append( # au, av MonaghanArtificialViscosity(dest=elastic_solid, sources=all, alpha=self.alpha, beta=self.beta), ) g2.append( # a_s00, a_s01, a_s11 HookesDeviatoricStressRate(dest=elastic_solid, sources=None), ) g2.append( # ax, ay, az XSPHCorrection(dest=elastic_solid, sources=[elastic_solid], eps=self.xsph_eps), ) equations.append(Group(g2)) return equations def configure_solver(self, kernel=None, integrator_cls=None, extra_steppers=None, **kw): from pysph.base.kernels import CubicSpline if kernel is None: kernel = CubicSpline(dim=self.dim) steppers = {} if extra_steppers is not None: 
steppers.update(extra_steppers) from pysph.sph.integrator import EPECIntegrator from pysph.sph.integrator_step import SolidMechStep cls = integrator_cls if integrator_cls is not None else EPECIntegrator step_cls = SolidMechStep for name in self.elastic_solids: if name not in steppers: steppers[name] = step_cls() integrator = cls(**steppers) from pysph.solver.solver import Solver self.solver = Solver(dim=self.dim, integrator=integrator, kernel=kernel, **kw) pysph-master/pysph/sph/solid_mech/hvi.py000066400000000000000000000054451356347341600207410ustar00rootroot00000000000000""" Equations for the High Velocity Impact Problems ############################################### """ from math import sqrt from pysph.sph.equation import Equation class VonMisesPlasticity2D(Equation): def __init__(self, dest, sources, flow_stress): self.flow_stress2 = float(flow_stress*flow_stress) self.factor = sqrt( 2.0/3.0 )*flow_stress super(VonMisesPlasticity2D,self).__init__(dest, sources) def loop(self, d_idx, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22): s00a = d_s00[d_idx] s01a = d_s01[d_idx] s02a = d_s02[d_idx] s10a = d_s01[d_idx] s11a = d_s11[d_idx] s12a = d_s12[d_idx] s20a = d_s02[d_idx] s21a = d_s12[d_idx] s22a = d_s22[d_idx] J = s00a* s00a + 2.0 * s01a*s10a + s11a*s11a scale = 1.0 if (J > 2.0/3.0 * self.flow_stress2): scale = self.factor/sqrt(J) # store the stresses d_s00[d_idx] = scale * s00a d_s01[d_idx] = scale * s01a d_s02[d_idx] = scale * s02a d_s11[d_idx] = scale * s11a d_s12[d_idx] = scale * s12a d_s22[d_idx] = scale * s22a class StiffenedGasEOS(Equation): """Stiffened-gas EOS from "A Free Lagrange Augmented Godunov Method for the Simulation of Elastic-Plastic Solids", B. P. Howell and G. J. Ball, JCP (2002). 
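The radial-return scaling performed by `VonMisesPlasticity2D` above is easy to check in isolation. A minimal plain-Python sketch (the helper name is illustrative, not part of PySPH):

```python
from math import sqrt

def von_mises_scale(s00, s01, s11, flow_stress):
    """Factor that scales a 2D deviatoric stress back onto the von Mises
    yield surface, mirroring the logic in VonMisesPlasticity2D.loop."""
    # second invariant J for the 2D state (s10 == s01 by symmetry)
    J = s00 * s00 + 2.0 * s01 * s01 + s11 * s11
    if J > 2.0 / 3.0 * flow_stress ** 2:
        # outside the yield surface: shrink onto it
        return sqrt(2.0 / 3.0) * flow_stress / sqrt(J)
    # inside the yield surface: leave the stress untouched
    return 1.0
```

States inside the yield surface keep `scale == 1.0`; states outside are scaled down by `sqrt(2/3) * flow_stress / sqrt(J)`.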
    http://dx.doi.org/10.1006/jcph.2001.6931
    """
    def __init__(self, dest, sources, gamma, r0, c0):
        self.gamma = float(gamma)  # Gruneisen gamma
        self.c0 = float(c0)        # unshocked sound speed
        self.r0 = float(r0)        # reference density
        super(StiffenedGasEOS, self).__init__(dest, sources)

    def loop(self, d_idx, d_e, d_rho, d_p, d_cs):
        # Eq. (17)
        d_p[d_idx] = self.c0*self.c0 * (d_rho[d_idx] - self.r0) + \
            (self.gamma - 1.0)*d_rho[d_idx]*d_e[d_idx]

        # Eq. (21)
        d_cs[d_idx] = sqrt(
            self.c0*self.c0 + (self.gamma - 1.0) *
            (d_e[d_idx] + d_p[d_idx]/d_rho[d_idx])
        )


class MieGruneisenEOS(Equation):
    def __init__(self, dest, sources, gamma, r0, c0, S):
        self.gamma = float(gamma)
        self.r0 = float(r0)
        self.c0 = float(c0)
        self.S = float(S)

        self.a0 = a0 = float(r0 * c0 * c0)
        self.b0 = a0 * (1 + 2.0*(S - 1.0))
        # note: this re-binds self.c0 from the sound speed set above to
        # the third Hugoniot coefficient, which is what loop() uses
        self.c0 = a0 * (2*(S - 1.0) + 3*(S - 1.0)*(S - 1.0))

        super(MieGruneisenEOS, self).__init__(dest, sources)

    def loop(self, d_idx, d_p, d_rho, d_e):
        rhoa = d_rho[d_idx]
        ea = d_e[d_idx]

        gamma = self.gamma

        ratio = rhoa/self.r0 - 1.0
        ratio2 = ratio * ratio

        PH = self.a0 * ratio
        if ratio > 0:
            PH = PH + ratio2 * (self.b0 + self.c0*ratio)

        d_p[d_idx] = (1. - 0.5*gamma*ratio) * PH + rhoa * ea * gamma

pysph-master/pysph/sph/sph_compiler.py

class SPHCompiler(object):
    def __init__(self, acceleration_evals, integrator):
        """Compiles the acceleration evaluator and integrator to produce a
        fast version using one of the supported backends. If the backend is
        not given, one is automatically chosen based on the configuration.

        Parameters
        ----------

        acceleration_evals: .acceleration_eval.AccelerationEval instance or
            list of instances.

        integrator: .integrator.Integrator instance
        """
        if not isinstance(acceleration_evals, (list, tuple)):
            acceleration_evals = [acceleration_evals]
        self.acceleration_evals = list(acceleration_evals)
        self.integrator = integrator
        if integrator is not None:
            integrator.set_acceleration_evals(self.acceleration_evals)
        self.backend = acceleration_evals[0].backend
        self._setup_helpers()
        self.module = None

    # Public interface. ####################################################
    def compile(self):
        """Compile the generated code to extension modules and setup the
        objects that need this by calling their setup_compiled_module.
        """
        if self.module is not None:
            return

        # We compile the first acceleration eval along with the integrator.
        # The rest of the acceleration evals (if present) are independent.
        code0 = self._get_code()
        helper0 = self.acceleration_eval_helpers[0]
        mod = helper0.compile(code0)
        helper0.setup_compiled_module(mod)
        self.module = mod

        c_accel_eval0 = self.acceleration_evals[0].c_acceleration_eval
        if self.backend == 'cython':
            if self.integrator is not None:
                self.integrator_helper.setup_compiled_module(
                    mod, c_accel_eval0
                )
        elif self.backend == 'opencl' or self.backend == 'cuda':
            if self.integrator is not None:
                self.integrator_helper.setup_compiled_module(
                    mod, c_accel_eval0
                )

        # Setup the remaining acceleration evals.
        for helper in self.acceleration_eval_helpers[1:]:
            mod = helper.compile(helper.get_code())
            helper.setup_compiled_module(mod)

    # Private interface. ###################################################
    def _get_code(self):
        main = self.acceleration_eval_helpers[0].get_code()
        integrator_code = self.integrator_helper.get_code()
        return main + integrator_code

    def _setup_helpers(self):
        if self.backend == 'cython':
            from .acceleration_eval_cython_helper import \
                AccelerationEvalCythonHelper
            cls = AccelerationEvalCythonHelper
        elif self.backend == 'opencl' or self.backend == 'cuda':
            from .acceleration_eval_gpu_helper import \
                AccelerationEvalGPUHelper
            cls = AccelerationEvalGPUHelper
        self.acceleration_eval_helpers = [
            cls(a_eval) for a_eval in self.acceleration_evals
        ]
        self._setup_integrator_helper()

    def _setup_integrator_helper(self):
        a_helper0 = self.acceleration_eval_helpers[0]
        if self.backend == 'cython':
            from .integrator_cython_helper import \
                IntegratorCythonHelper
            self.integrator_helper = IntegratorCythonHelper(
                self.integrator, a_helper0
            )
        elif self.backend == 'opencl' or self.backend == 'cuda':
            from .integrator_gpu_helper import IntegratorGPUHelper
            self.integrator_helper = IntegratorGPUHelper(
                self.integrator, a_helper0
            )

pysph-master/pysph/sph/surface_tension.py

"""
Implementation of the equations used for surface tension modelling,
for example in KHI simulations. The references are as under:

 - M. Shadloo, M. Yildiz, "Numerical modelling of Kelvin-Helmholtz
   instability using smoothed particle hydrodynamics", IJNME, 2011,
   87, pp 988--1006 [SY11]

 - Joseph P. Morris "Simulating surface tension with smoothed particle
   hydrodynamics", JCP, 2000, 33, pp 333--353 [JM00]

 - Adami et al. "A new surface-tension formulation for multi-phase SPH
   using a reproducing divergence approximation", JCP 2010, 229,
   pp 5011--5021 [A10]

 - X.Y.Hu, N.A. Adams.
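The Hugoniot pressure branch in `MieGruneisenEOS.loop` above can be mirrored in throwaway plain Python for a single particle (illustrative helper name, not PySPH API):

```python
def mie_gruneisen_pressure(rho, e, r0, c0, S, gamma):
    """Plain-Python mirror of MieGruneisenEOS.loop for one particle.

    rho, e are the particle density and specific internal energy; r0, c0,
    S, gamma are the reference density, sound speed, Hugoniot slope and
    Gruneisen gamma.
    """
    # Hugoniot coefficients, as built in MieGruneisenEOS.__init__
    a0 = r0 * c0 * c0
    b0 = a0 * (1 + 2.0 * (S - 1.0))
    c0_coef = a0 * (2 * (S - 1.0) + 3 * (S - 1.0) ** 2)

    ratio = rho / r0 - 1.0
    PH = a0 * ratio
    if ratio > 0:
        # compression: include the quadratic/cubic Hugoniot terms
        PH += ratio * ratio * (b0 + c0_coef * ratio)

    return (1.0 - 0.5 * gamma * ratio) * PH + rho * e * gamma
```

In tension (`ratio <= 0`) only the linear term survives, which is exactly the branch taken in `loop`.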
   "A multi-phase SPH method for macroscopic and mesoscopic flows",
   JCP 2006, 213, pp 844-861 [XA06]

"""

from math import sqrt

from pysph.sph.equation import Group, Equation
from pysph.sph.gas_dynamics.basic import ScaleSmoothingLength
from pysph.sph.wc.transport_velocity import SummationDensity, \
    MomentumEquationPressureGradient, StateEquation, \
    MomentumEquationArtificialStress, MomentumEquationViscosity, \
    SolidWallNoSlipBC
from pysph.sph.wc.linalg import gj_solve, augmented_matrix
from pysph.sph.wc.basic import TaitEOS


class SurfaceForceAdami(Equation):
    def initialize(self, d_au, d_av, d_idx):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0

    def loop(self, d_au, d_av, d_aw, d_idx, d_m, DWIJ, d_pi00, d_pi01,
             d_pi02, d_pi10, d_pi11, d_pi12, d_pi20, d_pi21, d_pi22,
             s_pi00, s_pi01, s_pi02, s_pi10, s_pi11, s_pi12, s_pi20,
             s_pi21, s_pi22, d_V, s_V, s_idx):
        s2 = s_V[s_idx]*s_V[s_idx]
        f00 = (d_pi00[d_idx]/(d_V[d_idx]*d_V[d_idx]) + s_pi00[s_idx]/s2)
        f01 = (d_pi01[d_idx]/(d_V[d_idx]*d_V[d_idx]) + s_pi01[s_idx]/s2)
        f02 = (d_pi02[d_idx]/(d_V[d_idx]*d_V[d_idx]) + s_pi02[s_idx]/s2)
        f10 = (d_pi10[d_idx]/(d_V[d_idx]*d_V[d_idx]) + s_pi10[s_idx]/s2)
        f11 = (d_pi11[d_idx]/(d_V[d_idx]*d_V[d_idx]) + s_pi11[s_idx]/s2)
        f12 = (d_pi12[d_idx]/(d_V[d_idx]*d_V[d_idx]) + s_pi12[s_idx]/s2)
        f20 = (d_pi20[d_idx]/(d_V[d_idx]*d_V[d_idx]) + s_pi20[s_idx]/s2)
        f21 = (d_pi21[d_idx]/(d_V[d_idx]*d_V[d_idx]) + s_pi21[s_idx]/s2)
        f22 = (d_pi22[d_idx]/(d_V[d_idx]*d_V[d_idx]) + s_pi22[s_idx]/s2)
        d_au[d_idx] += (DWIJ[0]*f00 + DWIJ[1]*f01 + DWIJ[2]*f02)/d_m[d_idx]
        d_av[d_idx] += (DWIJ[0]*f10 + DWIJ[1]*f11 + DWIJ[2]*f12)/d_m[d_idx]
        d_aw[d_idx] += (DWIJ[0]*f20 + DWIJ[1]*f21 + DWIJ[2]*f22)/d_m[d_idx]


class ConstructStressMatrix(Equation):
    def __init__(self, dest, sources, sigma, d=2):
        self.sigma = sigma
        self.d = d
        super(ConstructStressMatrix, self).__init__(dest, sources)

    def initialize(self, d_pi00, d_pi01, d_pi02, d_pi10, d_pi11, d_pi12,
                   d_pi20, d_pi21, d_pi22, d_cx, d_cy, d_cz, d_idx, d_N):
        mod_gradc2 = d_cx[d_idx]*d_cx[d_idx] + d_cy[d_idx]*d_cy[d_idx] + \
            d_cz[d_idx]*d_cz[d_idx]
        mod_gradc = sqrt(mod_gradc2)
        d_N[d_idx] = 0.0
        if mod_gradc > 1e-14:
            factor = self.sigma/mod_gradc
            d_pi00[d_idx] = (-d_cx[d_idx]*d_cx[d_idx] +
                             (mod_gradc2)/self.d)*factor
            d_pi01[d_idx] = -factor*d_cx[d_idx]*d_cy[d_idx]
            d_pi02[d_idx] = -factor*d_cx[d_idx]*d_cz[d_idx]
            d_pi10[d_idx] = -factor*d_cx[d_idx]*d_cy[d_idx]
            d_pi11[d_idx] = (-d_cy[d_idx]*d_cy[d_idx] +
                             (mod_gradc2)/self.d)*factor
            d_pi12[d_idx] = -factor*d_cy[d_idx]*d_cz[d_idx]
            d_pi20[d_idx] = -factor*d_cx[d_idx]*d_cz[d_idx]
            d_pi21[d_idx] = -factor*d_cy[d_idx]*d_cz[d_idx]
            d_pi22[d_idx] = (-d_cz[d_idx]*d_cz[d_idx] +
                             (mod_gradc2)/self.d)*factor
            d_N[d_idx] = 1.0
        else:
            d_pi00[d_idx] = 0.0
            d_pi01[d_idx] = 0.0
            d_pi02[d_idx] = 0.0
            d_pi10[d_idx] = 0.0
            d_pi11[d_idx] = 0.0
            d_pi12[d_idx] = 0.0
            d_pi20[d_idx] = 0.0
            d_pi21[d_idx] = 0.0
            d_pi22[d_idx] = 0.0


class ColorGradientAdami(Equation):
    def initialize(self, d_idx, d_cx, d_cy, d_cz):
        d_cx[d_idx] = 0.0
        d_cy[d_idx] = 0.0
        d_cz[d_idx] = 0.0

    def loop(self, d_idx, d_cx, d_cy, d_cz, d_V, s_V, d_color, s_color,
             DWIJ, s_idx):
        c_i = d_color[d_idx]/(d_V[d_idx]*d_V[d_idx])
        c_j = s_color[s_idx]/(s_V[s_idx]*s_V[s_idx])
        factor = d_V[d_idx]*(c_i + c_j)
        d_cx[d_idx] += factor*DWIJ[0]
        d_cy[d_idx] += factor*DWIJ[1]
        d_cz[d_idx] += factor*DWIJ[2]


class MomentumEquationViscosityAdami(Equation):
    def initialize(self, d_au, d_av, d_aw, d_idx):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, d_V, d_au, d_av, d_aw, s_V, d_p, s_p, DWIJ,
             s_idx, d_m, R2IJ, XIJ, EPS, VIJ, d_nu, s_nu):
        factor = 2.0*d_nu[d_idx]*s_nu[s_idx]/(d_nu[d_idx] + s_nu[s_idx])
        V_i = 1/(d_V[d_idx]*d_V[d_idx])
        V_j = 1/(s_V[s_idx]*s_V[s_idx])
        dwijdotrij = (DWIJ[0]*XIJ[0] + DWIJ[1]*XIJ[1] + DWIJ[2]*XIJ[2])
        dwijdotrij /= (R2IJ + EPS)
        factor = factor*(V_i + V_j)*dwijdotrij/d_m[d_idx]
        d_au[d_idx] += factor*VIJ[0]
        d_av[d_idx] += factor*VIJ[1]
        d_aw[d_idx] += factor*VIJ[2]


class MomentumEquationPressureGradientAdami(Equation):
    def initialize(self, d_au, d_av, d_aw, d_idx):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, d_V, d_au, d_av, d_aw, s_V, d_p, s_p, DWIJ,
             s_idx, d_m):
        p_i = d_p[d_idx]/(d_V[d_idx]*d_V[d_idx])
        p_j = s_p[s_idx]/(s_V[s_idx]*s_V[s_idx])
        d_au[d_idx] += -(p_i+p_j)*DWIJ[0]/d_m[d_idx]
        d_av[d_idx] += -(p_i+p_j)*DWIJ[1]/d_m[d_idx]
        d_aw[d_idx] += -(p_i+p_j)*DWIJ[2]/d_m[d_idx]


class MomentumEquationViscosityMorris(Equation):
    def __init__(self, dest, sources, eta=0.01):
        self.eta = eta*eta
        super(MomentumEquationViscosityMorris, self).__init__(dest, sources)

    def loop(self, d_idx, s_idx, d_au, d_av, d_aw, s_m, d_nu, s_nu,
             d_rho, s_rho, DWIJ, R2IJ, VIJ, HIJ, XIJ):
        r2 = R2IJ + self.eta*HIJ*HIJ
        dw = (DWIJ[0]*XIJ[0] + DWIJ[1]*XIJ[1] + DWIJ[2]*XIJ[2])/(r2)
        mult = s_m[s_idx]*(d_nu[d_idx] + s_nu[s_idx]) / \
            (d_rho[d_idx]*s_rho[s_idx])
        d_au[d_idx] += dw*mult*VIJ[0]
        d_av[d_idx] += dw*mult*VIJ[1]
        d_aw[d_idx] += dw*mult*VIJ[2]


class MomentumEquationPressureGradientMorris(Equation):
    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_au, d_av, d_aw, s_m, d_p, s_p, DWIJ,
             d_rho, s_rho):
        factor = -s_m[s_idx]*(d_p[d_idx] + s_p[s_idx]) / \
            (d_rho[d_idx]*s_rho[s_idx])
        d_au[d_idx] += factor*DWIJ[0]
        d_av[d_idx] += factor*DWIJ[1]
        d_aw[d_idx] += factor*DWIJ[2]


class InterfaceCurvatureFromDensity(Equation):
    def __init__(self, dest, sources, with_morris_correction=True):
        self.with_morris_correction = with_morris_correction
        super(InterfaceCurvatureFromDensity, self).__init__(dest, sources)

    def initialize(self, d_idx, d_kappa, d_wij_sum):
        d_kappa[d_idx] = 0.0
        d_wij_sum[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_kappa, d_nx, d_ny, d_nz, s_nx, s_ny,
             s_nz, d_V, s_V, d_N, s_N, d_wij_sum, s_rho, s_m, WIJ, DWIJ):
        nijdotdwij = (d_nx[d_idx] - s_nx[s_idx]) * DWIJ[0] + \
                     (d_ny[d_idx] - s_ny[s_idx]) * DWIJ[1] + \
                     (d_nz[d_idx] - s_nz[s_idx]) * DWIJ[2]

        tmp = 1.0
        if self.with_morris_correction:
            tmp = min(d_N[d_idx], s_N[s_idx])

        d_wij_sum[d_idx] += tmp * s_m[s_idx]/s_rho[s_idx] * WIJ
        d_kappa[d_idx] += tmp*nijdotdwij*s_m[s_idx]/s_rho[s_idx]

    def post_loop(self, d_idx, d_wij_sum, d_nx, d_kappa):
        if self.with_morris_correction:
            if d_wij_sum[d_idx] > 1e-12:
                d_kappa[d_idx] /= d_wij_sum[d_idx]


class SolidWallPressureBCnoDensity(Equation):
    def initialize(self, d_idx, d_p, d_wij):
        d_p[d_idx] = 0.0
        d_wij[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_p, s_p, d_wij, s_rho, d_au, d_av,
             d_aw, WIJ, XIJ):
        d_p[d_idx] += s_p[s_idx]*WIJ
        d_wij[d_idx] += WIJ

    def post_loop(self, d_idx, d_wij, d_p, d_rho):
        if d_wij[d_idx] > 1e-14:
            d_p[d_idx] /= d_wij[d_idx]


class SummationDensitySourceMass(Equation):
    def initialize(self, d_idx, d_V, d_rho):
        d_rho[d_idx] = 0.0

    def loop(self, d_idx, d_V, d_rho, d_m, WIJ, s_m, s_idx):
        d_rho[d_idx] += s_m[s_idx]*WIJ

    def post_loop(self, d_idx, d_V, d_rho, d_m):
        d_V[d_idx] = d_rho[d_idx]/d_m[d_idx]


class SmoothedColor(Equation):
    r"""Smoothed color function. Eq. (17) in [JM00]

    .. math::

        c_a = \sum_b \frac{m_b}{\rho_b} c_b^i W_{ab}\,,

    where, :math:`c_b^i` is the color index associated with a particle.

    """
    def initialize(self, d_idx, d_scolor):
        d_scolor[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_rho, s_rho, s_m, s_color, d_scolor,
             WIJ):
        # Smoothed color Eq. (17) in [JM00]
        d_scolor[d_idx] += s_m[s_idx]/s_rho[s_idx] * s_color[s_idx] * WIJ


class ColorGradientUsingNumberDensity(Equation):
    r"""Gradient of the color function using Eq. (13) of [SY11]:

    .. math::

        \nabla C_a = \sum_b \frac{2(C_b - C_a)}{\psi_a + \psi_b}
        \nabla_{a} W_{ab}

    Using the gradient of the color function, the normal and
    discretized dirac delta is calculated in the post loop.
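All of the color-gradient equations in this module share the same `post_loop` pattern: normalize the gradient into a unit normal only where `|grad c|` is reliably non-zero, and record the magnitude as the discretized Dirac delta. A standalone sketch of that pattern (hypothetical helper, not PySPH API):

```python
from math import sqrt

def normals_from_gradient(cx, cy, cz, epsilon=1e-6):
    """Return (N, nx, ny, nz, ddelta) from a color gradient, mirroring the
    post_loop of the color-gradient equations: N flags a reliable normal,
    ddelta is |grad c| (the discretized Dirac delta)."""
    mod2 = cx * cx + cy * cy + cz * cz
    if mod2 <= epsilon * epsilon:
        # non-interface particle: gradient is (numerically) zero
        return 0.0, 0.0, 0.0, 0.0, 0.0
    inv = 1.0 / sqrt(mod2)
    return 1.0, cx * inv, cy * inv, cz * inv, 1.0 / inv
```

The `epsilon` cutoff is the singularity guard recommended by [JM00] (eqs 20 & 21).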
    Singularities are avoided as per the recommendation by [JM00] (see eqs
    20 & 21) using the parameter :math:`\epsilon`

    """
    def __init__(self, dest, sources, epsilon=1e-6):
        self.epsilon2 = epsilon*epsilon
        super(ColorGradientUsingNumberDensity, self).__init__(dest, sources)

    def initialize(self, d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz,
                   d_ddelta, d_N):
        # color gradient
        d_cx[d_idx] = 0.0
        d_cy[d_idx] = 0.0
        d_cz[d_idx] = 0.0

        # interface normals
        d_nx[d_idx] = 0.0
        d_ny[d_idx] = 0.0
        d_nz[d_idx] = 0.0

        # discretized dirac delta
        d_ddelta[d_idx] = 0.0

        # reliability indicator for normals
        d_N[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_scolor, s_scolor, d_cx, d_cy, d_cz,
             d_V, s_V, DWIJ):
        # average particle volume
        psiab1 = 2.0/(d_V[d_idx] + s_V[s_idx])

        # difference in color divided by psiab. Eq. (13) in [SY11]
        Cba = (s_scolor[s_idx] - d_scolor[d_idx]) * psiab1

        # color gradient
        d_cx[d_idx] += Cba * DWIJ[0]
        d_cy[d_idx] += Cba * DWIJ[1]
        d_cz[d_idx] += Cba * DWIJ[2]

    def post_loop(self, d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_N,
                  d_ddelta):
        # absolute value of the color gradient
        mod_gradc2 = d_cx[d_idx]*d_cx[d_idx] + \
            d_cy[d_idx]*d_cy[d_idx] + \
            d_cz[d_idx]*d_cz[d_idx]

        # avoid sqrt computations on non-interface particles
        # (particles for which the color gradient is zero) Eq. (19,
        # 20) in [JM00]
        if mod_gradc2 > self.epsilon2:
            # this normal is reliable in the sense of [JM00]
            d_N[d_idx] = 1.0

            # compute the normals
            mod_gradc = 1./sqrt(mod_gradc2)

            d_nx[d_idx] = d_cx[d_idx] * mod_gradc
            d_ny[d_idx] = d_cy[d_idx] * mod_gradc
            d_nz[d_idx] = d_cz[d_idx] * mod_gradc

            # discretized Dirac Delta function
            d_ddelta[d_idx] = 1./mod_gradc


class MorrisColorGradient(Equation):
    r"""Gradient of the color function using Eq. (17) of [JM00]:

    .. math::

        \nabla c_a = \sum_b \frac{m_b}{\rho_b}(c_b - c_a) \nabla_{a}
        W_{ab}\,,

    where a smoothed representation of the color is used in the
    equation. Using the gradient of the color function, the normal and
    discretized dirac delta is calculated in the post loop.

    Singularities are avoided as per the recommendation by [JM00] (see eqs
    20 & 21) using the parameter :math:`\epsilon`

    """
    def __init__(self, dest, sources, epsilon=1e-6):
        self.epsilon2 = epsilon*epsilon
        super(MorrisColorGradient, self).__init__(dest, sources)

    def initialize(self, d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz,
                   d_ddelta, d_N):
        # color gradient
        d_cx[d_idx] = 0.0
        d_cy[d_idx] = 0.0
        d_cz[d_idx] = 0.0

        # interface normals
        d_nx[d_idx] = 0.0
        d_ny[d_idx] = 0.0
        d_nz[d_idx] = 0.0

        # reliability indicator for normals and dirac delta
        d_N[d_idx] = 0.0
        d_ddelta[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_scolor, s_scolor, d_cx, d_cy, d_cz,
             s_m, s_rho, DWIJ):
        # Eq. (17) in [JM00]
        Cba = (s_scolor[s_idx] - d_scolor[d_idx]) * s_m[s_idx]/s_rho[s_idx]

        # color gradient
        d_cx[d_idx] += Cba * DWIJ[0]
        d_cy[d_idx] += Cba * DWIJ[1]
        d_cz[d_idx] += Cba * DWIJ[2]

    def post_loop(self, d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_N,
                  d_ddelta):
        # absolute value of the color gradient
        mod_gradc2 = d_cx[d_idx]*d_cx[d_idx] + \
            d_cy[d_idx]*d_cy[d_idx] + \
            d_cz[d_idx]*d_cz[d_idx]

        # avoid sqrt computations on non-interface particles
        # (particles for which the color gradient is zero) Eq. (19,
        # 20) in [JM00]
        if mod_gradc2 > self.epsilon2:
            # this normal is reliable in the sense of [JM00]
            d_N[d_idx] = 1.0

            # compute the normals
            mod_gradc = 1./sqrt(mod_gradc2)

            d_nx[d_idx] = d_cx[d_idx] * mod_gradc
            d_ny[d_idx] = d_cy[d_idx] * mod_gradc
            d_nz[d_idx] = d_cz[d_idx] * mod_gradc

            # discretized Dirac Delta function
            d_ddelta[d_idx] = 1./mod_gradc


class SY11ColorGradient(Equation):
    r"""Gradient of the color function using Eq. (13) of [SY11]:

    .. math::

        \nabla C_a = \sum_b \frac{2(C_b - C_a)}{\psi_a + \psi_b}
        \nabla_{a} W_{ab}

    Using the gradient of the color function, the normal and
    discretized dirac delta is calculated in the post loop.

    """
    def __init__(self, dest, sources, epsilon=1e-6):
        self.epsilon2 = epsilon*epsilon
        super(SY11ColorGradient, self).__init__(dest, sources)

    def initialize(self, d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz,
                   d_ddelta, d_N):
        # color gradient
        d_cx[d_idx] = 0.0
        d_cy[d_idx] = 0.0
        d_cz[d_idx] = 0.0

        # interface normals
        d_nx[d_idx] = 0.0
        d_ny[d_idx] = 0.0
        d_nz[d_idx] = 0.0

        # discretized dirac delta
        d_ddelta[d_idx] = 0.0

        # reliability indicator for normals
        d_N[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_color, s_color, d_cx, d_cy, d_cz,
             d_V, s_V, DWIJ):
        # average particle volume
        psiab1 = 2.0/(d_V[d_idx] + s_V[s_idx])

        # difference in color divided by psiab. Eq. (13) in [SY11]
        Cba = (s_color[s_idx] - d_color[d_idx]) * psiab1

        # color gradient
        d_cx[d_idx] += Cba * DWIJ[0]
        d_cy[d_idx] += Cba * DWIJ[1]
        d_cz[d_idx] += Cba * DWIJ[2]

    def post_loop(self, d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_N,
                  d_ddelta):
        # absolute value of the color gradient
        mod_gradc2 = d_cx[d_idx]*d_cx[d_idx] + \
            d_cy[d_idx]*d_cy[d_idx] + \
            d_cz[d_idx]*d_cz[d_idx]

        # avoid sqrt computations on non-interface particles
        if mod_gradc2 > self.epsilon2:
            # this normal is reliable in the sense of [JM00]
            d_N[d_idx] = 1.0

            # compute the normals
            mod_gradc = 1./sqrt(mod_gradc2)

            d_nx[d_idx] = d_cx[d_idx] * mod_gradc
            d_ny[d_idx] = d_cy[d_idx] * mod_gradc
            d_nz[d_idx] = d_cz[d_idx] * mod_gradc

            # discretized Dirac Delta function
            d_ddelta[d_idx] = 1./mod_gradc


class SY11DiracDelta(Equation):
    r"""Discretized dirac-delta for the SY11 formulation Eq. (14) in [SY11]

    This is essentially the same as computing the color gradient, the
    only difference being that this might be called with a reduced
    smoothing length.

    Note that the normals should be computed using the
    SY11ColorGradient equation. This function will effectively
    overwrite the color gradient.
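The SY11 gradient above weights the color difference of each neighbour pair by `2/(psi_a + psi_b)`, the averaged inverse number density. A one-pair sketch of that contribution (illustrative names, not PySPH API):

```python
def sy11_gradient_contrib(color_a, color_b, V_a, V_b, dwij):
    """Contribution of one neighbour pair to the SY11 color gradient
    (Eq. 13 in [SY11]): (C_b - C_a) * 2/(psi_a + psi_b) * grad W_ab."""
    cba = (color_b - color_a) * 2.0 / (V_a + V_b)
    return [cba * dwij[0], cba * dwij[1], cba * dwij[2]]
```

Summing this over all neighbours reproduces the accumulation done in `SY11ColorGradient.loop` (and `SY11DiracDelta.loop`, which reuses it at a reduced smoothing length).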
    """
    def __init__(self, dest, sources, epsilon=1e-6):
        self.epsilon2 = epsilon*epsilon
        super(SY11DiracDelta, self).__init__(dest, sources)

    def initialize(self, d_idx, d_cx, d_cy, d_cz, d_ddelta):
        # color gradient
        d_cx[d_idx] = 0.0
        d_cy[d_idx] = 0.0
        d_cz[d_idx] = 0.0

        # discretized dirac delta
        d_ddelta[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_color, s_color, d_cx, d_cy, d_cz,
             d_V, s_V, DWIJ):
        # average particle volume
        psiab1 = 2.0/(d_V[d_idx] + s_V[s_idx])

        # difference in color divided by psiab. Eq. (13) in [SY11]
        Cba = (s_color[s_idx] - d_color[d_idx]) * psiab1

        # color gradient
        d_cx[d_idx] += Cba * DWIJ[0]
        d_cy[d_idx] += Cba * DWIJ[1]
        d_cz[d_idx] += Cba * DWIJ[2]

    def post_loop(self, d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz, d_N,
                  d_ddelta):
        # absolute value of the color gradient
        mod_gradc2 = d_cx[d_idx]*d_cx[d_idx] + \
            d_cy[d_idx]*d_cy[d_idx] + \
            d_cz[d_idx]*d_cz[d_idx]

        # avoid sqrt computations on non-interface particles
        if mod_gradc2 > self.epsilon2:
            mod_gradc = sqrt(mod_gradc2)

            # discretized Dirac Delta function
            d_ddelta[d_idx] = mod_gradc


class InterfaceCurvatureFromNumberDensity(Equation):
    r"""Interface curvature using number density. Eq. (15) in [SY11]:

    .. math::

        \kappa_a = \sum_b \frac{2.0}{\psi_a + \psi_b}
        \left(\boldsymbol{n_a} - \boldsymbol{n_b}\right) \cdot
        \nabla_a W_{ab}

    """
    def __init__(self, dest, sources, with_morris_correction=True):
        self.with_morris_correction = with_morris_correction
        super(InterfaceCurvatureFromNumberDensity, self).__init__(
            dest, sources)

    def initialize(self, d_idx, d_kappa, d_wij_sum):
        d_kappa[d_idx] = 0.0
        d_wij_sum[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_kappa, d_nx, d_ny, d_nz, s_nx, s_ny,
             s_nz, d_V, s_V, d_N, s_N, d_wij_sum, s_rho, s_m, WIJ, DWIJ):
        nijdotdwij = (d_nx[d_idx] - s_nx[s_idx]) * DWIJ[0] + \
                     (d_ny[d_idx] - s_ny[s_idx]) * DWIJ[1] + \
                     (d_nz[d_idx] - s_nz[s_idx]) * DWIJ[2]

        # averaged particle number density
        psiij1 = 2.0/(d_V[d_idx] + s_V[s_idx])

        # local number density with reliable normals Eq. (24) in [JM00]
        tmp = 1.0
        if self.with_morris_correction:
            tmp = min(d_N[d_idx], s_N[s_idx])

        d_wij_sum[d_idx] += tmp * s_m[s_idx]/s_rho[s_idx] * WIJ

        # Eq. (15) in [SY11] with correction Eq. (22) in [JM00]
        d_kappa[d_idx] += tmp * psiij1 * nijdotdwij

    def post_loop(self, d_idx, d_wij_sum, d_nx, d_kappa):
        # correct the curvature estimate. Eq. (23) in [JM00]
        if self.with_morris_correction:
            if d_wij_sum[d_idx] > 1e-12:
                d_kappa[d_idx] /= d_wij_sum[d_idx]


class ShadlooYildizSurfaceTensionForce(Equation):
    r"""Acceleration due to surface tension force Eq. (7,9) in [SY11]:

    .. math::

        \frac{d\boldsymbol{v}_a}{dt} = \frac{1}{\rho_a} \sigma \kappa_a
        \boldsymbol{n}_a \delta_a^s\,,

    where, :math:`\delta^s` is the discretized dirac delta function,
    :math:`\boldsymbol{n}` is the interface normal, :math:`\kappa` is
    the discretized interface curvature and :math:`\sigma` is the
    surface tension force constant.

    """
    def __init__(self, dest, sources, sigma=0.1):
        self.sigma = sigma

        # base class initialization
        super(ShadlooYildizSurfaceTensionForce, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, d_au, d_av, d_aw, d_kappa, d_nx, d_ny, d_nz,
             d_m, d_rho, d_ddelta):
        rhoi = 1./d_rho[d_idx]

        # acceleration per unit mass term Eq. (7) in [SY11]
        tmp = self.sigma * d_kappa[d_idx] * d_ddelta[d_idx] * rhoi

        d_au[d_idx] += tmp * d_nx[d_idx]
        d_av[d_idx] += tmp * d_ny[d_idx]
        d_aw[d_idx] += tmp * d_nz[d_idx]


class CSFSurfaceTensionForce(Equation):
    r"""Acceleration due to surface tension force Eq. (25) in [JM00]:

    .. math::

        \frac{d\boldsymbol{v}_a}{dt} = \frac{1}{\rho_a} \sigma
        \kappa_a \boldsymbol{n}_a

    Note that as per Eq. (17) in [JM00], the un-normalized normal is
    basically the gradient of the color function. The acceleration
    term therefore depends on the gradient of the color field.
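The Morris correction applied in the curvature `post_loop` methods above divides the accumulated curvature by the kernel-weighted sum only when that sum is non-negligible. Sketched standalone (hypothetical helper, not PySPH API):

```python
def corrected_curvature(kappa_sum, wij_sum, use_morris=True):
    """Normalize an accumulated curvature by the kernel-weighted volume
    sum, as in the post_loop of the curvature equations (Eq. 23, [JM00]).
    The 1e-12 guard avoids dividing by a vanishing sum far from the
    interface."""
    if use_morris and wij_sum > 1e-12:
        return kappa_sum / wij_sum
    return kappa_sum
```

Without the guard, particles with no reliable-normal neighbours (where `wij_sum` stays zero) would produce NaNs.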
    """
    def __init__(self, dest, sources, sigma=0.1):
        self.sigma = sigma

        # base class initialization
        super(CSFSurfaceTensionForce, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, d_au, d_av, d_aw, d_kappa, d_cx, d_cy, d_cz,
             d_rho):
        rhoi = 1./d_rho[d_idx]

        # acceleration per unit mass term Eq. (25) in [JM00]
        tmp = self.sigma * d_kappa[d_idx] * rhoi

        d_au[d_idx] += tmp * d_cx[d_idx]
        d_av[d_idx] += tmp * d_cy[d_idx]
        d_aw[d_idx] += tmp * d_cz[d_idx]


class AdamiReproducingDivergence(Equation):
    r"""Reproducing divergence approximation Eq. (20) in [A10] to
    compute the curvature

    .. math::

        \nabla \cdot \boldsymbol{\phi}_a = d\frac{\sum_b
        \boldsymbol{\phi}_{ab}\cdot \nabla_a W_{ab}
        V_b}{\sum_b\boldsymbol{x}_{ab}\cdot \nabla_a W_{ab} V_b}

    """
    def __init__(self, dest, sources, dim):
        self.dim = dim
        super(AdamiReproducingDivergence, self).__init__(dest, sources)

    def initialize(self, d_idx, d_kappa, d_wij_sum):
        d_kappa[d_idx] = 0.0
        d_wij_sum[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_kappa, d_wij_sum, d_nx, d_ny, d_nz,
             s_nx, s_ny, s_nz, d_V, s_V, DWIJ, XIJ, RIJ, EPS):
        # particle volumes
        Vi = 1./d_V[d_idx]
        Vj = 1./s_V[s_idx]

        # dot product in the numerator of Eq. (20)
        nijdotdwij = (d_nx[d_idx] - s_nx[s_idx]) * DWIJ[0] + \
                     (d_ny[d_idx] - s_ny[s_idx]) * DWIJ[1] + \
                     (d_nz[d_idx] - s_nz[s_idx]) * DWIJ[2]

        # dot product in the denominator of Eq. (20)
        xijdotdwij = XIJ[0]*DWIJ[0] + XIJ[1]*DWIJ[1] + XIJ[2]*DWIJ[2]

        # accumulate the contributions
        d_kappa[d_idx] += nijdotdwij * Vj
        d_wij_sum[d_idx] += xijdotdwij * Vj

    def post_loop(self, d_idx, d_kappa, d_wij_sum):
        # normalize the curvature estimate
        if d_wij_sum[d_idx] > 1e-12:
            d_kappa[d_idx] /= d_wij_sum[d_idx]
        d_kappa[d_idx] *= -self.dim


class CSFSurfaceTensionForceAdami(Equation):
    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def post_loop(self, d_idx, d_au, d_av, d_aw, d_kappa, d_cx, d_cy,
                  d_cz, d_m, d_alpha, d_rho):
        d_au[d_idx] += -d_alpha[d_idx]*d_kappa[d_idx]*d_cx[d_idx]/d_rho[d_idx]
        d_av[d_idx] += -d_alpha[d_idx]*d_kappa[d_idx]*d_cy[d_idx]/d_rho[d_idx]
        d_aw[d_idx] += -d_alpha[d_idx]*d_kappa[d_idx]*d_cz[d_idx]/d_rho[d_idx]


class ShadlooViscosity(Equation):
    def __init__(self, dest, sources, alpha):
        self.alpha = alpha
        super(ShadlooViscosity, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0

    def loop(self, d_idx, d_au, d_av, d_aw, d_h, s_idx, s_h, d_cs, s_cs,
             d_rho, s_rho, VIJ, XIJ, d_V, s_V, R2IJ, EPS, DWIJ):
        mu1 = 0.125*self.alpha*d_h[d_idx]*d_cs[d_idx]*d_rho[d_idx]
        mu2 = 0.125*self.alpha*s_h[s_idx]*s_cs[s_idx]*s_rho[s_idx]

        mu12 = 2.0*mu1*mu2/(mu1 + mu2)

        vijdotxij = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2]
        denominator = d_V[d_idx]*s_V[s_idx]*(R2IJ + EPS)

        piij = 8.0*mu12*vijdotxij/denominator

        d_au[d_idx] += -piij*DWIJ[0]
        d_av[d_idx] += -piij*DWIJ[1]
        d_aw[d_idx] += -piij*DWIJ[2]


class AdamiColorGradient(Equation):
    r"""Gradient of color Eq. (14) in [A10]

    .. math::

        \nabla c_a = \frac{1}{V_a}\sum_b \left[V_a^2 + V_b^2
        \right]\tilde{c}_{ab}\nabla_a W_{ab}\,,

    where, the average :math:`\tilde{c}_{ab}` is defined as

    .. math::

        \tilde{c}_{ab} = \frac{\rho_b}{\rho_a + \rho_b}c_a +
        \frac{\rho_a}{\rho_a + \rho_b}c_b

    """
    def initialize(self, d_idx, d_cx, d_cy, d_cz, d_nx, d_ny, d_nz,
                   d_ddelta, d_N):
        d_cx[d_idx] = 0.0
        d_cy[d_idx] = 0.0
        d_cz[d_idx] = 0.0

        d_nx[d_idx] = 0.0
        d_ny[d_idx] = 0.0
        d_nz[d_idx] = 0.0

        # reliability indicator for normals
        d_N[d_idx] = 0.0

        # Discretized dirac-delta
        d_ddelta[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_V, s_V, d_rho, s_rho, d_cx, d_cy,
             d_cz, d_color, s_color, DWIJ):
        # particle volumes
        Vi = 1./d_V[d_idx]
        Vj = 1./s_V[s_idx]
        Vi2 = Vi * Vi
        Vj2 = Vj * Vj

        # averaged particle color
        rhoi = d_rho[d_idx]
        rhoj = s_rho[s_idx]
        rhoij1 = 1./(rhoi + rhoj)

        # Eq. (15) in [A10]
        cij = rhoj*rhoij1*d_color[d_idx] + rhoi*rhoij1*s_color[s_idx]

        # compute the gradient
        tmp = cij * (Vi2 + Vj2)/Vi

        d_cx[d_idx] += tmp * DWIJ[0]
        d_cy[d_idx] += tmp * DWIJ[1]
        d_cz[d_idx] += tmp * DWIJ[2]

    def post_loop(self, d_idx, d_cx, d_cy, d_cz, d_h, d_nx, d_ny, d_nz,
                  d_ddelta, d_N):
        # absolute value of the color gradient
        mod_gradc2 = d_cx[d_idx]*d_cx[d_idx] + \
            d_cy[d_idx]*d_cy[d_idx] + \
            d_cz[d_idx]*d_cz[d_idx]

        # avoid sqrt computations on non-interface particles
        h2 = d_h[d_idx]*d_h[d_idx]
        if mod_gradc2 > 1e-4/h2:
            # this normal is reliable in the sense of [JM00]
            d_N[d_idx] = 1.0

            # compute the normals
            one_mod_gradc = 1./sqrt(mod_gradc2)

            d_nx[d_idx] = d_cx[d_idx] * one_mod_gradc
            d_ny[d_idx] = d_cy[d_idx] * one_mod_gradc
            d_nz[d_idx] = d_cz[d_idx] * one_mod_gradc

            # discretized dirac delta
            d_ddelta[d_idx] = 1./one_mod_gradc


def get_surface_tension_equations(fluids, solids, scheme, rho0, p0, c0,
                                  b, factor1, factor2, nu, sigma, d,
                                  epsilon, gamma, real=False):
    """This function returns the required equations for the multiphase
    formulation taking inputs of the fluid particles array, solid particles
    array, the scheme to be used and other physical parameters

    Parameters
    ----------
    fluids: list
        List of names of fluid particle arrays
    solids: list
        List of names of solid particle arrays
    scheme: string
        The scheme with which the equations are to be setup.
        Supported schemes:

        1. TVF scheme with Morris' surface tension.
           String to be used: "tvf"
        2. Adami's surface tension implementation which doesn't involve
           calculation of curvature. String to be used: "adami_stress"
        3. Adami's surface tension implementation which involves
           calculation of curvature. String to be used: "adami"
        4. Shadloo Yildiz surface tension formulation.
           String to be used: "shadloo"
        5. Morris' surface tension formulation. This is the default
           scheme which will be used if none of the above strings are
           input as scheme.
    rho0 : float
        The reference density of the medium (Currently multiple reference
        densities for different particles is not supported)
    p0 : float
        The background pressure of the medium (Currently multiple
        background pressures for different particles is not supported)
    c0 : float
        The speed of sound of the medium (Currently multiple speeds of
        sound for different particles is not supported)
    b : float
        The b parameter of the generalized Tait Equation of State.
Refer to the Tait Equation's documentation for reference factor1 : float The factor for scaling of smoothing length for calculation of interface curvature number for shadloo's scheme factor2 : float The factor for scaling back of smoothing length for calculation of forces after calculating the interface curvature number in shadloo's scheme nu : float The kinematic viscosity of the medium sigma : float The surface tension of the system d : int The number of dimensions of the problem in the cartesian space epsilon: float Put this option false if the equations are supposed to be evaluated for the ghost particles, else keep it True """ if scheme == 'tvf': result = [] equations = [] for i in fluids+solids: equations.append(SummationDensity(dest=i, sources=fluids+solids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(StateEquation(dest=i, sources=None, rho0=rho0, p0=p0)) equations.append(SmoothedColor(dest=i, sources=fluids+solids)) for i in solids: equations.append(SolidWallPressureBCnoDensity(dest=i, sources=fluids)) equations.append(SmoothedColor(dest=i, sources=fluids+solids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(MorrisColorGradient(dest=i, sources=fluids+solids, epsilon=epsilon)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(InterfaceCurvatureFromNumberDensity( dest=i, sources=fluids+solids, with_morris_correction=True)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(MomentumEquationPressureGradient(dest=i, sources=fluids+solids, pb=p0)) equations.append(MomentumEquationViscosity(dest=i, sources=fluids, nu=nu)) equations.append(CSFSurfaceTensionForce(dest=i, sources=None, sigma=sigma)) equations.append(MomentumEquationArtificialStress(dest=i, sources=fluids)) if len(solids) != 0: equations.append(SolidWallNoSlipBC(dest=i, sources=solids, nu=nu)) result.append(Group(equations)) 
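The `'tvf'` branch above follows a fixed pattern: build one `Group` per evaluation phase (density summation, EOS/boundary pressure, color gradient, curvature, momentum), each holding one equation instance per destination array. A dependency-free sketch of that assembly pattern for the first two phases is shown below; the `Eq` and `Group` classes here are illustrative stand-ins, not the real PySPH `Equation`/`Group`:

```python
from dataclasses import dataclass
from typing import List, Optional


# Illustrative stand-ins for pysph.sph.equation.Equation and Group.
@dataclass
class Eq:
    name: str
    dest: str
    sources: Optional[List[str]]


@dataclass
class Group:
    equations: List[Eq]
    real: bool = True


def tvf_like_groups(fluids, solids, real=False):
    """Mimic the first two phases of the 'tvf' branch above: summation
    density on every array, then an EOS on fluids and a wall-pressure
    boundary condition on solids."""
    result = []
    # Phase 1: density summation on every destination array.
    result.append(Group(
        [Eq('SummationDensity', dest, fluids + solids)
         for dest in fluids + solids],
        real=real))
    # Phase 2: EOS on fluids, pressure BC on solids.
    eqs = [Eq('StateEquation', dest, None) for dest in fluids]
    eqs += [Eq('SolidWallPressureBCnoDensity', dest, fluids)
            for dest in solids]
    result.append(Group(eqs, real=real))
    return result


groups = tvf_like_groups(['fluid'], ['wall'])
```

With the real classes, the analogous call is `get_surface_tension_equations(['fluid'], ['wall'], 'tvf', ...)`, which returns such a list of `Group`s ready to hand to a solver.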
elif scheme == 'adami_stress': result = [] equations = [] for i in fluids+solids: equations.append(SummationDensity(dest=i, sources=fluids+solids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(TaitEOS(dest=i, sources=None, rho0=rho0, c0=c0, gamma=gamma, p0=p0)) for i in solids: equations.append(SolidWallPressureBCnoDensity(dest=i, sources=fluids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(ColorGradientAdami(dest=i, sources=fluids+solids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(ConstructStressMatrix(dest=i, sources=None, sigma=sigma, d=d)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(MomentumEquationPressureGradientAdami(dest=i, sources=fluids+solids)) equations.append(MomentumEquationViscosityAdami(dest=i, sources=fluids)) equations.append(SurfaceForceAdami(dest=i, sources=fluids+solids)) if len(solids) != 0: equations.append(SolidWallNoSlipBC(dest=i, sources=solids, nu=nu)) result.append(Group(equations)) elif scheme == 'adami': result = [] equations = [] for i in fluids+solids: equations.append(SummationDensity(dest=i, sources=fluids+solids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(StateEquation(dest=i, sources=None, rho0=rho0, p0=p0, b=b)) for i in solids: equations.append(SolidWallPressureBCnoDensity(dest=i, sources=fluids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(AdamiColorGradient(dest=i, sources=fluids+solids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(AdamiReproducingDivergence(dest=i, sources=fluids+solids, dim=d)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(MomentumEquationPressureGradient( dest=i, sources=fluids+solids, pb=0.0)) 
equations.append(MomentumEquationViscosityAdami(dest=i, sources=fluids)) equations.append(CSFSurfaceTensionForceAdami(dest=i, sources=None)) if len(solids) != 0: equations.append(SolidWallNoSlipBC(dest=i, sources=solids, nu=nu)) result.append(Group(equations)) elif scheme == 'shadloo': result = [] equations = [] for i in fluids+solids: equations.append(SummationDensity(dest=i, sources=fluids+solids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(StateEquation(dest=i, sources=None, rho0=rho0, p0=p0, b=b)) equations.append(SY11ColorGradient(dest=i, sources=fluids+solids)) for i in solids: equations.append(SolidWallPressureBCnoDensity(dest=i, sources=fluids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(ScaleSmoothingLength(dest=i, sources=None, factor=factor1)) result.append(Group(equations, real=real, update_nnps=True)) equations = [] for i in fluids: equations.append(SY11DiracDelta(dest=i, sources=fluids+solids)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(InterfaceCurvatureFromNumberDensity( dest=i, sources=fluids+solids, with_morris_correction=True)) result.append(Group(equations, real=real)) equations = [] for i in fluids: equations.append(ScaleSmoothingLength(dest=i, sources=None, factor=factor2)) result.append(Group(equations, real=real, update_nnps=True)) equations = [] for i in fluids: equations.append(MomentumEquationPressureGradient( dest=i, sources=fluids+solids, pb=0.0)) equations.append(MomentumEquationViscosity(dest=i, sources=fluids, nu=nu)) equations.append(ShadlooYildizSurfaceTensionForce(dest=i, sources=None, sigma=sigma)) if len(solids) != 0: equations.append(SolidWallNoSlipBC(dest=i, sources=solids, nu=nu)) result.append(Group(equations)) else: result = [] equations = [] for i in fluids+solids: equations.append(SummationDensitySourceMass(dest=i, sources=fluids+solids)) result.append(Group(equations, 
                                    real=real))
        equations = []
        for i in fluids:
            equations.append(TaitEOS(dest=i, sources=None, rho0=rho0,
                                     c0=c0, gamma=gamma, p0=0.0))
            equations.append(SmoothedColor(dest=i, sources=fluids+solids))
        for i in solids:
            equations.append(SolidWallPressureBCnoDensity(dest=i,
                                                          sources=fluids))
            equations.append(SmoothedColor(dest=i, sources=fluids+solids))
        result.append(Group(equations, real=real))
        equations = []
        for i in fluids:
            equations.append(MorrisColorGradient(dest=i,
                                                 sources=fluids+solids,
                                                 epsilon=epsilon))
        result.append(Group(equations, real=real))
        equations = []
        for i in fluids:
            equations.append(InterfaceCurvatureFromDensity(
                dest=i, sources=fluids+solids, with_morris_correction=True))
        result.append(Group(equations, real=real))
        equations = []
        for i in fluids:
            equations.append(MomentumEquationPressureGradientMorris(
                dest=i, sources=fluids+solids))
            equations.append(MomentumEquationViscosityMorris(dest=i,
                                                             sources=fluids))
            equations.append(CSFSurfaceTensionForce(dest=i, sources=None,
                                                    sigma=sigma))
            if len(solids) != 0:
                equations.append(SolidWallNoSlipBC(dest=i, sources=solids,
                                                   nu=nu))
        result.append(Group(equations))
    return result


# pysph/sph/tests/__init__.py (empty)

# pysph/sph/tests/test_acceleration_eval.py

# Standard library imports.
try:
    # This is for Python-2.6.x
    import unittest2 as unittest
except ImportError:
    import unittest

# Library imports.
import pytest
import numpy as np

# Local imports.
from pysph.base.utils import get_particle_array from compyle.config import get_config from compyle.api import declare from pysph.sph.equation import Equation, Group from pysph.sph.acceleration_eval import ( AccelerationEval, MegaGroup, CythonGroup, check_equation_array_properties ) from pysph.sph.basic_equations import SummationDensity from pysph.base.kernels import CubicSpline from pysph.base.nnps import LinkedListNNPS as NNPS from pysph.sph.sph_compiler import SPHCompiler from pysph.base.reduce_array import serial_reduce_array class DummyEquation(Equation): def initialize(self, d_idx, d_rho, d_V): d_rho[d_idx] = d_V[d_idx] def loop(self, d_idx, d_rho, s_idx, s_m, s_u, WIJ): d_rho[d_idx] += s_m[s_idx] * WIJ def post_loop(self, d_idx, d_rho, s_idx, s_m, s_V): d_rho[d_idx] += s_m[d_idx] class FindTotalMass(Equation): def initialize(self, d_idx, d_m, d_total_mass): # FIXME: This is stupid and should be fixed if we add a separate # initialize_once function or so. d_total_mass[0] = 0.0 def post_loop(self, d_idx, d_m, d_total_mass): d_total_mass[0] += d_m[d_idx] class TestCheckEquationArrayProps(unittest.TestCase): def test_should_raise_runtime_error_when_invalid_dest_source(self): # Given f = get_particle_array(name='f') # When eq = SummationDensity(dest='fluid', sources=['f']) # Then self.assertRaises( RuntimeError, check_equation_array_properties, eq, [f] ) # When eq = SummationDensity(dest='f', sources=['fluid']) # Then self.assertRaises( RuntimeError, check_equation_array_properties, eq, [f] ) def test_should_pass_when_properties_exist(self): # Given f = get_particle_array(name='f') # When eq = SummationDensity(dest='f', sources=['f']) # Then check_equation_array_properties(eq, [f]) def test_should_fail_when_props_dont_exist(self): # Given f = get_particle_array(name='f') # When eq = DummyEquation(dest='f', sources=['f']) # Then self.assertRaises(RuntimeError, check_equation_array_properties, eq, [f]) def test_should_fail_when_src_props_dont_exist(self): # Given f 
= get_particle_array(name='f') f.add_property('V') s = get_particle_array(name='s') # When eq = DummyEquation(dest='f', sources=['f', 's']) # Then self.assertRaises(RuntimeError, check_equation_array_properties, eq, [f, s]) def test_should_pass_when_src_props_exist(self): # Given f = get_particle_array(name='f') f.add_property('V') s = get_particle_array(name='s') s.add_property('V') # When eq = DummyEquation(dest='f', sources=['f', 's']) # Then check_equation_array_properties(eq, [f, s]) def test_should_check_constants(self): # Given f = get_particle_array(name='f') # When eq = FindTotalMass(dest='f', sources=['f']) # Then. self.assertRaises(RuntimeError, check_equation_array_properties, eq, [f]) # When. f.add_constant('total_mass', 0.0) # Then. check_equation_array_properties(eq, [f]) class SimpleEquation(Equation): def __init__(self, dest, sources): super(SimpleEquation, self).__init__(dest, sources) self.count = 0 def initialize(self, d_idx, d_u, d_au): d_u[d_idx] = 0.0 d_au[d_idx] = 0.0 def loop(self, d_idx, d_au, s_idx, s_m): d_au[d_idx] += s_m[s_idx] def post_loop(self, d_idx, d_u, d_au): d_u[d_idx] = d_au[d_idx] def converged(self): self.count += 1 result = self.count - 1 if result > 0: # Reset the count for the next loop. 
self.count = 0 return result class MixedTypeEquation(Equation): def initialize(self, d_idx, d_u, d_au, d_pid, d_tag): d_u[d_idx] = 0.0 + d_pid[d_idx] d_au[d_idx] = 0.0 + d_tag[d_idx] def loop(self, d_idx, d_au, s_idx, s_m, s_pid, s_tag): d_au[d_idx] += s_m[s_idx] + s_pid[s_idx] + s_tag[s_idx] def post_loop(self, d_idx, d_u, d_au, d_pid): d_u[d_idx] = d_au[d_idx] + d_pid[d_idx] class SimpleReduction(Equation): def initialize(self, d_idx, d_au): d_au[d_idx] = 0.0 def reduce(self, dst, t, dt): dst.total_mass[0] = serial_reduce_array(dst.m, op='sum') if dst.gpu is not None: dst.gpu.push('total_mass') class PyInit(Equation): def py_initialize(self, dst, t, dt): self.called_with = t, dt if dst.gpu: dst.gpu.pull('au') dst.au[:] = 1.0 if dst.gpu: dst.gpu.push('au') def initialize(self, d_idx, d_au): d_au[d_idx] += 1.0 class LoopAllEquation(Equation): def initialize(self, d_idx, d_rho): d_rho[d_idx] = 0.0 def loop(self, d_idx, d_rho, s_m, s_idx, WIJ): d_rho[d_idx] += s_m[s_idx] * WIJ def loop_all(self, d_idx, d_x, d_rho, s_m, s_x, s_h, SPH_KERNEL, NBRS, N_NBRS): i = declare('int') s_idx = declare('long') xij = declare('matrix((3,))') rij = 0.0 sum = 0.0 xij[1] = 0.0 xij[2] = 0.0 for i in range(N_NBRS): s_idx = NBRS[i] xij[0] = d_x[d_idx] - s_x[s_idx] rij = abs(xij[0]) sum += s_m[s_idx] * SPH_KERNEL.kernel(xij, rij, s_h[s_idx]) d_rho[d_idx] += sum class InitializePair(Equation): def initialize_pair(self, d_idx, d_u, s_u): # Will only work if the source/destinations are the same # but should do for a test. 
d_u[d_idx] = s_u[d_idx]*1.5 class TestMegaGroup(unittest.TestCase): def test_ensure_group_retains_user_order_of_equations(self): # Given group = Group(equations=[ SimpleEquation(dest='f', sources=['s', 'f']), DummyEquation(dest='f', sources=['s', 'f']), MixedTypeEquation(dest='f', sources=['f']), ]) # When mg = MegaGroup(group, CythonGroup) # Then data = mg.data self.assertEqual(list(data.keys()), ['f']) eqs_with_no_source, sources, all_eqs = data['f'] self.assertEqual(len(eqs_with_no_source.equations), 0) all_eqs_order = [x.__class__.__name__ for x in all_eqs.equations] expect = ['SimpleEquation', 'DummyEquation', 'MixedTypeEquation'] self.assertEqual(all_eqs_order, expect) self.assertEqual(sorted(sources.keys()), ['f', 's']) s_eqs = [x.__class__.__name__ for x in sources['s'].equations] expect = ['SimpleEquation', 'DummyEquation'] self.assertEqual(s_eqs, expect) f_eqs = [x.__class__.__name__ for x in sources['f'].equations] expect = ['SimpleEquation', 'DummyEquation', 'MixedTypeEquation'] self.assertEqual(f_eqs, expect) def test_mega_group_copies_props_of_group(self): # Given def nothing(): pass g = Group( equations=[], real=False, update_nnps=True, iterate=True, max_iterations=20, min_iterations=2, pre=nothing, post=nothing ) # When mg = MegaGroup(g, CythonGroup) # Then props = ('real update_nnps iterate max_iterations ' 'min_iterations pre post').split() for prop in props: self.assertEqual(getattr(mg, prop), getattr(g, prop)) class TestAccelerationEval1D(unittest.TestCase): def setUp(self): self.dim = 1 n = 10 dx = 1.0 / (n - 1) x = np.linspace(0, 1, n) m = np.ones_like(x) h = np.ones_like(x) * dx * 1.05 pa = get_particle_array(name='fluid', x=x, h=h, m=m) self.pa = pa def _make_accel_eval(self, equations, cache_nnps=False): arrays = [self.pa] kernel = CubicSpline(dim=self.dim) a_eval = AccelerationEval( particle_arrays=arrays, equations=equations, kernel=kernel ) comp = SPHCompiler(a_eval, integrator=None) comp.compile() nnps = NNPS(dim=kernel.dim, 
particles=arrays, cache=cache_nnps) nnps.update() a_eval.set_nnps(nnps) return a_eval def test_should_support_constants(self): # Given pa = self.pa pa.add_constant('total_mass', 0.0) equations = [FindTotalMass(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then self.assertEqual(pa.total_mass, 10.0) def test_should_not_iterate_normal_group(self): # Given pa = self.pa equations = [SimpleEquation(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([3., 4., 5., 5., 5., 5., 5., 5., 4., 3.]) self.assertListEqual(list(pa.u), list(expect)) def test_should_work_with_cached_nnps(self): # Given pa = self.pa equations = [SimpleEquation(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations, cache_nnps=True) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([3., 4., 5., 5., 5., 5., 5., 5., 4., 3.]) self.assertListEqual(list(pa.u), list(expect)) def test_should_iterate_iterated_group(self): # Given pa = self.pa equations = [Group( equations=[ SimpleEquation(dest='fluid', sources=['fluid']), SimpleEquation(dest='fluid', sources=['fluid']), ], iterate=True )] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([3., 4., 5., 5., 5., 5., 5., 5., 4., 3.]) * 2 self.assertListEqual(list(pa.u), list(expect)) def test_should_iterate_nested_groups(self): pa = self.pa equations = [Group( equations=[ Group( equations=[SimpleEquation(dest='fluid', sources=['fluid'])] ), Group( equations=[SimpleEquation(dest='fluid', sources=['fluid'])] ), ], iterate=True, )] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([3., 4., 5., 5., 5., 5., 5., 5., 4., 3.]) self.assertListEqual(list(pa.u), list(expect)) def test_should_run_reduce(self): # Given. 
pa = self.pa pa.add_constant('total_mass', 0.0) equations = [SimpleReduction(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.sum(pa.m) self.assertAlmostEqual(pa.total_mass[0], expect, 14) def test_should_call_initialize_pair(self): # Given. pa = self.pa pa.u[:] = 1.0 if pa.gpu: pa.gpu.push('u') equations = [InitializePair(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.0, 0.1) # Then if pa.gpu: pa.gpu.pull('u') np.testing.assert_array_almost_equal( pa.u, np.ones_like(pa.x)*1.5 ) def test_should_call_py_initialize(self): # Given. pa = self.pa equations = [PyInit(dest='fluid', sources=None)] eq = equations[0] a_eval = self._make_accel_eval(equations) # When a_eval.compute(1.0, 0.1) # Then if pa.gpu: pa.gpu.pull('au') np.testing.assert_array_almost_equal( pa.au, np.ones_like(pa.x) * 2.0 ) self.assertEqual(eq.called_with, (1.0, 0.1)) def test_should_work_with_non_double_arrays(self): # Given pa = self.pa equations = [MixedTypeEquation(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([3., 4., 5., 5., 5., 5., 5., 5., 4., 3.]) self.assertListEqual(list(pa.u), list(expect)) def test_should_support_loop_all_and_loop(self): # Given pa = self.pa equations = [SummationDensity(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) a_eval.compute(0.1, 0.1) ref_rho = pa.rho.copy() # When pa.rho[:] = 0.0 equations = [LoopAllEquation(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) a_eval.compute(0.1, 0.1) # Then # 2*ref_rho as we are doing both the loop and loop_all to test if # both are called. 
self.assertTrue(np.allclose(pa.rho, 2.0 * ref_rho)) def test_should_handle_repeated_helper_functions(self): pa = self.pa def helper(x=1.0): return x * 1.5 class SillyEquation2(Equation): def initialize(self, d_idx, d_au, d_m): d_au[d_idx] += helper(d_m[d_idx]) def _get_helpers_(self): return [helper] equations = [SillyEquation2(dest='fluid', sources=['fluid']), SillyEquation2(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.ones(10) * 3.0 self.assertListEqual(list(pa.au), list(expect)) def test_should_call_pre_post_functions_in_group(self): # Given pa = self.pa def pre(): pa.m += 1.0 def post(): pa.u += 1.0 equations = [ Group( equations=[ SimpleEquation(dest='fluid', sources=['fluid']) ], pre=pre, post=post ) ] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([7., 9., 11., 11., 11., 11., 11., 11., 9., 7.]) self.assertListEqual(list(pa.u), list(expect)) class EqWithTime(Equation): def initialize(self, d_idx, d_au, t, dt): d_au[d_idx] = t + dt def loop(self, d_idx, d_au, s_idx, s_m, t, dt): d_au[d_idx] += t + dt class TestAccelerationEval1DGPU(unittest.TestCase): # Fix this to be a subclass of TestAccelerationEval1D def setUp(self): self.dim = 1 n = 10 dx = 1.0 / (n - 1) x = np.linspace(0, 1, n) m = np.ones_like(x) h = np.ones_like(x) * dx * 1.05 pa = get_particle_array(name='fluid', x=x, h=h, m=m) self.pa = pa def _get_nnps_cls(self): from pysph.base.gpu_nnps import ZOrderGPUNNPS as GPUNNPS return GPUNNPS def _make_accel_eval(self, equations, cache_nnps=True): pytest.importorskip('pysph.base.gpu_nnps') GPUNNPS = self._get_nnps_cls() arrays = [self.pa] kernel = CubicSpline(dim=self.dim) a_eval = AccelerationEval( particle_arrays=arrays, equations=equations, kernel=kernel, backend='opencl' ) comp = SPHCompiler(a_eval, integrator=None) comp.compile() self.sph_compiler = comp nnps = GPUNNPS(dim=kernel.dim, particles=arrays, cache=cache_nnps, 
backend='opencl') nnps.update() a_eval.set_nnps(nnps) return a_eval def test_accel_eval_should_work_on_gpu(self): # Given pa = self.pa equations = [SimpleEquation(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([3., 4., 5., 5., 5., 5., 5., 5., 4., 3.]) pa.gpu.pull('u') self.assertListEqual(list(pa.u), list(expect)) def test_precomputed_should_work_on_gpu(self): # Given pa = self.pa equations = [SummationDensity(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([7.357, 9.0, 9., 9., 9., 9., 9., 9., 9., 7.357]) pa.gpu.pull('rho') print(pa.rho, pa.gpu.rho) self.assertTrue(np.allclose(expect, pa.rho, atol=1e-2)) def test_precomputed_should_work_on_gpu_with_double(self): orig = get_config().use_double def _cleanup(): get_config().use_double = orig get_config().use_double = True self.addCleanup(_cleanup) # Given pa = self.pa equations = [SummationDensity(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([7.357, 9.0, 9., 9., 9., 9., 9., 9., 9., 7.357]) pa.gpu.pull('rho') print(pa.rho, pa.gpu.rho) self.assertTrue(np.allclose(expect, pa.rho, atol=1e-2)) def test_equation_with_time_should_work_on_gpu(self): # Given pa = self.pa equations = [EqWithTime(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.2, 0.1) # Then expect = np.asarray([4., 5., 6., 6., 6., 6., 6., 6., 5., 4.]) * 0.3 pa.gpu.pull('au') print(pa.au, expect) self.assertTrue(np.allclose(expect, pa.au)) def test_update_nnps_is_called_on_gpu(self): # Given equations = [ Group( equations=[ SummationDensity(dest='fluid', sources=['fluid']), ], update_nnps=True ), Group( equations=[EqWithTime(dest='fluid', sources=['fluid'])] ), ] # When a_eval = self._make_accel_eval(equations) # Then h = 
a_eval.c_acceleration_eval.helper assert len(h.calls) == 5 call = h.calls[0] assert call['type'] == 'kernel' assert call['method'].function_name == 'g0_fluid_initialize' assert call['loop'] is False call = h.calls[1] assert call['type'] == 'kernel' assert call['method'].function_name == 'g0_fluid_on_fluid_loop' assert call['loop'] is True call = h.calls[2] assert call['type'] == 'method' assert call['method'] == 'update_nnps' call = h.calls[3] assert call['type'] == 'kernel' assert call['method'].function_name == 'g1_fluid_initialize' assert call['loop'] is False call = h.calls[4] assert call['type'] == 'kernel' assert call['method'].function_name == 'g1_fluid_on_fluid_loop' assert call['loop'] is True def test_should_stop_iteration_with_max_iteration_on_gpu(self): pa = self.pa class SillyEquation(Equation): def loop(self, d_idx, d_au, s_idx, s_m): d_au[d_idx] += s_m[s_idx] def converged(self): return 0 equations = [Group( equations=[ Group( equations=[SillyEquation(dest='fluid', sources=['fluid'])] ), Group( equations=[SillyEquation(dest='fluid', sources=['fluid'])] ), ], iterate=True, max_iterations=2, )] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([3., 4., 5., 5., 5., 5., 5., 5., 4., 3.]) * 4.0 pa.gpu.pull('au') self.assertListEqual(list(pa.au), list(expect)) def test_should_stop_iteration_with_converged_on_gpu(self): pa = self.pa class SillyEquation1(Equation): def __init__(self, dest, sources): super(SillyEquation1, self).__init__(dest, sources) self.conv = 0 def loop(self, d_idx, d_au, s_idx, s_m): d_au[d_idx] += s_m[s_idx] def post_loop(self, d_idx, d_au): if d_au[d_idx] > 19.0: self.conv = 1 def converged(self): if hasattr(self, '_pull'): # _pull is not available on CPU. 
self._pull('conv') return self.conv equations = [Group( equations=[ Group( equations=[SillyEquation1(dest='fluid', sources=['fluid'])] ), Group( equations=[SillyEquation1(dest='fluid', sources=['fluid'])] ), ], iterate=True, max_iterations=10, )] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.asarray([3., 4., 5., 5., 5., 5., 5., 5., 4., 3.]) * 6.0 pa.gpu.pull('au') self.assertListEqual(list(pa.au), list(expect)) def test_should_handle_helper_functions_on_gpu(self): pa = self.pa def helper(x=1.0): return x * 1.5 class SillyEquation2(Equation): def initialize(self, d_idx, d_au, d_m): d_au[d_idx] += helper(d_m[d_idx]) def _get_helpers_(self): return [helper] equations = [SillyEquation2(dest='fluid', sources=['fluid']), SillyEquation2(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.ones(10) * 3.0 pa.gpu.pull('au') self.assertListEqual(list(pa.au), list(expect)) def test_should_run_reduce_when_using_gpu(self): # Given. pa = self.pa pa.add_constant('total_mass', 0.0) equations = [SimpleReduction(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.1, 0.1) # Then expect = np.sum(pa.m) pa.gpu.pull('total_mass') self.assertAlmostEqual(pa.total_mass[0], expect, 14) def test_should_call_initialize_pair_on_gpu(self): # Given. pa = self.pa pa.u[:] = 1.0 if pa.gpu: pa.gpu.push('u') equations = [InitializePair(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) # When a_eval.compute(0.0, 0.1) # Then if pa.gpu: pa.gpu.pull('u') np.testing.assert_array_almost_equal( pa.u, np.ones_like(pa.x)*1.5 ) def test_should_call_py_initialize_for_gpu_backend(self): # Given. 
pa = self.pa equations = [PyInit(dest='fluid', sources=None)] eq = equations[0] a_eval = self._make_accel_eval(equations) # When a_eval.compute(1.0, 0.1) # Then if pa.gpu: pa.gpu.pull('au') np.testing.assert_array_almost_equal( pa.au, np.ones_like(pa.x) * 2.0 ) self.assertEqual(eq.called_with, (1.0, 0.1)) def test_get_equations_with_converged(self): pytest.importorskip('pysph.base.gpu_nnps') from pysph.sph.acceleration_eval_gpu_helper import \ get_equations_with_converged # Given se = SimpleEquation(dest='fluid', sources=['fluid']) se1 = SimpleEquation(dest='fluid', sources=['fluid']) sd = SummationDensity(dest='fluid', sources=['fluid']) me = MixedTypeEquation(dest='fluid', sources=['fluid']) eq_t = EqWithTime(dest='fluid', sources=['fluid']) g = Group( equations=[ Group(equations=[Group(equations=[se, sd])], iterate=True, max_iterations=10), Group(equations=[me, eq_t, se1]), ], ) # When eqs = get_equations_with_converged(g) # Then assert eqs == [se, se1] def test_should_support_loop_all_and_loop_on_gpu(self): # Given pa = self.pa equations = [SummationDensity(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) a_eval.compute(0.1, 0.1) pa.gpu.pull('rho') ref_rho = pa.rho.copy() # When pa.rho[:] = 0.0 pa.gpu.push('rho') equations = [LoopAllEquation(dest='fluid', sources=['fluid'])] a_eval = self._make_accel_eval(equations) a_eval.compute(0.1, 0.1) # Then # 2*ref_rho as we are doing both the loop and loop_all to test if # both are called. 
        pa.gpu.pull('rho')
        self.assertTrue(np.allclose(pa.rho, 2.0 * ref_rho))

    def test_should_call_pre_post_functions_in_group_on_gpu(self):
        # Given
        pa = self.pa

        def pre():
            pa.m += 1.0
            pa.gpu.push('m')

        def post():
            pa.gpu.pull('u')
            pa.u += 1.0

        equations = [
            Group(
                equations=[
                    SimpleEquation(dest='fluid', sources=['fluid'])
                ],
                pre=pre, post=post
            )
        ]
        a_eval = self._make_accel_eval(equations)

        # When
        a_eval.compute(0.1, 0.1)

        # Then
        expect = np.asarray([7., 9., 11., 11., 11., 11., 11., 11., 9., 7.])
        self.assertListEqual(list(pa.u), list(expect))


class TestAccelerationEval1DGPUOctree(TestAccelerationEval1DGPU):
    def _get_nnps_cls(self):
        from pysph.base.gpu_nnps import OctreeGPUNNPS
        return OctreeGPUNNPS


class TestAccelerationEval1DGPUOctreeNonCached(
    TestAccelerationEval1DGPUOctree
):
    def setUp(self):
        self.old_flag = get_config().use_local_memory
        get_config().use_local_memory = True
        super(TestAccelerationEval1DGPUOctreeNonCached, self).setUp()

    def tearDown(self):
        super(TestAccelerationEval1DGPUOctreeNonCached, self).tearDown()
        get_config().use_local_memory = self.old_flag

    @pytest.mark.skip("Loop all not supported with non-cached NNPS")
    def test_should_support_loop_all_and_loop_on_gpu(self):
        pass


class TestAccelerationEval1DCUDA(TestAccelerationEval1DGPU):
    def _make_accel_eval(self, equations, cache_nnps=True):
        pytest.importorskip('pycuda')
        pytest.importorskip('pysph.base.gpu_nnps')
        GPUNNPS = self._get_nnps_cls()
        arrays = [self.pa]
        kernel = CubicSpline(dim=self.dim)
        a_eval = AccelerationEval(
            particle_arrays=arrays, equations=equations, kernel=kernel,
            backend='cuda'
        )
        comp = SPHCompiler(a_eval, integrator=None)
        comp.compile()
        self.sph_compiler = comp
        nnps = GPUNNPS(dim=kernel.dim, particles=arrays, cache=cache_nnps,
                       backend='cuda')
        nnps.update()
        a_eval.set_nnps(nnps)
        return a_eval

    def test_update_nnps_is_called_on_gpu(self):
        # Given
        equations = [
            Group(
                equations=[
                    SummationDensity(dest='fluid', sources=['fluid']),
                ],
                update_nnps=True
            ),
            Group(
                equations=[EqWithTime(dest='fluid', sources=['fluid'])]
            ),
        ]

        # When
        a_eval = self._make_accel_eval(equations)

        # Then
        h = a_eval.c_acceleration_eval.helper
        assert len(h.calls) == 5
        call = h.calls[0]
        assert call['type'] == 'kernel'
        assert call['loop'] is False
        call = h.calls[1]
        assert call['type'] == 'kernel'
        assert call['loop'] is True
        call = h.calls[2]
        assert call['type'] == 'method'
        assert call['method'] == 'update_nnps'
        call = h.calls[3]
        assert call['type'] == 'kernel'
        assert call['loop'] is False
        call = h.calls[4]
        assert call['type'] == 'kernel'
        assert call['loop'] is True


# ---- pysph/sph/tests/test_acceleration_eval_cython_helper.py ----

# Standard library imports.
import unittest

# Library imports.
import numpy as np

# Local library imports.
from pysph.base.particle_array import ParticleArray
from compyle.api import KnownType

from pysph.sph.acceleration_eval_cython_helper import (
    get_all_array_names, get_known_types_for_arrays
)


class TestGetAllArrayNames(unittest.TestCase):
    def test_that_all_properties_are_found(self):
        x = np.linspace(0, 1, 10)
        pa = ParticleArray(name='f', x=x)
        result = get_all_array_names([pa])
        self.assertEqual(len(result), 3)
        self.assertEqual(result['DoubleArray'], set(('x',)))
        self.assertEqual(result['IntArray'], set(('pid', 'tag')))
        self.assertEqual(result['UIntArray'], set(('gid',)))

    def test_that_all_properties_are_found_with_multiple_arrays(self):
        x = np.linspace(0, 1, 10)
        pa1 = ParticleArray(name='f', x=x)
        pa2 = ParticleArray(name='b', y=x)
        result = get_all_array_names([pa1, pa2])
        self.assertEqual(len(result), 3)
        self.assertEqual(result['DoubleArray'], set(('x', 'y')))
        self.assertEqual(result['IntArray'], set(('pid', 'tag')))
        self.assertEqual(result['UIntArray'], set(('gid',)))


class TestGetKnownTypesForAllArrays(unittest.TestCase):
    def test_that_all_types_are_detected_correctly(self):
        x = np.linspace(0, 1, 10)
        pa = ParticleArray(name='f', x=x)
        pa.remove_property('pid')
        info = get_all_array_names([pa])
        result = get_known_types_for_arrays(info)
        expect = {'d_gid': KnownType("unsigned int*"),
                  'd_tag': KnownType("int*"),
                  'd_x': KnownType("double*"),
                  's_gid': KnownType("unsigned int*"),
                  's_tag': KnownType("int*"),
                  's_x': KnownType("double*")}
        for key in expect:
            self.assertEqual(repr(result[key]), repr(expect[key]))


# ---- pysph/sph/tests/test_equations.py ----

# Standard library imports.
import numpy
from textwrap import dedent
import unittest

# Local imports.
from compyle.api import KnownType
from pysph.sph.equation import (
    BasicCodeBlock, Context, CythonGroup, Equation, Group, sort_precomputed
)


class TestContext(unittest.TestCase):
    def test_basic_usage(self):
        c = Context(a=1, b=2)
        self.assertEqual(c.a, 1)
        self.assertEqual(c.b, 2)
        self.assertEqual(c['a'], 1)
        self.assertEqual(c['b'], 2)
        c.c = 3
        self.assertEqual(c.c, 3)
        self.assertEqual(c['c'], 3)

    def test_context_behaves_like_dict(self):
        c = Context(a=1)
        c.b = 2
        keys = list(c.keys())
        keys.sort()
        self.assertEqual(keys, ['a', 'b'])
        values = list(c.values())
        values.sort()
        self.assertEqual(values, [1, 2])
        self.assertTrue('a' in c)
        self.assertTrue('b' in c)
        self.assertTrue('c' not in c)


class TestBase(unittest.TestCase):
    def assert_seq_equal(self, got, expect):
        g = list(got)
        g.sort()
        e = list(expect)
        e.sort()
        self.assertEqual(g, e, 'got %s, expected %s' % (g, e))


class TestBasicCodeBlock(TestBase):
    def test_basic_code_block(self):
        code = '''
        x = 1
        d_x[d_idx] += s_x[s_idx] + x
        '''
        cb = BasicCodeBlock(code=code)
        expect = ['d_idx', 'd_x', 's_idx', 's_x', 'x']
        self.assert_seq_equal(cb.symbols, expect)
        self.assert_seq_equal(cb.src_arrays, ['s_x'])
        self.assert_seq_equal(cb.dest_arrays, ['d_x'])
        ctx = cb.context
        self.assertEqual(ctx.s_idx, 0)
        self.assertEqual(ctx.d_idx, 0)
        x = numpy.zeros(2, dtype=float)
        self.assertTrue(numpy.alltrue(ctx.d_x == x))
        self.assertTrue(numpy.alltrue(ctx.s_x == x))

    def test_that_code_block_is_callable(self):
        code = '''
        x = 1
        d_x[d_idx] += s_x[s_idx] + x
        '''
        cb = BasicCodeBlock(code=code)
        # The code block should be callable.
        res = cb()
        self.assertTrue(sum(res.d_x) == 1)
        # Should take arguments to update the context.
        x = numpy.ones(2, dtype=float)*10
        res = cb(s_x=x)
        self.assertTrue(sum(res.d_x) == 11)


class TestEquations(TestBase):
    def test_simple_equation(self):
        eq = Equation('fluid', None)
        self.assertEqual(eq.name, 'Equation')
        self.assertEqual(eq.no_source, True)
        self.assertEqual(eq.dest, 'fluid')
        self.assertEqual(eq.sources, None)
        self.assertFalse(hasattr(eq, 'loop'))
        self.assertFalse(hasattr(eq, 'post_loop'))
        self.assertFalse(hasattr(eq, 'initialize'))

        eq = Equation('fluid', sources=['fluid'])
        self.assertEqual(eq.name, 'Equation')
        self.assertEqual(eq.no_source, False)
        self.assertEqual(eq.dest, 'fluid')
        self.assertEqual(eq.sources, ['fluid'])

        class Test(Equation):
            pass

        eq = Test('fluid', [])
        self.assertEqual(eq.name, 'Test')
        self.assertEqual(eq.no_source, True)

    def test_continuity_equation(self):
        from pysph.sph.basic_equations import ContinuityEquation
        e = ContinuityEquation(dest='fluid', sources=['fluid'])
        # Call the loop code.
        d_arho = [0.0, 0.0, 0.0]
        s_m = [0.0, 0.0]
        e.loop(d_idx=0, d_arho=d_arho, s_idx=0, s_m=s_m,
               DWIJ=[0, 0, 0], VIJ=[0, 0, 0])
        self.assertEqual(d_arho[0], 0.0)
        self.assertEqual(d_arho[1], 0.0)
        # Now call with specific arguments.
        s_m = [1, 1]
        e.loop(d_idx=0, d_arho=d_arho, s_idx=0, s_m=s_m,
               DWIJ=[1, 1, 1], VIJ=[1, 1, 1])
        self.assertEqual(d_arho[0], 3.0)
        self.assertEqual(d_arho[1], 0.0)

    def test_order_of_precomputed(self):
        try:
            pre_comp = Group.pre_comp
            pre_comp.AIJ = BasicCodeBlock(
                code=dedent("""
                AIJ[0] = XIJ[0]/RIJ
                AIJ[1] = XIJ[1]/RIJ
                AIJ[2] = XIJ[2]/RIJ
                """),
                AIJ=[1.0, 0.0, 0.0])
            input = dict((x, pre_comp[x]) for x in
                         ['RIJ', 'R2IJ', 'XIJ', 'HIJ', 'AIJ'])
            pre = sort_precomputed(input, pre_comp)
            self.assertEqual(
                list(pre.keys()), ['HIJ', 'XIJ', 'R2IJ', 'RIJ', 'AIJ']
            )
        finally:
            from pysph.sph.equation import precomputed_symbols
            Group.pre_comp = precomputed_symbols()


class Equation1(Equation):
    def loop(self, WIJ=0.0):
        x = WIJ
        x += 1

    def post_loop(self, d_idx, d_h):
        x = d_h[d_idx]
        x += 1


class Equation2(Equation):
    def loop(self, d_idx, s_idx):
        x = s_idx + d_idx
        x += 1


class TestGroup(TestBase):
    def setUp(self):
        from pysph.sph.basic_equations import SummationDensity
        from pysph.sph.wc.basic import TaitEOS
        self.group = CythonGroup(
            [SummationDensity('f', ['f']),
             TaitEOS('f', None, rho0=1.0, c0=1.0, gamma=1.4, p0=1.0)]
        )

    def test_precomputed(self):
        g = self.group
        self.assertEqual(len(g.precomputed), 5)
        self.assertEqual(list(g.precomputed.keys()),
                         ['HIJ', 'XIJ', 'R2IJ', 'RIJ', 'WIJ'])

    def test_array_names(self):
        g = self.group
        src, dest = g.get_array_names()
        s_ex = ['s_m', 's_x', 's_y', 's_z', 's_h']
        d_ex = ['d_rho', 'd_p', 'd_h', 'd_cs', 'd_x', 'd_y', 'd_z']
        self.assert_seq_equal(src, s_ex)
        self.assert_seq_equal(dest, d_ex)

    def test_variable_names(self):
        g = self.group
        names = g.get_variable_names()
        expect = ['WIJ', 'RIJ', 'R2IJ', 'XIJ', 'HIJ']
        self.assert_seq_equal(names, expect)

    def test_array_declarations(self):
        g = self.group
        expect = 'cdef double* d_x'
        self.assertEqual(g.get_array_declarations(['d_x']), expect)

    def test_array_declarations_with_known_types(self):
        # Given
        g = self.group
        known_types = {'d_x': KnownType('float*')}

        # When
        result = g.get_array_declarations(['d_x'], known_types)

        # Then.
        expect = 'cdef float* d_x'
        self.assertEqual(result, expect)

    def test_variable_declarations(self):
        g = self.group
        context = Context(x=1.0)
        expect = 'cdef double x = 1.0'
        self.assertEqual(g.get_variable_declarations(context), expect)

        context = Context(x=1)
        expect = 'cdef long x = 1'
        self.assertEqual(g.get_variable_declarations(context), expect)

        context = Context(x=[1., 2.])
        expect = ('cdef DoubleArray _x = DoubleArray(aligned(2, 8)*'
                  'self.n_threads)\n'
                  'cdef double* x = _x.data')
        self.assertEqual(g.get_variable_declarations(context), expect)

        context = Context(x=(0, 1., 2.))
        expect = ('cdef DoubleArray _x = DoubleArray(aligned(3, 8)*'
                  'self.n_threads)\n'
                  'cdef double* x = _x.data')
        self.assertEqual(g.get_variable_declarations(context), expect)

    def test_loop_code(self):
        from pysph.base.kernels import CubicSpline
        k = CubicSpline(dim=3)
        e1 = Equation1('f', ['f'])
        e2 = Equation2('f', ['f'])
        g = CythonGroup([e1, e2])
        # First get the equation wrappers so the equation names are setup.
        g.get_equation_wrappers()
        result = g.get_loop_code(k)
        expect = dedent('''\
        HIJ = 0.5*(d_h[d_idx] + s_h[s_idx])
        XIJ[0] = d_x[d_idx] - s_x[s_idx]
        XIJ[1] = d_y[d_idx] - s_y[s_idx]
        XIJ[2] = d_z[d_idx] - s_z[s_idx]
        R2IJ = XIJ[0]*XIJ[0] + XIJ[1]*XIJ[1] + XIJ[2]*XIJ[2]
        RIJ = sqrt(R2IJ)
        WIJ = self.kernel.kernel(XIJ, RIJ, HIJ)
        self.equation10.loop(WIJ)
        self.equation20.loop(d_idx, s_idx)
        ''')
        msg = 'EXPECTED:\n%s\nGOT:\n%s' % (expect, result)
        self.assertEqual(result, expect, msg)

    def test_post_loop_code(self):
        from pysph.base.kernels import CubicSpline
        k = CubicSpline(dim=3)
        e1 = Equation1('f', ['f'])
        e2 = Equation2('f', ['f'])
        g = CythonGroup([e1, e2])
        # First get the equation wrappers so the equation names are setup.
        g.get_equation_wrappers()
        result = g.get_post_loop_code(k)
        expect = dedent('''\
        self.equation10.post_loop(d_idx, d_h)
        ''')
        msg = 'EXPECTED:\n%s\nGOT:\n%s' % (expect, result)
        self.assertEqual(result, expect, msg)


if __name__ == '__main__':
    unittest.main()


# ---- pysph/sph/tests/test_integrator.py ----

# Standard library imports.
import unittest

# Library imports.
import numpy as np
import pytest

# Local imports.
from pysph.base.utils import get_particle_array, get_particle_array_wcsph
from compyle.config import get_config
from pysph.sph.equation import Equation
from pysph.sph.acceleration_eval import AccelerationEval
from pysph.base.kernels import CubicSpline
from pysph.base.nnps import LinkedListNNPS
from pysph.sph.sph_compiler import SPHCompiler
from pysph.sph.integrator import (LeapFrogIntegrator, PECIntegrator,
                                  PEFRLIntegrator, EulerIntegrator)
from pysph.sph.integrator_step import (
    IntegratorStep, LeapFrogStep, PEFRLStep, TwoStageRigidBodyStep
)


class SHM(Equation):
    """Simple harmonic oscillator equation.
    """
    def initialize(self, d_idx, d_x, d_au):
        d_au[d_idx] = -d_x[d_idx]


class TestIntegrator(unittest.TestCase):
    def test_detection_of_missing_arrays_for_integrator(self):
        # Given.
        x = np.asarray([1.0])
        u = np.asarray([0.0])
        h = np.ones_like(x)
        pa = get_particle_array(name='fluid', x=x, u=u, h=h, m=h)
        arrays = [pa]

        # When
        integrator = LeapFrogIntegrator(fluid=LeapFrogStep())
        equations = [SHM(dest="fluid", sources=None)]
        kernel = CubicSpline(dim=1)
        a_eval = AccelerationEval(
            particle_arrays=arrays, equations=equations, kernel=kernel
        )
        comp = SPHCompiler(a_eval, integrator=integrator)

        # Then
        self.assertRaises(RuntimeError, comp.compile)

    def test_detect_missing_arrays_for_many_particle_arrays(self):
        # Given.
        x = np.asarray([1.0])
        u = np.asarray([0.0])
        h = np.ones_like(x)
        fluid = get_particle_array_wcsph(name='fluid', x=x, u=u, h=h, m=h)
        solid = get_particle_array(name='solid', x=x, u=u, h=h, m=h)
        arrays = [fluid, solid]

        # When
        integrator = PECIntegrator(
            fluid=TwoStageRigidBodyStep(), solid=TwoStageRigidBodyStep()
        )
        equations = [SHM(dest="fluid", sources=None)]
        kernel = CubicSpline(dim=1)
        a_eval = AccelerationEval(
            particle_arrays=arrays, equations=equations, kernel=kernel
        )
        comp = SPHCompiler(a_eval, integrator=integrator)

        # Then
        self.assertRaises(RuntimeError, comp.compile)


class TestIntegratorBase(unittest.TestCase):
    def setUp(self):
        x = np.asarray([1.0])
        u = np.asarray([0.0])
        h = np.ones_like(x)
        pa = get_particle_array(name='fluid', x=x, u=u, h=h, m=h)
        for prop in ('ax', 'ay', 'az', 'ae', 'arho', 'e'):
            pa.add_property(prop)
        self.pa = pa

    def _setup_integrator(self, equations, integrator):
        kernel = CubicSpline(dim=1)
        arrays = [self.pa]
        a_eval = AccelerationEval(
            particle_arrays=arrays, equations=equations, kernel=kernel
        )
        comp = SPHCompiler(a_eval, integrator=integrator)
        comp.compile()
        nnps = LinkedListNNPS(dim=kernel.dim, particles=arrays)
        a_eval.set_nnps(nnps)
        integrator.set_nnps(nnps)

    def _integrate(self, integrator, dt, tf, post_step_callback):
        """The post_step_callback is called after each step and is passed
        the current time.
        """
        t = 0.0
        while t < tf:
            integrator.step(t, dt)
            post_step_callback(t+dt)
            t += dt


class EulerStep(IntegratorStep):
    def stage1(self, d_idx, d_x, d_u, dt):
        d_x[d_idx] += dt*d_u[d_idx]


class TestIntegratorAdaptiveTimestep(TestIntegratorBase):
    def test_compute_timestep_without_adaptive(self):
        # Given.
        integrator = EulerIntegrator(fluid=EulerStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)

        # When
        dt = integrator.compute_time_step(0.1, 0.5)

        # Then
        self.assertEqual(dt, None)

    def test_compute_timestep_with_dt_adapt(self):
        # Given.
        self.pa.extend(1)
        self.pa.align_particles()
        self.pa.add_property('dt_adapt')
        self.pa.dt_adapt[:] = [0.1, 0.2]

        integrator = EulerIntegrator(fluid=EulerStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)

        # When
        dt = integrator.compute_time_step(0.1, 0.5)

        # Then
        self.assertEqual(dt, 0.1)

    def test_compute_timestep_with_dt_adapt_with_invalid_values(self):
        # Given.
        self.pa.extend(1)
        self.pa.align_particles()
        self.pa.add_property('dt_adapt')
        self.pa.dt_adapt[:] = [0.0, -2.0]

        integrator = EulerIntegrator(fluid=EulerStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)

        # When
        dt = integrator.compute_time_step(0.1, 0.5)

        # Then
        self.assertEqual(dt, None)

    def test_compute_timestep_with_dt_adapt_trumps_dt_cfl(self):
        # Given.
        self.pa.extend(1)
        self.pa.align_particles()
        self.pa.add_property('dt_adapt')
        self.pa.add_property('dt_cfl')
        self.pa.h[:] = 1.0
        self.pa.dt_adapt[:] = [0.1, 0.2]
        self.pa.dt_cfl[:] = 1.0

        integrator = EulerIntegrator(fluid=EulerStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)

        # When
        cfl = 0.5
        dt = integrator.compute_time_step(0.1, cfl)

        # Then
        self.assertEqual(dt, 0.1)

    def test_compute_timestep_with_dt_cfl(self):
        # Given.
        self.pa.extend(1)
        self.pa.align_particles()
        self.pa.add_property('dt_cfl')
        self.pa.h[:] = 1.0
        self.pa.dt_cfl[:] = [1.0, 2.0]

        integrator = EulerIntegrator(fluid=EulerStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)

        # When
        cfl = 0.5
        dt = integrator.compute_time_step(0.1, cfl)

        # Then
        expect = cfl*1.0/(2.0)
        self.assertEqual(dt, expect)


class S1Step(IntegratorStep):
    def py_stage1(self, dest, t, dt):
        self.called_with1 = t, dt
        dest.u[:] = 1.0

    def stage1(self, d_idx, d_u, d_au, dt):
        d_u[d_idx] += d_au[d_idx] * dt * 0.5

    def stage2(self, d_idx, d_x, d_u, d_au, dt):
        d_u[d_idx] += 0.5*dt * d_au[d_idx]
        d_x[d_idx] += dt * d_u[d_idx]


class S12Step(IntegratorStep):
    def py_stage1(self, dest, t, dt):
        self.called_with1 = t, dt
        dest.u[:] = 1.0
        if dest.gpu:
            dest.gpu.push('u')

    def stage1(self, d_idx, d_u, d_au, dt):
        d_u[d_idx] += d_au[d_idx] * dt * 0.5

    def py_stage2(self, dest, t, dt):
        self.called_with2 = t, dt
        if dest.gpu:
            dest.gpu.pull('u')
        dest.u += 0.5
        if dest.gpu:
            dest.gpu.push('u')

    def stage2(self, d_idx, d_x, d_u, d_au, dt):
        d_u[d_idx] += 0.5*dt * d_au[d_idx]
        d_x[d_idx] += dt * d_u[d_idx]


class OnlyPyStep(IntegratorStep):
    def py_stage1(self, dest, t, dt):
        self.called_with1 = t, dt
        dest.x[:] = 0.0
        dest.u[:] = 1.0

    def py_stage2(self, dest, t, dt):
        self.called_with2 = t, dt
        dest.u += 0.5
        dest.x += 0.5


def my_helper(dt=0.0):
    return dt*2.0


class StepWithHelper(IntegratorStep):
    def _get_helpers_(self):
        return [my_helper]

    def stage1(self, d_idx, d_u, d_au, dt):
        d_u[d_idx] += d_au[d_idx] * my_helper(dt)


class TestLeapFrogIntegrator(TestIntegratorBase):
    def test_leapfrog(self):
        # Given.
        integrator = LeapFrogIntegrator(fluid=LeapFrogStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)
        tf = np.pi
        dt = 0.02*tf

        # When
        energy = []

        def callback(t):
            x, u = self.pa.x[0], self.pa.u[0]
            energy.append(0.5*(x*x + u*u))

        callback(0.0)
        self._integrate(integrator, dt, tf, callback)

        # Then
        energy = np.asarray(energy)
        self.assertAlmostEqual(np.max(np.abs(energy - 0.5)), 0.0, places=3)

    def test_integrator_calls_py_stage1(self):
        # Given.
        stepper = S1Step()
        integrator = LeapFrogIntegrator(fluid=stepper)
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)
        tf = 1.0
        dt = tf

        # When
        call_data = []

        def callback(t):
            call_data.append(t)

        self._integrate(integrator, dt, tf, callback)

        # Then
        self.assertEqual(len(call_data), 1)
        self.assertTrue(hasattr(stepper, 'called_with1'))
        self.assertEqual(stepper.called_with1, (0.0, dt))
        # These are not physically significant as the main purpose is to see
        # if the py_stage* methods are called.
        np.testing.assert_array_almost_equal(self.pa.x, [1.5])
        np.testing.assert_array_almost_equal(self.pa.u, [0.5])

    def test_integrator_calls_py_stage1_stage2(self):
        # Given.
        stepper = S12Step()
        integrator = LeapFrogIntegrator(fluid=stepper)
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)
        tf = 1.0
        dt = tf

        # When
        def callback(t):
            pass

        self._integrate(integrator, dt, tf, callback)

        # Then
        self.assertTrue(hasattr(stepper, 'called_with1'))
        self.assertEqual(stepper.called_with1, (0.0, dt))
        self.assertTrue(hasattr(stepper, 'called_with2'))
        self.assertEqual(stepper.called_with2, (0.5*dt, dt))
        # These are not physically significant as the main purpose is to see
        # if the py_stage* methods are called.
        np.testing.assert_array_almost_equal(self.pa.x, [2.0])
        np.testing.assert_array_almost_equal(self.pa.u, [1.0])

    def test_integrator_calls_only_py_when_no_stage(self):
        # Given.
        stepper = OnlyPyStep()
        integrator = LeapFrogIntegrator(fluid=stepper)
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)
        tf = 1.0
        dt = tf

        # When
        def callback(t):
            pass

        self._integrate(integrator, dt, tf, callback)

        # Then
        self.assertTrue(hasattr(stepper, 'called_with1'))
        self.assertEqual(stepper.called_with1, (0.0, dt))
        self.assertTrue(hasattr(stepper, 'called_with2'))
        self.assertEqual(stepper.called_with2, (0.5*dt, dt))
        np.testing.assert_array_almost_equal(self.pa.x, [0.5])
        np.testing.assert_array_almost_equal(self.pa.u, [1.5])

    def test_leapfrog_is_second_order(self):
        # Given.
        integrator = LeapFrogIntegrator(fluid=LeapFrogStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)

        # Take a dt, find the error, halve dt, and see that the error drops
        # as desired.

        # When
        tf = np.pi
        dt = 0.02*tf

        energy = []

        def callback(t):
            x, u = self.pa.x[0], self.pa.u[0]
            energy.append(0.5*(x*x + u*u))

        callback(0.0)
        self._integrate(integrator, dt, tf, callback)
        energy = np.asarray(energy)
        err1 = np.max(np.abs(energy - 0.5))

        # When
        self.pa.x[0] = 1.0
        self.pa.u[0] = 0.0
        energy = []
        dt *= 0.5

        callback(0.0)
        self._integrate(integrator, dt, tf, callback)
        energy = np.asarray(energy)
        err2 = np.max(np.abs(energy - 0.5))

        # Then
        self.assertTrue(err2 < err1)
        self.assertAlmostEqual(err1/err2, 4.0, places=2)

    def test_helper_can_be_used_with_stepper(self):
        # Given.
        integrator = EulerIntegrator(fluid=StepWithHelper())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)

        # When
        tf = 1.0
        dt = tf/2

        def callback(t):
            pass

        self._integrate(integrator, dt, tf, callback)

        # Then
        if self.pa.gpu is not None:
            self.pa.gpu.pull('u')
        u = self.pa.u
        self.assertEqual(u, -2.0*self.pa.x)


class TestPEFRLIntegrator(TestIntegratorBase):
    def test_pefrl(self):
        # Given.
        integrator = PEFRLIntegrator(fluid=PEFRLStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)
        tf = np.pi
        dt = 0.1*tf

        # When
        energy = []

        def callback(t):
            x, u = self.pa.x[0], self.pa.u[0]
            energy.append(0.5*(x*x + u*u))

        callback(0.0)
        self._integrate(integrator, dt, tf, callback)

        # Then
        energy = np.asarray(energy)
        self.assertAlmostEqual(np.max(np.abs(energy - 0.5)), 0.0, places=4)

    def test_pefrl_is_fourth_order(self):
        # Given.
        integrator = PEFRLIntegrator(fluid=PEFRLStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)

        # Take a dt, find the error, halve dt, and see that the error drops
        # as desired.

        # When
        tf = np.pi
        dt = 0.1*tf

        energy = []

        def callback(t):
            x, u = self.pa.x[0], self.pa.u[0]
            energy.append(0.5*(x*x + u*u))

        callback(0.0)
        self._integrate(integrator, dt, tf, callback)
        energy = np.asarray(energy)
        err1 = np.max(np.abs(energy - 0.5))

        # When
        self.pa.x[0] = 1.0
        self.pa.u[0] = 0.0
        energy = []
        dt *= 0.5

        callback(0.0)
        self._integrate(integrator, dt, tf, callback)
        energy = np.asarray(energy)
        err2 = np.max(np.abs(energy - 0.5))

        # Then
        self.assertTrue(err2 < err1)
        self.assertTrue(err1/err2 > 16.0)


class TestLeapFrogIntegratorGPU(TestIntegratorBase):
    def _setup_integrator(self, equations, integrator):
        pytest.importorskip('pysph.base.gpu_nnps')
        kernel = CubicSpline(dim=1)
        arrays = [self.pa]
        from pysph.base.gpu_nnps import BruteForceNNPS as GPUNNPS
        a_eval = AccelerationEval(
            particle_arrays=arrays, equations=equations, kernel=kernel,
            backend='opencl'
        )
        comp = SPHCompiler(a_eval, integrator=integrator)
        comp.compile()
        nnps = GPUNNPS(dim=kernel.dim, particles=arrays, cache=True,
                       backend='opencl')
        nnps.update()
        a_eval.set_nnps(nnps)
        integrator.set_nnps(nnps)

    def test_leapfrog(self):
        # Given.
        integrator = LeapFrogIntegrator(fluid=LeapFrogStep())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)
        tf = np.pi/5
        dt = 0.05*tf

        # When
        energy = []

        def callback(t):
            self.pa.gpu.pull('x', 'u')
            x, u = self.pa.x[0], self.pa.u[0]
            energy.append(0.5*(x*x + u*u))

        callback(0.0)
        self._integrate(integrator, dt, tf, callback)

        # Then
        energy = np.asarray(energy)
        self.assertAlmostEqual(np.max(np.abs(energy - 0.5)), 0.0, places=3)

    def test_py_stage_is_called_on_gpu(self):
        # Given.
        stepper = S12Step()
        integrator = LeapFrogIntegrator(fluid=stepper)
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)
        dt = 1.0
        tf = dt

        # When
        def callback(t):
            pass

        self._integrate(integrator, dt, tf, callback)
        self.pa.gpu.pull('x', 'u')

        # Then
        self.assertTrue(hasattr(stepper, 'called_with1'))
        self.assertEqual(stepper.called_with1, (0.0, dt))
        self.assertTrue(hasattr(stepper, 'called_with2'))
        self.assertEqual(stepper.called_with2, (0.5*dt, dt))
        # These are not physically significant as the main purpose is to see
        # if the py_stage* methods are called.
        np.testing.assert_array_almost_equal(self.pa.x, [2.0])
        np.testing.assert_array_almost_equal(self.pa.u, [1.0])

    def test_leapfrog_with_double(self):
        orig = get_config().use_double

        def _cleanup():
            get_config().use_double = orig

        get_config().use_double = True
        self.addCleanup(_cleanup)
        self.test_leapfrog()

    def test_helper_can_be_used_with_stepper_on_gpu(self):
        # Given.
        integrator = EulerIntegrator(fluid=StepWithHelper())
        equations = [SHM(dest="fluid", sources=None)]
        self._setup_integrator(equations=equations, integrator=integrator)

        # When
        tf = 1.0
        dt = tf/2

        def callback(t):
            pass

        self._integrate(integrator, dt, tf, callback)

        # Then
        if self.pa.gpu is not None:
            self.pa.gpu.pull('u')
        u = self.pa.u
        self.assertEqual(u, -2.0*self.pa.x)


class TestLeapFrogIntegratorCUDA(TestLeapFrogIntegratorGPU):
    def _setup_integrator(self, equations, integrator):
        pytest.importorskip('pycuda')
        pytest.importorskip('pysph.base.gpu_nnps')
        kernel = CubicSpline(dim=1)
        arrays = [self.pa]
        from pysph.base.gpu_nnps import BruteForceNNPS as GPUNNPS
        a_eval = AccelerationEval(
            particle_arrays=arrays, equations=equations, kernel=kernel,
            backend='cuda'
        )
        comp = SPHCompiler(a_eval, integrator=integrator)
        comp.compile()
        nnps = GPUNNPS(dim=kernel.dim, particles=arrays, cache=True,
                       backend='cuda')
        nnps.update()
        a_eval.set_nnps(nnps)
        integrator.set_nnps(nnps)


if __name__ == '__main__':
    unittest.main()


# ---- pysph/sph/tests/test_integrator_cython_helper.py ----

import unittest

import numpy as np

from pysph.base.utils import get_particle_array
from pysph.base.kernels import QuinticSpline
from pysph.sph.acceleration_eval import AccelerationEval
from pysph.sph.acceleration_eval_cython_helper import (
    AccelerationEvalCythonHelper
)
from pysph.sph.integrator import PECIntegrator
from pysph.sph.integrator_step import WCSPHStep
from pysph.sph.integrator_cython_helper import IntegratorCythonHelper
from pysph.sph.basic_equations import SummationDensity


class TestIntegratorCythonHelper(unittest.TestCase):
    def test_invalid_kwarg_raises_error(self):
        # Given
        x = np.linspace(0, 1, 10)
        pa = get_particle_array(name='fluid', x=x)
        equations = [SummationDensity(dest='fluid', sources=['fluid'])]
        kernel = QuinticSpline(dim=1)
        a_eval = AccelerationEval([pa], equations, kernel=kernel)
        a_helper = AccelerationEvalCythonHelper(a_eval)

        # When/Then
        integrator = PECIntegrator(f=WCSPHStep())
        self.assertRaises(
            RuntimeError,
            IntegratorCythonHelper, integrator, a_helper
        )


if __name__ == '__main__':
    unittest.main()


# ---- pysph/sph/tests/test_integrator_step.py ----

"""Simple tests for the Integrator steps"""

import numpy
import unittest

from pysph.base.utils import get_particle_array as gpa
from pysph.sph.integrator_step import (
    OneStageRigidBodyStep, TwoStageRigidBodyStep
)


class RigidBodyMotionTestCase(unittest.TestCase):
    """Tests for linear motion.

    A particle array is subjected to acceleration along one coordinate
    direction and tested for the final acceleration, position and velocity.
    """
    def setUp(self):
        # create a single particle particle array
        x = numpy.array([0.])
        y = numpy.array([0.])

        additional_props = ['ax', 'ay', 'az', 'u0', 'v0', 'w0',
                            'x0', 'y0', 'z0']

        # create the particle array
        self.pa = pa = gpa(additional_props, name='square', x=x, y=y)

        self._set_stepper()

    def _integrate(self, final_time, dt, epec=False):
        """Integrate"""
        pa = self.pa
        num_part = pa.get_number_of_particles()
        stepper = self.stepper

        current_time = 0.0
        iteration = 0
        while current_time < final_time:
            # initialize
            for i in range(num_part):
                stepper.initialize(
                    i, pa.x, pa.y, pa.z, pa.x0, pa.y0, pa.z0,
                    pa.u, pa.v, pa.w, pa.u0, pa.v0, pa.w0)

            # update accelerations for EPEC
            if epec:
                self._update_accelerations(current_time)

            # stage 1
            for i in range(num_part):
                stepper.stage1(
                    i, pa.x, pa.y, pa.z, pa.x0, pa.y0, pa.z0,
                    pa.u, pa.v, pa.w, pa.u0, pa.v0, pa.w0,
                    pa.ax, pa.ay, pa.az, dt)

            # update time
            current_time = current_time + 0.5 * dt

            # evaluate between stages
            self._update_accelerations(current_time)

            # call stage 2
            for i in range(num_part):
                stepper.stage2(
                    i, pa.x, pa.y, pa.z, pa.x0, pa.y0, pa.z0,
                    pa.u, pa.v, pa.w, pa.u0, pa.v0, pa.w0,
                    pa.ax, pa.ay, pa.az, dt)

            # update time and iteration
            current_time = current_time + 0.5 * dt
            iteration += 1


class ConstantAccelerationTestCase(RigidBodyMotionTestCase):
    """Constant linear acceleration"""
    def _update_accelerations(self, time):
        "Constant accelerations"
        self.pa.ax[0] = 1.0
        self.pa.ay[0] = 1.0
        self.pa.az[0] = 1.0

    def _set_stepper(self):
        # create the integrator stepper class we want to test
        self.stepper = TwoStageRigidBodyStep()

    def _test_motion(self, final_time=1.0, dt=1e-2, epec=True):
        """Test motion for constant acceleration"""
        # we simulate a two-stage integration with constant
        # acceleration ax = 1. Initial velocities are zero so we can
        # compare with the elementary formulae: S = 1/2 * a * t * t etc...
        self._integrate(final_time, dt, epec)

        # get the particle arrays to test
        x, y, z, u, v, w = self.pa.get('x', 'y', 'z', 'u', 'v', 'w')

        # positions S = 0 + 1/2 * at^2 = 0.5
        self.assertAlmostEqual(x[0], 0.5, 14)
        self.assertAlmostEqual(y[0], 0.5, 14)
        self.assertAlmostEqual(z[0], 0.5, 14)

        # velocities v = u0 + a*t = 1.0
        self.assertAlmostEqual(u[0], 1.0, 14)
        self.assertAlmostEqual(v[0], 1.0, 14)
        self.assertAlmostEqual(w[0], 1.0, 14)

    def test_one_stage(self):
        self.stepper = OneStageRigidBodyStep()
        self._test_motion()

    def test_two_stage(self):
        self.stepper = TwoStageRigidBodyStep()
        self._test_motion()


if __name__ == '__main__':
    unittest.main()


# ---- pysph/sph/tests/test_kernel_corrections.py ----

import unittest

import numpy as np

from pysph.base.utils import get_particle_array
from pysph.base.kernels import CubicSpline
from pysph.sph.equation import Equation, Group
from pysph.tools.sph_evaluator import SPHEvaluator
from pysph.sph.basic_equations import SummationDensity
from pysph.sph.wc.kernel_correction import (
    GradientCorrectionPreStep, MixedKernelCorrectionPreStep,
    GradientCorrection, MixedGradientCorrection
)
from pysph.sph.wc.crksph import (
    CRKSPHPreStep, CRKSPH, CRKSPHSymmetric, SummationDensityCRKSPH,
    NumberDensity
)
class GradPhi(Equation):
    def initialize(self, d_idx, d_gradu):
        d_gradu[3*d_idx] = 0.0
        d_gradu[3*d_idx + 1] = 0.0
        d_gradu[3*d_idx + 2] = 0.0

    def loop(self, d_idx, d_gradu, d_u, s_idx, s_m, s_rho, s_u, DWIJ):
        fac = s_m[s_idx]/s_rho[s_idx]*(s_u[s_idx] - d_u[d_idx])
        d_gradu[3*d_idx] += fac*DWIJ[0]
        d_gradu[3*d_idx + 1] += fac*DWIJ[1]
        d_gradu[3*d_idx + 2] += fac*DWIJ[2]


class GradPhiSymm(Equation):
    def initialize(self, d_idx, d_gradu):
        d_gradu[3*d_idx] = 0.0
        d_gradu[3*d_idx + 1] = 0.0
        # Zero the third component as well (the original repeated the
        # "+ 1" index here, leaving component 2 uninitialized).
        d_gradu[3*d_idx + 2] = 0.0

    def loop(self, d_idx, d_rho, d_m, d_gradu, d_u,
             s_idx, s_m, s_rho, s_u, DWIJ):
        fac = s_m[s_idx]/s_rho[s_idx]*(s_u[s_idx] + d_u[d_idx])/d_rho[d_idx]
        d_gradu[3*d_idx] += fac*DWIJ[0]
        d_gradu[3*d_idx + 1] += fac*DWIJ[1]
        d_gradu[3*d_idx + 2] += fac*DWIJ[2]


class VerifyCRKSPH(Equation):
    def initialize(self, d_idx, d_zero_mom, d_first_mom):
        d_zero_mom[d_idx] = 0.0
        d_first_mom[3*d_idx] = 0.0
        d_first_mom[3*d_idx + 1] = 0.0
        d_first_mom[3*d_idx + 2] = 0.0

    def loop(self, d_idx, d_zero_mom, d_first_mom, d_cwij,
             s_idx, s_m, s_rho, WIJ, XIJ):
        vjwijp = s_m[s_idx]/s_rho[s_idx]*WIJ*d_cwij[d_idx]
        d_zero_mom[d_idx] += vjwijp
        d_first_mom[3*d_idx] += vjwijp * XIJ[0]
        d_first_mom[3*d_idx + 1] += vjwijp * XIJ[1]
        d_first_mom[3*d_idx + 2] += vjwijp * XIJ[2]


class TestKernelCorrection2D(unittest.TestCase):
    def setUp(self):
        self.dim = 2
        x, y = np.mgrid[0.5:1:2j, 0.5:1:2j]
        u = x + y
        pa = get_particle_array(
            name='fluid', x=x, y=y, h=0.5, m=1.0, u=u, V=1.0
        )
        pa.add_property('gradu', stride=3)
        pa.add_property('cwij')
        pa.add_property('dw_gamma', stride=3)
        pa.add_property('m_mat', stride=9)
        # for crksph
        pa.add_property('ai')
        pa.add_property('gradai', stride=3)
        pa.add_property('bi', stride=3)
        pa.add_property('gradbi', stride=9)
        pa.cwij[:] = 1.0
        result = np.ones((4, 3))
        result[:, 2] = 0.0
        self.expect = result.ravel()
        self.pa = pa

    def _perturb_particles(self):
        pa = self.pa
        pa.x[:] = pa.x + [0.1, 0.05, -0.1, -0.05]
        pa.y[:] = pa.y + [0.1, 0.05, -0.1, -0.05]
        pa.u[:] = pa.x + pa.y

    def _make_accel_eval(self, equations):
        kernel = CubicSpline(dim=self.dim)
        seval = SPHEvaluator(
            arrays=[self.pa], equations=equations, dim=self.dim,
            kernel=kernel
        )
        return seval

    def test_gradient_correction(self):
        # Given
        pa = self.pa
        dest = 'fluid'
        sources = ['fluid']
        eqs = [
            Group(equations=[
                SummationDensity(dest=dest, sources=sources),
            ]),
            Group(equations=[
                GradientCorrectionPreStep(dest=dest, sources=sources,
                                          dim=self.dim)
            ]),
            Group(equations=[
                GradientCorrection(dest=dest, sources=sources,
                                   dim=self.dim, tol=100.0),
                GradPhi(dest=dest, sources=sources)
            ])
        ]
        a_eval = self._make_accel_eval(eqs)

        # When
        a_eval.evaluate(0.0, 0.1)

        # Then
        np.testing.assert_array_almost_equal(pa.gradu, self.expect)

    def test_gradient_correction_perturbed(self):
        # Given
        self._perturb_particles()
        self.test_gradient_correction()

    def test_mixed_gradient_correction(self):
        # Given
        pa = self.pa
        dest = 'fluid'
        sources = ['fluid']
        eqs = [
            Group(equations=[
                SummationDensity(dest=dest, sources=sources),
            ]),
            Group(equations=[
                MixedKernelCorrectionPreStep(
                    dest=dest, sources=sources, dim=self.dim
                )
            ]),
            Group(equations=[
                MixedGradientCorrection(dest=dest, sources=sources,
                                        dim=self.dim, tol=100.0),
                GradPhi(dest=dest, sources=sources)
            ])
        ]
        a_eval = self._make_accel_eval(eqs)

        # When
        a_eval.evaluate(0.0, 0.1)

        # Then
        np.testing.assert_array_almost_equal(pa.gradu, self.expect)

    def test_mixed_gradient_correction_perturbed(self):
        # Given
        self._perturb_particles()
        self.test_mixed_gradient_correction()

    def test_crksph(self):
        # Given
        pa = self.pa
        dest = 'fluid'
        sources = ['fluid']
        pa.add_property('zero_mom')
        pa.add_property('V')
        pa.add_property('first_mom', stride=3)
        pa.rho[:] = 1.0
        eqs = [
            Group(equations=[
                NumberDensity(dest=dest, sources=sources),
            ]),
            Group(equations=[
                SummationDensity(dest=dest, sources=sources),
            ]),
            Group(equations=[
                CRKSPHPreStep(dest=dest, sources=sources, dim=self.dim)
            ]),
            Group(equations=[
                CRKSPH(dest=dest, sources=sources, dim=self.dim,
                       tol=1000.0),
                GradPhi(dest=dest, sources=sources),
                VerifyCRKSPH(dest=dest, sources=sources)
            ])
        ]
        a_eval = self._make_accel_eval(eqs)

        # When
        a_eval.evaluate(0.0, 0.1)

        # Then
        np.testing.assert_array_almost_equal(pa.zero_mom, 1.0)
        np.testing.assert_array_almost_equal(pa.first_mom, 0.0)
        np.testing.assert_array_almost_equal(pa.gradu, self.expect)

    def test_crksph_perturbed(self):
        # Given
        self._perturb_particles()
        self.test_crksph()

    def test_crksph_symmetric(self):
        # Given
        pa = self.pa
        dest = 'fluid'
        sources = ['fluid']
        pa.add_property('zero_mom')
        pa.add_property('V')
        pa.add_property('rhofac')
        pa.add_property('first_mom', stride=3)
        pa.rho[:] = 1.0
        eqs = [
            Group(equations=[
                NumberDensity(dest=dest, sources=sources),
            ]),
            Group(equations=[
                SummationDensity(dest=dest, sources=sources),
            ]),
            Group(equations=[
                CRKSPHPreStep(dest=dest, sources=sources, dim=self.dim)
            ]),
            Group(equations=[
                CRKSPHSymmetric(dest=dest, sources=sources,
                                dim=self.dim, tol=1000.0),
                GradPhiSymm(dest=dest, sources=sources),
                VerifyCRKSPH(dest=dest, sources=sources)
            ])
        ]
        a_eval = self._make_accel_eval(eqs)

        # When
        a_eval.evaluate(0.0, 0.1)

        # Then
        np.testing.assert_array_almost_equal(pa.zero_mom, 1.0)
        np.testing.assert_array_almost_equal(pa.first_mom, 0.0)
        # Here all we can test is that the total acceleration is zero.
        self.assertAlmostEqual(np.sum(pa.gradu[::3]), 0.0)
        self.assertAlmostEqual(np.sum(pa.gradu[1::3]), 0.0)

    def test_crksph_symmetric_perturbed(self):
        # Given
        self._perturb_particles()
        self.test_crksph_symmetric()


class TestKernelCorrection3D(TestKernelCorrection2D):
    def setUp(self):
        self.dim = 3
        x, y, z = np.mgrid[0.5:1:2j, 0.5:1:2j, 0.5:1:2j]
        u = x + y + z
        pa = get_particle_array(
            name='fluid', x=x, y=y, z=z, h=0.5, m=1.0, u=u
        )
        pa.add_property('gradu', stride=3)
        pa.add_property('cwij')
        pa.add_property('dw_gamma', stride=3)
        pa.add_property('m_mat', stride=9)
        # for crksph
        pa.add_property('ai')
        pa.add_property('gradai', stride=3)
        pa.add_property('bi', stride=3)
        pa.add_property('gradbi', stride=9)
        pa.cwij[:] = 1.0
        result = np.ones((8, 3))
        self.expect = result.ravel()
        self.pa = pa

    def _perturb_particles(self):
        pa = self.pa
        np.random.seed(123)
        dx, dy, dz = (np.random.random((3, 8)) - 0.5)*0.1
        pa.x[:] = pa.x + dx
        pa.y[:] = pa.y + dy
        pa.z[:] = pa.z + dz
        pa.u[:] = pa.x + pa.y + pa.z

pysph-master/pysph/sph/tests/test_linalg.py

from pysph.sph.wc.linalg import (
    augmented_matrix, gj_solve, mat_mult, mat_vec_mult
)
import numpy as np
import unittest


def gj_solve_helper(a, b, n):
    m = np.zeros((n, n+1)).ravel().tolist()
    augmented_matrix(a, b, n, 1, n, m)
    result = [0.0]*n
    is_singular = gj_solve(m, n, 1, result)
    return is_singular, result


class TestLinalg(unittest.TestCase):
    def _to_array(self, x, shape=None):
        x = np.asarray(x)
        if shape:
            x.shape = shape
        return x

    def test_augmented_matrix(self):
        # Given
        a = np.random.random((3, 3))
        b = np.random.random((3, 2))
        res = np.zeros((3, 5)).ravel().tolist()
        expect = np.zeros((3, 5))
        expect[:, :3] = a
        expect[:, 3:] = b
        # When
        augmented_matrix(a.ravel(), b.ravel(), 3, 2, 3, res)
        res = self._to_array(res, (3, 5))
        # Then
        np.testing.assert_array_almost_equal(res, expect)

    def test_augmented_matrix_with_lower_dimension(self):
        # Given
        a =
np.random.random((3, 3)) b = np.random.random((3, 2)) res = np.zeros((3, 5)).ravel().tolist() expect = np.zeros((2, 4)) expect[:, :2] = a[:2, :2] expect[:, 2:] = b[:2, :] expect.resize(3, 5) # When augmented_matrix(a.ravel(), b.ravel(), 2, 2, 3, res) res = self._to_array(res, (3, 5)) # Then np.testing.assert_array_almost_equal(res, expect) def test_augmented_matrix_with_gjsolve_with_lower_dimension(self): # Given nmax = 3 mat = np.array([[7., 4., 2.], [8., 9., 4.], [1., 4., 10.]]) b = np.array([5., 4., 2.]) expect = np.linalg.solve(mat[:2, :2], b[:2]) augmat = np.zeros((3, 4)).ravel().tolist() res = np.zeros(2).ravel().tolist() # When augmented_matrix(mat.ravel(), b.ravel(), 2, 1, nmax, augmat) gj_solve(augmat, 2, 1, res) # Then np.testing.assert_array_almost_equal(res, expect) def test_general_matrix(self): # Test Gauss Jordan solve. """ This is a general matrix which needs partial pivoting to be solved. References ---------- http://web.mit.edu/10.001/Web/Course_Notes/GaussElimPivoting.html """ n = 4 mat = [[0.02, 0.01, 0., 0.], [1., 2., 1., 0.], [0., 1., 2., 1.], [0., 0., 100., 200.]] b = [0.02, 1., 4., 800.] 
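For readers unfamiliar with the routine under test, here is a sketch of the algorithm that `gj_solve` implements, written against NumPy arrays for readability. The PySPH routine itself works on flat lists so it can be transpiled to Cython/OpenCL; the `gauss_jordan` helper below is illustrative only, not part of PySPH:

```python
import numpy as np

def gauss_jordan(aug, n, nb):
    # Gauss-Jordan elimination with partial pivoting on an n x (n + nb)
    # augmented matrix.  Returns the n x nb solution block, or None if
    # the matrix is singular.
    a = np.array(aug, dtype=float)
    for col in range(n):
        # Partial pivoting: move the largest remaining entry in this
        # column onto the diagonal.
        pivot = col + np.argmax(np.abs(a[col:, col]))
        if abs(a[pivot, col]) < 1e-14:
            return None
        a[[col, pivot]] = a[[pivot, col]]
        a[col] /= a[col, col]
        # Eliminate this column from every other row.
        for row in range(n):
            if row != col:
                a[row] -= a[row, col] * a[col]
    return a[:, n:]

# The matrix from the test above; naive elimination on the tiny 0.02
# pivot would lose accuracy, which is why pivoting matters here.
mat = [[0.02, 0.01, 0., 0.],
       [1., 2., 1., 0.],
       [0., 1., 2., 1.],
       [0., 0., 100., 200.]]
b = [0.02, 1., 4., 800.]
aug = np.hstack([np.array(mat), np.array(b)[:, None]])
x = gauss_jordan(aug, 4, 1).ravel()
# The exact solution of this system is x = [1, 0, 0, 4].
```

On this matrix the 0.02 pivot is swapped away in the first step, which is exactly the behaviour the test exercises.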
sing, result = gj_solve_helper(np.ravel(mat), b, n) mat = np.matrix(mat) new_b = mat * np.transpose(np.matrix(result)) new_b = np.ravel(np.array(new_b)) assert np.allclose(new_b, np.array(b)) self.assertAlmostEqual(sing, 0.0) def test_band_matrix(self): n = 3 mat = [[1., -2., 0.], [1., -1., 3.], [2., 5., 0.]] b = [-3., 1., 0.5] sing, result = gj_solve_helper(np.ravel(mat), b, n) mat = np.matrix(mat) new_b = mat * np.transpose(np.matrix(result)) new_b = np.ravel(np.array(new_b)) assert np.allclose(new_b, np.array(b)) self.assertAlmostEqual(sing, 0.0) def test_dense_matrix(self): n = 3 mat = [[0.96, 4.6, -3.7], [2.7, 4.3, -0.67], [0.9, 0., -5.]] b = [2.4, 3.6, -5.8] sing, result = gj_solve_helper(np.ravel(mat), b, n) mat = np.matrix(mat) new_b = mat * np.transpose(np.matrix(result)) new_b = np.ravel(np.array(new_b)) assert np.allclose(new_b, np.array(b)) self.assertAlmostEqual(sing, 0.0) def test_tridiagonal_matrix(self): n = 4 mat = [[-2., 1., 0., 0.], [1., -2., 1., 0.], [0., 1., -2., 0.], [0., 0., 1., -2.]] b = [-1., 0., 0., -5.] 
sing, result = gj_solve_helper(np.ravel(mat), b, n) mat = np.matrix(mat) new_b = mat * np.transpose(np.matrix(result)) new_b = np.ravel(np.array(new_b)) assert np.allclose(new_b, np.array(b)) self.assertAlmostEqual(sing, 0.0) def test_symmetric_matrix(self): n = 3 mat = [[0.96, 4.6, -3.7], [4.6, 4.3, -0.67], [-3.7, -0.67, -5.]] b = [2.4, 3.6, -5.8] sing, result = gj_solve_helper(np.ravel(mat), b, n) mat = np.matrix(mat) new_b = mat * np.transpose(np.matrix(result)) new_b = np.ravel(np.array(new_b)) assert np.allclose(new_b, np.array(b)) self.assertAlmostEqual(sing, 0.0) def test_symmetric_positivedefinite_Matrix(self): n = 4 mat = [[1., 1., 4., -1.], [1., 5., 0., -1.], [4., 0., 21., -4.], [-1., -1., -4., 10.]] b = [2.4, 3.6, -5.8, 0.5] sing, result = gj_solve_helper(np.ravel(mat), b, n) mat = np.matrix(mat) new_b = mat * np.transpose(np.matrix(result)) new_b = np.ravel(np.array(new_b)) assert np.allclose(new_b, np.array(b)) self.assertAlmostEqual(sing, 0.0) def test_inverse(self): # Given n = 3 mat = [[1.0, 2.0, 2.5], [2.5, 1.0, 0.0], [0.0, 0.0, 1.0]] b = np.identity(3).ravel().tolist() A = np.zeros((3, 6)).ravel().tolist() augmented_matrix(np.ravel(mat), b, 3, 3, 3, A) result = np.zeros((3, 3)).ravel().tolist() # When sing = gj_solve(A, n, n, result) # Then mat = np.asarray(mat) res = np.asarray(result) res.shape = 3, 3 np.testing.assert_allclose(res, np.linalg.inv(mat)) self.assertAlmostEqual(sing, 0.0) def test_matmult(self): # Given n = 3 a = np.random.random((3, 3)) b = np.random.random((3, 3)) result = [0.0]*9 # When mat_mult(a.ravel(), b.ravel(), n, result) # Then. expect = np.dot(a, b) result = np.asarray(result) result.shape = 3, 3 np.testing.assert_allclose(result, expect) def test_mat_vec_mult(self): # Given n = 3 a = np.random.random((3, 3)) b = np.random.random((3,)) result = [0.0]*3 # When mat_vec_mult(a.ravel(), b, n, result) # Then. 
expect = np.dot(a, b) result = np.asarray(result) np.testing.assert_allclose(result, expect) def test_singular_matrix(self): # Given n = 3 mat = [[1., 1., 0.], [1., 1., 0.], [1., 1., 1.]] b = [1.0, 1.0, 1.0] # sing, result = gj_solve_helper(np.ravel(mat), b, n) self.assertAlmostEqual(sing, 1.0) if __name__ == '__main__': unittest.main() pysph-master/pysph/sph/tests/test_multi_group_integrator.py000066400000000000000000000060741356347341600250510ustar00rootroot00000000000000'''Tests for integrator having different acceleration evaluators for different stages. ''' import unittest import pytest import numpy as np from pysph.base.utils import get_particle_array from pysph.sph.equation import Equation, MultiStageEquations from pysph.sph.acceleration_eval import make_acceleration_evals from pysph.base.kernels import CubicSpline from pysph.base.nnps import LinkedListNNPS as NNPS from pysph.sph.sph_compiler import SPHCompiler from pysph.sph.integrator import Integrator from pysph.sph.integrator_step import IntegratorStep class Eq1(Equation): def initialize(self, d_idx, d_au): d_au[d_idx] = 1.0 class Eq2(Equation): def initialize(self, d_idx, d_au): d_au[d_idx] += 1.0 class MyStepper(IntegratorStep): def stage1(self, d_idx, d_u, d_au, dt): d_u[d_idx] += d_au[d_idx]*dt def stage2(self, d_idx, d_u, d_au, dt): d_u[d_idx] += d_au[d_idx]*dt class MyIntegrator(Integrator): def one_timestep(self, t, dt): self.compute_accelerations(0, update_nnps=False) self.stage1() self.do_post_stage(dt, 1) self.compute_accelerations(1, update_nnps=False) self.stage2() self.update_domain() self.do_post_stage(dt, 2) class TestMultiGroupIntegrator(unittest.TestCase): def setUp(self): self.dim = 1 n = 10 dx = 1.0/(n-1) x = np.linspace(0, 1, n) m = np.ones_like(x) h = np.ones_like(x)*dx*1.05 pa = get_particle_array(name='fluid', x=x, h=h, m=m, au=0.0, u=0.0) self.pa = pa self.NNPS_cls = NNPS self.backend = 'cython' def _make_integrator(self): arrays = [self.pa] kernel = CubicSpline(dim=self.dim) eqs 
= [ [Eq1(dest='fluid', sources=['fluid'])], [Eq2(dest='fluid', sources=['fluid'])] ] meqs = MultiStageEquations(eqs) a_evals = make_acceleration_evals( arrays, meqs, kernel, backend=self.backend ) integrator = MyIntegrator(fluid=MyStepper()) comp = SPHCompiler(a_evals, integrator=integrator) comp.compile() nnps = self.NNPS_cls(dim=kernel.dim, particles=arrays) nnps.update() for ae in a_evals: ae.set_nnps(nnps) integrator.set_nnps(nnps) return integrator def test_different_accels_per_integrator(self): # Given pa = self.pa integrator = self._make_integrator() # When integrator.step(0.0, 0.1) # Then if pa.gpu: pa.gpu.pull('u', 'au') one = np.ones_like(pa.x) np.testing.assert_array_almost_equal(pa.au, 2.0*one) np.testing.assert_array_almost_equal(pa.u, 0.3*one) class TestMultiGroupIntegratorGPU(TestMultiGroupIntegrator): def setUp(self): pytest.importorskip('pysph.base.gpu_nnps') super(TestMultiGroupIntegratorGPU, self).setUp() from pysph.base.gpu_nnps import ZOrderGPUNNPS self.NNPS_cls = ZOrderGPUNNPS self.backend = 'opencl' if __name__ == '__main__': unittest.main() pysph-master/pysph/sph/tests/test_riemann_solver.py000066400000000000000000000047021356347341600232640ustar00rootroot00000000000000import pytest from pytest import approx import pysph.sph.gas_dynamics.riemann_solver as R solvers = [ R.ducowicz, R.exact, R.hll_ball, R.hllc, R.hllc_ball, R.hlle, R.hllsy, R.llxf, R.roe, R.van_leer ] def _check_shock_tube(solver, **approx_kw): # Given # Shock tube gamma = 1.4 rhol, pl, ul = 1.0, 1.0, 0.0 rhor, pr, ur = 0.125, 0.1, 0.0 result = [0.0, 0.0] # When solver(rhol, rhor, pl, pr, ul, ur, gamma, niter=20, tol=1e-6, result=result) # Then assert result == approx((0.30313, 0.92745), **approx_kw) def _check_blastwave(solver, **approx_kw): # Given gamma = 1.4 result = [0.0, 0.0] rhol, pl, ul = 1.0, 1000.0, 0.0 rhor, pr, ur = 1.0, 0.01, 0.0 # When solver(rhol, rhor, pl, pr, ul, ur, gamma, niter=20, tol=1e-6, result=result) # Then assert result == approx((460.894, 19.5975), 
**approx_kw) def _check_sjogreen(solver, **approx_kw): # Given gamma = 1.4 result = [0.0, 0.0] rhol, pl, ul = 1.0, 0.4, -2.0 rhor, pr, ur = 1.0, 0.4, 2.0 # When solver(rhol, rhor, pl, pr, ul, ur, gamma, niter=20, tol=1e-6, result=result) # Then assert result == approx((0.0018938, 0.0), **approx_kw) def _check_woodward_collela(solver, **approx_kw): # Given gamma = 1.4 result = [0.0, 0.0] rhol, pl, ul = 1.0, 0.01, 0.0 rhor, pr, ur = 1.0, 100.0, 0.0 # When solver(rhol, rhor, pl, pr, ul, ur, gamma, niter=20, tol=1e-6, result=result) # Then assert result == approx((46.0950, -6.19633), **approx_kw) def test_exact_riemann(): solver = R.exact _check_shock_tube(solver, rel=1e-4) _check_blastwave(solver, rel=1e-3) _check_sjogreen(solver, abs=1e-4) _check_woodward_collela(solver, rel=1e-4) def test_van_leer(): solver = R.van_leer _check_shock_tube(solver, rel=1e-3) _check_blastwave(solver, rel=1e-2) _check_sjogreen(solver, abs=1e-2) _check_woodward_collela(solver, rel=1e-2) def test_ducowicz(): solver = R.ducowicz _check_shock_tube(solver, rel=0.2) _check_blastwave(solver, rel=0.4) _check_sjogreen(solver, abs=1e-2) _check_woodward_collela(solver, rel=0.4) # Most other solvers seem rather poor in comparison. 
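For context, the reference pairs used above, e.g. `(0.30313, 0.92745)` for the shock tube, are the star-region pressure and velocity of the exact Riemann solution for these classic test states. A self-contained sketch of that Newton iteration, independent of the PySPH solvers (the `sod_star_state` name and structure are illustrative, not PySPH API):

```python
from math import sqrt

def sod_star_state(rhol, pl, ul, rhor, pr, ur, gamma=1.4,
                   niter=20, tol=1e-6):
    # Return (p*, u*) for the ideal-gas Riemann problem.
    def f_and_df(p, rho_k, p_k):
        a_k = sqrt(gamma * p_k / rho_k)
        if p > p_k:  # shock branch
            A = 2.0 / ((gamma + 1.0) * rho_k)
            B = (gamma - 1.0) / (gamma + 1.0) * p_k
            f = (p - p_k) * sqrt(A / (p + B))
            df = sqrt(A / (p + B)) * (1.0 - 0.5 * (p - p_k) / (p + B))
        else:        # rarefaction branch
            f = 2.0 * a_k / (gamma - 1.0) * (
                (p / p_k)**((gamma - 1.0) / (2.0 * gamma)) - 1.0)
            df = (p / p_k)**(-(gamma + 1.0) / (2.0 * gamma)) / (rho_k * a_k)
        return f, df

    p = 0.5 * (pl + pr)  # simple initial guess
    for _ in range(niter):
        fl, dfl = f_and_df(p, rhol, pl)
        fr, dfr = f_and_df(p, rhor, pr)
        dp = (fl + fr + ur - ul) / (dfl + dfr)
        p = max(p - dp, tol)  # keep the pressure iterate positive
        if abs(dp) < tol * p:
            break
    fl, _ = f_and_df(p, rhol, pl)
    fr, _ = f_and_df(p, rhor, pr)
    u = 0.5 * (ul + ur) + 0.5 * (fr - fl)
    return p, u

# Sod shock tube states, matching _check_shock_tube above.
pstar, ustar = sod_star_state(1.0, 1.0, 0.0, 0.125, 0.1, 0.0)
```

Feeding in the blastwave or Woodward-Colella states should likewise reproduce the other reference pairs used by the checks.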
@pytest.mark.parametrize("solver", solvers) def test_all_solver_api(solver): if solver.__name__ in ['roe', 'hllc']: rel = 2.0 else: rel = 1.0 _check_shock_tube(solver, rel=rel) pysph-master/pysph/sph/tests/test_scheme.py000066400000000000000000000016441356347341600215070ustar00rootroot00000000000000from argparse import ArgumentParser from pysph.sph.scheme import SchemeChooser, WCSPHScheme from pysph.sph.wc.edac import EDACScheme def test_scheme_chooser_does_not_clobber_default(): # When wcsph = WCSPHScheme( ['f'], ['b'], dim=2, rho0=1.0, c0=10.0, h0=0.1, hdx=1.3, alpha=0.2, beta=0.1, ) edac = EDACScheme( fluids=['f'], solids=['b'], dim=2, c0=10.0, nu=0.001, rho0=1.0, h=0.1, alpha=0.0, pb=0.0 ) s = SchemeChooser(default='wcsph', wcsph=wcsph, edac=edac) p = ArgumentParser(conflict_handler="resolve") s.add_user_options(p) opts = p.parse_args([]) # When s.consume_user_options(opts) # Then assert s.scheme.alpha == 0.2 assert s.scheme.beta == 0.1 # When opts = p.parse_args(['--alpha', '0.3', '--beta', '0.4']) s.consume_user_options(opts) # Then assert s.scheme.alpha == 0.3 assert s.scheme.beta == 0.4 pysph-master/pysph/sph/wc/000077500000000000000000000000001356347341600160745ustar00rootroot00000000000000pysph-master/pysph/sph/wc/__init__.py000066400000000000000000000000001356347341600201730ustar00rootroot00000000000000pysph-master/pysph/sph/wc/basic.py000066400000000000000000000356141356347341600175400ustar00rootroot00000000000000""" Basic WCSPH Equations ##################### """ from pysph.sph.equation import Equation class TaitEOS(Equation): r"""**Tait equation of state for water-like fluids** :math:`p_a = \frac{c_{0}^2\rho_0}{\gamma}\left( \left(\frac{\rho_a}{\rho_0}\right)^{\gamma} -1\right)` References ---------- .. [Cole1948] H. R. Cole, "Underwater Explosions", Princeton University Press, 1948. .. [Batchelor2002] G. Batchelor, "An Introduction to Fluid Dynamics", Cambridge University Press, 2002. .. [Monaghan2005] J. 
    Monaghan, "Smoothed particle hydrodynamics", Reports on Progress in
    Physics, 68 (2005), pp. 1703-1759.

    """
    def __init__(self, dest, sources, rho0, c0, gamma, p0=0.0):
        r"""
        Parameters
        ----------
        rho0 : float
            reference density of fluid particles
        c0 : float
            maximum speed of sound expected in the system
        gamma : float
            constant
        p0 : float
            reference pressure in the system (defaults to zero).

        Notes
        -----
        The reference speed of sound, c0, is to be taken approximately as
        10 times the maximum expected velocity in the system. The particle
        sound speed is given by the usual expression:

        :math:`c_a = \sqrt{\frac{\partial p}{\partial \rho}}`
        """
        self.rho0 = rho0
        self.rho01 = 1.0/rho0
        self.c0 = c0
        self.gamma = gamma
        self.gamma1 = 0.5*(gamma - 1.0)
        self.B = rho0*c0*c0/gamma
        self.p0 = p0
        super(TaitEOS, self).__init__(dest, sources)

    def loop(self, d_idx, d_rho, d_p, d_cs):
        ratio = d_rho[d_idx] * self.rho01
        tmp = pow(ratio, self.gamma)

        d_p[d_idx] = self.p0 + self.B * (tmp - 1.0)
        d_cs[d_idx] = self.c0 * pow(ratio, self.gamma1)


class TaitEOSHGCorrection(Equation):
    r"""**Tait Equation of State with Hughes and Graham Correction**

    .. math::
        p_a = \frac{c_{0}^2\rho_0}{\gamma}\left(
        \left(\frac{\rho_a}{\rho_0}\right)^{\gamma} -1\right)

    where

    .. math::
        \rho_{a}=\begin{cases}\rho_{a} & \rho_{a}\geq\rho_{0}\\
        \rho_{0} & \rho_{a}<\rho_{0}\end{cases}

    References
    ----------
    .. [Hughes2010] J. P. Hughes and D. I. Graham, "Comparison of
        incompressible and weakly-compressible SPH models for free-surface
        water flows", Journal of Hydraulic Research, 48 (2010),
        pp. 105-117.
    """
    def __init__(self, dest, sources, rho0, c0, gamma):
        r"""
        Parameters
        ----------
        rho0 : float
            reference density
        c0 : float
            reference speed of sound
        gamma : float
            constant

        Notes
        -----
        The correction is to be applied on boundary particles and imposes
        a minimum value of the density (rho0) which is set upon
        instantiation. This correction avoids particle sticking behaviour
        at walls.
""" self.rho0 = rho0 self.rho01 = 1.0/rho0 self.c0 = c0 self.gamma = gamma self.gamma1 = 0.5*(gamma - 1.0) self.B = rho0*c0*c0/gamma super(TaitEOSHGCorrection, self).__init__(dest, sources) def loop(self, d_idx, d_rho, d_p, d_cs): if d_rho[d_idx] < self.rho0: d_rho[d_idx] = self.rho0 ratio = d_rho[d_idx] * self.rho01 tmp = pow(ratio, self.gamma) d_p[d_idx] = self.B * (tmp - 1.0) d_cs[d_idx] = self.c0 * pow(ratio, self.gamma1) class MomentumEquation(Equation): r"""**Classic Monaghan Style Momentum Equation with Artificial Viscosity** .. math:: \frac{d\mathbf{v}_{a}}{dt}=-\sum_{b}m_{b}\left(\frac{p_{b}} {\rho_{b}^{2}}+\frac{p_{a}}{\rho_{a}^{2}}+\Pi_{ab}\right) \nabla_{a}W_{ab} where .. math:: \Pi_{ab}=\begin{cases} \frac{-\alpha\bar{c}_{ab}\mu_{ab}+\beta\mu_{ab}^{2}}{\bar{\rho}_{ab}} & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}<0;\\ 0 & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}\geq0; \end{cases} with .. math:: \mu_{ab}=\frac{h\mathbf{v}_{ab}\cdot\mathbf{r}_{ab}} {\mathbf{r}_{ab}^{2}+\eta^{2}}\\ \bar{c}_{ab} = \frac{c_a + c_b}{2}\\ \bar{\rho}_{ab} = \frac{\rho_a + \rho_b}{2} References ---------- .. [Monaghan1992] J. Monaghan, Smoothed Particle Hydrodynamics, "Annual Review of Astronomy and Astrophysics", 30 (1992), pp. 543-574. 
    """
    def __init__(self, dest, sources, c0, alpha=1.0, beta=1.0,
                 gx=0.0, gy=0.0, gz=0.0, tensile_correction=False):
        r"""
        Parameters
        ----------
        c0 : float
            reference speed of sound
        alpha : float
            produces a shear and bulk viscosity
        beta : float
            used to handle high Mach number shocks
        gx : float
            body force per unit mass along the x-axis
        gy : float
            body force per unit mass along the y-axis
        gz : float
            body force per unit mass along the z-axis
        tensile_correction : bool
            switch for tensile instability correction (Default: False)
        """
        self.alpha = alpha
        self.beta = beta
        self.gx = gx
        self.gy = gy
        self.gz = gz
        self.c0 = c0
        self.tensile_correction = tensile_correction
        super(MomentumEquation, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw, d_dt_cfl):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0
        d_dt_cfl[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_rho, d_cs, d_p, d_au, d_av, d_aw,
             s_m, s_rho, s_cs, s_p, VIJ, XIJ, HIJ, R2IJ, RHOIJ1, EPS,
             DWIJ, WIJ, WDP, d_dt_cfl):
        rhoi21 = 1.0/(d_rho[d_idx]*d_rho[d_idx])
        rhoj21 = 1.0/(s_rho[s_idx]*s_rho[s_idx])

        vijdotxij = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2]

        piij = 0.0
        if vijdotxij < 0:
            cij = 0.5 * (d_cs[d_idx] + s_cs[s_idx])
            muij = (HIJ * vijdotxij)/(R2IJ + EPS)
            piij = -self.alpha*cij*muij + self.beta*muij*muij
            piij = piij*RHOIJ1

        # compute the CFL time step factor
        _dt_cfl = 0.0
        if R2IJ > 1e-12:
            _dt_cfl = abs(HIJ * vijdotxij/R2IJ) + self.c0
            d_dt_cfl[d_idx] = max(_dt_cfl, d_dt_cfl[d_idx])

        tmpi = d_p[d_idx]*rhoi21
        tmpj = s_p[s_idx]*rhoj21

        fij = WIJ/WDP
        Ri = 0.0
        Rj = 0.0

        # tensile instability correction
        if self.tensile_correction:
            fij = fij*fij
            fij = fij*fij

            if d_p[d_idx] > 0:
                Ri = 0.01 * tmpi
            else:
                Ri = 0.2*abs(tmpi)

            if s_p[s_idx] > 0:
                Rj = 0.01 * tmpj
            else:
                Rj = 0.2 * abs(tmpj)

        # gradient and correction terms
        tmp = (tmpi + tmpj) + (Ri + Rj)*fij

        d_au[d_idx] += -s_m[s_idx] * (tmp + piij) * DWIJ[0]
        d_av[d_idx] += -s_m[s_idx] * (tmp + piij) * DWIJ[1]
        d_aw[d_idx] += -s_m[s_idx] * (tmp + piij) * DWIJ[2]

    def
post_loop(self, d_idx, d_au, d_av, d_aw, d_dt_force): d_au[d_idx] += self.gx d_av[d_idx] += self.gy d_aw[d_idx] += self.gz acc2 = (d_au[d_idx]*d_au[d_idx] + d_av[d_idx]*d_av[d_idx] + d_aw[d_idx]*d_aw[d_idx]) # store the square of the max acceleration d_dt_force[d_idx] = acc2 class MomentumEquationDeltaSPH(Equation): r"""**Momentum equation defined in JOSEPHINE and the delta-SPH model** .. math:: \frac{du_{i}}{dt}=-\frac{1}{\rho_{i}}\sum_{j}\left(p_{j}+p_{i}\right) \nabla_{i}W_{ij}V_{j}+\mathbf{g}_{i}+\alpha hc_{0}\rho_{0}\sum_{j} \pi_{ij}\nabla_{i}W_{ij}V_{j} where .. math:: \pi_{ij}=\frac{\mathbf{u}_{ij}\cdot\mathbf{r}_{ij}} {|\mathbf{r}_{ij}|^{2}} References ---------- .. [Marrone2011] S. Marrone et al., "delta-SPH model for simulating violent impact flows", Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp 1526--1542. .. [Cherfils2012] J. M. Cherfils et al., "JOSEPHINE: A parallel SPH code for free-surface flows", Computer Physics Communications, 183 (2012), pp 1468--1480. """ def __init__(self, dest, sources, rho0, c0, alpha=1.0): r""" Parameters ---------- rho0 : float reference density c0 : float reference speed of sound alpha : float coefficient used to control the intensity of the diffusion of velocity Notes ----- Artificial viscosity is used in this momentum equation and is controlled by the parameter :math:`\alpha`. This form of the artificial viscosity is similar but not identical to the Monaghan-style artificial viscosity. 
        """
        self.alpha = alpha
        self.c0 = c0
        self.rho0 = rho0
        super(MomentumEquationDeltaSPH, self).__init__(dest, sources)

    def loop(self, d_idx, s_idx, d_rho, d_cs, d_p, d_au, d_av, d_aw,
             s_m, s_rho, s_cs, s_p, VIJ, XIJ, HIJ, R2IJ, RHOIJ1, EPS,
             WIJ, DWIJ):
        # source particle volume mj/rhoj
        Vj = s_m[s_idx]/s_rho[s_idx]

        # viscous contribution, second part of eqn (5b) in [Marrone2011]
        vijdotxij = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2]
        fac = self.alpha * HIJ * self.c0 * self.rho0
        piij = vijdotxij/(R2IJ + EPS)

        # gradient and viscous terms, eqn (5b) in [Marrone2011]
        tmp = fac * piij * Vj/d_rho[d_idx]

        # accelerations
        d_au[d_idx] += tmp * DWIJ[0]
        d_av[d_idx] += tmp * DWIJ[1]
        d_aw[d_idx] += tmp * DWIJ[2]


class ContinuityEquationDeltaSPHPreStep(Equation):
    r"""**Continuity equation with dissipative terms**

    See :class:`pysph.sph.wc.basic.ContinuityEquationDeltaSPH`

    The matrix :math:`L_a` is multiplied to :math:`\nabla W_{ij}` in the
    :class:`pysph.sph.scheme.WCSPHScheme` class by using
    :class:`pysph.sph.wc.kernel_correction.GradientCorrectionPreStep` and
    :class:`pysph.sph.wc.kernel_correction.GradientCorrection`.
    """
    def initialize(self, d_idx, d_gradrho):
        d_gradrho[d_idx*3 + 0] = 0.0
        d_gradrho[d_idx*3 + 1] = 0.0
        d_gradrho[d_idx*3 + 2] = 0.0

    def loop(self, d_idx, d_arho, s_idx, d_rho, s_rho, s_m,
             d_gradrho, DWIJ):
        rhoi = d_rho[d_idx]
        rhoj = s_rho[s_idx]
        Vj = s_m[s_idx]/rhoj

        # renormalized density of eqn (5a)
        d_gradrho[d_idx*3 + 0] += (rhoj - rhoi) * DWIJ[0] * Vj
        d_gradrho[d_idx*3 + 1] += (rhoj - rhoi) * DWIJ[1] * Vj
        d_gradrho[d_idx*3 + 2] += (rhoj - rhoi) * DWIJ[2] * Vj


class ContinuityEquationDeltaSPH(Equation):
    r"""**Continuity equation with dissipative terms**

    :math:`\frac{d\rho_a}{dt} = \sum_b \rho_a \frac{m_b}{\rho_b} \left(
    \boldsymbol{v}_{ab}\cdot \nabla_a W_{ab} + \delta \eta_{ab} \cdot
    \nabla_{a} W_{ab} (h_{ab}\frac{c_{ab}}{\rho_a}(\rho_b - \rho_a))
    \right)`

    References
    ----------
    .. [Marrone2011] S.
Marrone et al., "delta-SPH model for simulating violent impact flows", Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp 1526--1542. """ def __init__(self, dest, sources, c0, delta=0.1): r""" Parameters ---------- c0 : float reference speed of sound delta : float coefficient used to control the intensity of diffusion of density """ self.c0 = c0 self.delta = delta super(ContinuityEquationDeltaSPH, self).__init__(dest, sources) def loop(self, d_idx, d_arho, s_idx, s_m, d_rho, s_rho, DWIJ, XIJ, R2IJ, HIJ, EPS, d_gradrho, s_gradrho): rhoi = d_rho[d_idx] rhoj = s_rho[s_idx] Vj = s_m[s_idx]/rhoj fac = -2.0 * (rhoj - rhoi)/(R2IJ + EPS) psix = fac*XIJ[0] - d_gradrho[d_idx*3 + 0] - s_gradrho[s_idx*3 + 0] psiy = fac*XIJ[1] - d_gradrho[d_idx*3 + 1] - s_gradrho[s_idx*3 + 1] psiz = fac*XIJ[2] - d_gradrho[d_idx*3 + 2] - s_gradrho[s_idx*3 + 2] psidotdwij = psix*DWIJ[0] + psiy*DWIJ[1] + psiz*DWIJ[2] # standard term with dissipative penalization eqn (5a) d_arho[d_idx] += self.delta * HIJ * self.c0 * psidotdwij * Vj class UpdateSmoothingLengthFerrari(Equation): r"""**Update the particle smoothing lengths** :math:`h_a = hdx \left(\frac{m_a}{\rho_a}\right)^{\frac{1}{d}}` References ---------- .. [Ferrari2009] A. Ferrari et al., "A new 3D parallel SPH scheme for free surface flows", Computers and Fluids, 38 (2009), pp. 1203--1217. """ def __init__(self, dest, sources, dim, hdx): r""" Parameters ---------- dim : float number of dimensions hdx : float scaling factor Notes ----- Ideally, the kernel scaling factor should be determined from the kernel used based on a linear stability analysis. The default value of (hdx=1) reduces to the formulation suggested by Ferrari et al. who used a Cubic Spline kernel. Typically, a change in the smoothing length should mean the neighbors are re-computed which in PySPH means the NNPS must be updated. 
This equation should therefore be placed as the last equation so that after the final corrector stage, the smoothing lengths are updated and the new NNPS data structure is computed. Note however that since this is to be used with incompressible flow equations, the density variations are small and hence the smoothing lengths should also not vary too much. """ self.dim1 = 1./dim self.hdx = hdx super(UpdateSmoothingLengthFerrari, self).__init__(dest, sources) def loop(self, d_idx, d_rho, d_h, d_m): # naive estimate of particle volume Vj = d_m[d_idx]/d_rho[d_idx] d_h[d_idx] = self.hdx * pow(Vj, self.dim1) class PressureGradientUsingNumberDensity(Equation): r"""Pressure gradient discretized using number density: .. math:: \frac{d \boldsymbol{v}_a}{dt} = -\frac{1}{m_a}\sum_b (\frac{p_a}{V_a^2} + \frac{p_b}{V_b^2})\nabla_a W_{ab} """ def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_m, d_rho, s_rho, d_au, d_av, d_aw, d_p, s_p, d_V, s_V, DWIJ): # particle volumes Vi = 1./d_V[d_idx] Vj = 1./s_V[s_idx] Vi2 = Vi * Vi Vj2 = Vj * Vj # pressure gradient term p_i = d_p[d_idx] pj = s_p[s_idx] pij = p_i*Vi2 + pj*Vj2 # accelerations tmp = -pij * 1.0/(d_m[d_idx]) d_au[d_idx] += tmp * DWIJ[0] d_av[d_idx] += tmp * DWIJ[1] d_aw[d_idx] += tmp * DWIJ[2] pysph-master/pysph/sph/wc/crksph.py000066400000000000000000001131061356347341600177420ustar00rootroot00000000000000''' CRKSPH corrections These are equations for the basic kernel corrections in [CRKSPH2017]. References ----------- .. [CRKSPH2017] Nicholas Frontiere, Cody D. Raskin, J. Michael Owen "CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme", Journal of Computational Physics 332 (2017), pp. 160--209. 
''' from math import sqrt, exp from compyle.api import declare from pysph.sph.equation import Equation, Group, MultiStageEquations from pysph.sph.wc.linalg import ( augmented_matrix, dot, gj_solve, identity, mat_vec_mult ) from pysph.sph.scheme import Scheme from pysph.base.utils import get_particle_array from pysph.sph.integrator import Integrator, IntegratorStep from pysph.base.particle_array import get_ghost_tag # Constants GHOST_TAG = get_ghost_tag() class CRKSPHPreStep(Equation): def __init__(self, dest, sources, dim=2): self.dim = dim super(CRKSPHPreStep, self).__init__(dest, sources) def _get_helpers_(self): return [augmented_matrix, gj_solve, identity, dot, mat_vec_mult] def loop_all(self, d_idx, d_x, d_y, d_z, d_h, s_x, s_y, s_z, s_h, s_m, s_rho, SPH_KERNEL, NBRS, N_NBRS, d_ai, d_gradai, d_bi, s_V, d_gradbi): x = d_x[d_idx] y = d_y[d_idx] z = d_z[d_idx] h = d_h[d_idx] i, j, k, s_idx, d, d2 = declare('int', 6) alp, bet, gam, phi, psi = declare('int', 5) xij = declare('matrix(3)') dwij = declare('matrix(3)') d = self.dim d2 = d*d m0 = 0.0 m1 = declare('matrix(3)') m2 = declare('matrix(9)') temp_vec = declare('matrix(3)') temp_aug_m2 = declare('matrix(18)') m2inv = declare('matrix(9)') grad_m0 = declare('matrix(3)') grad_m1 = declare('matrix(9)') grad_m2 = declare('matrix(27)') ai = 0.0 bi = declare('matrix(3)') grad_ai = declare('matrix(3)') grad_bi = declare('matrix(9)') for i in range(3): m1[i] = 0.0 grad_m0[i] = 0.0 bi[i] = 0.0 grad_ai[i] = 0.0 for j in range(3): m2[3*i + j] = 0.0 grad_m1[3*i + j] = 0.0 grad_bi[3*i + j] = 0.0 for k in range(3): grad_m2[9*i + 3*j + k] = 0.0 for i in range(N_NBRS): s_idx = NBRS[i] xij[0] = x - s_x[s_idx] xij[1] = y - s_y[s_idx] xij[2] = z - s_z[s_idx] hij = (h + s_h[s_idx]) * 0.5 rij = sqrt(xij[0] * xij[0] + xij[1] * xij[1] + xij[2] * xij[2]) wij = SPH_KERNEL.kernel(xij, rij, hij) SPH_KERNEL.gradient(xij, rij, hij, dwij) V = 1.0/s_V[s_idx] m0 += V * wij for alp in range(d): m1[alp] += V * wij * xij[alp] for bet in range(d): 
m2[d*alp + bet] += V * wij * xij[alp] * xij[bet] for gam in range(d): grad_m0[gam] += V * dwij[gam] for alp in range(d): fac = 1.0 if alp == gam else 0.0 temp = (xij[alp] * dwij[gam] + fac * wij) grad_m1[d*gam + alp] += V * temp for bet in range(d): fac2 = 1.0 if bet == gam else 0.0 temp = xij[alp] * fac2 + xij[bet] * fac temp2 = (xij[alp] * xij[bet] * dwij[gam] + temp * wij) grad_m2[d2*gam + d*alp + bet] += V * temp2 identity(m2inv, d) augmented_matrix(m2, m2inv, d, d, d, temp_aug_m2) # If is_singular > 0 then matrix was singular is_singular = gj_solve(temp_aug_m2, d, d, m2inv) if is_singular > 0.0: # Cannot do much if the matrix is singular. Perhaps later # we can tag such particles to see if the user can do something. pass else: mat_vec_mult(m2inv, m1, d, temp_vec) # Eq. 12. ai = 1.0/(m0 - dot(temp_vec, m1, d)) # Eq. 13. mat_vec_mult(m2inv, m1, d, bi) for gam in range(d): bi[gam] = -bi[gam] # Eq. 14. and 15. for gam in range(d): temp1 = grad_m0[gam] for alp in range(d): temp2 = 0.0 for bet in range(d): temp1 -= m2inv[d*alp + bet] * ( m1[bet] * grad_m1[d*gam + alp] + m1[alp] * grad_m1[d*gam + bet] ) temp2 -= ( m2inv[d*alp + bet] * grad_m1[d*gam + bet] ) for phi in range(d): for psi in range(d): temp1 += ( m2inv[d*alp + phi] * m2inv[d*psi + bet] * grad_m2[d2*gam + d*phi + psi] * m1[bet] * m1[alp] ) temp2 += ( m2inv[d*alp + phi] * m2inv[d*psi + bet] * grad_m2[d2*gam + d*phi + psi] * m1[bet] ) grad_bi[d*gam + alp] = temp2 grad_ai[gam] = -ai*ai*temp1 if N_NBRS < 2 or is_singular > 0.0: d_ai[d_idx] = 1.0 for i in range(d): d_gradai[d * d_idx + i] = 0.0 d_bi[d * d_idx + i] = 0.0 for j in range(d): d_gradbi[d2 * d_idx + d * i + j] = 0.0 else: d_ai[d_idx] = ai for i in range(d): d_gradai[d * d_idx + i] = grad_ai[i] d_bi[d * d_idx + i] = bi[i] for j in range(d): d_gradbi[d2 * d_idx + d * i + j] = grad_bi[d*i + j] class CRKSPH(Equation): r"""**Conservative Reproducing Kernel SPH** Equations from the paper [CRKSPH2017]. .. 
math:: W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij} .. math:: \partial_{\gamma}W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha} x_{ij}^{\alpha}\right)\partial_{\gamma}W_{ij} + \partial_{\gamma}A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij} + A_{i}\left(\partial_{\gamma}B_{i}^{\alpha} x_{ij}^{\alpha} + B_{i}^{\gamma}\right)W_{ij} .. math:: \nabla\tilde{W}_{ij} = 0.5 * \left(\nabla W_{ij}^{R}-\nabla W_{ji}^{R} \right) where, .. math:: A_{i} = \left[m_{0} - \left(m_{2}^{-1}\right)^{\alpha \beta} m_1^{\beta}m_1^{\alpha}\right]^{-1} .. math:: B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^{\alpha \beta} m_{1}^{\beta} .. math:: \partial_{\gamma}A_{i} = -A_{i}^{2}\left(\partial_{\gamma} m_{0}-\left(m_{2}^{-1}\right)^{\alpha \beta}\left( m_{1}^{\beta}\partial_{\gamma}m_{1}^{\alpha} + \partial_{\gamma}m_{1}^{\beta}m_{1}^{\alpha}\right) + \left(m_{2}^{-1}\right)^{\alpha \phi}\partial_{\gamma} m_{2}^{\phi \psi}\left(m_{2}^{-1}\right)^{\psi \beta} m_{1}^{\beta}m_{1}^{\alpha} \right) .. math:: \partial_{\gamma}B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^ {\alpha \beta}\partial_{\gamma}m_{1}^{\beta} + \left(m_{2}^{-1}\right)^ {\alpha \phi}\partial_{\gamma}m_{2}^{\phi \psi}\left(m_{2}^ {-1}\right)^{\psi \beta}m_{1}^{\beta} .. math:: m_{0} = \sum_{j}V_{j}W_{ij} .. math:: m_{1}^{\alpha} = \sum_{j}V_{j}x_{ij}^{\alpha}W_{ij} .. math:: m_{2}^{\alpha \beta} = \sum_{j}V_{j}x_{ij}^{\alpha} x_{ij}^{\beta}W_{ij} .. math:: \partial_{\gamma}m_{0} = \sum_{j}V_{j}\partial_{\gamma} W_{ij} .. math:: \partial_{\gamma}m_{1}^{\alpha} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}\partial_{\gamma}W_{ij}+\delta^ {\alpha \gamma}W_{ij} \right] .. math:: \partial_{\gamma}m_{2}^{\alpha \beta} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}x_{ij}^{\beta}\partial_{\gamma}W_{ij} + \left(x_{ij}^{\alpha}\delta^{\beta \gamma} + x_{ij}^{\beta} \delta^{\alpha \gamma} \right)W_{ij} \right] """ def __init__(self, dest, sources, dim=2, tol=0.5): r""" Parameters ---------- dim : int Dimensionality of the problem. 
        tol : float
            Tolerance value used to decide between the standard and the
            corrected kernel.
        """
        self.dim = dim
        self.tol = tol
        super(CRKSPH, self).__init__(dest, sources)

    def loop(self, d_idx, s_idx, d_ai, d_gradai, d_cwij, d_bi, d_gradbi,
             WIJ, DWIJ, XIJ, HIJ):
        alp, gam, d = declare('int', 3)
        res = declare('matrix(3)')
        dbxij = declare('matrix(3)')
        d = self.dim
        ai = d_ai[d_idx]
        eps = 1.0e-04 * HIJ
        bxij = 0.0
        for alp in range(d):
            bxij += d_bi[d*d_idx + alp] * XIJ[alp]
        for gam in range(d):
            temp = 0.0
            for alp in range(d):
                temp += d_gradbi[d*d*d_idx + d*gam + alp]*XIJ[alp]
            dbxij[gam] = temp
        d_cwij[d_idx] = (ai*(1 + bxij))
        for gam in range(d):
            res[gam] = ((ai * DWIJ[gam] + d_gradai[d * d_idx + gam] * WIJ) *
                        (1 + bxij))
            res[gam] += ai * (dbxij[gam] + d_bi[d * d_idx + gam]) * WIJ
        res_mag = 0.0
        dwij_mag = 0.0
        for i in range(d):
            res_mag += abs(res[i])
            dwij_mag += abs(DWIJ[i])
        change = abs(res_mag - dwij_mag)/(dwij_mag + eps)
        if change < self.tol:
            for i in range(d):
                DWIJ[i] = res[i]


class CRKSPHSymmetric(Equation):
    r"""**Conservative Reproducing Kernel SPH**

    This is symmetric and will only work for particles of the same array.

    Equations from the paper [CRKSPH2017].

    .. math::
        W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha}
        \right)W_{ij}

    .. math::
        \partial_{\gamma}W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha}
        x_{ij}^{\alpha}\right)\partial_{\gamma}W_{ij} +
        \partial_{\gamma}A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha}
        \right)W_{ij} + A_{i}\left(\partial_{\gamma}B_{i}^{\alpha}
        x_{ij}^{\alpha} + B_{i}^{\gamma}\right)W_{ij}

    .. math::
        \nabla\tilde{W}_{ij} = 0.5 * \left(\nabla W_{ij}^{R}-\nabla
        W_{ji}^{R}\right)

    where,

    .. math::
        A_{i} = \left[m_{0} - \left(m_{2}^{-1}\right)^{\alpha \beta}
        m_{1}^{\beta}m_{1}^{\alpha}\right]^{-1}

    .. math::
        B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^{\alpha \beta}
        m_{1}^{\beta}

    ..
math:: \partial_{\gamma}A_{i} = -A_{i}^{2}\left(\partial_{\gamma} m_{0}-\left[m_{2}^{-1}\right]^{\alpha \beta}\left[ m_{1}^{\beta}\partial_{\gamma}m_{1}^{\beta}m_{1}^{\alpha} + \partial_{\gamma}m_{1}^{\alpha}m_{1}^{\beta}\right] + \left[m_{2}^{-1}\right]^{\alpha \phi}\partial_{\gamma} m_{2}^{\phi \psi}\left[m_{2}^{-1}\right]^{\psi \beta} m_{1}^{\beta}m_{1}^{\alpha} \right) .. math:: \partial_{\gamma}B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^ {\alpha \beta}\partial_{\gamma}m_{1}^{\beta} + \left(m_{2}^{-1}\right)^ {\alpha \phi}\partial_{\gamma}m_{2}^{\phi \psi}\left(m_{2}^ {-1}\right)^{\psi \beta}m_{1}^{\beta} .. math:: m_{0} = \sum_{j}V_{j}W_{ij} .. math:: m_{1}^{\alpha} = \sum_{j}V_{j}x_{ij}^{\alpha}W_{ij} .. math:: m_{2}^{\alpha \beta} = \sum_{j}V_{j}x_{ij}^{\alpha} x_{ij}^{\beta}W_{ij} .. math:: \partial_{\gamma}m_{0} = \sum_{j}V_{j}\partial_{\gamma} W_{ij} .. math:: \partial_{\gamma}m_{1}^{\alpha} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}\partial_{\gamma}W_{ij}+\delta^ {\alpha \gamma}W_{ij} \right] .. 
math:: \partial_{\gamma}m_{2}^{\alpha \beta} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}x_{ij}^{\beta}\partial_{\gamma}W_{ij} + \left(x_{ij}^{\alpha}\delta^{\beta \gamma} + x_{ij}^{\beta} \delta^{\alpha \gamma} \right)W_{ij} \right] """ def __init__(self, dest, sources, dim=2, tol=0.5): self.dim = dim self.tol = tol super(CRKSPHSymmetric, self).__init__(dest, sources) def loop(self, d_idx, s_idx, d_ai, d_gradai, d_cwij, d_bi, d_gradbi, s_ai, s_gradai, s_bi, s_gradbi, d_h, s_h, WIJ, DWIJ, XIJ, HIJ, RIJ, DWI, DWJ, WI, WJ, SPH_KERNEL): alp, gam, d = declare('int', 3) dbxij = declare('matrix(3)') dbxji = declare('matrix(3)') dwij, dwji = declare('matrix(3)', 2) SPH_KERNEL.gradient(XIJ, RIJ, d_h[d_idx], dwij) SPH_KERNEL.gradient(XIJ, RIJ, s_h[s_idx], dwji) wij = SPH_KERNEL.kernel(XIJ, RIJ, d_h[d_idx]) wji = SPH_KERNEL.kernel(XIJ, RIJ, s_h[s_idx]) d = self.dim ai = d_ai[d_idx] aj = s_ai[s_idx] bxij = 0.0 bxji = 0.0 for alp in range(d): bxij += d_bi[d*d_idx + alp] * XIJ[alp] bxji -= s_bi[d*s_idx + alp] * XIJ[alp] for gam in range(d): temp = 0.0 temp1 = 0.0 for alp in range(d): temp += d_gradbi[d*d*d_idx + d*gam + alp]*XIJ[alp] temp1 -= s_gradbi[d*d*s_idx + d*gam + alp]*XIJ[alp] dbxij[gam] = temp dbxji[gam] = temp1 d_cwij[d_idx] = (ai*(1 + bxij)) WI = (ai*(1 + bxij)) * WIJ WJ = (aj*(1 + bxji)) * WIJ for gam in range(d): temp = ((ai * dwij[gam] + d_gradai[d * d_idx + gam] * wij) * (1 + bxij)) temp += ai * (dbxij[gam] + d_bi[d * d_idx + gam]) * wij temp1 = ((-aj * dwji[gam] + s_gradai[d * s_idx + gam] * wji) * (1 + bxji)) temp1 += aj * (dbxji[gam] + s_bi[d * s_idx + gam]) * wji DWIJ[gam] = 0.5*(temp - temp1) DWI[gam] = temp DWJ[gam] = temp1 class NumberDensity(Equation): r"""**Number Density** From [CRKSPH2017], equation (75): .. math:: V_{i}^{-1} = \sum_{j} W_{i} Note that the quantity `V` is the inverse of particle volume, so when using in the equation use :math:`\frac{1}{V}`. 
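    The sum :math:`V_i^{-1} = \sum_j W_{ij}` can be checked outside of
    PySPH.  A minimal NumPy sketch, where the Gaussian kernel and the
    uniform 1-D spacing are assumptions of the example rather than of the
    equation:

    ```python
    import numpy as np

    def gaussian_kernel(r, h):
        # Normalized 1-D Gaussian kernel, used purely for illustration.
        return np.exp(-(r / h) ** 2) / (np.sqrt(np.pi) * h)

    def inverse_volume(x, h):
        # V_i^{-1} = sum_j W_ij, with the self contribution included,
        # just as initialize()/loop() accumulate d_V above.
        r = np.abs(x[:, None] - x[None, :])
        return gaussian_kernel(r, h).sum(axis=1)

    x = np.linspace(0.0, 1.0, 101)
    dx = x[1] - x[0]
    Vinv = inverse_volume(x, h=1.2 * dx)
    # In the bulk, the particle volume 1/V_i^{-1} recovers the spacing dx.
    ```

    
    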
""" def initialize(self, d_idx, d_V): d_V[d_idx] = 0.0 def loop(self, d_idx, d_V, WI): d_V[d_idx] += WI class SummationDensityCRKSPH(Equation): r"""**Summation Density CRKSPH** From [CRKSPH2017], equation (76): .. math:: \rho_{i} = \frac{\sum_j m_{ij} V_j W_{ij}^R} {\sum_{j} V_{j}^{2} W_{ij}^R} where, .. math:: mij = \begin{cases} m_j, &i \text{ and } j \text{ are same material} \\ m_i, &i \text{ and } j \text{ are different material} \end{cases} Note that in this equation we are just using :math:`m_{ij}` to be :math:`m_i` as the mass remains constant through out the simulation. """ def initialize(self, d_idx, d_rho, d_rhofac): d_rho[d_idx] = 0.0 d_rhofac[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_m, d_rho, d_rhofac, s_V, WIJ, d_cwij): Vj = 1.0/s_V[s_idx] fac = Vj * d_cwij[d_idx] * WIJ d_rho[d_idx] += d_m[d_idx] * fac d_rhofac[d_idx] += Vj * fac def post_loop(self, d_idx, d_rho, d_rhofac): d_rho[d_idx] = d_rho[d_idx] / d_rhofac[d_idx] class VelocityGradient(Equation): r"""**Velocity Gradient** From [CRKSPH2017], equation (74) .. math:: \partial_{\beta} v_i^{\alpha} = -\sum_j V_j v_{ij}^{\alpha} \partial_{\beta} W_{ij}^R """ def __init__(self, dest, sources, dim): r""" Parameters ---------- dim : int Dimensionality of the Problem. """ self.dim = dim super(VelocityGradient, self).__init__(dest, sources) def initialize(self, d_idx, d_gradv): i = declare('int') for i in range(9): d_gradv[9*d_idx + i] = 0.0 def loop(self, d_idx, s_idx, s_V, d_gradv, d_h, s_h, XIJ, DWIJ, VIJ, DWI): alp, bet, d = declare('int', 3) d = self.dim Vj = 1.0/s_V[s_idx] for alp in range(d): for bet in range(d): d_gradv[d_idx*d*d + d*alp + bet] += -Vj * VIJ[alp] * DWI[bet] class MomentumEquation(Equation): r"""**Momentum Equation** From [CRKSPH2017], equation (64): .. math:: \frac{Dv_{i}^{\alpha}}{Dt} = -\frac{1}{2 m_i}\sum_{j} V_i V_j (P_i + P_j + Q_i + Q_j) (\partial_{\alpha}W_{ij}^R - \partial_{\alpha} W_{ji}^R) where, .. math:: V_{i/j} = \text{dest/source particle number density} .. 
math:: P_{i/j} = \text{dest/source particle pressure} .. math:: Q_i = \rho_{i} (-C_{l} c_{i} \mu_{i} + C_{q} \mu_{i}^{2}) .. math:: \mu_i = \min \left(0, \frac{\hat{v}_{ij} \eta_{i}^{\alpha}}{\eta_{i}^{\alpha}\eta_{i}^{\alpha} + \epsilon^{2}}\right) .. math:: \hat{v}_{ij}^{\alpha} = v_{i}^{\alpha} - v_{j}^{\alpha} - \frac{\phi_{ij}}{2}\left(\partial_{\beta} v_i^{\alpha} + \partial_{\beta}v_j^{\alpha}\right) x_{ij}^{\beta} .. math:: \phi_{ij} = \max \left[0, \min \left[1, \frac{4r_{ij}}{(1 + r_{ij})^2}\right]\right] \times \begin{cases} \exp{\left[-\left((\eta_{ij} - \eta_{crit})/\eta_{fold}\right)^2\right]}, &\eta_{ij} < \eta_{crit} \\ 1, & \eta_{ij} >= \eta_{crit} \end{cases} .. math:: \eta_{ij} = \min(\eta_i, \eta_j) .. math:: \eta_{i/j} = (x_{ij}^{\alpha} x_{ij}^{\alpha})^{1/2} / h_{i/j} .. math:: r_{ij} = \frac{\partial_{\beta} v_i^{\alpha} x_{ij}^{\alpha} x_{ij}^{\beta}}{\partial_{\beta} v_j^{\alpha}x_{ij}^{\alpha} x_{ij}^{\beta}} .. math:: \partial_{\beta} v_i^{\alpha} = -\sum_j V_j v_{ij}^{\alpha} \partial_{\beta} W_{ij}^R """ def __init__(self, dest, sources, dim, gx=0.0, gy=0.0, gz=0.0, cl=2, cq=1, eta_crit=0.3, eta_fold=0.2, tol=0.5): self.dim = dim self.gx = gx self.gy = gy self.gz = gz self.cl = cl self.cq = cq self.eta_crit = eta_crit self.eta_fold = eta_fold self.tol = tol super(MomentumEquation, self).__init__(dest, sources) def _get_helpers_(self): return [dot] def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = self.gx d_av[d_idx] = self.gy d_aw[d_idx] = self.gz def loop(self, d_idx, s_idx, d_m, s_m, d_rho, s_rho, d_p, s_p, d_cs, s_cs, d_u, d_v, d_w, s_u, s_v, s_w, d_gradv, s_gradv, d_h, s_h, s_ai, s_bi, s_gradai, s_gradbi, d_au, d_av, d_aw, d_V, s_V, XIJ, VIJ, DWIJ, EPS, HIJ, WIJ, DWI, DWJ): Cl = self.cl Cq = self.cq eta_fold = self.eta_fold eta_crit = self.eta_crit mi = d_m[d_idx] Vi = 1.0/d_V[d_idx] Vj = 1.0/s_V[s_idx] pi = d_p[d_idx] pj = s_p[s_idx] rhoi = d_rho[d_idx] rhoj = s_rho[s_idx] ci = d_cs[d_idx] cj = s_cs[s_idx] hi = 
d_h[d_idx] hj = s_h[s_idx] alp, bet, i, d = declare('int', 4) uijhat, tmpdvxij = declare('matrix(3)', 2) d = self.dim tmpri = 0.0 tmprj = 0.0 for alp in range(d): for bet in range(d): tmpri += d_gradv[d*d*d_idx + d*alp + bet] * XIJ[alp] * XIJ[bet] tmprj += s_gradv[d*d*s_idx + d*alp + bet] * XIJ[alp] * XIJ[bet] rij = tmpri/tmprj tmprij = min(1, 4*rij/((1 + rij)*(1 + rij))) phiij = max(0, tmprij) tmpxij = dot(XIJ, XIJ, d) tmpxij2 = sqrt(tmpxij) etai_scalar = tmpxij2/hi etaj_scalar = tmpxij2/hj etaij = min(etai_scalar, etaj_scalar) if etaij < eta_crit: tmpphi = (etaij - eta_crit)/eta_fold phiij = phiij * exp(-tmpphi*tmpphi) for alp in range(d): s = 0.0 for bet in range(d): s += (d_gradv[d*d*d_idx + d*alp + bet] + s_gradv[d*d*s_idx + d*alp + bet]) * XIJ[bet] tmpdvxij[alp] = s uijhat[0] = d_u[d_idx] - s_u[s_idx] - 0.5*phiij * tmpdvxij[0] uijhat[1] = d_v[d_idx] - s_v[s_idx] - 0.5*phiij * tmpdvxij[1] uijhat[2] = d_w[d_idx] - s_w[s_idx] - 0.5*phiij * tmpdvxij[2] tmpmui = dot(uijhat, XIJ, d) / (tmpxij/hi + EPS * hi) mui = min(0, tmpmui) tmpmuj = dot(uijhat, XIJ, d) / (tmpxij/hi + EPS * hj) muj = min(0, tmpmuj) Qi = rhoi * (-Cl*ci*mui + Cq*mui*mui) Qj = rhoj * (-Cl*cj*muj + Cq*muj*muj) fac = -(1.0/mi) * Vi * Vj * (pi + pj + Qi + Qj) d_au[d_idx] += fac * DWIJ[0] d_av[d_idx] += fac * DWIJ[1] d_aw[d_idx] += fac * DWIJ[2] class EnergyEquation(Equation): r"""**Energy Equation** From [CRKSPH2017], equation (66): .. math:: \Delta u_{ij} = \frac{f_{ij}}{2}\left(v_j^{\alpha}(t) + v_j^{\alpha}(t + \Delta t) - v_i^{\alpha}(t) - v_i^{\alpha}(t + \Delta t)\right) \frac{Dv_{ij}^{\alpha}}{Dt} .. math:: f_{ij} = \begin{cases} 1/2 &|s_i - s_j| = 0,\\ s_{\min} / (s_{\min} + s_{\max}) &\Delta u_{ij}\times(s_i - s_j) > 0\\ s_{\max} / (s_{\min} + s_{\max}) &\Delta u_{ij}\times(s_i - s_j) < 0\\ \end{cases} .. math:: s_{\min} = \min(|s_i|, |s_j|) .. math:: s_{\max} = \max(|s_i|, |s_j|) .. 
math:: s_{i/j} = \frac{p_{i/j}}{\rho_{i/j}^\gamma} see MomentumEquation for :math:`\frac{Dv_{ij}^{\alpha}}{Dt}` """ def __init__(self, dest, sources, dim, gamma, gx=0.0, gy=0.0, gz=0.0, cl=2, cq=1, eta_crit=0.5, eta_fold=0.2, tol=0.5): self.dim = dim self.gamma = gamma self.dim = dim self.gx = gx self.gy = gy self.gz = gz self.cl = cl self.cq = cq self.eta_crit = eta_crit self.eta_fold = eta_fold self.tol = tol super(EnergyEquation, self).__init__(dest, sources) def _get_helpers_(self): return [dot] def initialize(self, d_idx, d_ae): d_ae[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_au, d_av, d_aw, d_ae, s_au, s_av, s_aw, d_u0, d_v0, d_w0, s_u0, s_v0, s_w0, d_u, d_v, d_w, s_u, s_v, s_w, d_p, d_rho, s_p, s_rho, d_m, d_V, s_V, d_cs, s_cs, d_h, s_h, XIJ, d_gradv, s_gradv, EPS, DWIJ): Cl = self.cl Cq = self.cq eta_fold = self.eta_fold eta_crit = self.eta_crit mi = d_m[d_idx] Vi = 1.0/d_V[d_idx] Vj = 1.0/s_V[s_idx] pi = d_p[d_idx] pj = s_p[s_idx] rhoi = d_rho[d_idx] rhoj = s_rho[s_idx] ci = d_cs[d_idx] cj = s_cs[s_idx] hi = d_h[d_idx] hj = s_h[s_idx] alp, bet, i, d = declare('int', 4) uijhat, tmpdvxij = declare('matrix(3)', 2) d = self.dim tmpri = 0.0 tmprj = 0.0 for alp in range(d): for bet in range(d): tmpri += d_gradv[d*d*d_idx + d*alp + bet] * XIJ[alp] * XIJ[bet] tmprj += s_gradv[d*d*s_idx + d*alp + bet] * XIJ[alp] * XIJ[bet] rij = tmpri/tmprj tmprij = min(1, 4*rij/((1 + rij)*(1 + rij))) phiij = max(0, tmprij) tmpxij = dot(XIJ, XIJ, d) tmpxij2 = sqrt(tmpxij) etai_scalar = tmpxij2/hi etaj_scalar = tmpxij2/hj etaij = min(etai_scalar, etaj_scalar) if etaij < eta_crit: tmpphi = (etaij - eta_crit)/eta_fold phiij = phiij * exp(-tmpphi*tmpphi) for alp in range(d): s = 0.0 for bet in range(d): s += (d_gradv[d*d*d_idx + d*alp + bet] + s_gradv[d*d*s_idx + d*alp + bet]) * XIJ[bet] tmpdvxij[alp] = s uijhat[0] = d_u0[d_idx] - s_u0[s_idx] - 0.5*phiij * tmpdvxij[0] uijhat[1] = d_v0[d_idx] - s_v0[s_idx] - 0.5*phiij * tmpdvxij[1] uijhat[2] = d_w0[d_idx] - s_w0[s_idx] - 0.5*phiij * 
tmpdvxij[2] tmpmui = dot(uijhat, XIJ, d) / (tmpxij/hi + EPS * hi) mui = min(0, tmpmui) tmpmuj = dot(uijhat, XIJ, d) / (tmpxij/hi + EPS * hj) muj = min(0, tmpmuj) Qi = rhoi * (-Cl*ci*mui + Cq*mui*mui) Qj = rhoj * (-Cl*cj*muj + Cq*muj*muj) fac = -(1.0 / mi) * Vi * Vj * (pi + pj + Qi + Qj) gamma = self.gamma d = self.dim auij, delu = declare('matrix(3)', 2) auij[0] = fac * DWIJ[0] auij[1] = fac * DWIJ[1] auij[2] = fac * DWIJ[2] delu[0] = s_u0[s_idx] + s_u[s_idx] - d_u0[d_idx] - d_u[d_idx] delu[1] = s_v0[s_idx] + s_v[s_idx] - d_v0[d_idx] - d_v[d_idx] delu[2] = s_w0[s_idx] + s_w[s_idx] - d_w0[d_idx] - d_w[d_idx] aeij = dot(delu, auij, d) si = d_p[d_idx]/((d_rho[d_idx])**gamma) sj = s_p[s_idx]/((s_rho[s_idx])**gamma) smin = min(abs(si), abs(sj)) smax = max(abs(si), abs(sj)) fij = 0.5 sdiff = si - sj if sdiff * aeij > 0: fij = smin/(smin + smax) elif sdiff * aeij < 0: fij = smax/(smin + smax) d_ae[d_idx] += 0.5*fij * aeij class StateEquation(Equation): r"""**State Equation** State equation for ideal gas, from [CRKSPH2017] equation (77): .. math:: p_i = (\gamma - 1)\rho_{i} u_i where, :math:`u_i` is the specific thermal energy. 
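    Written out for a single particle, the ideal-gas closure used by
    ``StateEquation`` and ``SpeedOfSound`` is just two scalar formulas;
    ``gamma = 1.4`` and the ``(rho, e)`` values below are assumed example
    numbers:

    ```python
    # Ideal-gas closure for one particle (illustrative values).
    gamma = 1.4
    rho, e = 1.0, 2.5
    p = (gamma - 1.0) * rho * e      # p = (gamma - 1) * rho * u
    cs = (gamma * p / rho) ** 0.5    # cs = sqrt(gamma * p / rho)
    ```

    
    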
""" def __init__(self, dest, sources, gamma): self.gamma = gamma super(StateEquation, self).__init__(dest, sources) def initialize(self, d_idx, d_p, d_rho, d_e): d_p[d_idx] = (self.gamma - 1) * d_rho[d_idx] * d_e[d_idx] class SpeedOfSound(Equation): def __init__(self, dest, sources=None, gamma=7.0): super(SpeedOfSound, self).__init__(dest, sources) self.gamma = gamma def initialize(self, d_cs, d_idx, d_p, d_rho): d_cs[d_idx] = (self.gamma * d_p[d_idx] / d_rho[d_idx])**0.5 class CRKSPHUpdateGhostProps(Equation): def __init__(self, dest, sources=None, dim=2): super(CRKSPHUpdateGhostProps, self).__init__(dest, sources) self.dim = dim assert GHOST_TAG == 2 def initialize(self, d_idx, d_orig_idx, d_p, d_cs, d_V, d_ai, d_bi, d_tag, d_gradbi, d_gradai, d_rho, d_u0, d_v0, d_w0, d_gradv, d_u, d_v, d_w): idx, d, alp, bet = declare('int', 4) d = self.dim if d_tag[d_idx] == 2: idx = d_orig_idx[d_idx] d_p[d_idx] = d_p[idx] d_cs[d_idx] = d_cs[idx] d_V[d_idx] = d_V[idx] d_rho[d_idx] = d_rho[idx] d_u0[d_idx] = d_u0[idx] d_v0[d_idx] = d_v0[idx] d_w0[d_idx] = d_w0[idx] d_u[d_idx] = d_u[idx] d_v[d_idx] = d_v[idx] d_w[d_idx] = d_w[idx] d_ai[d_idx] = d_ai[idx] for alp in range(d): d_gradai[d*d_idx + alp] = d_gradai[d*idx + alp] d_bi[d*d_idx + alp] = d_bi[d*idx + alp] for bet in range(d): d_gradv[d*d*d_idx + d*alp + bet] = \ d_gradv[d*d*idx + d*alp + bet] d_gradbi[d*d*d_idx + d*alp + bet] = \ d_gradbi[d*d*idx + d*alp + bet] def get_particle_array_crksph(constants=None, **props): crksph_props = [ 'e', 'au', 'av', 'aw', 'ae', 'u0', 'v0', 'w0', 'cs', 'V', 'rhofac', 'x0', 'y0', 'z0', 'rho0', 'ax', 'ay', 'az', 'arho' ] pa = get_particle_array( additional_props=crksph_props, constants=constants, **props ) pa.add_property('cwij') pa.add_property('ai') pa.add_property('bi', stride=3) pa.add_property('gradai', stride=3) pa.add_property('gradbi', stride=9) pa.add_property('gradv', stride=9) pa.add_output_arrays(['p', 'V']) return pa class CRKSPHIntegrator(Integrator): def one_timestep(self, t, 
dt): self.stage1() self.do_post_stage(dt, 1) self.compute_accelerations(0) self.stage2() self.do_post_stage(dt, 2) self.compute_accelerations(1) # We update domain here alone as positions only change here. self.stage3() self.do_post_stage(dt, 3) self.update_domain() class CRKSPHStep(IntegratorStep): def stage1(self, d_idx, d_u, d_v, d_w, d_u0, d_v0, d_w0): d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] def stage2(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, dt): d_u[d_idx] += d_au[d_idx] * dt d_v[d_idx] += d_av[d_idx] * dt d_w[d_idx] += d_aw[d_idx] * dt def stage3(self, d_idx, d_e, d_ae, d_u, d_v, d_w, d_u0, d_v0, d_w0, d_x, d_y, d_z, dt): d_e[d_idx] += d_ae[d_idx] * dt d_x[d_idx] += 0.5*dt*(d_u[d_idx] + d_u0[d_idx]) d_y[d_idx] += 0.5*dt*(d_v[d_idx] + d_v0[d_idx]) d_z[d_idx] += 0.5*dt*(d_w[d_idx] + d_w0[d_idx]) class CRKSPHScheme(Scheme): def __init__(self, fluids, dim, rho0, c0, nu, h0, p0, gx=0.0, gy=0.0, gz=0.0, cl=2, cq=1, gamma=7.0, eta_crit=0.3, eta_fold=0.2, tol=0.5, has_ghosts=False): """ Parameters ---------- fluids: list a list with names of fluid particle arrays solids: list a list with names of solid (or boundary) particle arrays """ self.fluids = fluids self.solver = None self.dim = dim self.rho0 = rho0 self.c0 = c0 self.h0 = h0 self.p0 = p0 self.nu = nu self.gx = gx self.gy = gy self.gz = gz self.cl = cl self.cq = cq self.gamma = gamma self.eta_crit = eta_crit self.eta_fold = eta_fold self.tol = tol self.has_ghosts = has_ghosts def configure_solver(self, kernel=None, integrator_cls=None, extra_steppers=None, **kw): """Configure the solver to be generated. Parameters ---------- kernel : Kernel instance. Kernel to use, if none is passed a default one is used. integrator_cls : pysph.sph.integrator.Integrator Integrator class to use, use sensible default if none is passed. extra_steppers : dict Additional integration stepper instances as a dict. 
**kw : extra arguments Any additional keyword args are passed to the solver instance. """ from pysph.base.kernels import QuinticSpline if kernel is None: kernel = QuinticSpline(dim=self.dim) steppers = {} if extra_steppers is not None: steppers.update(extra_steppers) step_cls = CRKSPHStep for fluid in self.fluids: if fluid not in steppers: steppers[fluid] = step_cls() if integrator_cls is not None: cls = integrator_cls print("warning: CRKSPHIntegrator is not being used") else: cls = CRKSPHIntegrator integrator = cls(**steppers) from pysph.solver.solver import Solver self.solver = Solver( dim=self.dim, integrator=integrator, kernel=kernel, output_at_times=[0, 0.2, 0.4, 0.8], **kw ) def get_equations(self): from pysph.sph.wc.viscosity import LaminarViscosity all = self.fluids equations_stage1 = [] equations_stage2 = [] eq00 = [] for fluid in self.fluids: eq00.append( StateEquation(dest=fluid, sources=None, gamma=self.gamma) ) eq00.append( SpeedOfSound(dest=fluid, sources=None, gamma=self.gamma) ) equations_stage1.append(Group(equations=eq00)) gh = [] for fluid in self.fluids: gh.append( CRKSPHUpdateGhostProps(dest=fluid, sources=None, dim=self.dim) ) equations_stage1.append(Group(equations=gh, real=False)) eq0 = [] for fluid in self.fluids: eq0.append(NumberDensity(dest=fluid, sources=all)) equations_stage1.append(Group(equations=eq0, real=False)) if self.has_ghosts: gh = [] for fluid in self.fluids: gh.append( CRKSPHUpdateGhostProps( dest=fluid, sources=None, dim=self.dim ) ) equations_stage1.append(Group(equations=gh, real=False)) eq1 = [] for fluid in self.fluids: eq1.append(CRKSPHPreStep(dest=fluid, sources=all, dim=self.dim)) equations_stage1.append(Group(equations=eq1, real=False)) if self.has_ghosts: gh = [] for fluid in self.fluids: gh.append( CRKSPHUpdateGhostProps( dest=fluid, sources=None, dim=self.dim ) ) equations_stage1.append(Group(equations=gh, real=False)) eq2 = [] for fluid in self.fluids: eq2.extend([ CRKSPHSymmetric( dest=fluid, sources=all, 
dim=self.dim, tol=self.tol ), SummationDensityCRKSPH(dest=fluid, sources=all) ]) equations_stage1.append(Group(equations=eq2, real=False)) eq3 = [] for fluid in self.fluids: eq3.append( StateEquation(dest=fluid, sources=None, gamma=self.gamma) ) eq3.append( SpeedOfSound(dest=fluid, sources=None, gamma=self.gamma) ) equations_stage1.append(Group(equations=eq3)) if self.has_ghosts: gh = [] for fluid in self.fluids: gh.append( CRKSPHUpdateGhostProps( dest=fluid, sources=None, dim=self.dim ) ) equations_stage1.append(Group(equations=gh, real=False)) eq4 = [] for fluid in self.fluids: eq4.extend([ CRKSPHSymmetric( dest=fluid, sources=all, dim=self.dim, tol=self.tol ), VelocityGradient(dest=fluid, sources=all, dim=self.dim) ]) equations_stage1.append(Group(equations=eq4)) if self.has_ghosts: gh = [] for fluid in self.fluids: gh.append( CRKSPHUpdateGhostProps( dest=fluid, sources=None, dim=self.dim ) ) equations_stage1.append(Group(equations=gh, real=False)) eq5 = [] for fluid in self.fluids: eq5.append( CRKSPHSymmetric( dest=fluid, sources=all, dim=self.dim, tol=self.tol ) ) eq5.append( MomentumEquation( dest=fluid, sources=all, dim=self.dim, gx=self.gx, gy=self.gy, gz=self.gz, cl=self.cl, cq=self.cq, eta_crit=self.eta_crit, eta_fold=self.eta_fold ) ) if abs(self.nu) > 1e-14: eq5.append(LaminarViscosity( dest=fluid, sources=self.fluids, nu=self.nu )) equations_stage1.append(Group(equations=eq5)) if self.has_ghosts: gh = [] for fluid in self.fluids: gh.append( CRKSPHUpdateGhostProps( dest=fluid, sources=None, dim=self.dim ) ) equations_stage2.append(Group(equations=gh, real=False)) eq6 = [] for fluid in self.fluids: eq6.append( CRKSPHSymmetric( dest=fluid, sources=all, dim=self.dim, tol=self.tol ) ) eq6.append( EnergyEquation( dest=fluid, sources=all, dim=self.dim, gamma=self.gamma ) ) equations_stage2.append(Group(equations=eq6)) return MultiStageEquations([equations_stage1, equations_stage2]) def setup_properties(self, particles, clean=True): import numpy 
        particle_arrays = dict([(p.name, p) for p in particles])
        crksph_props = [
            'au', 'av', 'aw', 'ae', 'u0', 'v0', 'w0', 'cs', 'V', 'rhofac',
            'x0', 'y0', 'z0', 'rho0', 'ax', 'ay', 'az', 'arho'
        ]
        dummy = get_particle_array_crksph(name='junk')
        props = list(dummy.properties.keys())
        props += [dict(name=p, stride=v) for p, v in dummy.stride.items()]
        props += crksph_props
        output_props = dummy.output_property_arrays
        output_props += ['p', 'V', 'e']
        for fluid in self.fluids:
            pa = particle_arrays[fluid]
            self._ensure_properties(pa, props, clean)
            pa.add_property('cwij')
            pa.add_property('ai')
            pa.add_property('bi', stride=3)
            pa.add_property('gradai', stride=3)
            pa.add_property('gradbi', stride=9)
            pa.add_property('gradv', stride=9)
            pa.add_property('orig_idx', type='int')
            nfp = pa.get_number_of_particles()
            pa.orig_idx[:] = numpy.arange(nfp)
            pa.set_output_arrays(output_props)
pysph-master/pysph/sph/wc/density_correction.py
from math import sqrt
from pysph.sph.equation import Equation
from compyle.api import declare
from pysph.sph.wc.linalg import gj_solve, augmented_matrix


class ShepardFilter(Equation):
    r"""**Shepard Filter density reinitialization**

    This is a zeroth order density reinitialization

    .. math::
        \tilde{W_{ab}} = \frac{W_{ab}}{\sum_{b} W_{ab}
        \frac{m_{b}}{\rho_{b}}}

    .. math::
        \rho_{a} = \sum_{b} m_{b}\tilde{W_{ab}}

    References
    ----------
    .. [Panizzo, 2004] Panizzo, Physical and Numerical Modelling of
       Subaerial Landslide Generated Waves, PhD thesis.
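    The correction above can be sketched outside of PySPH.  A minimal
    NumPy version of the filtered density, where the Gaussian kernel,
    the 1-D setup, and the noise level are assumptions of the example
    (PySPH uses whichever ``SPH_KERNEL`` the solver is given):

    ```python
    import numpy as np

    def shepard_density(x, m, rho, h):
        # rho_a = sum_b m_b W_ab / sum_b (m_b / rho_b) W_ab,
        # mirroring the loop_all() accumulation above.
        r = np.abs(x[:, None] - x[None, :])
        w = np.exp(-(r / h) ** 2) / (np.sqrt(np.pi) * h)
        num = (w * m[None, :]).sum(axis=1)
        den = (w * (m / rho)[None, :]).sum(axis=1)
        return num / den

    x = np.linspace(0.0, 1.0, 101)
    dx = x[1] - x[0]
    m = np.full_like(x, dx)    # unit reference density
    rho_noisy = 1.0 + 0.05 * np.random.default_rng(0).standard_normal(x.size)
    rho_new = shepard_density(x, m, rho_noisy, h=1.2 * dx)
    # The reinitialized field is smoother than the noisy input.
    ```

    
    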
""" def initialize(self, d_idx, d_rho, d_rhotmp): d_rhotmp[d_idx] = d_rho[d_idx] def loop_all(self, d_idx, d_rho, d_x, d_y, d_z, s_m, s_rhotmp, s_x, s_y, s_z, d_h, s_h, SPH_KERNEL, NBRS, N_NBRS): i, s_idx = declare('int', 2) xij = declare('matrix(3)') tmp_w = 0.0 x = d_x[d_idx] y = d_y[d_idx] z = d_z[d_idx] d_rho[d_idx] = 0.0 for i in range(N_NBRS): s_idx = NBRS[i] xij[0] = x - s_x[s_idx] xij[1] = y - s_y[s_idx] xij[2] = z - s_z[s_idx] rij = sqrt(xij[0] * xij[0] + xij[1] * xij[1] + xij[2] * xij[2]) hij = (d_h[d_idx] + s_h[s_idx]) * 0.5 wij = SPH_KERNEL.kernel(xij, rij, hij) tmp_w += wij * s_m[s_idx] / s_rhotmp[s_idx] d_rho[d_idx] += wij * s_m[s_idx] d_rho[d_idx] /= tmp_w class MLSFirstOrder2D(Equation): r"""**Moving Least Squares density reinitialization** This is a first order density reinitialization .. math:: W_{ab}^{MLS} = \beta\left(\mathbf{r_{a}}\right)\cdot\left( \mathbf{r}_a - \mathbf{r}_b\right)W_{ab} .. math:: \beta\left(\mathbf{r_{a}}\right) = A^{-1} \left[1 0 0\right]^{T} where .. math:: A = \sum_{b}W_{ab}\tilde{A}\frac{m_{b}}{\rho_{b}} .. math:: \tilde{A} = pp^{T} where .. math:: p = \left[1 x_{a}-x_{b} y_{a}-y_{b}\right]^{T} .. math:: \rho_{a} = \sum_{b} \m_{b}W_{ab}^{MLS} References ---------- .. [Dilts, 1999] Dilts, G. A. Moving-Least-Squares-Particle Hydrodynamics - I. Consistency and stability, Int. J. Numer. Meth. Engng, 1999. """ def _get_helpers_(self): return [gj_solve, augmented_matrix] def initialize(self, d_idx, d_rho, d_rhotmp): d_rhotmp[d_idx] = d_rho[d_idx] def loop_all(self, d_idx, d_rho, d_x, d_y, s_x, s_y, d_h, s_h, s_m, s_rhotmp, SPH_KERNEL, NBRS, N_NBRS): n, i, j, k, s_idx = declare('int', 5) n = 3 amls = declare('matrix(9)') aug_mls = declare('matrix(12)') x = d_x[d_idx] y = d_y[d_idx] xij = declare('matrix(3)') for i in range(n): for j in range(n): amls[n * i + j] = 0.0 for k in range(N_NBRS): s_idx = NBRS[k] xij[0] = x - s_x[s_idx] xij[1] = y - s_y[s_idx] xij[2] = 0. 
            rij = sqrt(xij[0] * xij[0] + xij[1] * xij[1])
            hij = (d_h[d_idx] + s_h[s_idx]) * 0.5
            wij = SPH_KERNEL.kernel(xij, rij, hij)
            for i in range(n):
                if i == 0:
                    fac1 = 1.0
                else:
                    fac1 = xij[i - 1]
                for j in range(n):
                    if j == 0:
                        fac2 = 1.0
                    else:
                        fac2 = xij[j - 1]
                    amls[n * i + j] += fac1 * fac2 * \
                        s_m[s_idx] * wij / s_rhotmp[s_idx]
        res = declare('matrix(3)')
        res[0] = 1.0
        res[1] = 0.0
        res[2] = 0.0
        augmented_matrix(amls, res, 3, 1, 3, aug_mls)
        gj_solve(aug_mls, n, 1, res)
        b0 = res[0]
        b1 = res[1]
        b2 = res[2]
        d_rho[d_idx] = 0.0
        for k in range(N_NBRS):
            s_idx = NBRS[k]
            xij[0] = x - s_x[s_idx]
            xij[1] = y - s_y[s_idx]
            xij[2] = 0.
            rij = sqrt(xij[0] * xij[0] + xij[1] * xij[1])
            hij = (d_h[d_idx] + s_h[s_idx]) * 0.5
            wij = SPH_KERNEL.kernel(xij, rij, hij)
            wmls = (b0 + b1 * xij[0] + b2 * xij[1]) * wij
            d_rho[d_idx] += s_m[s_idx] * wmls


class MLSFirstOrder3D(Equation):
    def _get_helpers_(self):
        return [gj_solve, augmented_matrix]

    def initialize(self, d_idx, d_rho, d_rhotmp):
        d_rhotmp[d_idx] = d_rho[d_idx]

    def loop_all(self, d_idx, d_rho, d_x, d_y, d_z, s_x, s_y, s_z, d_h,
                 s_h, s_m, s_rhotmp, SPH_KERNEL, NBRS, N_NBRS):
        n, i, j, k, s_idx = declare('int', 5)
        n = 4
        amls = declare('matrix(16)')
        aug_mls = declare('matrix(20)')
        x = d_x[d_idx]
        y = d_y[d_idx]
        z = d_z[d_idx]
        xij = declare('matrix(4)')
        for i in range(n):
            for j in range(n):
                amls[n * i + j] = 0.0
        for k in range(N_NBRS):
            s_idx = NBRS[k]
            xij[0] = x - s_x[s_idx]
            xij[1] = y - s_y[s_idx]
            xij[2] = z - s_z[s_idx]
            rij = sqrt(xij[0] * xij[0] + xij[1] * xij[1] + xij[2] * xij[2])
            hij = (d_h[d_idx] + s_h[s_idx]) * 0.5
            wij = SPH_KERNEL.kernel(xij, rij, hij)
            for i in range(n):
                if i == 0:
                    fac1 = 1.0
                else:
                    fac1 = xij[i - 1]
                for j in range(n):
                    if j == 0:
                        fac2 = 1.0
                    else:
                        fac2 = xij[j - 1]
                    amls[n * i + j] += fac1 * fac2 * \
                        s_m[s_idx] * wij / s_rhotmp[s_idx]
        res = declare('matrix(4)')
        res[0] = 1.0
        res[1] = 0.0
        res[2] = 0.0
        res[3] = 0.0
        # augmented_matrix(A, b, n, na, nmax, result)
        augmented_matrix(amls, res, n, 1, n, aug_mls)
        gj_solve(aug_mls, n, 1, res)
        b0 = res[0]
        b1 = res[1]
        b2 = res[2]
        b3 = res[3]
        d_rho[d_idx] = 0.0
        for k in range(N_NBRS):
            s_idx = NBRS[k]
            xij[0] = x - s_x[s_idx]
            xij[1] = y - s_y[s_idx]
            xij[2] = z - s_z[s_idx]
            rij = sqrt(xij[0] * xij[0] + xij[1] * xij[1] + xij[2] * xij[2])
            hij = (d_h[d_idx] + s_h[s_idx]) * 0.5
            wij = SPH_KERNEL.kernel(xij, rij, hij)
            wmls = (b0 + b1 * xij[0] + b2 * xij[1] + b3 * xij[2]) * wij
            d_rho[d_idx] += s_m[s_idx] * wmls
pysph-master/pysph/sph/wc/edac.py
"""
EDAC SPH formulation
#####################

Equations for the Entropically Damped Artificial Compressibility SPH scheme.

Please note that this scheme is still under development and this module may
change at some point in the future.

References
----------

.. [PRKP2017] Prabhu Ramachandran and Kunal Puri, Entropically damped
   artificial compressibility for SPH, under review, 2017.
   http://arxiv.org/pdf/1311.2167v2.pdf
"""

from math import sin
from math import pi as M_PI

from pysph.base.utils import get_particle_array
from pysph.base.utils import DEFAULT_PROPS
from pysph.sph.equation import Equation, Group
from pysph.sph.integrator_step import IntegratorStep
from pysph.sph.scheme import Scheme, add_bool_argument


EDAC_PROPS = ('ap', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'x0', 'y0', 'z0',
              'u0', 'v0', 'w0', 'p0', 'V')


def get_particle_array_edac(constants=None, **props):
    "Get the fluid array for the transport velocity formulation"
    pa = get_particle_array(
        constants=constants, additional_props=EDAC_PROPS, **props
    )
    pa.set_output_arrays(['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'p',
                          'au', 'av', 'aw', 'ap', 'm', 'h'])
    return pa


EDAC_SOLID_PROPS = ('ap', 'p0', 'wij', 'uf', 'vf', 'wf', 'ug', 'vg', 'wg',
                    'ax', 'ay', 'az', 'V')


def get_particle_array_edac_solid(constants=None, **props):
    "Get the solid (boundary) array for the transport velocity formulation"
    pa = get_particle_array(
        constants=constants, additional_props=EDAC_SOLID_PROPS, **props
    )
    pa.set_output_arrays(['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'p', 'h'])
    return
pa class ComputeAveragePressure(Equation): """Simple function to compute the average pressure at each particle. This is used for the Basa, Quinlan and Lastiwka correction from their 2009 paper. This equation should be in a separate group and computed before the Momentum equation. """ def initialize(self, d_idx, d_pavg, d_nnbr): d_pavg[d_idx] = 0.0 d_nnbr[d_idx] = 0.0 def loop(self, d_idx, d_pavg, s_idx, s_p, d_nnbr): d_pavg[d_idx] += s_p[s_idx] d_nnbr[d_idx] += 1.0 def post_loop(self, d_idx, d_pavg, d_nnbr): if d_nnbr[d_idx] > 0: d_pavg[d_idx] /= d_nnbr[d_idx] class EDACStep(IntegratorStep): """Standard Predictor Corrector integrator for the WCSPH formulation Use this integrator for WCSPH formulations. In the predictor step, the particles are advanced to `t + dt/2`. The particles are then advanced with the new force computed at this position. This integrator can be used in PEC or EPEC mode. The same integrator can be used for other problems. Like for example solid mechanics (see SolidMechStep) """ def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_p0, d_p): d_x0[d_idx] = d_x[d_idx] d_y0[d_idx] = d_y[d_idx] d_z0[d_idx] = d_z[d_idx] d_u0[d_idx] = d_u[d_idx] d_v0[d_idx] = d_v[d_idx] d_w0[d_idx] = d_w[d_idx] d_p0[d_idx] = d_p[d_idx] def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_p0, d_p, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_ap, dt=0.0): dtb2 = 0.5*dt d_u[d_idx] = d_u0[d_idx] + dtb2*d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + dtb2*d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dtb2*d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dtb2 * d_ax[d_idx] d_y[d_idx] = d_y0[d_idx] + dtb2 * d_ay[d_idx] d_z[d_idx] = d_z0[d_idx] + dtb2 * d_az[d_idx] d_p[d_idx] = d_p0[d_idx] + dtb2 * d_ap[d_idx] def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_p0, d_p, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_ap, dt=0.0): d_u[d_idx] = d_u0[d_idx] + dt*d_au[d_idx] d_v[d_idx] = d_v0[d_idx] + 
dt*d_av[d_idx] d_w[d_idx] = d_w0[d_idx] + dt*d_aw[d_idx] d_x[d_idx] = d_x0[d_idx] + dt * d_ax[d_idx] d_y[d_idx] = d_y0[d_idx] + dt * d_ay[d_idx] d_z[d_idx] = d_z0[d_idx] + dt * d_az[d_idx] d_p[d_idx] = d_p0[d_idx] + dt * d_ap[d_idx] class SolidWallPressureBC(Equation): r"""Solid wall pressure boundary condition from Adami and Hu (transport velocity formulation). """ def __init__(self, dest, sources, gx=0.0, gy=0.0, gz=0.0): self.gx = gx self.gy = gy self.gz = gz super(SolidWallPressureBC, self).__init__(dest, sources) def initialize(self, d_idx, d_p): d_p[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_p, s_p, s_rho, d_au, d_av, d_aw, WIJ, XIJ): # numerator of Eq. (27) ax, ay and az are the prescribed wall # accelerations which must be defined for the wall boundary # particle gdotxij = (self.gx - d_au[d_idx])*XIJ[0] + \ (self.gy - d_av[d_idx])*XIJ[1] + \ (self.gz - d_aw[d_idx])*XIJ[2] d_p[d_idx] += s_p[s_idx]*WIJ + s_rho[s_idx]*gdotxij*WIJ def post_loop(self, d_idx, d_wij, d_p): # extrapolated pressure at the ghost particle if d_wij[d_idx] > 1e-14: d_p[d_idx] /= d_wij[d_idx] class ClampWallPressure(Equation): r"""Clamp the wall pressure to non-negative values. """ def post_loop(self, d_idx, d_p): if d_p[d_idx] < 0.0: d_p[d_idx] = 0.0 class SourceNumberDensity(Equation): r"""Evaluates the number density due to the source particles""" def initialize(self, d_idx, d_wij): d_wij[d_idx] = 0.0 def loop(self, d_idx, d_wij, WIJ): d_wij[d_idx] += WIJ class SetWallVelocity(Equation): r"""Extrapolating the fluid velocity on to the wall Eq. (22) in REF1: .. math:: \tilde{\boldsymbol{v}}_a = \frac{\sum_b\boldsymbol{v}_b W_{ab}} {\sum_b W_{ab}} Notes: This should be used only after (or in the same group) as the SolidWallPressureBC equation. The destination particle array for this equation should define the *filtered* velocity variables :math:`uf, vf, wf`. 
""" def initialize(self, d_idx, d_uf, d_vf, d_wf): d_uf[d_idx] = 0.0 d_vf[d_idx] = 0.0 d_wf[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_uf, d_vf, d_wf, s_u, s_v, s_w, WIJ): # sum in Eq. (22) # this will be normalized in post loop d_uf[d_idx] += s_u[s_idx] * WIJ d_vf[d_idx] += s_v[s_idx] * WIJ d_wf[d_idx] += s_w[s_idx] * WIJ def post_loop(self, d_uf, d_vf, d_wf, d_wij, d_idx, d_ug, d_vg, d_wg, d_u, d_v, d_w): # calculation is done only for the relevant boundary particles. # d_wij (and d_uf) is 0 for particles sufficiently away from the # solid-fluid interface # Note that d_wij is already computed for the pressure BC. if d_wij[d_idx] > 1e-12: d_uf[d_idx] /= d_wij[d_idx] d_vf[d_idx] /= d_wij[d_idx] d_wf[d_idx] /= d_wij[d_idx] # Dummy velocities at the ghost points using Eq. (23), # d_u, d_v, d_w are the prescribed wall velocities. d_ug[d_idx] = 2*d_u[d_idx] - d_uf[d_idx] d_vg[d_idx] = 2*d_v[d_idx] - d_vf[d_idx] d_wg[d_idx] = 2*d_w[d_idx] - d_wf[d_idx] class NoSlipVelocityExtrapolation(Equation): '''No Slip boundary condition on the wall The velocity of the fluid is extrapolated over to the wall using shepard extrapolation. The velocity normal to the wall is reflected back to impose no penetration. 
''' def initialize(self, d_idx, d_u, d_v, d_w): d_u[d_idx] = 0.0 d_v[d_idx] = 0.0 d_w[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_u, d_v, d_w, s_u, s_v, s_w, WIJ, XIJ): d_u[d_idx] += s_u[s_idx]*WIJ d_v[d_idx] += s_v[s_idx]*WIJ d_w[d_idx] += s_w[s_idx]*WIJ def post_loop(self, d_idx, d_wij, d_u, d_v, d_w, d_xn, d_yn, d_zn): if d_wij[d_idx] > 1e-14: d_u[d_idx] /= d_wij[d_idx] d_v[d_idx] /= d_wij[d_idx] d_w[d_idx] /= d_wij[d_idx] projection = d_u[d_idx]*d_xn[d_idx] +\ d_v[d_idx]*d_yn[d_idx] + d_w[d_idx]*d_zn[d_idx] d_u[d_idx] = d_u[d_idx] - 2 * projection * d_xn[d_idx] d_v[d_idx] = d_v[d_idx] - 2 * projection * d_yn[d_idx] d_w[d_idx] = d_w[d_idx] - 2 * projection * d_zn[d_idx] class NoSlipAdvVelocityExtrapolation(Equation): '''No Slip boundary condition on the wall The advection velocity of the fluid is extrapolated over to the wall using shepard extrapolation. The advection velocity normal to the wall is reflected back to impose no penetration. ''' def initialize(self, d_idx, d_uhat, d_vhat, d_what): d_uhat[d_idx] = 0.0 d_vhat[d_idx] = 0.0 d_what[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_uhat, d_vhat, d_what, s_uhat, s_vhat, s_what, WIJ, XIJ): d_uhat[d_idx] += s_uhat[s_idx]*WIJ d_vhat[d_idx] += s_vhat[s_idx]*WIJ d_what[d_idx] += s_what[s_idx]*WIJ def post_loop(self, d_idx, d_wij, d_uhat, d_vhat, d_what, d_xn, d_yn, d_zn): if d_wij[d_idx] > 1e-14: d_uhat[d_idx] /= d_wij[d_idx] d_vhat[d_idx] /= d_wij[d_idx] d_what[d_idx] /= d_wij[d_idx] projection = d_uhat[d_idx]*d_xn[d_idx] +\ d_vhat[d_idx]*d_yn[d_idx] + d_what[d_idx]*d_zn[d_idx] d_uhat[d_idx] = d_uhat[d_idx] - 2 * projection * d_xn[d_idx] d_vhat[d_idx] = d_vhat[d_idx] - 2 * projection * d_yn[d_idx] d_what[d_idx] = d_what[d_idx] - 2 * projection * d_zn[d_idx] class MomentumEquation(Equation): r"""Momentum equation (gradient of pressure) based on the number density formulation of Hu and Adams JCP 213 (2006), 844-861. 
""" def __init__(self, dest, sources, c0, gx=0.0, gy=0.0, gz=0.0, tdamp=0.0): self.gx = gx self.gy = gy self.gz = gz self.c0 = c0 self.tdamp = tdamp super(MomentumEquation, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_m, d_rho, d_p, d_V, d_au, d_av, d_aw, s_m, s_rho, s_p, s_V, DWIJ): rhoi = d_rho[d_idx] rhoj = s_rho[s_idx] p_i = d_p[d_idx] p_j = s_p[s_idx] pij = rhoj * p_i + rhoi * p_j pij /= (rhoj + rhoi) Vi = 1./d_V[d_idx] Vj = 1./s_V[s_idx] Vi2 = Vi * Vi Vj2 = Vj * Vj # inverse mass of destination particle mi1 = 1.0/d_m[d_idx] tmp = -pij * mi1 * (Vi2 + Vj2) d_au[d_idx] += tmp * DWIJ[0] d_av[d_idx] += tmp * DWIJ[1] d_aw[d_idx] += tmp * DWIJ[2] def post_loop(self, d_idx, d_au, d_av, d_aw, t): damping_factor = 1.0 if t < self.tdamp: damping_factor = 0.5 * (sin((-0.5 + t/self.tdamp)*M_PI) + 1.0) d_au[d_idx] += damping_factor*self.gx d_av[d_idx] += damping_factor*self.gy d_aw[d_idx] += damping_factor*self.gz class EDACEquation(Equation): def __init__(self, dest, sources, cs, nu, rho0): self.cs = cs self.nu = nu self.rho0 = rho0 super(EDACEquation, self).__init__(dest, sources) def initialize(self, d_idx, d_ap): d_ap[d_idx] = 0.0 def loop(self, d_idx, d_m, d_rho, d_ap, d_p, d_V, s_idx, s_m, s_rho, s_p, s_V, DWIJ, VIJ, XIJ, R2IJ, EPS): Vi = 1./d_V[d_idx] Vj = 1./s_V[s_idx] Vi2 = Vi * Vi Vj2 = Vj * Vj etai = d_rho[d_idx] etaj = s_rho[s_idx] etaij = 2 * self.nu * (etai * etaj)/(etai + etaj) # This is the same as continuity acceleration times cs^2 rhoi = d_rho[d_idx] rhoj = s_rho[s_idx] vijdotdwij = DWIJ[0]*VIJ[0] + DWIJ[1]*VIJ[1] + DWIJ[2]*VIJ[2] d_ap[d_idx] += rhoi/rhoj*self.cs*self.cs*s_m[s_idx]*vijdotdwij # Viscous damping of pressure. 
        xijdotdwij = DWIJ[0]*XIJ[0] + DWIJ[1]*XIJ[1] + DWIJ[2]*XIJ[2]
        tmp = 1.0/d_m[d_idx]*(Vi2 + Vj2)*etaij*xijdotdwij/(R2IJ + EPS)
        d_ap[d_idx] += tmp*(d_p[d_idx] - s_p[s_idx])


class MomentumEquationPressureGradient(Equation):
    r"""Momentum equation for internal flows with EDAC.

    This uses the basic formulation from the TVF scheme but modifies it to
    subtract an average pressure, following the Basa, Quinlan and Lastiwka
    correction from their 2009 paper.
    """
    def __init__(self, dest, sources, pb, gx=0., gy=0., gz=0., tdamp=0.0):
        r"""
        Parameters
        ----------
        pb : float
            background pressure
        gx : float
            Body force per unit mass along the x-axis
        gy : float
            Body force per unit mass along the y-axis
        gz : float
            Body force per unit mass along the z-axis
        tdamp : float
            damping time

        Notes
        -----
        This equation should have the destination as fluid and sources as
        fluid and boundary particles.

        This function also computes the contribution to the background
        pressure and accelerations due to a body force or gravity.

        The body forces are damped according to Eq. (13) in [Adami2012] to
        avoid instantaneous accelerations. By default, damping is
        neglected.
        """
        self.pb = pb
        self.gx = gx
        self.gy = gy
        self.gz = gz
        self.tdamp = tdamp
        super(MomentumEquationPressureGradient, self).__init__(
            dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw, d_auhat, d_avhat,
                   d_awhat):
        d_au[d_idx] = 0.0
        d_av[d_idx] = 0.0
        d_aw[d_idx] = 0.0
        d_auhat[d_idx] = 0.0
        d_avhat[d_idx] = 0.0
        d_awhat[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_m, d_rho, s_rho, d_au, d_av, d_aw, d_p,
             d_pavg, s_p, d_auhat, d_avhat, d_awhat, d_V, s_V, DWIJ):
        # averaged pressure Eq. (7)
        rhoi = d_rho[d_idx]
        rhoj = s_rho[s_idx]
        pavg = d_pavg[d_idx]
        pi = d_p[d_idx]
        pj = s_p[s_idx]

        pij = rhoj * (pi - pavg) + rhoi * (pj - pavg)
        pij /= (rhoj + rhoi)

        # particle volumes
        Vi = 1./d_V[d_idx]
        Vj = 1./s_V[s_idx]
        Vi2 = Vi * Vi
        Vj2 = Vj * Vj

        # inverse mass of destination particle
        mi1 = 1.0/d_m[d_idx]

        # accelerations, 1st term in Eq. (8)
        tmp = -pij * mi1 * (Vi2 + Vj2)

        d_au[d_idx] += tmp * DWIJ[0]
        d_av[d_idx] += tmp * DWIJ[1]
        d_aw[d_idx] += tmp * DWIJ[2]

        # contribution due to the background pressure Eq. (13)
        tmp = -self.pb * mi1 * (Vi2 + Vj2)

        d_auhat[d_idx] += tmp * DWIJ[0]
        d_avhat[d_idx] += tmp * DWIJ[1]
        d_awhat[d_idx] += tmp * DWIJ[2]

    def post_loop(self, d_idx, d_au, d_av, d_aw, t):
        # damped accelerations due to body or external force
        damping_factor = 1.0
        if t < self.tdamp:
            damping_factor = 0.5 * (sin((-0.5 + t/self.tdamp)*M_PI) + 1.0)
        d_au[d_idx] += self.gx * damping_factor
        d_av[d_idx] += self.gy * damping_factor
        d_aw[d_idx] += self.gz * damping_factor


class EDACTVFStep(IntegratorStep):
    def initialize(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
                   d_u0, d_v0, d_w0, d_u, d_v, d_w, d_p0, d_p):
        d_x0[d_idx] = d_x[d_idx]
        d_y0[d_idx] = d_y[d_idx]
        d_z0[d_idx] = d_z[d_idx]

        d_u0[d_idx] = d_u[d_idx]
        d_v0[d_idx] = d_v[d_idx]
        d_w0[d_idx] = d_w[d_idx]

        d_p0[d_idx] = d_p[d_idx]

    def stage1(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
               d_u0, d_v0, d_w0, d_u, d_v, d_w, d_p0, d_p, d_au, d_av,
               d_auhat, d_avhat, d_awhat, d_uhat, d_vhat, d_what, d_aw,
               d_ap, dt=0.0):
        dtb2 = 0.5*dt

        d_u[d_idx] = d_u0[d_idx] + dtb2*d_au[d_idx]
        d_v[d_idx] = d_v0[d_idx] + dtb2*d_av[d_idx]
        d_w[d_idx] = d_w0[d_idx] + dtb2*d_aw[d_idx]

        d_uhat[d_idx] = d_u[d_idx] + dtb2*d_auhat[d_idx]
        d_vhat[d_idx] = d_v[d_idx] + dtb2*d_avhat[d_idx]
        d_what[d_idx] = d_w[d_idx] + dtb2*d_awhat[d_idx]

        d_x[d_idx] = d_x0[d_idx] + dtb2 * d_uhat[d_idx]
        d_y[d_idx] = d_y0[d_idx] + dtb2 * d_vhat[d_idx]
        d_z[d_idx] = d_z0[d_idx] + dtb2 * d_what[d_idx]

        d_p[d_idx] = d_p0[d_idx] + dtb2 * d_ap[d_idx]

    def stage2(self, d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z,
               d_u0, d_v0, d_w0, d_u, d_v, d_w, d_p0, d_p, d_au, d_av,
               d_aw, d_auhat, d_avhat, d_awhat, d_uhat, d_vhat, d_what,
               d_ap, dt=0.0):
        d_u[d_idx] = d_u0[d_idx] + dt*d_au[d_idx]
        d_v[d_idx] = d_v0[d_idx] + dt*d_av[d_idx]
        d_w[d_idx] = d_w0[d_idx] + dt*d_aw[d_idx]

        d_uhat[d_idx] = d_u[d_idx] + dt*d_auhat[d_idx]
        d_vhat[d_idx] = d_v[d_idx] + dt*d_avhat[d_idx]
        d_what[d_idx] = d_w[d_idx] + dt*d_awhat[d_idx]

        d_x[d_idx] = d_x0[d_idx] + dt * d_uhat[d_idx]
        d_y[d_idx] = d_y0[d_idx] + dt * d_vhat[d_idx]
        d_z[d_idx] = d_z0[d_idx] + dt * d_what[d_idx]

        d_p[d_idx] = d_p0[d_idx] + dt * d_ap[d_idx]


class EDACScheme(Scheme):
    def __init__(self, fluids, solids, dim, c0, nu, rho0, pb=0.0,
                 gx=0.0, gy=0.0, gz=0.0, tdamp=0.0, eps=0.0, h=0.0,
                 edac_alpha=0.5, alpha=0.0, bql=True, clamp_p=False,
                 inlet_outlet_manager=None, inviscid_solids=None):
        """The EDAC scheme.

        Parameters
        ----------

        fluids : list(str)
            List of names of fluid particle arrays.
        solids : list(str)
            List of names of solid particle arrays.
        dim: int
            Dimensionality of the problem.
        c0 : float
            Speed of sound.
        nu : float
            Kinematic viscosity.
        rho0 : float
            Density of fluid.
        pb : float
            Background pressure value; if unset or zero, a different
            formulation is used, else the TVF with EDAC is used.
        gx, gy, gz : float
            Components of body acceleration (gravity, external forcing,
            etc.)
        tdamp: float
            Time for which the acceleration should be damped.
        eps : float
            XSPH smoothing factor, defaults to zero.
        h : float
            Parameter h used for the particles -- used to calculate
            viscosity.
        edac_alpha : float
            Factor to use for viscosity.
        alpha : float
            Factor to use for artificial viscosity.
        bql : bool
            Use the Basa-Quinlan-Lastiwka correction.
        clamp_p : bool
            Clamp the boundary pressure to non-negative values. This is
            only used for external flows.
        inlet_outlet_manager : InletOutletManager instance
            Pass the manager if inlet/outlet boundaries are present.
        inviscid_solids : list
            List of inviscid solid array names.
        """
        self.c0 = c0
        self.nu = nu
        self.rho0 = rho0
        self.gx = gx
        self.gy = gy
        self.gz = gz
        self.tdamp = tdamp
        self.dim = dim
        self.eps = eps
        self.fluids = fluids
        self.solids = solids
        self.pb = pb
        self.solver = None
        self.bql = bql
        self.clamp_p = clamp_p
        self.edac_alpha = edac_alpha
        self.alpha = alpha
        self.h = h
        self.inlet_outlet_manager = inlet_outlet_manager
        self.inviscid_solids = [] if inviscid_solids is None else\
            inviscid_solids
        self.attributes_changed()

    # Public protocol ###################################################
    def add_user_options(self, group):
        group.add_argument(
            "--alpha", action="store", type=float, dest="alpha",
            default=None,
            help="Alpha for the artificial viscosity."
        )
        group.add_argument(
            "--edac-alpha", action="store", type=float, dest="edac_alpha",
            default=None,
            help="Alpha for the EDAC scheme viscosity."
        )
        add_bool_argument(
            group, 'clamp-pressure', dest='clamp_p',
            help="Clamp pressure on boundaries to be non-negative.",
            default=None
        )
        add_bool_argument(
            group, 'use-bql', dest='bql',
            help="Use the Basa-Quinlan-Lastiwka correction.",
            default=None
        )
        group.add_argument(
            "--tdamp", action="store", type=float, dest="tdamp",
            default=None,
            help="Time for which the accelerations are damped."
        )

    def consume_user_options(self, options):
        vars = ['alpha', 'edac_alpha', 'clamp_p', 'bql', 'tdamp']
        data = dict((var, self._smart_getattr(options, var))
                    for var in vars)
        self.configure(**data)

    def attributes_changed(self):
        if self.pb is not None:
            self.use_tvf = abs(self.pb) > 1e-14
        if self.h is not None and self.c0 is not None:
            self.art_nu = self.edac_alpha*self.h*self.c0/8

    def configure_solver(self, kernel=None, integrator_cls=None,
                         extra_steppers=None, **kw):
        """Configure the solver to be generated.

        This is to be called before `get_solver` is called.

        Parameters
        ----------
        kernel : Kernel instance
            Kernel to use; if none is passed a default one is used.
        integrator_cls : pysph.sph.integrator.Integrator
            Integrator class to use; a sensible default is used if none
            is passed.
        extra_steppers : dict
            Additional integration stepper instances as a dict.
        **kw : extra arguments
            Any additional keyword args are passed to the solver instance.
        """
        from pysph.base.kernels import QuinticSpline
        from pysph.sph.integrator import PECIntegrator
        if kernel is None:
            kernel = QuinticSpline(dim=self.dim)
        steppers = {}
        if extra_steppers is not None:
            steppers.update(extra_steppers)

        step_cls = EDACTVFStep if self.use_tvf else EDACStep
        cls = (integrator_cls if integrator_cls is not None
               else PECIntegrator)

        for fluid in self.fluids:
            if fluid not in steppers:
                steppers[fluid] = step_cls()

        iom = self.inlet_outlet_manager
        if iom is not None:
            iom_stepper = iom.get_stepper(self, cls, self.use_tvf)
            for name in iom_stepper:
                steppers[name] = iom_stepper[name]

        integrator = cls(**steppers)

        from pysph.solver.solver import Solver
        self.solver = Solver(
            dim=self.dim, integrator=integrator, kernel=kernel, **kw
        )
        if iom is not None:
            iom.setup_iom(dim=self.dim, kernel=kernel)

    def get_equations(self):
        if self.use_tvf:
            return self._get_internal_flow_equations()
        else:
            return self._get_external_flow_equations()

    def get_solver(self):
        return self.solver

    def setup_properties(self, particles, clean=True):
        """Setup the particle arrays so they have the right set of
        properties for this scheme.

        Parameters
        ----------

        particles : list
            List of particle arrays.
        clean : bool
            If True, removes any unnecessary properties.
""" particle_arrays = dict([(p.name, p) for p in particles]) TVF_FLUID_PROPS = set([ 'uhat', 'vhat', 'what', 'ap', 'auhat', 'avhat', 'awhat', 'V', 'p0', 'u0', 'v0', 'w0', 'x0', 'y0', 'z0', 'pavg', 'nnbr' ]) extra_props = TVF_FLUID_PROPS if self.use_tvf else EDAC_PROPS all_fluid_props = DEFAULT_PROPS.union(extra_props) iom = self.inlet_outlet_manager fluids_with_io = self.fluids if iom is not None: io_particles = iom.get_io_names(ghost=True) fluids_with_io = self.fluids + io_particles for fluid in fluids_with_io: pa = particle_arrays[fluid] self._ensure_properties(pa, all_fluid_props, clean) pa.set_output_arrays(['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'p', 'm', 'h', 'V']) if 'pavg' in pa.properties: pa.add_output_arrays(['pavg']) if iom is not None: iom.add_io_properties(pa, self) TVF_SOLID_PROPS = ['V', 'wij', 'ax', 'ay', 'az', 'uf', 'vf', 'wf', 'ug', 'vg', 'wg'] if self.inviscid_solids: TVF_SOLID_PROPS += ['xn', 'yn', 'zn', 'uhat', 'vhat', 'what'] extra_props = TVF_SOLID_PROPS if self.use_tvf else EDAC_SOLID_PROPS all_solid_props = DEFAULT_PROPS.union(extra_props) for solid in (self.solids+self.inviscid_solids): pa = particle_arrays[solid] self._ensure_properties(pa, all_solid_props, clean) pa.set_output_arrays(['x', 'y', 'z', 'u', 'v', 'w', 'rho', 'p', 'm', 'h', 'V']) # Private protocol ################################################### def _get_edac_nu(self): if self.art_nu > 0: nu = self.art_nu print("Using artificial viscosity for EDAC with nu = %s" % nu) else: nu = self.nu print("Using real viscosity for EDAC with nu = %s" % self.nu) return nu def _get_internal_flow_equations(self): from pysph.sph.wc.transport_velocity import ( VolumeSummation, SolidWallNoSlipBC, SummationDensity, MomentumEquationArtificialStress, MomentumEquationArtificialViscosity, MomentumEquationViscosity ) edac_nu = self._get_edac_nu() iom = self.inlet_outlet_manager fluids_with_io = self.fluids all_solids = self.solids + self.inviscid_solids if iom is not None: fluids_with_io = 
self.fluids + iom.get_io_names() all = fluids_with_io + all_solids equations = [] # inlet-outlet if iom is not None: io_eqns = iom.get_equations(self, self.use_tvf) for grp in io_eqns: equations.append(grp) group1 = [] avg_p_group = [] has_solids = len(all_solids) > 0 for fluid in fluids_with_io: group1.append(SummationDensity(dest=fluid, sources=all)) if self.bql: eq = ComputeAveragePressure(dest=fluid, sources=all) if has_solids: avg_p_group.append(eq) else: group1.append(eq) for solid in self.solids: group1.extend([ SourceNumberDensity(dest=solid, sources=fluids_with_io), VolumeSummation(dest=solid, sources=all), SolidWallPressureBC(dest=solid, sources=fluids_with_io, gx=self.gx, gy=self.gy, gz=self.gz), SetWallVelocity(dest=solid, sources=fluids_with_io), ]) for solid in self.inviscid_solids: group1.extend([ SourceNumberDensity(dest=solid, sources=fluids_with_io), NoSlipVelocityExtrapolation( dest=solid, sources=fluids_with_io), NoSlipAdvVelocityExtrapolation( dest=solid, sources=fluids_with_io), VolumeSummation(dest=solid, sources=all), SolidWallPressureBC(dest=solid, sources=fluids_with_io, gx=self.gx, gy=self.gy, gz=self.gz) ]) equations.append(Group(equations=group1, real=False)) # Compute average pressure *after* the wall pressure is setup. 
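        # When solid walls are present, ComputeAveragePressure must run
        # only after SolidWallPressureBC has extrapolated pressures onto
        # the wall particles, hence the separate avg_p_group appended
        # below; without solids it is evaluated directly inside group1.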
        if self.bql and has_solids:
            equations.append(Group(equations=avg_p_group, real=True))

        group2 = []
        for fluid in self.fluids:
            group2.append(
                MomentumEquationPressureGradient(
                    dest=fluid, sources=all, pb=self.pb, gx=self.gx,
                    gy=self.gy, gz=self.gz, tdamp=self.tdamp
                )
            )
            if self.alpha > 0.0:
                sources = fluids_with_io + self.solids
                group2.append(
                    MomentumEquationArtificialViscosity(
                        dest=fluid, sources=sources, alpha=self.alpha,
                        c0=self.c0
                    )
                )
            if self.nu > 0.0:
                group2.append(
                    MomentumEquationViscosity(
                        dest=fluid, sources=fluids_with_io, nu=self.nu
                    )
                )
            if len(self.solids) > 0 and self.nu > 0.0:
                group2.append(
                    SolidWallNoSlipBC(
                        dest=fluid, sources=self.solids, nu=self.nu
                    )
                )
            group2.extend([
                MomentumEquationArtificialStress(
                    dest=fluid, sources=fluids_with_io
                ),
                EDACEquation(
                    dest=fluid, sources=all, nu=edac_nu, cs=self.c0,
                    rho0=self.rho0
                ),
            ])
        equations.append(Group(equations=group2))

        # inlet-outlet
        if iom is not None:
            io_eqns = iom.get_equations_post_compute_acceleration()
            for grp in io_eqns:
                equations.append(grp)

        return equations

    def _get_external_flow_equations(self):
        from pysph.sph.basic_equations import XSPHCorrection
        from pysph.sph.wc.transport_velocity import (
            VolumeSummation, SolidWallNoSlipBC, SummationDensity,
            MomentumEquationArtificialViscosity, MomentumEquationViscosity
        )

        iom = self.inlet_outlet_manager
        fluids_with_io = self.fluids
        all_solids = self.solids + self.inviscid_solids
        if iom is not None:
            fluids_with_io = self.fluids + iom.get_io_names()
        all = fluids_with_io + all_solids
        edac_nu = self._get_edac_nu()

        equations = []
        # inlet-outlet
        if iom is not None:
            io_eqns = iom.get_equations(self, self.use_tvf)
            for grp in io_eqns:
                equations.append(grp)

        group1 = []
        for fluid in fluids_with_io:
            group1.append(SummationDensity(dest=fluid, sources=all))
        for solid in self.solids:
            group1.extend([
                SourceNumberDensity(dest=solid, sources=fluids_with_io),
                VolumeSummation(dest=solid, sources=all),
                SolidWallPressureBC(dest=solid, sources=fluids_with_io,
                                    gx=self.gx, gy=self.gy, gz=self.gz),
                SetWallVelocity(dest=solid, sources=fluids_with_io),
            ])
            if self.clamp_p:
                group1.append(
                    ClampWallPressure(dest=solid, sources=None)
                )
        for solid in self.inviscid_solids:
            group1.extend([
                SourceNumberDensity(dest=solid, sources=fluids_with_io),
                NoSlipVelocityExtrapolation(
                    dest=solid, sources=fluids_with_io),
                VolumeSummation(dest=solid, sources=all),
                SolidWallPressureBC(dest=solid, sources=fluids_with_io,
                                    gx=self.gx, gy=self.gy, gz=self.gz)
            ])

        equations.append(Group(equations=group1, real=False))

        group2 = []
        for fluid in self.fluids:
            group2.append(
                MomentumEquation(
                    dest=fluid, sources=all, gx=self.gx, gy=self.gy,
                    gz=self.gz, c0=self.c0, tdamp=self.tdamp
                )
            )
            if self.alpha > 0.0:
                sources = fluids_with_io + self.solids
                group2.append(
                    MomentumEquationArtificialViscosity(
                        dest=fluid, sources=sources, alpha=self.alpha,
                        c0=self.c0
                    )
                )
            if self.nu > 0.0:
                group2.append(
                    MomentumEquationViscosity(
                        dest=fluid, sources=fluids_with_io, nu=self.nu
                    )
                )
            if len(self.solids) > 0 and self.nu > 0.0:
                group2.append(
                    SolidWallNoSlipBC(
                        dest=fluid, sources=self.solids, nu=self.nu
                    )
                )
            group2.extend([
                EDACEquation(
                    dest=fluid, sources=all, nu=edac_nu, cs=self.c0,
                    rho0=self.rho0
                ),
                XSPHCorrection(dest=fluid, sources=[fluid], eps=self.eps)
            ])
        equations.append(Group(equations=group2))

        # inlet-outlet
        if iom is not None:
            io_eqns = iom.get_equations_post_compute_acceleration()
            for grp in io_eqns:
                equations.append(grp)

        return equations


# ---- pysph-master/pysph/sph/wc/gtvf.py ----

"""
Generalized Transport Velocity Formulation
##########################################

Some notes on the paper,

- In the viscosity term of equation (17) a factor of '2' is missing.
- A negative sign is missing from equation (22), i.e. either put a
  negative sign in equation (22) or at the integrator step, equation (25).
- The solid mechanics equations are not tested.

References
-----------

.. [ZhangHuAdams2017] Chi Zhang, Xiangyu Y. Hu, Nikolaus A. Adams
   "A generalized transport-velocity formulation for smoothed particle
   hydrodynamics", Journal of Computational Physics 237 (2017),
   pp. 216--232.
"""

from compyle.api import declare
from pysph.sph.equation import Equation
from pysph.base.utils import get_particle_array
from pysph.sph.integrator import Integrator
from pysph.sph.integrator_step import IntegratorStep
from pysph.sph.equation import Group, MultiStageEquations
from pysph.sph.scheme import Scheme
from pysph.sph.wc.linalg import mat_vec_mult, mat_mult


def get_particle_array_gtvf(constants=None, **props):
    gtvf_props = [
        'uhat', 'vhat', 'what', 'rho0', 'rhodiv', 'p0',
        'auhat', 'avhat', 'awhat', 'arho', 'arho0'
    ]

    pa = get_particle_array(
        constants=constants, additional_props=gtvf_props, **props
    )
    pa.add_property('gradvhat', stride=9)
    pa.add_property('sigma', stride=9)
    pa.add_property('asigma', stride=9)
    pa.set_output_arrays([
        'x', 'y', 'z', 'u', 'v', 'w', 'rho', 'p', 'h', 'm',
        'au', 'av', 'aw', 'pid', 'gid', 'tag'
    ])
    return pa


class GTVFIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.stage1()
        self.do_post_stage(dt, 1)
        self.compute_accelerations(0, update_nnps=False)

        self.stage2()
        # We update domain here alone as positions only change here.
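        # The three stages form a kick-drift-kick split: stage1 applies a
        # half-step velocity update and builds the transport velocity,
        # stage2 drifts positions, density and stress for a full step
        # using the transport velocity, and stage3 applies the second
        # half-step velocity update with freshly computed accelerations.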
        self.update_domain()
        self.do_post_stage(dt, 2)
        self.compute_accelerations(1)

        self.stage3()
        self.do_post_stage(dt, 3)


class GTVFStep(IntegratorStep):
    def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_uhat,
               d_vhat, d_what, d_auhat, d_avhat, d_awhat, dt):
        dtb2 = 0.5*dt
        d_u[d_idx] += dtb2*d_au[d_idx]
        d_v[d_idx] += dtb2*d_av[d_idx]
        d_w[d_idx] += dtb2*d_aw[d_idx]

        d_uhat[d_idx] = d_u[d_idx] + dtb2*d_auhat[d_idx]
        d_vhat[d_idx] = d_v[d_idx] + dtb2*d_avhat[d_idx]
        d_what[d_idx] = d_w[d_idx] + dtb2*d_awhat[d_idx]

    def stage2(self, d_idx, d_uhat, d_vhat, d_what, d_x, d_y, d_z, d_rho,
               d_arho, d_sigma, d_asigma, dt):
        d_rho[d_idx] += dt*d_arho[d_idx]

        i = declare('int')
        for i in range(9):
            d_sigma[d_idx*9 + i] += dt * d_asigma[d_idx*9 + i]

        d_x[d_idx] += dt*d_uhat[d_idx]
        d_y[d_idx] += dt*d_vhat[d_idx]
        d_z[d_idx] += dt*d_what[d_idx]

    def stage3(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, dt):
        dtb2 = 0.5*dt
        d_u[d_idx] += dtb2*d_au[d_idx]
        d_v[d_idx] += dtb2*d_av[d_idx]
        d_w[d_idx] += dtb2*d_aw[d_idx]


class ContinuityEquationGTVF(Equation):
    r"""**Evolution of density**

    From [ZhangHuAdams2017], equation (12),

    .. math::

        \frac{\tilde{d} \rho_i}{dt} = \rho_i \sum_j \frac{m_j}{\rho_j}
        \nabla W_{ij} \cdot \tilde{\boldsymbol{v}}_{ij}
    """
    def initialize(self, d_arho, d_idx):
        d_arho[d_idx] = 0.0

    def loop(self, d_idx, s_idx, s_m, d_rho, s_rho, d_uhat, d_vhat,
             d_what, s_uhat, s_vhat, s_what, d_arho, DWIJ):
        uhatij = d_uhat[d_idx] - s_uhat[s_idx]
        vhatij = d_vhat[d_idx] - s_vhat[s_idx]
        whatij = d_what[d_idx] - s_what[s_idx]

        udotdij = DWIJ[0]*uhatij + DWIJ[1]*vhatij + DWIJ[2]*whatij
        fac = d_rho[d_idx] * s_m[s_idx] / s_rho[s_idx]
        d_arho[d_idx] += fac * udotdij


class CorrectDensity(Equation):
    r"""**Density correction**

    From [ZhangHuAdams2017], equation (13),

    .. math::

        \rho_i = \frac{\sum_j m_j W_{ij}}
        {\min(1, \sum_j \frac{m_j}{\rho_j^{*}} W_{ij})}

    where,

    .. math::

        \rho_j^{*} = \text{density before this correction is applied.}
    """
    def initialize(self, d_idx, d_rho, d_rho0, d_rhodiv):
        d_rho0[d_idx] = d_rho[d_idx]
        d_rho[d_idx] = 0.0
        d_rhodiv[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_rho, d_rhodiv, s_m, WIJ, s_rho0):
        d_rho[d_idx] += s_m[s_idx]*WIJ
        d_rhodiv[d_idx] += s_m[s_idx]*WIJ/s_rho0[s_idx]

    def post_loop(self, d_idx, d_rho, d_rhodiv):
        d_rho[d_idx] = d_rho[d_idx] / min(1, d_rhodiv[d_idx])


class MomentumEquationPressureGradient(Equation):
    r"""**Momentum Equation**

    From [ZhangHuAdams2017], equation (17),

    .. math::

        \frac{\tilde{d} \boldsymbol{v}_i}{dt} = - \sum_j m_j \nabla W_{ij}
        \cdot \left[\left(\frac{p_i}{\rho_i^2} + \frac{p_j}{\rho_j^2}
        \right)\textbf{I} - \left(\frac{\boldsymbol{A_i}}{\rho_i^2} +
        \frac{\boldsymbol{A_j}}{\rho_j^2} \right)\right] + \sum_j
        \frac{\eta_{ij}\boldsymbol{v}_{ij}}{\rho_i \rho_j r_{ij}}
        \nabla W_{ij} \cdot \boldsymbol{x}_{ij}

    where,

    .. math::

        \boldsymbol{A_{i/j}} = \rho_{i/j} \boldsymbol{v}_{i/j} \otimes
        (\tilde{\boldsymbol{v}}_{i/j} - \boldsymbol{v}_{i/j})

    .. math::

        \eta_{ij} = \frac{2\eta_i \eta_j}{\eta_i + \eta_j}

    .. math::

        \eta_{i/j} = \rho_{i/j} \nu

    for solids, replace :math:`\boldsymbol{A}_{i/j}` with
    :math:`\boldsymbol{\sigma}'_{i/j}`.

    The rate of change of transport velocity is given by,

    .. math::

        \left(\frac{d\boldsymbol{v}_i}{dt}\right)_c = -p_i^0 \sum_j
        \frac{m_j}{\rho_i^2} \nabla \tilde{W}_{ij}

    where,

    .. math::

        \tilde{W}_{ij} = W(\boldsymbol{x}_{ij}, 0.5 h_{ij})

    .. math::

        p_i^0 = \min(10|p_i|, p_{ref})

    Notes:

    A negative sign in :math:`(\frac{d\boldsymbol{v}_i}{dt})_c` is missing
    in the paper [ZhangHuAdams2017].
""" def __init__(self, dest, sources, pref, gx=0.0, gy=0.0, gz=0.0): r""" Parameters ---------- pref : float reference pressure gx : float body force per unit mass along the x-axis gy : float body force per unit mass along the y-axis gz : float body force per unit mass along the z-axis """ self.pref = pref self.gx = gx self.gy = gy self.gz = gz super(MomentumEquationPressureGradient, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw, d_auhat, d_avhat, d_awhat, d_p0, d_p): d_au[d_idx] = self.gx d_av[d_idx] = self.gy d_aw[d_idx] = self.gz d_auhat[d_idx] = 0.0 d_avhat[d_idx] = 0.0 d_awhat[d_idx] = 0.0 d_p0[d_idx] = min(10*abs(d_p[d_idx]), self.pref) def loop(self, d_rho, s_rho, d_idx, s_idx, d_p, s_p, s_m, d_au, d_av, d_aw, DWIJ, d_p0, d_auhat, d_avhat, d_awhat, XIJ, RIJ, SPH_KERNEL, HIJ): rhoi2 = d_rho[d_idx] * d_rho[d_idx] rhoj2 = s_rho[s_idx] * s_rho[s_idx] pij = d_p[d_idx]/rhoi2 + s_p[s_idx]/rhoj2 tmp = -s_m[s_idx] * pij d_au[d_idx] += tmp * DWIJ[0] d_av[d_idx] += tmp * DWIJ[1] d_aw[d_idx] += tmp * DWIJ[2] tmp = -d_p0[d_idx] * s_m[s_idx]/rhoi2 dwijhat = declare('matrix(3)') SPH_KERNEL.gradient(XIJ, RIJ, 0.5*HIJ, dwijhat) d_auhat[d_idx] += tmp * dwijhat[0] d_avhat[d_idx] += tmp * dwijhat[1] d_awhat[d_idx] += tmp * dwijhat[2] class MomentumEquationViscosity(Equation): r"""**Momentum equation Artificial stress for solids** See the class MomentumEquationPressureGradient for details. Notes: A factor of '2' is missing in the viscosity equation given by [ZhangHuAdams2017]. """ def __init__(self, dest, sources, nu): r""" Parameters ---------- nu : float viscosity of the fluid. 
""" self.nu = nu super(MomentumEquationViscosity, self).__init__(dest, sources) def loop(self, d_idx, s_idx, d_rho, s_rho, s_m, d_au, d_av, d_aw, VIJ, R2IJ, EPS, DWIJ, XIJ): etai = self.nu * d_rho[d_idx] etaj = self.nu * s_rho[s_idx] etaij = 4 * (etai * etaj)/(etai + etaj) xdotdij = DWIJ[0]*XIJ[0] + DWIJ[1]*XIJ[1] + DWIJ[2]*XIJ[2] tmp = s_m[s_idx]/(d_rho[d_idx] * s_rho[s_idx]) fac = tmp * etaij * xdotdij/(R2IJ + EPS) d_au[d_idx] += fac * VIJ[0] d_av[d_idx] += fac * VIJ[1] d_aw[d_idx] += fac * VIJ[2] class MomentumEquationArtificialStress(Equation): r"""**Momentum equation Artificial stress for solids** See the class MomentumEquationPressureGradient for details. """ def __init__(self, dest, sources, dim): r""" Parameters ---------- dim : int Dimensionality of the problem. """ self.dim = dim super(MomentumEquationArtificialStress, self).__init__(dest, sources) def _get_helpers_(self): return [mat_vec_mult] def loop(self, d_idx, s_idx, d_rho, s_rho, d_u, d_v, d_w, d_uhat, d_vhat, d_what, s_u, s_v, s_w, s_uhat, s_vhat, s_what, d_au, d_av, d_aw, s_m, DWIJ): rhoi = d_rho[d_idx] rhoj = s_rho[s_idx] i, j = declare('int', 2) ui, uj, uidif, ujdif, res = declare('matrix(3)', 5) Aij = declare('matrix(9)') for i in range(3): res[i] = 0.0 for j in range(3): Aij[3*i + j] = 0.0 ui[0] = d_u[d_idx] ui[1] = d_v[d_idx] ui[2] = d_w[d_idx] uj[0] = s_u[s_idx] uj[1] = s_v[s_idx] uj[2] = s_w[s_idx] uidif[0] = d_uhat[d_idx] - d_u[d_idx] uidif[1] = d_vhat[d_idx] - d_v[d_idx] uidif[2] = d_what[d_idx] - d_w[d_idx] ujdif[0] = s_uhat[s_idx] - s_u[s_idx] ujdif[1] = s_vhat[s_idx] - s_v[s_idx] ujdif[2] = s_what[s_idx] - s_w[s_idx] for i in range(3): for j in range(3): Aij[3*i + j] = (ui[i]*uidif[j] / rhoi + uj[i]*ujdif[j] / rhoj) mat_vec_mult(Aij, DWIJ, 3, res) d_au[d_idx] += s_m[s_idx] * res[0] d_av[d_idx] += s_m[s_idx] * res[1] d_aw[d_idx] += s_m[s_idx] * res[2] class VelocityGradient(Equation): r"""**Gradient of velocity vector** .. 
math:: (\nabla \otimes \tilde{\boldsymbol{v}})_i = \sum_j \frac{m_j} {\rho_j} \tilde{\boldsymbol{v}}_{ij} \otimes \nabla W_{ij} """ def __init__(self, dest, sources, dim): r""" Parameters ---------- dim : int Dimensionality of the problem. """ self.dim = dim super(VelocityGradient, self).__init__(dest, sources) def initialize(self, d_idx, d_gradvhat): for i in range(9): d_gradvhat[9*d_idx + i] = 0.0 def loop(self, s_idx, d_idx, s_m, d_uhat, d_vhat, d_what, s_uhat, s_vhat, s_what, s_rho, d_gradvhat, DWIJ): i, j = declare('int', 2) uhatij = declare('matrix(3)') Vj = s_m[s_idx]/s_rho[s_idx] uhatij[0] = d_uhat[d_idx] - s_uhat[s_idx] uhatij[1] = d_vhat[d_idx] - s_vhat[s_idx] uhatij[2] = d_what[d_idx] - s_what[s_idx] for i in range(3): for j in range(3): d_gradvhat[d_idx*9 + 3*i + j] += Vj * uhatij[i] * DWIJ[j] class DeviatoricStressRate(Equation): r"""**Stress rate for solids** From [ZhangHuAdams2017], equation (5), .. math:: \frac{d \boldsymbol{\sigma}'}{dt} = 2 G (\boldsymbol{\epsilon} - \frac{1}{3} \text{Tr}(\boldsymbol{\epsilon})\textbf{I}) + \boldsymbol{\sigma}' \cdot \boldsymbol{\Omega}^{T} + \boldsymbol{\Omega} \cdot \boldsymbol{\sigma}' where, .. math:: \boldsymbol{\Omega_{i/j}} = \frac{1}{2} \left(\nabla \otimes \boldsymbol{v}_{i/j} - (\nabla \otimes \boldsymbol{v}_{i/j})^{T}\right) .. math:: \boldsymbol{\epsilon_{i/j}} = \frac{1}{2} \left(\nabla \otimes \boldsymbol{v}_{i/j} + (\nabla \otimes \boldsymbol{v}_{i/j})^{T}\right) see the class VelocityGradient for :math:`\nabla \otimes \boldsymbol{v}_i` """ def __init__(self, dest, sources, dim, G): r""" Parameters ---------- dim : int Dimensionality of the problem. 
        G : float
            value of shear modulus
        """
        self.G = G
        self.dim = dim
        super(DeviatoricStressRate, self).__init__(dest, sources)

    def _get_helpers_(self):
        return [mat_vec_mult, mat_mult]

    def initialize(self, d_idx, d_sigma, d_asigma, d_gradvhat):
        i, j, ind = declare('int', 3)
        eps, omega, omegaT, sigmai, dvi = declare('matrix(9)', 5)
        G = self.G

        for i in range(9):
            sigmai[i] = d_sigma[d_idx*9 + i]
            dvi[i] = d_gradvhat[d_idx*9 + i]
            d_asigma[d_idx*9 + i] = 0.0

        eps_trace = 0.0
        for i in range(3):
            for j in range(3):
                eps[3*i + j] = 0.5*(dvi[3*i + j] + dvi[3*j + i])
                omega[3*i + j] = 0.5*(dvi[3*i + j] - dvi[3*j + i])
                if i == j:
                    eps_trace += eps[3*i + j]

        for i in range(3):
            for j in range(3):
                omegaT[3*j + i] = omega[3*i + j]

        smo, oms = declare('matrix(9)', 2)
        mat_mult(sigmai, omegaT, 3, smo)
        mat_mult(omega, sigmai, 3, oms)

        for i in range(3):
            for j in range(3):
                ind = 3*i + j
                d_asigma[d_idx*9 + ind] = (2*G * eps[ind] + smo[ind] +
                                           oms[ind])
                if i == j:
                    d_asigma[d_idx*9 + ind] += -2*G * eps_trace/3.0


class MomentumEquationArtificialStressSolid(Equation):
    r"""**Momentum equation artificial stress for solids**

    See the class MomentumEquationPressureGradient for details.
    """
    def __init__(self, dest, sources, dim):
        r"""
        Parameters
        ----------
        dim : int
            Dimensionality of the problem.
        """
        self.dim = dim
        super(MomentumEquationArtificialStressSolid, self).__init__(
            dest, sources)

    def _get_helpers_(self):
        return [mat_vec_mult]

    def loop(self, d_idx, s_idx, d_sigma, s_sigma, d_au, d_av, d_aw,
             s_m, DWIJ):
        i = declare('int')
        sigmaij = declare('matrix(9)')
        res = declare('matrix(3)')

        for i in range(9):
            sigmaij[i] = d_sigma[d_idx*9 + i] + s_sigma[s_idx*9 + i]

        mat_vec_mult(sigmaij, DWIJ, 3, res)

        d_au[d_idx] += s_m[s_idx] * res[0]
        d_av[d_idx] += s_m[s_idx] * res[1]
        d_aw[d_idx] += s_m[s_idx] * res[2]


class GTVFScheme(Scheme):
    def __init__(self, fluids, solids, dim, rho0, c0, nu, h0, pref,
                 gx=0.0, gy=0.0, gz=0.0, b=1.0, alpha=0.0):
        r"""Parameters
        ----------

        fluids: list
            List of names of fluid particle arrays.
        solids: list
            List of names of solid particle arrays.
        dim: int
            Dimensionality of the problem.
        rho0: float
            Reference density.
        c0: float
            Reference speed of sound.
        nu: float
            Real viscosity of the fluid.
        h0: float
            Reference smoothing length.
        pref: float
            Reference pressure for the rate of change of transport
            velocity.
        gx: float
            Body force acceleration component in the x direction.
        gy: float
            Body force acceleration component in the y direction.
        gz: float
            Body force acceleration component in the z direction.
        b: float
            Constant for the equation of state.
        """
        self.fluids = fluids
        self.solids = solids
        self.dim = dim
        self.rho0 = rho0
        self.c0 = c0
        self.nu = nu
        self.h0 = h0
        self.pref = pref
        self.gx = gx
        self.gy = gy
        self.gz = gz
        self.b = b
        self.alpha = alpha
        self.solver = None

    def configure_solver(self, kernel=None, integrator_cls=None,
                         extra_steppers=None, **kw):
        """Configure the solver to be generated.

        Parameters
        ----------

        kernel : Kernel instance.
            Kernel to use, if none is passed a default one is used.
        integrator_cls : pysph.sph.integrator.Integrator
            Integrator class to use, use sensible default if none is
            passed.
        extra_steppers : dict
            Additional integration stepper instances as a dict.
        **kw : extra arguments
            Any additional keyword args are passed to the solver instance.
""" from pysph.base.kernels import WendlandQuintic if kernel is None: kernel = WendlandQuintic(dim=self.dim) steppers = {} if extra_steppers is not None: steppers.update(extra_steppers) step_cls = GTVFStep for fluid in self.fluids: if fluid not in steppers: steppers[fluid] = step_cls() if integrator_cls is not None: cls = integrator_cls print("Warning: GTVF Integrator is not being used.") else: cls = GTVFIntegrator integrator = cls(**steppers) from pysph.solver.solver import Solver self.solver = Solver( dim=self.dim, integrator=integrator, kernel=kernel, **kw ) def get_equations(self): from pysph.sph.wc.transport_velocity import ( StateEquation, SetWallVelocity, SolidWallPressureBC, VolumeSummation, SolidWallNoSlipBC, MomentumEquationArtificialViscosity, ContinuitySolid ) all = self.fluids + self.solids stage1 = [] if self.solids: eq0 = [] for solid in self.solids: eq0.append(SetWallVelocity(dest=solid, sources=self.fluids)) stage1.append(Group(equations=eq0, real=False)) eq1 = [] for fluid in self.fluids: eq1.append(ContinuityEquationGTVF(dest=fluid, sources=self.fluids)) if self.solids: eq1.append( ContinuitySolid(dest=fluid, sources=self.solids) ) stage1.append(Group(equations=eq1, real=False)) eq2, stage2 = [], [] for fluid in self.fluids: eq2.append(CorrectDensity(dest=fluid, sources=all)) stage2.append(Group(equations=eq2, real=False)) eq3 = [] for fluid in self.fluids: eq3.append( StateEquation(dest=fluid, sources=None, p0=self.pref, rho0=self.rho0, b=1.0) ) stage2.append(Group(equations=eq3, real=False)) g2_s = [] for solid in self.solids: g2_s.append(VolumeSummation(dest=solid, sources=all)) g2_s.append(SolidWallPressureBC( dest=solid, sources=self.fluids, b=1.0, rho0=self.rho0, p0=self.pref, gx=self.gx, gy=self.gy, gz=self.gz )) if g2_s: stage2.append(Group(equations=g2_s, real=False)) eq4 = [] for fluid in self.fluids: eq4.append( MomentumEquationPressureGradient( dest=fluid, sources=all, pref=self.pref, gx=self.gx, gy=self.gy, gz=self.gz )) if 
self.alpha > 0.0: eq4.append( MomentumEquationArtificialViscosity( dest=fluid, sources=all, c0=self.c0, alpha=self.alpha )) if self.nu > 0.0: eq4.append( MomentumEquationViscosity( dest=fluid, sources=all, nu=self.nu )) if self.solids: eq4.append( SolidWallNoSlipBC( dest=fluid, sources=self.solids, nu=self.nu )) eq4.append( MomentumEquationArtificialStress( dest=fluid, sources=self.fluids, dim=self.dim )) stage2.append(Group(equations=eq4, real=True)) return MultiStageEquations([stage1, stage2]) def setup_properties(self, particles, clean=True): particle_arrays = dict([(p.name, p) for p in particles]) dummy = get_particle_array_gtvf(name='junk') props = list(dummy.properties.keys()) props += [dict(name=p, stride=v) for p, v in dummy.stride.items()] output_props = dummy.output_property_arrays for fluid in self.fluids: pa = particle_arrays[fluid] self._ensure_properties(pa, props, clean) pa.set_output_arrays(output_props) solid_props = ['uf', 'vf', 'wf', 'vg', 'ug', 'wij', 'wg', 'V'] props += solid_props for solid in self.solids: pa = particle_arrays[solid] self._ensure_properties(pa, props, clean) pa.set_output_arrays(output_props) pysph-master/pysph/sph/wc/kernel_correction.py000066400000000000000000000171421356347341600221620ustar00rootroot00000000000000''' Kernel Corrections ################### These are the equations for the kernel corrections that are mentioned in the paper by Bonet and Lok [BonetLok1999]. References ----------- .. [BonetLok1999] Bonet, J. and Lok T.-S.L. (1999) Variational and Momentum Preservation Aspects of Smoothed Particle Hydrodynamic Formulations. ''' from math import sqrt from compyle.api import declare from pysph.sph.equation import Equation from pysph.sph.wc.density_correction import gj_solve class KernelCorrection(Equation): r"""**Kernel Correction** From [BonetLok1999], equation (53): .. 
math:: \mathbf{f}_{a} = \frac{\sum_{b}\frac{m_{b}}{\rho_{b}}
        \mathbf{f}_{b}W_{ab}}{\sum_{b}\frac{m_{b}}{\rho_{b}}W_{ab}}
    """

    def initialize(self, d_idx, d_cwij):
        d_cwij[d_idx] = 0.0

    def loop(self, d_idx, s_idx, d_cwij, s_m, s_rho, WIJ):
        d_cwij[d_idx] += s_m[s_idx] * WIJ / s_rho[s_idx]


class GradientCorrectionPreStep(Equation):

    def __init__(self, dest, sources, dim=2):
        self.dim = dim
        super(GradientCorrectionPreStep, self).__init__(dest, sources)

    def initialize(self, d_idx, d_m_mat):
        i = declare('int')
        for i in range(9):
            d_m_mat[9 * d_idx + i] = 0.0

    def loop_all(self, d_idx, d_m_mat, s_m, s_rho, d_x, d_y, d_z, d_h,
                 s_x, s_y, s_z, s_h, SPH_KERNEL, NBRS, N_NBRS):
        x = d_x[d_idx]
        y = d_y[d_idx]
        z = d_z[d_idx]
        h = d_h[d_idx]
        i, j, k, s_idx, n = declare('int', 5)
        xij = declare('matrix(3)')
        dwij = declare('matrix(3)')
        n = self.dim
        for k in range(N_NBRS):
            s_idx = NBRS[k]
            xij[0] = x - s_x[s_idx]
            xij[1] = y - s_y[s_idx]
            xij[2] = z - s_z[s_idx]
            hij = (h + s_h[s_idx]) * 0.5
            r = sqrt(xij[0] * xij[0] + xij[1] * xij[1] + xij[2] * xij[2])
            SPH_KERNEL.gradient(xij, r, hij, dwij)
            V = s_m[s_idx] / s_rho[s_idx]
            if r > 1.0e-12:
                for i in range(n):
                    for j in range(n):
                        xj = xij[j]
                        d_m_mat[9 * d_idx + 3 * i + j] -= V * dwij[i] * xj


class GradientCorrection(Equation):
    r"""**Kernel Gradient Correction**

    From [BonetLok1999], equations (42) and (45)

    .. math:: \nabla \tilde{W}_{ab} = L_{a}\nabla W_{ab}

    .. math:: L_{a} = \left(\sum \frac{m_{b}}{\rho_{b}} \nabla W_{ab}
              \mathbf{\otimes}x_{ba} \right)^{-1}
    """

    def _get_helpers_(self):
        return [gj_solve]

    def __init__(self, dest, sources, dim=2, tol=0.1):
        self.dim = dim
        self.tol = tol
        super(GradientCorrection, self).__init__(dest, sources)

    def loop(self, d_idx, d_m_mat, DWIJ, HIJ):
        i, j, n, nt = declare('int', 4)
        n = self.dim
        nt = n + 1
        # Note that we allocate enough for a 3D case but may only use a
        # part of the matrix.
        temp = declare('matrix(12)')
        res = declare('matrix(3)')
        eps = 1.0e-04 * HIJ
        for i in range(n):
            for j in range(n):
                temp[nt * i + j] = d_m_mat[9 * d_idx + 3 * i + j]
            # Augmented part of matrix
            temp[nt*i + n] = DWIJ[i]
        gj_solve(temp, n, 1, res)
        res_mag = 0.0
        dwij_mag = 0.0
        for i in range(n):
            res_mag += abs(res[i])
            dwij_mag += abs(DWIJ[i])
        change = abs(res_mag - dwij_mag)/(dwij_mag + eps)
        if change < self.tol:
            for i in range(n):
                DWIJ[i] = res[i]


class MixedKernelCorrectionPreStep(Equation):
    r"""**Mixed Kernel Correction**

    From [BonetLok1999], equations (54), (57) and (58)

    .. math:: \tilde{W}_{ab} = \frac{W_{ab}}{\sum_{b} V_{b}W_{ab}}

    .. math:: \nabla \tilde{W}_{ab} = L_{a}\nabla \bar{W}_{ab}

    where,

    .. math:: L_{a} = \left(\sum_{b} V_{b} \nabla \bar{W}_{ab}
              \mathbf{\otimes}x_{ba} \right)^{-1}

    .. math:: \nabla \bar{W}_{ab} = \frac{\nabla W_{ab} - \gamma}
              {\sum_{b} V_{b}W_{ab}}

    .. math:: \gamma = \frac{\sum_{b} V_{b}\nabla W_{ab}}
              {\sum_{b} V_{b}W_{ab}}
    """

    def __init__(self, dest, sources, dim=2):
        self.dim = dim
        super(MixedKernelCorrectionPreStep, self).__init__(dest, sources)

    def initialize(self, d_idx, d_m_mat):
        i = declare('int')
        for i in range(9):
            d_m_mat[9 * d_idx + i] = 0.0

    def loop_all(self, d_idx, d_x, d_y, d_z, d_h, s_x, s_y, s_z, s_h,
                 SPH_KERNEL, N_NBRS, NBRS, d_m_mat, s_m, s_rho, d_cwij,
                 d_dw_gamma):
        x = d_x[d_idx]
        y = d_y[d_idx]
        z = d_z[d_idx]
        h = d_h[d_idx]
        i, j, n, k, s_idx = declare('int', 5)
        n = self.dim
        xij = declare('matrix(3)')
        dwij = declare('matrix(3)')
        dwij1 = declare('matrix(3)')
        numerator = declare('matrix(3)')
        for i in range(3):
            numerator[i] = 0.0
        den = 0.0
        for k in range(N_NBRS):
            s_idx = NBRS[k]
            xij[0] = x - s_x[s_idx]
            xij[1] = y - s_y[s_idx]
            xij[2] = z - s_z[s_idx]
            V = s_m[s_idx] / s_rho[s_idx]
            rij = sqrt(xij[0] * xij[0] + xij[1] * xij[1] + xij[2] * xij[2])
            hij = (h + s_h[s_idx]) * 0.5
            SPH_KERNEL.gradient(xij, rij, hij, dwij)
            wij = SPH_KERNEL.kernel(xij, rij, hij)
            den += V * wij
            for i in range(n):
                numerator[i] += V * dwij[i]
        for i in range(n):
            d_dw_gamma[3*d_idx + i] = numerator[i]/den
        d_cwij[d_idx] = den
        for k in range(N_NBRS):
            s_idx = NBRS[k]
            xij[0] = x - s_x[s_idx]
            xij[1] = y - s_y[s_idx]
            xij[2] = z - s_z[s_idx]
            hij = (h + s_h[s_idx]) * 0.5
            r = sqrt(xij[0] * xij[0] + xij[1] * xij[1] + xij[2] * xij[2])
            SPH_KERNEL.gradient(xij, r, hij, dwij)
            for i in range(n):
                dwij1[i] = (dwij[i] - numerator[i] / den) / den
            V = s_m[s_idx] / s_rho[s_idx]
            if r > 1.0e-12:
                for i in range(n):
                    for j in range(n):
                        xj = xij[j]
                        d_m_mat[9 * d_idx + 3 * i + j] -= V * dwij1[i] * xj


class MixedGradientCorrection(Equation):
    r"""**Mixed Kernel Gradient Correction**

    This is as per [BonetLok1999]. See the MixedKernelCorrectionPreStep
    for the equations.
    """

    def _get_helpers_(self):
        return [gj_solve]

    def __init__(self, dest, sources, dim=2, tol=0.1):
        self.dim = dim
        self.tol = tol
        super(MixedGradientCorrection, self).__init__(dest, sources)

    def loop(self, d_idx, d_m_mat, d_dw_gamma, d_cwij, DWIJ, HIJ):
        i, j, n, nt = declare('int', 4)
        n = self.dim
        nt = n + 1
        temp = declare('matrix(12)')  # The augmented matrix
        res = declare('matrix(3)')
        dwij = declare('matrix(3)')
        eps = 1.0e-04 * HIJ
        for i in range(n):
            dwij[i] = (DWIJ[i] - d_dw_gamma[3*d_idx + i])/d_cwij[d_idx]
            for j in range(n):
                temp[nt * i + j] = d_m_mat[9 * d_idx + 3 * i + j]
            temp[nt*i + n] = dwij[i]
        gj_solve(temp, n, 1, res)
        res_mag = 0.0
        dwij_mag = 0.0
        for i in range(n):
            res_mag += abs(res[i])
            dwij_mag += abs(dwij[i])
        change = abs(res_mag - dwij_mag)/(dwij_mag + eps)
        if change < self.tol:
            for i in range(n):
                DWIJ[i] = res[i]


# File: pysph-master/pysph/sph/wc/linalg.py
from compyle.api import declare


def identity(a=[0.0, 0.0], n=3):
    """Initialize an identity matrix.
""" i, j = declare('int', 2) for i in range(n): for j in range(n): if i == j: a[n*i + j] = 1.0 else: a[n*i + j] = 0.0 def dot(a=[0.0, 0.0], b=[0.0, 0.0], n=3): i = declare('int') result = 0.0 for i in range(n): result += a[i]*b[i] return result def mat_mult(a=[1.0, 0.0], b=[1.0, 0.0], n=3, result=[0.0, 0.0]): """Multiply two square matrices (not element-wise). Stores the result in `result`. Parameters ---------- a: list b: list n: int : number of rows/columns result: list """ i, j, k = declare('int', 3) for i in range(n): for k in range(n): s = 0.0 for j in range(n): s += a[n*i + j] * b[n*j + k] result[n*i + k] = s def mat_vec_mult(a=[1.0, 0.0], b=[1.0, 0.0], n=3, result=[0.0, 0.0]): """Multiply a square matrix with a vector. Parameters ---------- a: list b: list n: int : number of rows/columns result: list """ i, j = declare('int', 2) for i in range(n): s = 0.0 for j in range(n): s += a[n*i + j] * b[j] result[i] = s def augmented_matrix(A=[0.0, 0.0], b=[0.0, 0.0], n=3, na=1, nmax=3, result=[0.0, 0.0]): """Create augmented matrix. Given flattened matrix, `A` of max rows/columns `nmax`, and flattened columns `b` with `n` rows of interest and `na` additional columns, put these in `result`. Result must be already allocated and be flattened. The `result` will contain `(n + na)*n` first entries as the augmented_matrix. Parameters ---------- A: list: given matrix. b: list: additional columns to be augmented. n: int : number of rows/columns to use from `A`. na: int: number of added columns in `b`. nmax: int: the maximum dimension 'A' result: list: must have size of (nmax + na)*n. """ i, j, nt = declare('int', 3) nt = n + na for i in range(n): for j in range(n): result[nt*i + j] = A[nmax * i + j] for j in range(na): result[nt*i + n + j] = b[na*i + j] def gj_solve(m=[1., 0.], n=3, nb=1, result=[0.0, 0.0]): r"""A gauss-jordan method to solve an augmented matrix. 
The routine is given the augmented matrix, the number of rows/cols in the original matrix and the number of added columns. The result is stored in the result array passed. Parameters ---------- m : list: a flattened list representing the augmented matrix [A|b]. n : int: number of columns/rows used from A in augmented_matrix. nb: int: number of columns added to A. result: list: with size n*nb References ---------- https://ricardianambivalence.com/2012/10/20/pure-python-gauss-jordan -solve-ax-b-invert-a/ """ i, j, eqns, colrange, augCol, col, row, bigrow, nt = declare('int', 9) eqns = n colrange = n augCol = n + nb nt = n + nb for col in range(colrange): bigrow = col for row in range(col + 1, colrange): if abs(m[nt*row + col]) > abs(m[nt*bigrow + col]): bigrow = row temp = m[nt*row + col] m[nt*row + col] = m[nt*bigrow + col] m[nt*bigrow + col] = temp rr, rrcol, rb, rbr, kup, kupr, kleft, kleftr = declare('int', 8) for rrcol in range(0, colrange): for rr in range(rrcol + 1, eqns): dnr = float(m[nt*rrcol + rrcol]) if abs(dnr) < 1e-12: return 1.0 cc = -float(m[nt*rr + rrcol]) / dnr for j in range(augCol): m[nt*rr + j] = m[nt*rr + j] + cc * m[nt*rrcol + j] backCol, backColr = declare('int', 2) tol = 1.0e-12 for rbr in range(eqns): rb = eqns - rbr - 1 if (m[nt*rb + rb] == 0): if abs(m[nt*rb + augCol - 1]) > tol: # Error, singular matrix. 
                return 1.0
        else:
            for backColr in range(rb, augCol):
                backCol = rb + augCol - backColr - 1
                m[nt*rb + backCol] = m[nt*rb + backCol] / m[nt*rb + rb]
        if not (rb == 0):
            for kupr in range(rb):
                kup = rb - kupr - 1
                for kleftr in range(rb, augCol):
                    kleft = rb + augCol - kleftr - 1
                    kk = -m[nt*kup + rb] / m[nt*rb + rb]
                    m[nt*kup + kleft] = (m[nt*kup + kleft] +
                                         kk * m[nt*rb + kleft])
    for i in range(n):
        for j in range(nb):
            result[nb*i + j] = m[nt*i + n + j]
    return 0.0


# File: pysph-master/pysph/sph/wc/parshikov.py
from math import sqrt

from pysph.sph.equation import Equation


class Continuity(Equation):
    def initialize(self, d_idx, d_arho):
        d_arho[d_idx] = 0.0

    def loop(self, d_idx, s_idx, s_m, d_u, d_v, d_w, s_u, s_v, s_w, d_cs,
             s_cs, d_rho, d_arho, s_rho, d_p, s_p, DWIJ, RIJ, XIJ):
        rl = d_rho[d_idx]
        rr = s_rho[s_idx]
        pl = d_p[d_idx]
        pr = s_p[s_idx]
        cl = d_cs[d_idx]
        cr = s_cs[s_idx]
        uxl = d_u[d_idx]
        uyl = d_v[d_idx]
        uzl = d_w[d_idx]
        uxr = s_u[s_idx]
        uyr = s_v[s_idx]
        uzr = s_w[s_idx]
        if RIJ >= 1.0e-16:
            ul = -(uxl * XIJ[0] + uyl * XIJ[1] + uzl * XIJ[2]) / RIJ
            ur = -(uxr * XIJ[0] + uyr * XIJ[1] + uzr * XIJ[2]) / RIJ
        else:
            ul = 0.0
            ur = 0.0
        u_star = (ul * rl * cl + ur * rr * cr + pl - pr) / \
            (rl * cl + rr * cr)
        dwdr = sqrt(DWIJ[0] * DWIJ[0] + DWIJ[1] * DWIJ[1] +
                    DWIJ[2] * DWIJ[2])
        d_arho[d_idx] += 2.0 * s_m[s_idx] * dwdr * (ul - u_star) * rl / rr


class Momentum(Equation):
    def __init__(self, dest, sources, gx=0.0, gy=0.0, gz=0.0):
        self.gx = gx
        self.gy = gy
        self.gz = gz
        super(Momentum, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = self.gx
        d_av[d_idx] = self.gy
        d_aw[d_idx] = self.gz

    def loop(self, d_idx, s_idx, s_m, d_u, d_v, d_w, s_u, s_v, s_w, d_cs,
             s_cs, d_rho, s_rho, d_p, s_p, d_au, d_av, d_aw, RIJ, XIJ,
             DWIJ):
        rl = d_rho[d_idx]
        rr = s_rho[s_idx]
        pl = d_p[d_idx]
        pr = s_p[s_idx]
        cl = d_cs[d_idx]
        cr = s_cs[s_idx]
        uxl = d_u[d_idx]
        uyl = d_v[d_idx]
        uzl = d_w[d_idx]
        uxr = s_u[s_idx]
        uyr = s_v[s_idx]
        uzr = s_w[s_idx]
        m = s_m[s_idx]
        if RIJ >= 1.0e-16:
            ul = -(uxl * XIJ[0] + uyl * XIJ[1] + uzl * XIJ[2]) / RIJ
            ur = -(uxr * XIJ[0] + uyr * XIJ[1] + uzr * XIJ[2]) / RIJ
        else:
            ul = 0.0
            ur = 0.0
        p_star = pl * rr * cr + pr * cl * rl - rl * rr * cl * cr * (ur - ul)
        p_star /= (rl * cl + rr * cr)
        factor = -2.0 * m * p_star / (rl * rr)
        d_au[d_idx] += factor * DWIJ[0]
        d_av[d_idx] += factor * DWIJ[1]
        d_aw[d_idx] += factor * DWIJ[2]


# File: pysph-master/pysph/sph/wc/pcisph.py
"""
Predictive-Corrective Incompressible SPH (PCISPH)
#################################################

References
-----------

.. [SolPaj2009] B. Solenthaler, R. Pajarola "Predictive-Corrective
   Incompressible SPH", ACM Trans. Graph 28 (2009), pp. 1--6.
"""

import numpy as np

from pysph.sph.integrator import Integrator
from pysph.sph.equation import Equation, Group
from pysph.base.utils import get_particle_array
from pysph.sph.integrator_step import IntegratorStep
from pysph.sph.scheme import Scheme, add_bool_argument


def get_particle_array_pcisph(constants=None, **props):
    pcisph_props = [
        'au', 'av', 'aw', 'arho', 'dwij2', 'u0', 'v0', 'w0',
        'aup', 'avp', 'awp', 'x0', 'y0', 'z0', 'rho0'
    ]
    pa = get_particle_array(
        constants=constants, additional_props=pcisph_props, **props
    )
    pa.add_constant('iters', np.zeros(10000))
    pa.add_property('dw', stride=3)
    pa.add_output_arrays(['p', 'dwij2'])
    return pa


class PCISPHIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.initialize()
        self.compute_accelerations(0)
        self.stage1()
        self.update_domain()
        self.do_post_stage(dt, 1)

    def initial_acceleration(self, t, dt):
        pass


class PCISPHStep(IntegratorStep):
    def __init__(self, show_itercount=False):
        self.show_itercount = show_itercount
        self.index = 0

    def initialize(self, d_idx, d_u, d_v, d_w, d_u0, d_v0, d_w0, d_x, d_y,
                   d_z, d_x0, d_y0, d_z0, d_rho, d_rho0):
        d_u0[d_idx] = d_u[d_idx]
        d_v0[d_idx] = d_v[d_idx]
        d_w0[d_idx] = d_w[d_idx]
        d_x0[d_idx] \
            = d_x[d_idx]
        d_y0[d_idx] = d_y[d_idx]
        d_z0[d_idx] = d_z[d_idx]
        d_rho0[d_idx] = d_rho[d_idx]

    def py_stage1(self, dst, t, dt):
        if self.show_itercount:
            print("Iteration count = ", dst.iters[self.index])
            self.index += 1

    def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y,
               d_z, d_aup, d_avp, d_awp, d_u0, d_v0, d_w0, d_x0, d_y0,
               d_z0, dt):
        d_u[d_idx] = d_u0[d_idx] + dt * (d_au[d_idx] + d_aup[d_idx])
        d_v[d_idx] = d_v0[d_idx] + dt * (d_av[d_idx] + d_avp[d_idx])
        d_w[d_idx] = d_w0[d_idx] + dt * (d_aw[d_idx] + d_awp[d_idx])
        d_x[d_idx] = d_x0[d_idx] + dt * d_u[d_idx]
        d_y[d_idx] = d_y0[d_idx] + dt * d_v[d_idx]
        d_z[d_idx] = d_z0[d_idx] + dt * d_w[d_idx]


class MomentumEquationViscosity(Equation):
    r"""**Momentum Equation Viscosity**

    See "pysph.sph.wc.viscosity.LaminarViscosity"
    """
    def __init__(self, dest, sources, nu=0.0, gx=0.0, gy=0.0, gz=0.0):
        self.nu = nu
        self.gx = gx
        self.gy = gy
        self.gz = gz
        super(MomentumEquationViscosity, self).__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_av, d_aw):
        d_au[d_idx] = self.gx
        d_av[d_idx] = self.gy
        d_aw[d_idx] = self.gz

    def loop(self, d_idx, s_idx, s_m, d_rho, s_rho, d_au, d_av, d_aw,
             DWIJ, XIJ, VIJ, R2IJ, EPS):
        mb = s_m[s_idx]
        rhoij = (d_rho[d_idx] + s_rho[s_idx])
        xdotdwij = DWIJ[0] * XIJ[0] + DWIJ[1] * XIJ[1] + DWIJ[2] * XIJ[2]
        tmp = mb * 4 * self.nu * xdotdwij / (rhoij * (R2IJ + EPS))
        d_au[d_idx] += tmp * VIJ[0]
        d_av[d_idx] += tmp * VIJ[1]
        d_aw[d_idx] += tmp * VIJ[2]

    def post_loop(self, d_idx, d_au, d_av, d_aw, d_u, d_v, d_w, d_p,
                  d_aup, d_avp, d_awp, dt):
        d_u[d_idx] += dt * d_au[d_idx]
        d_v[d_idx] += dt * d_av[d_idx]
        d_w[d_idx] += dt * d_aw[d_idx]
        # Retaining the old pressure seems to give better results for the
        # TG problem.
        # d_p[d_idx] = 0.0
        d_aup[d_idx] = 0.0
        d_avp[d_idx] = 0.0
        d_awp[d_idx] = 0.0


class Predict(Equation):
    r"""**Predict velocity and position**

    .. math:: \mathbf{v}^{*}(t+1) = \mathbf{v}(t) + dt
              \left(\frac{d \mathbf{v}_{visc, g}(t)}{dt} +
              \frac{d \mathbf{v}_{p}(t)}{dt} \right)

    .. math:: \mathbf{x}^{*}(t+1) = \mathbf{x}(t) + dt\,\mathbf{v}(t+1)
    """
    def initialize(self, d_idx, d_u, d_v, d_w, d_aup, d_avp, d_awp, d_x,
                   d_y, d_z, d_au, d_av, d_aw, d_u0, d_v0, d_w0, d_x0,
                   d_y0, d_z0, dt):
        d_u[d_idx] = d_u0[d_idx] + dt * (d_au[d_idx] + d_aup[d_idx])
        d_v[d_idx] = d_v0[d_idx] + dt * (d_av[d_idx] + d_avp[d_idx])
        d_w[d_idx] = d_w0[d_idx] + dt * (d_aw[d_idx] + d_awp[d_idx])
        d_x[d_idx] = d_x0[d_idx] + dt * d_u[d_idx]
        d_y[d_idx] = d_y0[d_idx] + dt * d_v[d_idx]
        d_z[d_idx] = d_z0[d_idx] + dt * d_w[d_idx]


class ComputePressure(Equation):
    r"""**Compute Pressure**

    Compute pressure iteratively, maintaining density within a given
    tolerance.

    .. math:: p_i += \delta \rho^{*}_{{err}_i}

    where,

    .. math:: \rho_{err_i} = \rho_i^{*} - \rho_0

    .. math:: \delta = \frac{-1}{\beta (-\sum_j \nabla W_{ij} \cdot
              \sum_j \nabla W_{ij} - \sum_j \nabla W_{ij}
              \nabla W_{ij})}
    """
    def __init__(self, dest, sources, rho0):
        self.rho0 = rho0
        super(ComputePressure, self).__init__(dest, sources)

    def initialize(self, d_idx, d_dw, d_dwij2):
        d_dw[d_idx * 3 + 0] = 0.0
        d_dw[d_idx * 3 + 1] = 0.0
        d_dw[d_idx * 3 + 2] = 0.0
        d_dwij2[d_idx] = 0.0

    def loop(self, d_idx, d_dw, d_dwij2, DWIJ):
        d_dw[d_idx * 3 + 0] += DWIJ[0]
        d_dw[d_idx * 3 + 1] += DWIJ[1]
        d_dw[d_idx * 3 + 2] += DWIJ[2]
        dwij2 = DWIJ[0] * DWIJ[0] + DWIJ[1] * DWIJ[1] + DWIJ[2] * DWIJ[2]
        d_dwij2[d_idx] += dwij2

    def post_loop(self, d_idx, d_dw, d_m, dt, d_dwij2, d_p, d_rho):
        dwx = d_dw[d_idx * 3 + 0]
        dwy = d_dw[d_idx * 3 + 1]
        dwz = d_dw[d_idx * 3 + 2]
        tmp = dwx * dwx + dwy * dwy + dwz * dwz
        mi = d_m[d_idx]
        rho0 = self.rho0
        beta = 2 * mi * mi * (dt / rho0) * (dt / rho0)
        delta = 1.0 / (beta * (tmp + d_dwij2[d_idx]))
        rho_err = d_rho[d_idx] - rho0
        d_p[d_idx] += delta * rho_err


class MomentumEquationPressureGradient(Equation):
    r"""**Momentum equation pressure gradient**

    Standard WCSPH pressure gradient,

    ..
math:: \frac{d\mathbf{v}}{dt} = - \sum_j m_j
        \left(\frac{p_i}{\rho_i^2} + \frac{p_j}{\rho_j^2}\right)
        \nabla W(x_{ij}, h)
    """
    def __init__(self, dest, sources, rho0, tolerance, debug):
        self.rho0 = rho0
        self.tolerance = tolerance
        self.debug = debug
        self.rho_err = 0.0
        self.ctr = 0
        super(MomentumEquationPressureGradient, self).__init__(
            dest, sources)

    def loop(self, d_idx, s_idx, d_p, s_p, d_rho, s_rho, s_m, d_aup,
             d_avp, d_awp, DWIJ):
        rhoi2 = 1.0 / (d_rho[d_idx] * d_rho[d_idx])
        rhoj2 = 1.0 / (s_rho[s_idx] * s_rho[s_idx])
        mj = s_m[s_idx]
        pij = -1.0 * mj * (d_p[d_idx] * rhoi2 + s_p[s_idx] * rhoj2)
        d_aup[d_idx] += pij * DWIJ[0]
        d_avp[d_idx] += pij * DWIJ[1]
        d_awp[d_idx] += pij * DWIJ[2]

    def reduce(self, dst, t, dt):
        import numpy as np
        self.rho_err = np.mean(np.abs(dst.rho / self.rho0 - 1.0))
        dst.iters[self.ctr] += 1

    def converged(self):
        debug = self.debug
        rho_err = self.rho_err
        if rho_err > self.tolerance:
            if debug:
                print("Not converged:", rho_err)
            return -1.0
        else:
            self.ctr += 1
            if debug:
                print("Converged:", rho_err)
            return 1.0


class PCISPHScheme(Scheme):
    def __init__(self, fluids, dim, rho0, nu, gx=0.0, gy=0.0, gz=0.0,
                 tolerance=0.1, debug=False, show_itercount=False):
        self.fluids = fluids
        self.solver = None
        self.dim = dim
        self.rho0 = rho0
        self.nu = nu
        self.gx = gx
        self.gy = gy
        self.gz = gz
        self.tolerance = tolerance
        self.debug = debug
        self.show_itercount = show_itercount

    def add_user_options(self, group):
        group.add_argument(
            '--pcisph-tol', action='store', type=float, dest='tolerance',
            default=None,
            help='relative error tolerance for convergence as a '
                 'percentage.'
        )
        add_bool_argument(
            group, 'pcisph-debug', dest='debug', default=None,
            help="Produce some debugging output on convergence of "
                 "iterations."
        )
        add_bool_argument(
            group, 'pcisph-itercount', dest='show_itercount',
            default=False,
            help="Show the iteration count at each timestep."
        )

    def consume_user_options(self, options):
        vars = ['tolerance', 'debug', 'show_itercount']
        data = dict((var, self._smart_getattr(options, var))
                    for var in vars)
        self.configure(**data)

    def configure_solver(self, kernel=None, integrator_cls=None,
                         extra_steppers=None, **kw):
        from pysph.base.kernels import QuinticSpline
        if kernel is None:
            kernel = QuinticSpline(dim=self.dim)
        steppers = {}
        if extra_steppers is not None:
            steppers.update(extra_steppers)
        step_cls = PCISPHStep
        for fluid in self.fluids:
            if fluid not in steppers:
                steppers[fluid] = step_cls(self.show_itercount)
        cls = PCISPHIntegrator if integrator_cls is None else integrator_cls
        integrator = cls(**steppers)

        from pysph.solver.solver import Solver
        self.solver = Solver(
            dim=self.dim, integrator=integrator, kernel=kernel, **kw
        )

    def get_equations(self):
        from pysph.sph.basic_equations import SummationDensity
        all = self.fluids
        equations = []
        eq1 = []
        for fluid in self.fluids:
            eq1.append(
                MomentumEquationViscosity(
                    dest=fluid, sources=all, nu=self.nu, gx=self.gx,
                    gy=self.gy, gz=self.gz
                )
            )
        equations.append(Group(equations=eq1))

        eq1, g2 = [], []
        for fluid in self.fluids:
            eq1.append(Predict(dest=fluid, sources=None))
        g2.append(Group(equations=eq1, update_nnps=True))

        eq2 = []
        for fluid in self.fluids:
            eq2.append(SummationDensity(dest=fluid, sources=all))
        g2.append(Group(equations=eq2))

        eq3 = []
        for fluid in self.fluids:
            eq3.append(
                ComputePressure(dest=fluid, sources=all, rho0=self.rho0)
            )
        g2.append(Group(equations=eq3, update_nnps=True))

        eq4 = []
        for fluid in self.fluids:
            eq4.append(
                MomentumEquationPressureGradient(
                    dest=fluid, sources=all, rho0=self.rho0,
                    tolerance=self.tolerance, debug=self.debug
                ),
            )
        g2.append(Group(equations=eq4))

        equations.append(
            Group(equations=g2, iterate=True, max_iterations=500,
                  min_iterations=2)
        )
        return equations

    def setup_properties(self, particles, clean=True):
        particle_arrays = dict([(p.name, p) for p in particles])
        dummy = get_particle_array_pcisph(name='junk')
        props = \
list(dummy.properties.keys())
        props += [dict(name=x, stride=y) for x, y in dummy.stride.items()]
        output_props = dummy.output_property_arrays
        constants = [dict(name=x, data=y)
                     for x, y in dummy.constants.items()]
        for fluid in self.fluids:
            pa = particle_arrays[fluid]
            self._ensure_properties(pa, props, clean)
            pa.set_output_arrays(output_props)
            for const in constants:
                pa.add_constant(**const)


# File: pysph-master/pysph/sph/wc/shift.py
"""
Shift particle positions
########################

Equations to maintain uniform particle distribution. There are two ways
proposed:

1. 'SimpleShift' is the earlier one, see [XuStaLau2009].
2. 'FickianShift' is based on Fick's diffusion law, see [LiXuStaRo2012].

TODO: Implement for free surface.

References
----------

.. [XuStaLau2009] Rui Xu, Peter Stansby, Dominique Laurence "Accuracy
   and stability in incompressible SPH (ISPH) based on the projection
   method and a new approach", Journal of Computational Physics 228
   (2009), pp. 6703--6725.

.. [LiXuStaRo2012] S.J. Lind, R. Xu, P.K. Stansby, B.D. Rogers
   "Incompressible smoothed particle hydrodynamics for free-surface
   flows: A generalised diffusion-based algorithm for stability and
   validations for impulsive flows and propagating waves", Journal of
   Computational Physics 231 (2012), pp. 1499--1523.

.. [SkLiStaRo2013] Alex Skillen, S. Lind, P.K. Stansby, B.D. Rogers
   "Incompressible smoothed particle hydrodynamics (SPH) with reduced
   temporal noise and generalised Fickian smoothing applied to
   body-water slam and efficient wave-body interaction", Computer
   Methods in Applied Mechanics and Engineering 265 (2013),
   pp. 163--173.
""" from math import sqrt from compyle.api import declare from pysph.sph.equation import Equation from pysph.base.reduce_array import parallel_reduce_array, serial_reduce_array from pysph.solver.tools import Tool class SimpleShift(Equation): r"""**Simple shift** See the paper [XuStaLau2009], equation(35) """ def __init__(self, dest, sources, const=0.04): self.beta = const super(SimpleShift, self).__init__(dest, sources) def py_initialize(self, dst, t, dt): from numpy import sqrt vmag = sqrt(dst.u**2 + dst.v**2 + dst.w**2) dst.vmax[0] = serial_reduce_array(vmag, 'max') dst.vmax[:] = parallel_reduce_array(dst.vmax, 'max') def loop_all(self, d_idx, d_x, d_y, d_z, s_x, s_y, s_z, d_vmax, d_dpos, dt, N_NBRS, NBRS): i, s_idx = declare('int', 2) ri = 0.0 dxi = 0.0 dyi = 0.0 dzi = 0.0 eps = 1.0e-08 for i in range(N_NBRS): s_idx = NBRS[i] xij = d_x[d_idx] - s_x[s_idx] yij = d_y[d_idx] - s_y[s_idx] zij = d_z[d_idx] - s_z[s_idx] rij = sqrt(xij*xij + yij*yij + zij*zij) r3ij = rij * rij * rij dxi += xij / (r3ij + eps) dyi += yij / (r3ij + eps) dzi += zij / (r3ij + eps) ri += rij ri = ri/N_NBRS fac = self.beta * ri*ri * d_vmax[0] * dt d_dpos[d_idx*3] = fac*dxi d_dpos[d_idx*3 + 1] = fac*dyi d_dpos[d_idx*3 + 2] = fac*dzi d_x[d_idx] += d_dpos[d_idx*3] d_y[d_idx] += d_dpos[d_idx*3 + 1] d_z[d_idx] += d_dpos[d_idx*3 + 2] class FickianShift(Equation): r"""**Fickian-shift** See the paper [LiXuStaRo2012], equation(21-24), for the constant see [SkLiStaRo2013], equation(13). 
""" def __init__(self, dest, sources, fickian_const=10, tensile_const=0.2, tensile_pow=4, hdx=1.0, tensile_correction=False): self.fickian_const = fickian_const self.tensile_const = tensile_const self.tensile_pow = tensile_pow self.hdx = hdx self.tensile_correction = tensile_correction super(FickianShift, self).__init__(dest, sources) def loop_all(self, d_idx, d_x, d_y, d_z, s_x, s_y, s_z, d_u, d_v, d_w, d_h, s_h, s_m, s_rho, dt, d_dpos, N_NBRS, NBRS, SPH_KERNEL): i, s_idx = declare('int', 2) xij, dwij, grad_c = declare('matrix(3)', 3) grad_c[0] = 0.0 grad_c[1] = 0.0 grad_c[2] = 0.0 ui = d_u[d_idx] vi = d_v[d_idx] wi = d_w[d_idx] vmag = sqrt(ui*ui + vi*vi + wi*wi) hi = d_h[d_idx] dx = declare('matrix(3)') dx[0] = hi/self.hdx dx[1] = 0.0 dx[2] = 0.0 fij = 0.0 wdx = SPH_KERNEL.kernel(dx, dx[0], d_h[d_idx]) for i in range(N_NBRS): s_idx = NBRS[i] xij[0] = d_x[d_idx] - s_x[s_idx] xij[1] = d_y[d_idx] - s_y[s_idx] xij[2] = d_z[d_idx] - s_z[s_idx] rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] + xij[2]*xij[2]) hij = (hi + s_h[s_idx]) * 0.5 SPH_KERNEL.gradient(xij, rij, hij, dwij) Vj = s_m[s_idx] / s_rho[s_idx] if self.tensile_correction: R = self.tensile_const n = self.tensile_pow wij = SPH_KERNEL.kernel(xij, rij, hij) fij = R * (wij/wdx)**n grad_c[0] += Vj * (1 + fij) * dwij[0] grad_c[1] += Vj * (1 + fij) * dwij[1] grad_c[2] += Vj * (1 + fij) * dwij[2] fac = -self.fickian_const * hi * vmag * dt d_dpos[d_idx*3] = fac*grad_c[0] d_dpos[d_idx*3 + 1] = fac*grad_c[1] d_dpos[d_idx*3 + 2] = fac*grad_c[2] d_x[d_idx] += d_dpos[d_idx*3] d_y[d_idx] += d_dpos[d_idx*3 + 1] d_z[d_idx] += d_dpos[d_idx*3 + 2] class CorrectVelocities(Equation): r"""**Correct velocities** Correct the velocities after shifting to a new position by using taylor series approximation, see equation (34) of [XuStaLau2009]. .. 
math:: \phi_{i}^' = \phi_i + (\nabla \phi)_i \cdot \delta \mathbf{r}_{ii^'} where, \phi_{i} is the hydrodynamic variable at old position, \phi_{i}^' is at new position, delta \mathbf{r}_{ii^'} is the vector between new and old position. """ def initialize(self, d_idx, d_gradv): i = declare('int') for i in range(9): d_gradv[9*d_idx + i] = 0.0 def loop(self, d_idx, s_idx, s_m, s_rho, d_gradv, DWIJ, VIJ): alp, bet = declare('int', 2) Vj = s_m[s_idx] / s_rho[s_idx] for alp in range(3): for bet in range(3): d_gradv[d_idx*9 + 3*bet + alp] += -Vj * VIJ[alp] * DWIJ[bet] def post_loop(self, d_idx, d_u, d_v, d_w, d_gradv, d_dpos): res = declare('matrix(3)') i, j = declare('int', 2) for i in range(3): tmp = 0.0 for j in range(3): tmp += d_gradv[d_idx*9 + 3*i + j] * d_dpos[d_idx*3 + j] res[i] = tmp d_u[d_idx] += res[0] d_v[d_idx] += res[1] d_w[d_idx] += res[2] class ShiftPositions(Tool): def __init__(self, app, array_name, freq=1, shift_kind='simple', correct_velocity=False, parameter=None): """ Parameters ---------- app : pysph.solver.application.Application. The application instance. arr_name : array Name of the particle array whose position needs to be shifted. freq : int Frequency to apply particle position shift. shift_kind: str Kind to shift to apply available are "simple" and "fickian". correct_velocity: bool Correct velocities after shift in particle position. parameter: float Correct velocities after shift in particle position. 
""" from pysph.solver.utils import get_array_by_name self.particles = app.particles self.dt = app.solver.dt self.dim = app.solver.dim self.kernel = app.solver.kernel self.array = get_array_by_name(self.particles, array_name) self.freq = freq self.kind = shift_kind self.correct_velocity = correct_velocity self.parameter = parameter self.count = 1 self._sph_eval = None options = ['simple', 'fickian'] assert self.kind in options, 'shift_kind should be one of %s' % options def _get_sph_eval(self, kind): from pysph.tools.sph_evaluator import SPHEvaluator from pysph.sph.equation import Group if self._sph_eval is None: arr = self.array eqns = [] name = arr.name if 'vmax' not in arr.constants.keys(): arr.add_constant('vmax', [0.0]) if 'dpos' not in arr.properties.keys(): arr.add_property('dpos', stride=3) if kind == 'simple': const = 0.04 if not self.parameter else self.parameter eqns.append(Group( equations=[SimpleShift(name, [name], const=const)], update_nnps=True) ) elif kind == 'fickian': const = 4 if not self.parameter else self.parameter eqns.append(Group( equations=[FickianShift(name, [name], fickian_const=const)], update_nnps=True) ) if self.correct_velocity: if 'gradv' not in arr.properties.keys(): arr.add_property('gradv', stride=9) eqns.append(Group(equations=[ CorrectVelocities(name, [name])])) sph_eval = SPHEvaluator( arrays=[arr], equations=eqns, dim=self.dim, kernel=self.kernel) return sph_eval else: return self._sph_eval def post_step(self, solver): if self.freq == 0: pass elif self.count % self.freq == 0: self._sph_eval = self._get_sph_eval(self.kind) self._sph_eval.update() self._sph_eval.evaluate(dt=self.dt) self.count += 1 pysph-master/pysph/sph/wc/transport_velocity.py000066400000000000000000000512001356347341600224160ustar00rootroot00000000000000""" Transport Velocity Formulation ############################## References ---------- .. [Adami2012] S. Adami et. 
al "A generalized wall boundary condition for smoothed particle hydrodynamics", Journal of Computational Physics (2012), pp. 7057--7075. .. [Adami2013] S. Adami et. al "A transport-velocity formulation for smoothed particle hydrodynamics", Journal of Computational Physics (2013), pp. 292--307. """ from pysph.sph.equation import Equation from math import sin, pi # constants M_PI = pi class SummationDensity(Equation): r"""**Summation density with volume summation** In addition to the standard summation density, the number density for the particle is also computed. The number density is important for multi-phase flows to define a local particle volume independent of the material density. .. math:: \rho_a = \sum_b m_b W_{ab}\\ \mathcal{V}_a = \frac{1}{\sum_b W_{ab}} Notes ----- Note that in the pysph implementation, V is the inverse volume of a particle, i.e. the equation computes V as follows: .. math:: \mathcal{V}_a = \sum_b W_{ab} For this equation, the destination particle array must define the variable `V` for particle volume. """ def initialize(self, d_idx, d_V, d_rho): d_V[d_idx] = 0.0 d_rho[d_idx] = 0.0 def loop(self, d_idx, d_V, d_rho, d_m, WIJ): d_V[d_idx] += WIJ d_rho[d_idx] += d_m[d_idx]*WIJ class VolumeSummation(Equation): r"""**Number density for volume computation** See `SummationDensity` Note that the quantity `V` is really :math:`\sigma` of the original paper, i.e. inverse of the particle volume. """ def initialize(self, d_idx, d_V): d_V[d_idx] = 0.0 def loop(self, d_idx, d_V, WIJ): d_V[d_idx] += WIJ class VolumeFromMassDensity(Equation): """**Set the inverse volume using mass density**""" def loop(self, d_idx, d_V, d_rho, d_m): d_V[d_idx] = d_rho[d_idx]/d_m[d_idx] class SetWallVelocity(Equation): r"""**Extrapolating the fluid velocity on to the wall** Eq. (22) in [Adami2012]: .. 
math:: \tilde{\boldsymbol{v}}_a = \frac{\sum_b\boldsymbol{v}_b W_{ab}} {\sum_b W_{ab}} Notes ----- The destination particle array for this equation should define the *filtered* velocity variables :math:`uf, vf, wf`. """ def initialize(self, d_idx, d_uf, d_vf, d_wf, d_wij): d_uf[d_idx] = 0.0 d_vf[d_idx] = 0.0 d_wf[d_idx] = 0.0 d_wij[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_uf, d_vf, d_wf, s_u, s_v, s_w, d_wij, WIJ): # normalisation factor is different from 'V' as the particles # near the boundary do not have full kernel support d_wij[d_idx] += WIJ # sum in Eq. (22) # this will be normalized in post loop d_uf[d_idx] += s_u[s_idx] * WIJ d_vf[d_idx] += s_v[s_idx] * WIJ d_wf[d_idx] += s_w[s_idx] * WIJ def post_loop(self, d_uf, d_vf, d_wf, d_wij, d_idx, d_ug, d_vg, d_wg, d_u, d_v, d_w): # calculation is done only for the relevant boundary particles. # d_wij (and d_uf) is 0 for particles sufficiently away from the # solid-fluid interface if d_wij[d_idx] > 1e-12: d_uf[d_idx] /= d_wij[d_idx] d_vf[d_idx] /= d_wij[d_idx] d_wf[d_idx] /= d_wij[d_idx] # Dummy velocities at the ghost points using Eq. (23), # d_u, d_v, d_w are the prescribed wall velocities. d_ug[d_idx] = 2*d_u[d_idx] - d_uf[d_idx] d_vg[d_idx] = 2*d_v[d_idx] - d_vf[d_idx] d_wg[d_idx] = 2*d_w[d_idx] - d_wf[d_idx] class ContinuityEquation(Equation): r"""**Conservation of mass equation** Eq (6) in [Adami2012]: .. math:: \frac{d\rho_a}{dt} = \rho_a \sum_b \frac{m_b}{\rho_b} \boldsymbol{v}_{ab} \cdot \nabla_a W_{ab} """ def initialize(self, d_idx, d_arho): d_arho[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_arho, s_m, s_rho, d_rho, VIJ, DWIJ): vijdotdwij = VIJ[0] * DWIJ[0] + VIJ[1] * DWIJ[1] + VIJ[2] * DWIJ[2] d_arho[d_idx] += d_rho[d_idx] * vijdotdwij * s_m[s_idx] / s_rho[s_idx] class ContinuitySolid(Equation): """Continuity equation for the solid's ghost particles. The key difference is that we use the ghost velocity ug, and not the particle velocity u. 
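The ghost-velocity assignment of Eq. (23) above is simple enough to check on plain numbers. The sketch below is a standalone illustration (not part of PySPH's API) of the per-component formula used in `SetWallVelocity.post_loop`:

```python
def ghost_velocity(u_wall, u_filtered):
    # Eq. (23) of Adami et al. (2012): the dummy velocity assigned to a
    # wall (ghost) particle from the prescribed wall velocity and the
    # fluid velocity smoothed onto the wall.
    return 2.0 * u_wall - u_filtered

# Fluid moving at u = 1 past a fixed wall (u_wall = 0): the ghost
# particle gets u = -1, so the interpolated velocity at the interface
# vanishes, enforcing no-slip.
print(ghost_velocity(0.0, 1.0))
```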
""" def loop(self, d_idx, s_idx, d_rho, d_u, d_v, d_w, d_arho, s_m, s_rho, s_ug, s_vg, s_wg, DWIJ): Vj = s_m[s_idx] / s_rho[s_idx] rhoi = d_rho[d_idx] uij = d_u[d_idx] - s_ug[s_idx] vij = d_v[d_idx] - s_vg[s_idx] wij = d_w[d_idx] - s_wg[s_idx] vij_dot_dwij = uij*DWIJ[0] + vij*DWIJ[1] + wij*DWIJ[2] d_arho[d_idx] += rhoi*Vj*vij_dot_dwij class StateEquation(Equation): r"""**Generalized Weakly Compressible Equation of State** .. math:: p_a = p_0\left[ \left(\frac{\rho}{\rho_0}\right)^\gamma - b \right] + \mathcal{X} Notes ----- This is the generalized Tait's equation of state and the suggested values in [Adami2013] are :math:`\mathcal{X} = 0`, :math:`\gamma=1` and :math:`b = 1`. The reference pressure :math:`p_0` is calculated from the artificial sound speed and reference density: .. math:: p_0 = \frac{c^2\rho_0}{\gamma} """ def __init__(self, dest, sources, p0, rho0, b=1.0): r""" Parameters ---------- p0 : float reference pressure rho0 : float reference density b : float constant (default 1.0). """ self.b = b self.p0 = p0 self.rho0 = rho0 super(StateEquation, self).__init__(dest, sources) def loop(self, d_idx, d_p, d_rho): d_p[d_idx] = self.p0 * (d_rho[d_idx]/self.rho0 - self.b) class MomentumEquationPressureGradient(Equation): r"""**Momentum equation for the Transport Velocity Formulation: Pressure** Eq. (8) in [Adami2013]: .. math:: \frac{d \boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[-\bar{p}_{ab}\nabla_a W_{ab} \right] where .. math:: \bar{p}_{ab} = \frac{\rho_b p_a + \rho_a p_b}{\rho_a + \rho_b} """ def __init__(self, dest, sources, pb, gx=0., gy=0., gz=0., tdamp=0.0): r""" Parameters ---------- pb : float background pressure gx : float Body force per unit mass along the x-axis gy : float Body force per unit mass along the y-axis gz : float Body force per unit mass along the z-axis tdamp : float damping time Notes ----- This equation should have the destination as fluid and sources as fluid and boundary particles. 
This function also computes the contribution to the background pressure and accelerations due to a body force or gravity. The body forces are damped according to Eq. (13) in [Adami2012] to avoid instantaneous accelerations. By default, damping is neglected. """ self.pb = pb self.gx = gx self.gy = gy self.gz = gz self.tdamp = tdamp super(MomentumEquationPressureGradient, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw, d_auhat, d_avhat, d_awhat): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 d_auhat[d_idx] = 0.0 d_avhat[d_idx] = 0.0 d_awhat[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_m, d_rho, s_rho, d_au, d_av, d_aw, d_p, s_p, d_auhat, d_avhat, d_awhat, d_V, s_V, DWIJ): # averaged pressure Eq. (7) rhoi = d_rho[d_idx] rhoj = s_rho[s_idx] p_i = d_p[d_idx] p_j = s_p[s_idx] pij = rhoj * p_i + rhoi * p_j pij /= (rhoj + rhoi) # particle volumes; d_V is inverse volume Vi = 1./d_V[d_idx] Vj = 1./s_V[s_idx] Vi2 = Vi * Vi Vj2 = Vj * Vj # inverse mass of destination particle mi1 = 1.0/d_m[d_idx] # accelerations 1st term in Eq. (8) tmp = -pij * mi1 * (Vi2 + Vj2) d_au[d_idx] += tmp * DWIJ[0] d_av[d_idx] += tmp * DWIJ[1] d_aw[d_idx] += tmp * DWIJ[2] # contribution due to the background pressure Eq. (13) tmp = -self.pb * mi1 * (Vi2 + Vj2) d_auhat[d_idx] += tmp * DWIJ[0] d_avhat[d_idx] += tmp * DWIJ[1] d_awhat[d_idx] += tmp * DWIJ[2] def post_loop(self, d_idx, d_au, d_av, d_aw, t): # damped accelerations due to body or external force damping_factor = 1.0 if t < self.tdamp: damping_factor = 0.5 * (sin((-0.5 + t/self.tdamp)*M_PI) + 1.0) d_au[d_idx] += self.gx * damping_factor d_av[d_idx] += self.gy * damping_factor d_aw[d_idx] += self.gz * damping_factor class MomentumEquationViscosity(Equation): r"""**Momentum equation for the Transport Velocity Formulation: Viscosity** Eq. (8) in [Adami2013]: .. 
math:: \frac{d \boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[ \bar{\eta}_{ab}\hat{r}_{ab}\cdot \nabla_a W_{ab} \frac{\boldsymbol{v}_{ab}}{|\boldsymbol{r}_{ab}|}\right] where .. math:: \bar{\eta}_{ab} = \frac{2\eta_a \eta_b}{\eta_a + \eta_b} """ def __init__(self, dest, sources, nu): r""" Parameters ---------- nu : float kinematic viscosity """ self.nu = nu super(MomentumEquationViscosity, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_rho, s_rho, d_m, d_V, s_V, d_au, d_av, d_aw, R2IJ, EPS, DWIJ, VIJ, XIJ): # averaged shear viscosity Eq. (6) etai = self.nu * d_rho[d_idx] etaj = self.nu * s_rho[s_idx] etaij = 2 * (etai * etaj)/(etai + etaj) # scalar part of the kernel gradient Fij = DWIJ[0]*XIJ[0] + DWIJ[1]*XIJ[1] + DWIJ[2]*XIJ[2] # particle volumes, d_V is inverse volume. Vi = 1./d_V[d_idx] Vj = 1./s_V[s_idx] Vi2 = Vi * Vi Vj2 = Vj * Vj # accelerations 3rd term in Eq. (8) tmp = 1./d_m[d_idx] * (Vi2 + Vj2) * etaij * Fij/(R2IJ + EPS) d_au[d_idx] += tmp * VIJ[0] d_av[d_idx] += tmp * VIJ[1] d_aw[d_idx] += tmp * VIJ[2] class MomentumEquationArtificialViscosity(Equation): r"""**Artificial viscosity for the momentum equation** Eq. (11) in [Adami2012]: .. math:: \frac{d \boldsymbol{v}_a}{dt} = -\sum_b m_b \alpha h_{ab} c_{ab} \frac{\boldsymbol{v}_{ab}\cdot \boldsymbol{r}_{ab}}{\rho_{ab}\left(|r_{ab}|^2 + \epsilon \right)}\nabla_a W_{ab} where .. 
math:: \rho_{ab} = \frac{\rho_a + \rho_b}{2}\\ c_{ab} = \frac{c_a + c_b}{2}\\ h_{ab} = \frac{h_a + h_b}{2} """ def __init__(self, dest, sources, c0, alpha=0.1): r""" Parameters ---------- alpha : float constant c0 : float speed of sound """ self.alpha = alpha self.c0 = c0 super(MomentumEquationArtificialViscosity, self).__init__( dest, sources ) def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, s_idx, s_m, d_au, d_av, d_aw, RHOIJ1, R2IJ, EPS, DWIJ, VIJ, XIJ, HIJ): # v_{ab} \cdot r_{ab} vijdotrij = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2] # scalar part of the accelerations Eq. (11) piij = 0.0 if vijdotrij < 0: muij = (HIJ * vijdotrij)/(R2IJ + EPS) piij = -self.alpha*self.c0*muij piij = s_m[s_idx] * piij*RHOIJ1 d_au[d_idx] += -piij * DWIJ[0] d_av[d_idx] += -piij * DWIJ[1] d_aw[d_idx] += -piij * DWIJ[2] class MomentumEquationArtificialStress(Equation): r"""**Artificial stress contribution to the Momentum Equation** .. math:: \frac{d\boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[ \frac{1}{2}(\boldsymbol{A}_a + \boldsymbol{A}_b) : \nabla_a W_{ab}\right] where the artificial stress terms are given by: .. math:: \boldsymbol{A} = \rho \boldsymbol{v} (\tilde{\boldsymbol{v}} - \boldsymbol{v}) """ def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_rho, d_u, d_v, d_w, d_V, d_uhat, d_vhat, d_what, d_au, d_av, d_aw, d_m, s_rho, s_u, s_v, s_w, s_V, s_uhat, s_vhat, s_what, DWIJ): rhoi = d_rho[d_idx] rhoj = s_rho[s_idx] # physical and advection velocities ui = d_u[d_idx] uhati = d_uhat[d_idx] vi = d_v[d_idx] vhati = d_vhat[d_idx] wi = d_w[d_idx] whati = d_what[d_idx] uj = s_u[s_idx] uhatj = s_uhat[s_idx] vj = s_v[s_idx] vhatj = s_vhat[s_idx] wj = s_w[s_idx] whatj = s_what[s_idx] # particle volumes; d_V is inverse volume. 
Vi = 1./d_V[d_idx] Vj = 1./s_V[s_idx] Vi2 = Vi * Vi Vj2 = Vj * Vj # artificial stress tensor Axxi = rhoi*ui*(uhati - ui) Axyi = rhoi*ui*(vhati - vi) Axzi = rhoi*ui*(whati - wi) Ayxi = rhoi*vi*(uhati - ui) Ayyi = rhoi*vi*(vhati - vi) Ayzi = rhoi*vi*(whati - wi) Azxi = rhoi*wi*(uhati - ui) Azyi = rhoi*wi*(vhati - vi) Azzi = rhoi*wi*(whati - wi) Axxj = rhoj*uj*(uhatj - uj) Axyj = rhoj*uj*(vhatj - vj) Axzj = rhoj*uj*(whatj - wj) Ayxj = rhoj*vj*(uhatj - uj) Ayyj = rhoj*vj*(vhatj - vj) Ayzj = rhoj*vj*(whatj - wj) Azxj = rhoj*wj*(uhatj - uj) Azyj = rhoj*wj*(vhatj - vj) Azzj = rhoj*wj*(whatj - wj) # contraction of stress tensor with kernel gradient Ax = 0.5*( (Axxi + Axxj)*DWIJ[0] + (Axyi + Axyj)*DWIJ[1] + (Axzi + Axzj)*DWIJ[2] ) Ay = 0.5*( (Ayxi + Ayxj)*DWIJ[0] + (Ayyi + Ayyj)*DWIJ[1] + (Ayzi + Ayzj)*DWIJ[2] ) Az = 0.5*( (Azxi + Azxj)*DWIJ[0] + (Azyi + Azyj)*DWIJ[1] + (Azzi + Azzj)*DWIJ[2] ) # accelerations 2nd part of Eq. (8) tmp = 1./d_m[d_idx] * (Vi2 + Vj2) d_au[d_idx] += tmp * Ax d_av[d_idx] += tmp * Ay d_aw[d_idx] += tmp * Az class SolidWallNoSlipBC(Equation): r"""**Solid wall boundary condition** [Adami2012]_ This boundary condition is to be used with fixed ghost particles in SPH simulations and is formulated for the general case of moving boundaries. The velocity and pressure of the fluid particles is extrapolated to the ghost particles and these values are used in the equations of motion. No-penetration: Ghost particles participate in the continuity and state equations with fluid particles. This means as fluid particles approach the wall, the pressure of the ghost particles increases to generate a repulsion force that prevents particle penetration. No-slip: Extrapolation is used to set the `dummy` velocity of the ghost particles for viscous interaction. First, the smoothed velocity field of the fluid phase is extrapolated to the wall particles: .. math:: \tilde{v}_a = \frac{\sum_b v_b W_{ab}}{\sum_b W_{ab}} In the second step, for the viscous interaction in Eqs. 
(10) in [Adami2012] and Eq. (8) in [Adami2013], the velocity of the ghost particles is assigned as: .. math:: v_b = 2v_w -\tilde{v}_a, where :math:`v_w` is the prescribed wall velocity and :math:`v_b` is the ghost particle in the interaction. """ def __init__(self, dest, sources, nu): r""" Parameters ---------- nu : float kinematic viscosity Notes ----- For this equation the destination particle array should be the fluid and the source should be ghost or boundary particles. The boundary particles must define a prescribed velocity :math:`u_0, v_0, w_0` """ self.nu = nu super(SolidWallNoSlipBC, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_m, d_rho, s_rho, d_V, s_V, d_u, d_v, d_w, d_au, d_av, d_aw, s_ug, s_vg, s_wg, DWIJ, R2IJ, EPS, XIJ): # averaged shear viscosity Eq. (6). etai = self.nu * d_rho[d_idx] etaj = self.nu * s_rho[s_idx] etaij = 2 * (etai * etaj)/(etai + etaj) # particle volumes; d_V inverse volume. Vi = 1./d_V[d_idx] Vj = 1./s_V[s_idx] Vi2 = Vi * Vi Vj2 = Vj * Vj # scalar part of the kernel gradient Fij = XIJ[0]*DWIJ[0] + XIJ[1]*DWIJ[1] + XIJ[2]*DWIJ[2] # viscous contribution (third term) from Eq. (8), with VIJ # defined appropriately using the ghost values tmp = 1./d_m[d_idx] * (Vi2 + Vj2) * (etaij * Fij/(R2IJ + EPS)) d_au[d_idx] += tmp * (d_u[d_idx] - s_ug[s_idx]) d_av[d_idx] += tmp * (d_v[d_idx] - s_vg[s_idx]) d_aw[d_idx] += tmp * (d_w[d_idx] - s_wg[s_idx]) class SolidWallPressureBC(Equation): r"""**Solid wall pressure boundary condition** [Adami2012]_ This boundary condition is to be used with fixed ghost particles in SPH simulations and is formulated for the general case of moving boundaries. The velocity and pressure of the fluid particles is extrapolated to the ghost particles and these values are used in the equations of motion. 
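The inter-particle viscosity used by both `SolidWallNoSlipBC` and `MomentumEquationViscosity` is the harmonic-type mean of Eq. (6) in [Adami2013]. A minimal sketch (not PySPH API):

```python
def averaged_viscosity(eta_i, eta_j):
    # Eq. (6) of Adami et al. (2013): inter-particle dynamic viscosity.
    # For equal viscosities it reduces to eta; for a two-phase pair it
    # is biased towards the smaller of the two values.
    return 2.0 * eta_i * eta_j / (eta_i + eta_j)

print(averaged_viscosity(1.0, 1.0))
print(averaged_viscosity(1.0, 3.0))
```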
Pressure boundary condition: The pressure of the ghost particle is also calculated from the fluid particle by interpolation using: .. math:: p_g = \frac{\sum_f p_f W_{gf} + \boldsymbol{g - a_g} \cdot \sum_f \rho_f \boldsymbol{r}_{gf}W_{gf}}{\sum_f W_{gf}}, where the subscripts `g` and `f` relate to the ghost and fluid particles respectively. Density of the wall particle is then set using this pressure .. math:: \rho_w=\rho_0\left(\frac{p_w - \mathcal{X}}{p_0} + 1\right)^{\frac{1}{\gamma}} """ def __init__(self, dest, sources, rho0, p0, b=1.0, gx=0.0, gy=0.0, gz=0.0): r""" Parameters ---------- rho0 : float reference density p0 : float reference pressure b : float constant (default 1.0) gx : float Body force per unit mass along the x-axis gy : float Body force per unit mass along the y-axis gz : float Body force per unit mass along the z-axis Notes ----- For a two fluid system (boundary, fluid), this equation must be instantiated with boundary as the destination and fluid as the source. The boundary particle array must additionally define a property :math:`wij` for the denominator in Eq. (27) from [Adami2012]. This array sums the kernel terms from the ghost particle to the fluid particle. """ self.rho0 = rho0 self.p0 = p0 self.b = b self.gx = gx self.gy = gy self.gz = gz super(SolidWallPressureBC, self).__init__(dest, sources) def initialize(self, d_idx, d_p, d_wij): d_p[d_idx] = 0.0 d_wij[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_p, s_p, d_wij, s_rho, d_au, d_av, d_aw, WIJ, XIJ): # numerator of Eq. (27) ax, ay and az are the prescribed wall # accelerations which must be defined for the wall boundary # particle gdotxij = (self.gx - d_au[d_idx])*XIJ[0] + \ (self.gy - d_av[d_idx])*XIJ[1] + \ (self.gz - d_aw[d_idx])*XIJ[2] d_p[d_idx] += s_p[s_idx]*WIJ + s_rho[s_idx]*gdotxij*WIJ # denominator of Eq. 
(27) d_wij[d_idx] += WIJ def post_loop(self, d_idx, d_wij, d_p, d_rho): # extrapolated pressure at the ghost particle if d_wij[d_idx] > 1e-14: d_p[d_idx] /= d_wij[d_idx] # update the density from the pressure Eq. (28) d_rho[d_idx] = self.rho0 * (d_p[d_idx]/self.p0 + self.b) pysph-master/pysph/sph/wc/viscosity.py000066400000000000000000000106431356347341600205060ustar00rootroot00000000000000"""Viscosity functions""" from pysph.sph.equation import Equation class LaminarViscosity(Equation): def __init__(self, dest, sources, nu, eta=0.01): self.nu = nu self.eta = eta super(LaminarViscosity, self).__init__(dest, sources) def loop(self, d_idx, s_idx, s_m, d_rho, s_rho, d_au, d_av, d_aw, DWIJ, XIJ, VIJ, R2IJ, HIJ): rhoa = d_rho[d_idx] rhob = s_rho[s_idx] # scalar part of the kernel gradient Fij = DWIJ[0] * XIJ[0] + DWIJ[1] * XIJ[1] + DWIJ[2] * XIJ[2] mb = s_m[s_idx] tmp = mb * 4 * self.nu * Fij/((rhoa + rhob)*(R2IJ + self.eta*HIJ*HIJ)) # accelerations d_au[d_idx] += tmp * VIJ[0] d_av[d_idx] += tmp * VIJ[1] d_aw[d_idx] += tmp * VIJ[2] class MonaghanSignalViscosityFluids(Equation): def __init__(self, dest, sources, alpha, h): self.alpha = 0.125 * alpha * h super(MonaghanSignalViscosityFluids, self).__init__(dest, sources) def loop(self, d_idx, s_idx, d_rho, s_rho, s_m, d_au, d_av, d_aw, d_cs, s_cs, RIJ, HIJ, VIJ, XIJ, DWIJ): nua = self.alpha * d_cs[d_idx] nub = self.alpha * s_cs[s_idx] rhoa = d_rho[d_idx] rhob = s_rho[s_idx] mb = s_m[s_idx] vabdotrab = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2] eta = nua * nub/(nua*rhoa + nub*rhob) force = -16 * eta * vabdotrab/(HIJ * (RIJ + 0.01*HIJ*HIJ)) d_au[d_idx] += -mb * force * DWIJ[0] d_av[d_idx] += -mb * force * DWIJ[1] d_aw[d_idx] += -mb * force * DWIJ[2] class ClearyArtificialViscosity(Equation): r"""Artificial viscosity proposed By P. Cleary: .. 
math:: \mathcal{Pi}_{ab} = -\frac{16}{\mu_a \mu_b}{\rho_a \rho_b (\mu_a + \mu_b)}\left( \frac{\boldsymbol{v}_{ab} \cdot \boldsymbol{r}_{ab}}{\boldsymbol{r}_{ab}^2 + \epsilon} \right), where the viscosity is determined from the parameter :math:`\alpha` as .. math:: \mu_a = \frac{1}{8}\alpha h_a c_a \rho_a This equation is described in the 2005 review paper by Monaghan - J. J. Monaghan, "Smoothed Particle Hydrodynamics", Reports on Progress in Physics, 2005, 68, pp 1703--1759 [JM05] """ def __init__(self, dest, sources, dim, alpha=1.0): self.alpha = alpha self.factor = 16.0 if dim == 3: self.factor = 20.0 # Base class initialization super(ClearyArtificialViscosity, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = 0.0 d_av[d_idx] = 0.0 d_aw[d_idx] = 0.0 def loop(self, d_idx, s_idx, d_m, s_m, d_rho, s_rho, d_h, s_h, d_cs, s_cs, d_au, d_av, d_aw, XIJ, VIJ, R2IJ, EPS, DWIJ): # viscosity parameters for each particle Eq. (8.8) in [JM05] mua = 0.125 * self.alpha * d_h[d_idx] * d_cs[d_idx] * d_rho[d_idx] mub = 0.125 * self.alpha * s_h[s_idx] * s_cs[s_idx] * s_rho[s_idx] # \boldsymbol{v}_{ab} \cdot \boldsymbol{r}_{ab} dot = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2] # Pi_ab term. Eq. (8.9) in [JM05] rhoa = d_rho[d_idx] rhob = s_rho[s_idx] eta = mua*mub/(rhoa*rhob*(mua + mub)) piab = -s_m[s_idx] * self.factor*eta * (dot/(R2IJ + EPS)) # accelerations due to viscosity Eq. (8.2) in [JM05] d_au[d_idx] += piab * DWIJ[0] d_av[d_idx] += piab * DWIJ[1] d_aw[d_idx] += piab * DWIJ[2] class LaminarViscosityDeltaSPH(Equation): r""" See section 2 of the below reference - P. Sun, A. Colagrossi, S. Marrone, A. Zhang "The plus-SPH model: simple procedures for a further improvement of the SPH scheme", Computer Methods in Applied Mechanics and Engineering 315 (2017), pp. 25-49. 
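The per-particle viscosity entering the Cleary artificial-viscosity term, Eq. (8.8) in [JM05], can be evaluated in isolation. The helper below is a hypothetical sketch mirroring the expression computed inside `ClearyArtificialViscosity.loop`:

```python
def cleary_viscosity(alpha, h, cs, rho):
    # mu_a = (1/8) * alpha * h_a * c_a * rho_a, Eq. (8.8) in [JM05].
    return 0.125 * alpha * h * cs * rho

# e.g. alpha = 1, smoothing length 0.1, sound speed 10, density 1000
print(cleary_viscosity(1.0, 0.1, 10.0, 1000.0))
```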
""" def __init__(self, dest, sources, dim, rho0, nu): self.dim = dim self.rho0 = rho0 self.nu = nu super(LaminarViscosityDeltaSPH, self).__init__(dest, sources) def loop(self, d_idx, s_idx, s_m, s_rho, d_rho, d_au, d_av, d_aw, HIJ, DWIJ, R2IJ, EPS, VIJ, XIJ): Vj = s_m[s_idx]/s_rho[s_idx] rhoi = d_rho[d_idx] vdotxij = VIJ[0]*XIJ[0] + VIJ[1]*XIJ[1] + VIJ[2]*XIJ[2] piij = vdotxij/(R2IJ + EPS) fac = 2 * (self.dim + 2) * self.nu * self.rho0 * piij * Vj / rhoi d_au[d_idx] += fac*DWIJ[0] d_av[d_idx] += fac*DWIJ[1] d_aw[d_idx] += fac*DWIJ[2] pysph-master/pysph/sph/wc/zhanghuadams.py000066400000000000000000000062751356347341600211320ustar00rootroot00000000000000from pysph.sph.equation import Equation class Continuity(Equation): def __init__(self, dest, sources, c0): self.c0 = c0 super(Continuity, self).__init__(dest, sources) def initialize(self, d_idx, d_arho): d_arho[d_idx] = 0.0 def loop(self, d_idx, s_idx, s_m, d_u, d_v, d_w, s_u, s_v, s_w, d_cs, s_cs, d_rho, d_arho, s_rho, d_p, s_p, DWIJ, RIJ, XIJ): rl = d_rho[d_idx] rr = s_rho[s_idx] pl = d_p[d_idx] pr = s_p[s_idx] cl = d_cs[d_idx] cr = s_cs[s_idx] uxl = d_u[d_idx] uyl = d_v[d_idx] uzl = d_w[d_idx] uxr = s_u[s_idx] uyr = s_v[s_idx] uzr = s_w[s_idx] co = self.c0 eij = declare('matrix(3)') vij = declare('matrix(3)') v_star = declare('matrix(3)') vij[0] = 0.5 * (uxl + uxr) vij[1] = 0.5 * (uyl + uyr) vij[2] = 0.5 * (uzl + uzr) for i in range(3): if RIJ >= 1.0e-12: eij[i] = -XIJ[i] / RIJ else: eij[i] = 0.0 ul = uxl * eij[0] + uyl * eij[1] + uzl * eij[2] ur = uxr * eij[0] + uyr * eij[1] + uzr * eij[2] rhobar = 0.5 * (rl + rr) u_star = 0.5 * (ul + ur) + 0.5 * (pl - pr) / (rhobar * co) p_star = 0.5 * (pl + pr) + 0.5 * rhobar * co * (ul - ur) for i in range(3): v_star[i] = (u_star - 0.5 * (ul + ur)) * eij[i] + vij[i] vdotw = (uxl - v_star[0]) * DWIJ[0] + (uyl - v_star[1]) * DWIJ[1] vdotw += (uzl - v_star[2]) * DWIJ[2] d_arho[d_idx] += 2.0 * s_m[s_idx] * vdotw * rl / rr class MomentumFluid(Equation): def __init__(self, dest, 
sources, c0, gx=0.0, gy=0.0, gz=0.0): self.gx = gx self.gy = gy self.gz = gz self.c0 = c0 super(MomentumFluid, self).__init__(dest, sources) def initialize(self, d_idx, d_au, d_av, d_aw): d_au[d_idx] = self.gx d_av[d_idx] = self.gy d_aw[d_idx] = self.gz def loop(self, d_idx, s_idx, s_m, d_u, d_v, d_w, s_u, s_v, s_w, d_cs, s_cs, d_rho, s_rho, d_p, s_p, d_au, d_av, d_aw, RIJ, XIJ, DWIJ, HIJ): rl = d_rho[d_idx] rr = s_rho[s_idx] pl = d_p[d_idx] pr = s_p[s_idx] cl = d_cs[d_idx] cr = s_cs[s_idx] uxl = d_u[d_idx] uyl = d_v[d_idx] uzl = d_w[d_idx] uxr = s_u[s_idx] uyr = s_v[s_idx] uzr = s_w[s_idx] m = s_m[s_idx] co = self.c0 eij = declare('matrix(3)') vij = declare('matrix(3)') v_star = declare('matrix(3)') vij[0] = 0.5 * (uxl + uxr) vij[1] = 0.5 * (uyl + uyr) vij[2] = 0.5 * (uzl + uzr) for i in range(3): if RIJ >= 1.0e-12: eij[i] = -XIJ[i] / RIJ else: eij[i] = 0.0 ul = uxl * eij[0] + uyl * eij[1] + uzl * eij[2] ur = uxr * eij[0] + uyr * eij[1] + uzr * eij[2] rhobar = 0.5 * (rl + rr) p_star = 0.5 * (pl + pr) + 0.5 * rhobar * co * (ul - ur) factor = -2.0 * m * p_star / (rl * rr) d_au[d_idx] += factor * DWIJ[0] d_av[d_idx] += factor * DWIJ[1] d_aw[d_idx] += factor * DWIJ[2] pysph-master/pysph/tools/000077500000000000000000000000001356347341600160315ustar00rootroot00000000000000pysph-master/pysph/tools/__init__.py000066400000000000000000000000001356347341600201300ustar00rootroot00000000000000pysph-master/pysph/tools/binder.py000066400000000000000000000131311356347341600176450ustar00rootroot00000000000000"""Make a directory compatible with mybinder.org and ready for upload to a Github repo. """ from pysph.solver.utils import get_files, mkdir import os from shutil import copy import nbformat as nbf import argparse import sys import re import glob def find_viewer_type(path): ''' Finds the type of viewer to use in the jupyter notebook. Parses the log file at the file path and searches for 'dim=d', where 'dD' is taken to be the viewer type. 
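The log-parsing step of `find_viewer_type` boils down to one regular expression. A stripped-down, hypothetical version of that lookup (the real function scans whole log files and defaults to '2D'):

```python
import re


def viewer_type(line, default='2D'):
    # Pull the spatial dimension out of a solver log line, e.g.
    # 'dim=3' -> '3D'; fall back to the default when nothing matches.
    match = re.search(r'dim=(\d)', line)
    return match.group(1) + 'D' if match else default

print(viewer_type('Solver started with dim=3, dt=1e-4'))
```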
example: if 'dim=2', then what is returned is '2D' ''' log_file_path = os.path.abspath(path) + '/*.log' regex = r'dim=(\d)' log_files = glob.glob(log_file_path) if not log_files: return '2D' match_list = [] with open(log_files[0], 'r') as file: for line in file: for match in re.finditer(regex, line, re.S): match_text = match.group() match_list.append(match_text) if len(match_list) > 0: break if len(match_list) > 0: break return match_list[0][-1] + 'D' def make_notebook(path, sim_name, config_dict={}): ''' Makes a jupyter notebook to view simulation results stored in a given directory path: the directory conatining the output files sim_name: name of the simulation ex. 'cavity_output' config_dict: configuration dictionary for the notbeook viewer [dict] ex. {'fluid': {'frame': 20}} ''' viewer_type = find_viewer_type(path) cell1_src = [ "import os\n", "from pysph.tools.ipy_viewer import Viewer" + viewer_type ] cell2_src = [ "cwd = os.getcwd()" ] cell3_src = [ "viewer = Viewer" + viewer_type + "(cwd)\n", "viewer.interactive_plot(" + str(config_dict) + ")" ] nb = nbf.v4.new_notebook() nb.cells = [ nbf.v4.new_code_cell(source=cell1_src), nbf.v4.new_code_cell(source=cell2_src), nbf.v4.new_code_cell(source=cell3_src) ] nbf.write( nb, os.path.join( path, sim_name+'.ipynb' ) ) return def find_sim_dirs(path, sim_paths_list=[]): ''' Finds all the directories in a given directory that contain pysph output files. ''' path = os.path.abspath(path) sim_files = get_files(path) if len(sim_files) != 0: sim_paths_list.append(path) elif len(sim_files) == 0: files = os.listdir(path) files = [f for f in files if os.path.isdir(f)] files = [os.path.abspath(f) for f in files if not f.startswith('.')] for f in files: sim_paths_list = find_sim_dirs(f, sim_paths_list) return sim_paths_list def find_dir_size(path): ''' Finds the size of a given directory. 
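The directory-size accumulation used by `find_dir_size` is a standard `os.walk` pattern; symlinks are skipped so linked files are not counted twice. A self-contained sketch of the same idea:

```python
import os
import tempfile


def dir_size(path):
    # Walk the tree and sum the sizes of all regular files.
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            fp = os.path.join(dirpath, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# Demonstrate on a throwaway directory containing one 128-byte file.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, 'out.npz'), 'wb') as f:
        f.write(b'0' * 128)
    print(dir_size(d))
```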
''' total_size = 0 for dir_path, dir_names, file_names in os.walk(path): for f in file_names: fp = os.path.join(dir_path, f) if not os.path.islink(fp): total_size += os.path.getsize(fp) return total_size def make_binder(path): src_path = os.path.abspath(path) sim_paths_list = find_sim_dirs(src_path) for path in sim_paths_list: sim_name = os.path.split(path)[1] make_notebook(path, sim_name) if len(sim_paths_list) == 1 and sim_paths_list[0] == src_path: files = os.listdir(src_path) files = [os.path.join(src_path, f) for f in files] mkdir(os.path.join(src_path, sim_name)) could_not_copy = [] for f in files: try: copy(f, os.path.join(src_path, sim_name)) except BaseException as exc: could_not_copy.append([f, exc]) continue os.remove(f) if len(could_not_copy) != 0: print("Could not copy the following files:\n") for f in could_not_copy: print('file: ', f[0]) print('error: ', f[1], '\n') with open(os.path.join(src_path, 'requirements.txt'), 'w') as file: file.write( "ipympl\n" + "matplotlib\n" + "ipyvolume\n" + "numpy\n" + "pytools\n" + "-e git+https://github.com/pypr/compyle#egg=compyle\n" + "-e git+https://github.com/pypr/pysph#egg=pysph" ) with open(os.path.join(src_path, 'README.md'), 'w') as file: file.write( "# Title\n" + "[![Binder](https://mybinder.org/badge_logo.svg)]" + "(https://mybinder.org/v2/gh/user_name/repo_name/branch_name)" + "\n" + "[comment]: # (The above link is for repositories hosted " + "on GitHub. 
For links corresponding to other hosting services, " + "please visit https://mybinder.readthedocs.io)" ) def main(argv=None): if argv is None: argv = sys.argv[1:] parser = argparse.ArgumentParser( prog='binder', description=__doc__, add_help=False ) parser.add_argument( "-h", "--help", action="store_true", default=False, dest="help", help="show this help message and exit" ) parser.add_argument( "src_path", type=str, nargs=1, help="the directory containing the directories/files to be prepared" ) if len(argv) > 0 and argv[0] in ['-h', '--help']: parser.print_help() sys.exit() options, extra = parser.parse_known_args(argv) src_path = options.src_path[0] make_binder(src_path) if __name__ == '__main__': main() pysph-master/pysph/tools/cli.py000066400000000000000000000046771356347341600171700ustar00rootroot00000000000000"""Convenience script for running various PySPH related tasks. """ from __future__ import print_function from argparse import ArgumentParser from os.path import exists, join import sys def run_viewer(args): from pysph.tools.mayavi_viewer import main main(args) def run_examples(args): from pysph.examples.run import main main(args) def output_vtk(args): from pysph.solver.vtk_output import main main(args) def _has_pysph_dir(): init_py = join('pysph', '__init__.py') init_pyc = join('pysph', '__init__.pyc') return exists(init_py) or exists(init_pyc) def run_tests(args): argv = ['--pyargs', 'pysph'] + args from pytest import cmdline cmdline.main(args=argv) def make_binder(args): from pysph.tools.binder import main main(args) def cull_files(args): from pysph.tools.cull import main main(args) def main(): parser = ArgumentParser(description=__doc__, add_help=False) parser.add_argument( "-h", "--help", action="store_true", default=False, dest="help", help="show this help message and exit" ) subparsers = parser.add_subparsers(help='sub-command help') viewer = subparsers.add_parser( 'view', help='View output files generated by PySPH', add_help=False ) 
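The `pysph` CLI above dispatches to sub-commands by attaching a handler with `set_defaults(func=...)` and forwarding any unrecognized options through `parse_known_args`. A minimal self-contained sketch of that pattern (the `demo`/`run` names are made up for illustration):

```python
import argparse


def build_parser():
    # Each sub-command stores its handler on the parsed namespace;
    # options the top-level parser does not know are passed through to
    # the handler untouched.
    parser = argparse.ArgumentParser(prog='demo')
    sub = parser.add_subparsers()
    run = sub.add_parser('run', add_help=False)
    run.set_defaults(func=lambda extra: 'run got ' + ' '.join(extra))
    return parser

parser = build_parser()
args, extra = parser.parse_known_args(['run', '--openmp'])
print(args.func(extra))
```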
    viewer.set_defaults(func=run_viewer)

    runner = subparsers.add_parser(
        'run', help='Run PySPH examples', add_help=False
    )
    runner.set_defaults(func=run_examples)

    vtk_out = subparsers.add_parser(
        'dump_vtk', help='Dump VTK Output', add_help=False
    )
    vtk_out.set_defaults(func=output_vtk)

    tests = subparsers.add_parser(
        'test', help='Run entire PySPH test-suite', add_help=False
    )
    tests.set_defaults(func=run_tests)

    binder = subparsers.add_parser(
        'binder',
        help='Make a mybinder.org compatible directory for upload to a '
             'host repo',
        add_help=False
    )
    binder.set_defaults(func=make_binder)

    cull = subparsers.add_parser(
        'cull',
        help='Cull files in a given directory by a specified culling_factor',
        add_help=False
    )
    cull.set_defaults(func=cull_files)

    if (len(sys.argv) == 1 or
            (len(sys.argv) > 1 and sys.argv[1] in ['-h', '--help'])):
        parser.print_help()
        sys.exit()

    args, extra = parser.parse_known_args()
    args.func(extra)


if __name__ == '__main__':
    main()

pysph-master/pysph/tools/cull.py

"""Culls files in a given directory; one in every 'c' files is spared.

The specified directory can contain other directories that house the
output files; files in all these directories will be culled, sparing one
in every 'c' files. Note that DELETION IS PERMANENT.
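The culling selection in `cull()` keeps every c-th output file (indices 0, c, 2c, ...) and deletes the rest via a set difference. A standalone sketch of that index arithmetic; unlike the original, it sorts the doomed indices so the deletion order is deterministic (iteration order of a set difference is not guaranteed):

```python
def files_to_delete(files, c):
    # Keep indices 0, c, 2c, ... and return the files at all the
    # remaining indices, in ascending order.
    n = len(files)
    doomed = sorted(set(range(n)) - set(range(0, n, c)))
    return [files[i] for i in doomed]

# Seven files, cull factor 3: files 0, 3 and 6 survive.
print(files_to_delete(['f0', 'f1', 'f2', 'f3', 'f4', 'f5', 'f6'], 3))
```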
""" from pysph.tools.binder import find_sim_dirs, find_dir_size from pysph.solver.utils import get_files import os import sys import argparse def cull(src_path, c): src_path = os.path.abspath(src_path) sim_paths_list = find_sim_dirs(src_path) initial_size = find_dir_size(src_path) for path in sim_paths_list: files = get_files(path) l = len(files) del_files = [ files[i] for i in set(range(l)) - set(range(0, l, c)) ] if len(del_files) != 0: for f in del_files: os.remove(f) final_size = find_dir_size(src_path) print("Initial size of the directory was: "+str(initial_size)+" bytes") print("Final size of the directory is: "+str(final_size)+" bytes") return def main(argv=None): if argv is None: argv = sys.argv[1:] parser = argparse.ArgumentParser( prog='cull', description=__doc__, add_help=False ) parser.add_argument( "-h", "--help", action="store_true", default=False, dest="help", help="show this help message and exit" ) parser.add_argument( "src_path", metavar='src_path', type=str, nargs=1, help="the directory containing the directories/files to be culled" ) parser.add_argument( "-c", "--cull-factor", metavar="cull_factor", type=int, default=2, help="one in every 'c' files is spared, all remaining output " + "files are deleted [default=2]" ) if len(argv) > 0 and argv[0] in ['-h', '--help']: parser.print_help() sys.exit() options, extra = parser.parse_known_args(argv) src_path = options.src_path[0] x = options.cull_factor cull(src_path, x) if __name__ == '__main__': main() pysph-master/pysph/tools/fortranfile.py000066400000000000000000000216011356347341600207160ustar00rootroot00000000000000# Copyright 2008, 2009 Neil Martinsen-Burrell # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, 
and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """Defines a file-derived class to read/write Fortran unformatted files taken from http://wiki.scipy.org/Cookbook/FortranIO/FortranFile The assumption is that a Fortran unformatted file is being written by the Fortran runtime as a sequence of records. Each record consists of an integer (of the default size [usually 32 or 64 bits]) giving the length of the following data in bytes, then the data itself, then the same integer as before. Examples -------- To use the default endian and size settings, one can just do:: >>> f = FortranFile('filename') >>> x = f.readReals() One can read arrays with varying precisions:: >>> f = FortranFile('filename') >>> x = f.readInts('h') >>> y = f.readInts('q') >>> z = f.readReals('f') Where the format codes are those used by Python's struct module. One can change the default endian-ness and header precision:: >>> f = FortranFile('filename', endian='>', header_prec='l') for a file with little-endian data whose record headers are long integers. 
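The record framing described above can be sketched with ``struct`` alone
(an illustration independent of the class below; ``frame_record`` is a
hypothetical helper):

```python
import struct

# A Fortran unformatted record is <length><payload><length>: the byte count
# is written both before and after the data. Here: little-endian, 'i' header.
def frame_record(payload, endian='<', header_prec='i'):
    head = struct.pack(endian + header_prec, len(payload))
    return head + payload + head

rec = frame_record(b'abcd')  # 4-byte header + 4 data bytes + 4-byte trailer
```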
""" import struct import numpy class FortranFile(file): """File with methods for dealing with fortran unformatted data files""" def _get_header_length(self): return struct.calcsize(self._header_prec) _header_length = property(fget=_get_header_length) def _set_endian(self,c): """Set endian to big (c='>') or little (c='<') or native (c='@') :Parameters: `c` : string The endian-ness to use when reading from this file. """ if c in '<>@=': self._endian = c else: raise ValueError('Cannot set endian-ness') def _get_endian(self): return self._endian ENDIAN = property(fset=_set_endian, fget=_get_endian, doc="Possible endian values are '<', '>', '@', '='" ) def _set_header_prec(self, prec): if prec in 'hilq': self._header_prec = prec else: raise ValueError('Cannot set header precision') def _get_header_prec(self): return self._header_prec HEADER_PREC = property(fset=_set_header_prec, fget=_get_header_prec, doc="Possible header precisions are 'h', 'i', 'l', 'q'" ) def __init__(self, fname, endian='@', header_prec='i', *args, **kwargs): """Open a Fortran unformatted file for writing. Parameters ---------- endian : character, optional Specify the endian-ness of the file. Possible values are '>', '<', '@' and '='. See the documentation of Python's struct module for their meanings. The deafult is '>' (native byte order) header_prec : character, optional Specify the precision used for the record headers. Possible values are 'h', 'i', 'l' and 'q' with their meanings from Python's struct module. The default is 'i' (the system's default integer). """ file.__init__(self, fname, *args, **kwargs) self.ENDIAN = endian self.HEADER_PREC = header_prec def _read_exactly(self, num_bytes): """Read in exactly num_bytes, raising an error if it can't be done.""" data = '' while True: l = len(data) if l == num_bytes: return data else: read_data = self.read(num_bytes - l) if read_data == '': raise IOError('Could not read enough data.' ' Wanted %d bytes, got %d.' 
% (num_bytes, l)) data += read_data def _read_check(self): return struct.unpack(self.ENDIAN+self.HEADER_PREC, self._read_exactly(self._header_length) )[0] def _write_check(self, number_of_bytes): """Write the header for the given number of bytes""" self.write(struct.pack(self.ENDIAN+self.HEADER_PREC, number_of_bytes)) def readRecord(self): """Read a single fortran record""" l = self._read_check() data_str = self._read_exactly(l) check_size = self._read_check() if check_size != l: raise IOError('Error reading record from data file') return data_str def writeRecord(self,s): """Write a record with the given bytes. Parameters ---------- s : the string to write """ length_bytes = len(s) self._write_check(length_bytes) self.write(s) self._write_check(length_bytes) def readString(self): """Read a string.""" return self.readRecord() def writeString(self,s): """Write a string Parameters ---------- s : the string to write """ self.writeRecord(s) _real_precisions = 'df' def readReals(self, prec='f'): """Read in an array of real numbers. Parameters ---------- prec : character, optional Specify the precision of the array using character codes from Python's struct module. Possible values are 'd' and 'f'. 
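The unpacking step can be sketched standalone (an illustration assuming a
little-endian payload; note the integer division when counting values):

```python
import struct

import numpy

# Three float32 values packed back-to-back, as they appear inside a record.
payload = struct.pack('<3f', 1.0, 2.0, 3.0)
num = len(payload) // struct.calcsize('f')
values = numpy.array(struct.unpack('<' + str(num) + 'f', payload),
                     dtype=numpy.float32)
```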
""" _numpy_precisions = {'d': numpy.float64, 'f': numpy.float32 } if prec not in self._real_precisions: raise ValueError('Not an appropriate precision') data_str = self.readRecord() num = len(data_str)/struct.calcsize(prec) numbers =struct.unpack(self.ENDIAN+str(num)+prec,data_str) return numpy.array(numbers, dtype=_numpy_precisions[prec]) def writeReals(self, reals, prec='f'): """Write an array of floats in given precision Parameters ---------- reals : array Data to write prec` : string Character code for the precision to use in writing """ if prec not in self._real_precisions: raise ValueError('Not an appropriate precision') # Don't use writeRecord to avoid having to form a # string as large as the array of numbers length_bytes = len(reals)*struct.calcsize(prec) self._write_check(length_bytes) _fmt = self.ENDIAN + prec for r in reals: self.write(struct.pack(_fmt,r)) self._write_check(length_bytes) _int_precisions = 'hilq' def readInts(self, prec='i'): """Read an array of integers. Parameters ---------- prec : character, optional Specify the precision of the data to be read using character codes from Python's struct module. 
Possible values are 'h', 'i', 'l' and 'q' """ if prec not in self._int_precisions: raise ValueError('Not an appropriate precision') data_str = self.readRecord() num = len(data_str)/struct.calcsize(prec) return numpy.array(struct.unpack(self.ENDIAN+str(num)+prec,data_str)) def writeInts(self, ints, prec='i'): """Write an array of integers in given precision Parameters ---------- reals : array Data to write prec : string Character code for the precision to use in writing """ if prec not in self._int_precisions: raise ValueError('Not an appropriate precision') # Don't use writeRecord to avoid having to form a # string as large as the array of numbers length_bytes = len(ints)*struct.calcsize(prec) self._write_check(length_bytes) _fmt = self.ENDIAN + prec for item in ints: self.write(struct.pack(_fmt,item)) self._write_check(length_bytes) pysph-master/pysph/tools/geometry.py000066400000000000000000000577211356347341600202520ustar00rootroot00000000000000from __future__ import division import numpy as np import copy from pysph.base.nnps import LinkedListNNPS from pysph.base.utils import get_particle_array, get_particle_array_wcsph from cyarray.api import UIntArray from numpy.linalg import norm def distance(point1, point2=np.array([0.0, 0.0, 0.0])): return np.sqrt(sum((point1 - point2) * (point1 - point2))) def distance_2d(point1, point2=np.array([0.0, 0.0])): return np.sqrt(sum((point1 - point2) * (point1 - point2))) def matrix_exp(matrix): """ Exponential of a matrix. Finds the exponential of a square matrix of any order using the formula exp(A) = I + (A/1!) + (A**2/2!) + (A**3/3!) + ......... 
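The truncated series can be written standalone as follows (a fixed-term
sketch for illustration; the implementation below instead stops adaptively
on a residue test):

```python
import numpy as np

def matrix_exp_series(a, terms=30):
    # Accumulate I + A/1! + A**2/2! + ... term by term.
    result = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for n in range(1, terms):
        term = term @ a / n
        result = result + term
    return result
```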
Parameters ---------- matrix : numpy matrix of order nxn (square) filled with numbers Returns ------- result : numpy matrix of the same order Examples -------- >>>A = np.matrix([[1, 2],[2, 3]]) >>>matrix_exp(A) matrix([[19.68002699, 30.56514746], [30.56514746, 50.24517445]]) >>>B = np.matrix([[0, 0],[0, 0]]) >>>matrix_exp(B) matrix([[1., 0.], [0., 1.]]) """ matrix = np.asmatrix(matrix) tol = 1.0e-16 result = matrix**(0) n = 1 condition = True while condition: adding = matrix**(n) / (1.0 * np.math.factorial(n)) result += adding residue = np.sqrt(np.sum(np.square(adding)) / np.sum(np.square(result))) condition = (residue > tol) n += 1 return result def extrude(x, y, dx=0.01, extrude_dist=1.0, z_center=0.0): """ Extrudes a 2d geometry. Takes a 2d geometry with x, y values and extrudes it in z direction by the amount extrude_dist with z_center as center Parameters ---------- x : 1d array object with numbers y : 1d array object with numbers dx : a number extrude_dist : a number z_center : a number x, y should be of the same length and no x, y pair should be the same Returns ------- x_new : 1d numpy array object with new x values y_new : 1d numpy array object with new y values z_new : 1d numpy array object with z values x_new, y_new, z_new are of the same length Examples -------- >>>x = np.array([0.0]) >>>y = np.array([0.0]) >>>extrude(x, y, 0.1, 0.2, 0.0) (array([ 0., 0., 0.]), array([ 0., 0., 0.]), array([-0.1, 0., 0.1])) """ z = np.arange(z_center - extrude_dist / 2., z_center + (extrude_dist + dx) / 2., dx) x_new = np.tile(np.asarray(x), len(z)) y_new = np.tile(np.asarray(y), len(z)) z_new = np.repeat(z, len(x)) return x_new, y_new, z_new def translate(x, y, z, x_translate=0.0, y_translate=0.0, z_translate=0.0): """ Translates set of points in 3d cartisean space. Takes set of points and translates each and every point by some mentioned amount in all the 3 directions. 
Parameters ---------- x : 1d array object with numbers y : 1d array object with numbers z : 1d array object with numbers x_translate : a number y_translate : a number z_translate : a number Returns ------- x_new : 1d numpy array object with new x values y_new : 1d numpy array object with new y values z_new : 1d numpy array object with new z values Examples -------- >>>x = np.array([0.0, 1.0, 2.0]) >>>y = np.array([-1.0, 0.0, 1.5]) >>>z = np.array([0.5, -1.5, 0.0]) >>>translate(x, y, z, 1.0, -0.5, 2.0) (array([ 1., 2., 3.]), array([-1.5, -0.5, 1.]), array([2.5, 0.5, 2.])) """ x_new = np.asarray(x) + x_translate y_new = np.asarray(y) + y_translate z_new = np.asarray(z) + z_translate return x_new, y_new, z_new def rotate(x, y, z, axis=np.array([0.0, 0.0, 1.0]), angle=90.0): """ Rotates set of points in 3d cartisean space. Takes set of points and rotates each point with some angle w.r.t a mentioned axis. Parameters ---------- x : 1d array object with numbers y : 1d array object with numbers z : 1d array object with numbers axis : 1d array with 3 numbers angle(in degrees) : number Returns ------- x_new : 1d numpy array object with new x values y_new : 1d numpy array object with new y values z_new : 1d numpy array object with new z values Examples -------- >>>x = np.array([0.0, 1.0, 2.0]) >>>y = np.array([-1.0, 0.0, 1.5]) >>>z = np.array([0.5, -1.5, 0.0]) >>>axis = np.array([0.0, 0.0, 1.0]) >>>rotate(x, y, z, axis, 90.0) (array([ 0.29212042, -0.5, 2.31181936]), array([ -0.5, 3.31047738, 11.12095476]), array([-0.5, -0.5, 3.5])) """ theta = angle * np.pi / 180.0 unit_vector = np.asarray(axis) / norm(np.asarray(axis)) matrix = np.cross(np.eye(3), unit_vector * theta) rotation_matrix = matrix_exp(np.matrix(matrix)) new_points = [] for xi, yi, zi in zip(np.asarray(x), np.asarray(y), np.asarray(z)): point = np.array([xi, yi, zi]) new = np.dot(rotation_matrix, point) new_points.append(np.asarray(new)[0]) new_points = np.array(new_points) x_new = new_points[:, 0] y_new = 
new_points[:, 1] z_new = new_points[:, 2] return x_new, y_new, z_new def get_2d_wall(dx=0.01, center=np.array([0.0, 0.0]), length=1.0, num_layers=1, up=True): """ Generates a 2d wall which is parallel to x-axis. The wall can be rotated parallel to any axis using the rotate function. 3d wall can be also generated using the extrude function after generating particles using this function. ^ | | y|******************* | wall particles | |____________________> x Parameters ---------- dx : a number which is the spacing required center : 1d array like object which is the center of wall length : a number which is the length of the wall num_layers : Number of layers for the wall up : True if the layers have to created on top of base wall Returns ------- x : 1d numpy array with x coordinates of the wall y : 1d numpy array with y coordinates of the wall """ x = np.arange(-length / 2., length / 2. + dx, dx) + center[0] y = np.ones_like(x) * center[1] value = 1 if up else -1 for i in range(1, num_layers): y1 = np.ones_like(x) * center[1] + value * i * dx y = np.concatenate([y, y1]) return np.tile(x, num_layers), y def get_2d_tank(dx=0.05, base_center=np.array([0.0, 0.0]), length=1.0, height=1.0, num_layers=1, outside=True, staggered=False, top=False): """ Generates an open 2d tank with the base parallel to x-axis and the side walls parallel to y-axis. The tank can be rotated to any direction using rotate function. 3d tank can be generated using extrude function. 
^ |* * |* 2d tank * y|* particles * |* * |* * * * * * * * * | base |____________________> x Parameters ---------- dx : a number which is the spacing required base_center : 1d array like object which is the center of base wall length : a number which is the length of the base height : a number which is the length of the side wall num_layers : Number of layers for the tank outside : A boolean value which decides if the layers are inside or outside staggered : A boolean value which decides if the layers are staggered or not top : A boolean value which decides if the top is present or not Returns ------- x : 1d numpy array with x coordinates of the tank y : 1d numpy array with y coordinates of the tank """ dy = dx fac = 1 if outside else 0 if staggered: dx = dx/2 start = fac*(1 - num_layers)*dx end = fac*num_layers*dx + (1 - fac) * dx x, y = np.mgrid[start:length+end:dx, start:height+end:dy] topset = 0 if top else 10*height if staggered: topset += dx y[1::2] += dx offset = 0 if outside else (num_layers-1)*dx cond = ~((x > offset) & (x < length-offset) & (y > offset) & (y < height+topset-offset)) return x[cond] + base_center[0] - length/2, y[cond] + base_center[1] def get_2d_circle(dx=0.01, r=0.5, center=np.array([0.0, 0.0])): """ Generates a completely filled 2d circular area. 
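The fill strategy is to sample a uniform grid over the bounding square and
keep only the points inside the circle. A standalone sketch of the same
mask (illustration only):

```python
import numpy as np

dx, r = 0.1, 0.5
n = int(2.0 * r / dx) + 1
# Uniform grid over the bounding square [-r, r] x [-r, r].
x, y = np.mgrid[-r:r:n * 1j, -r:r:n * 1j]
x, y = np.ravel(x), np.ravel(y)
# Keep points satisfying x^2 + y^2 <= r^2.
keep = x * x + y * y <= r * r
x, y = x[keep], y[keep]
```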
    Parameters
    ----------
    dx : a number which is the spacing required
    r : a number which is the radius of the circle
    center : 1d array like object which is the center of the circle

    Returns
    -------
    x : 1d numpy array with x coordinates of the circle particles
    y : 1d numpy array with y coordinates of the circle particles
    """
    N = int(2.0 * r / dx) + 1
    x, y = np.mgrid[-r:r:N * 1j, -r:r:N * 1j]
    x, y = np.ravel(x), np.ravel(y)
    condition = (x * x + y * y <= r * r)
    x, y = x[condition], y[condition]
    return x + center[0], y + center[1]


def get_2d_hollow_circle(dx=0.01, r=1.0, center=np.array([0.0, 0.0]),
                         num_layers=2, inside=True):
    """
    Generates a hollow 2d circle with some number of layers either on the
    inside or on the outside of the body which is taken as an argument

    Parameters
    ----------
    dx : a number which is the spacing required
    r : a number which is the radius of the circle
    center : 1d array like object which is the center of the circle
    num_layers : a number (int)
    inside : boolean (True or False). If this is True then the layers are
             generated inside the circle

    Returns
    -------
    x : 1d numpy array with x coordinates of the circle particles
    y : 1d numpy array with y coordinates of the circle particles
    """
    r_grid = r + dx * num_layers
    N = int(2.0 * r_grid / dx) + 1
    x, y = np.mgrid[-r_grid:r_grid:N * 1j, -r_grid:r_grid:N * 1j]
    x, y = np.ravel(x), np.ravel(y)
    if inside:
        cond1 = (x * x + y * y <= r * r)
        cond2 = (x * x + y * y >= (r - num_layers * dx)**2)
    else:
        cond1 = (x * x + y * y >= r * r)
        cond2 = (x * x + y * y <= (r + num_layers * dx)**2)
    cond = cond1 & cond2
    x, y = x[cond], y[cond]
    return x + center[0], y + center[1]


def get_3d_hollow_cylinder(dx=0.01, r=0.5, length=1.0,
                           center=np.array([0.0, 0.0, 0.0]), num_layers=2,
                           inside=True):
    """
    Generates a 3d hollow cylinder which is an extruded geometry of the
    hollow circle with a closed base.
Parameters ---------- dx : a number which is the spacing required r : a number which is the radius of the cylinder length : a number which is the length of the cylinder center : 1d array like object which is the center of the cylinder num_layers : a number (int) inside : boolean (True or False). If this is True then the layers are generated inside the cylinder Returns ------- x : 1d numpy array with x coordinates of the cylinder particles y : 1d numpy array with y coordinates of the cylinder particles z : 1d numpy array with z coordinates of the cylinder particles """ x_2d, y_2d = get_2d_hollow_circle(dx, r, center[:2], num_layers, inside) x, y, z = extrude(x_2d, y_2d, dx, length - dx, center[2] + dx / 2.) x_circle, y_circle = get_2d_circle(dx, r, center[:2]) z_circle = np.ones_like(x_circle) * (center[2] - length / 2.) x = np.concatenate([x, x_circle]) y = np.concatenate([y, y_circle]) z = np.concatenate([z, z_circle]) return x, y, z def get_2d_block(dx=0.01, length=1.0, height=1.0, center=np.array([0., 0.])): """ Generates a 2d rectangular block of particles with axes parallel to the coordinate axes. ^ | |h * * * * * * * |e * * * * * * * y|i * * * * * * * |g * * * * * * * |h * * * * * * * |t * * * * * * * | * * * * * * * | length |________________> x Parameters ---------- dx : a number which is the spacing required length : a number which is the length of the block height : a number which is the height of the block center : 1d array like object which is the center of the block Returns ------- x : 1d numpy array with x coordinates of the block particles y : 1d numpy array with y coordinates of the block particles """ n1 = int(length / dx) + 1 n2 = int(height / dx) + 1 x, y = np.mgrid[-length / 2.:length / 2.:n1 * 1j, -height / 2.:height / 2.:n2 * 1j] x, y = np.ravel(x), np.ravel(y) return x + center[0], y + center[1] def get_3d_sphere(dx=0.01, r=0.5, center=np.array([0.0, 0.0, 0.0])): """ Generates a 3d sphere. 
    Parameters
    ----------
    dx : a number which is the spacing required
    r : a number which is the radius of the sphere
    center : 1d array like object which is the center of the sphere

    Returns
    -------
    x : 1d numpy array with x coordinates of the sphere particles
    y : 1d numpy array with y coordinates of the sphere particles
    z : 1d numpy array with z coordinates of the sphere particles
    """
    N = int(2.0 * r / dx) + 1
    x, y, z = np.mgrid[-r:r:N * 1j, -r:r:N * 1j, -r:r:N * 1j]
    x, y, z = np.ravel(x), np.ravel(y), np.ravel(z)
    cond = (x * x + y * y + z * z <= r * r)
    x, y, z = x[cond], y[cond], z[cond]
    return x + center[0], y + center[1], z + center[2]


def get_3d_block(dx=0.01, length=1.0, height=1.0, depth=1.0,
                 center=np.array([0., 0., 0.])):
    """
    Generates a 3d block of particles with the length, height and depth
    parallel to x, y and z axis respectively.

    Parameters
    ----------
    dx : a number which is the spacing required
    length : a number which is the length of the block
    height : a number which is the height of the block
    depth : a number which is the depth of the block
    center : 1d array like object which is the center of the block

    Returns
    -------
    x : 1d numpy array with x coordinates of the block particles
    y : 1d numpy array with y coordinates of the block particles
    z : 1d numpy array with z coordinates of the block particles
    """
    n1 = int(length / dx) + 1
    n2 = int(height / dx) + 1
    n3 = int(depth / dx) + 1
    x, y, z = np.mgrid[-length / 2.:length / 2.:n1 * 1j,
                       -height / 2.:height / 2.:n2 * 1j,
                       -depth / 2.:depth / 2.:n3 * 1j]
    x, y, z = np.ravel(x), np.ravel(y), np.ravel(z)
    return x + center[0], y + center[1], z + center[2]


def get_4digit_naca_airfoil(dx=0.01, airfoil='0012', c=1.0):
    """
    Generates a 4 digit series NACA airfoil. For a 4 digit series airfoil,
    the first digit is the (maximum camber / chord) * 100, second digit is
    (location of maximum camber / chord) * 10 and the third and fourth
    digits are the (maximum thickness / chord) * 100.
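For instance, the digits of '2412' decode as follows (``decode_naca4`` is a
hypothetical helper mirroring the parsing done in the function body):

```python
def decode_naca4(airfoil, c=1.0):
    m = 0.01 * float(airfoil[0])       # maximum camber / chord
    p = 0.1 * float(airfoil[1])        # location of maximum camber / chord
    t = 0.01 * float(airfoil[2:]) * c  # maximum thickness
    return m, p, t
```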
The particles generated using this function will form a solid 2d airfoil. Parameters ---------- dx : a number which is the spacing required airfoil : a string of 4 characters which is the airfoil name c : a number which is the chord of the airfoil Returns ------- x : 1d numpy array with x coordinates of the airfoil particles y : 1d numpy array with y coordinates of the airfoil particles References ---------- https://en.wikipedia.org/wiki/NACA_airfoil """ n = int(c / dx) + 1 x, y = np.mgrid[0:c:n * 1j, -c / 2.:c / 2.:n * 1j] x = np.ravel(x) y = np.ravel(y) x_naca = [] y_naca = [] t = float(airfoil[2:]) * 0.01 * c if airfoil[:2] == '00': for xi, yi in zip(x, y): yt = 5.0 * t * (0.2969 * np.sqrt(xi / c) - 0.1260 * (xi / c) - 0.3516 * ((xi / c)**2.) + 0.2843 * ((xi / c)**3.) - 0.1015 * ((xi / c)**4.)) if abs(yi) <= yt: x_naca.append(xi) y_naca.append(yi) else: m = 0.01 * float(airfoil[0]) p = 0.1 * float(airfoil[1]) for xi, yi in zip(x, y): yt = 5.0 * t * (0.2969 * np.sqrt(xi / c) - 0.1260 * (xi / c) - 0.3516 * ((xi / c)**2.) + 0.2843 * ((xi / c)**3.) - 0.1015 * ((xi / c)**4.)) if xi <= p * c: yc = (m / (p * p)) * (2. * p * (xi / c) - (xi / c)**2.) dydx = (2. * m / (p * p)) * (p - xi / c) / c else: yc = (m / ((1. - p) * (1. - p))) * \ (1. - 2. * p + 2. * p * (xi / c) - (xi / c)**2.) dydx = (2. * m / ((1. - p) * (1. 
- p))) * (p - xi / c) / c theta = np.arctan(dydx) if yi >= 0.0: yu = yc + yt * np.cos(theta) if yi <= yu: xu = xi - yt * np.sin(theta) x_naca.append(xu) y_naca.append(yi) else: yl = yc - yt * np.cos(theta) if yi >= yl: xl = xi + yt * np.sin(theta) x_naca.append(xl) y_naca.append(yi) x_naca = np.array(x_naca) y_naca = np.array(y_naca) return x_naca, y_naca def _get_m_k(series): if series == '210': return 0.058, 361.4 elif series == '220': return 0.126, 51.64 elif series == '230': return 0.2025, 15.957 elif series == '240': return 0.290, 6.643 elif series == '250': return 0.391, 3.23 elif series == '221': return 0.130, 51.99 elif series == '231': return 0.217, 15.793 elif series == '241': return 0.318, 6.52 elif series == '251': return 0.441, 3.191 def get_5digit_naca_airfoil(dx=0.01, airfoil='23112', c=1.0): """ Generates a 5 digit series NACA airfoil. For a 5 digit series airfoil, the first digit is the design lift coefficient * 20 / 3, second digit is (location of maximum camber / chord) * 20, third digit indicates the reflexitivity of the camber and the fourth and fifth digits are the (maximum thickness / chord) * 100. The particles generated using this function will form a solid 2d airfoil. Parameters ---------- dx : a number which is the spacing required airfoil : a string of 5 characters which is the airfoil name c : a number which is the chord of the airfoil Returns ------- x : 1d numpy array with x coordinates of the airfoil particles y : 1d numpy array with y coordinates of the airfoil particles References ---------- https://en.wikipedia.org/wiki/NACA_airfoil http://www.aerospaceweb.org/question/airfoils/q0041.shtml """ n = int(c / dx) + 1 x, y = np.mgrid[0:c:n * 1j, -c / 2.:c / 2.:n * 1j] x = np.ravel(x) y = np.ravel(y) x_naca = [] y_naca = [] t = 0.01 * float(airfoil[3:]) series = airfoil[:3] m, k = _get_m_k(series) for xi, yi in zip(x, y): yt = 5.0 * t * (0.2969 * np.sqrt(xi / c) - 0.1260 * (xi / c) - 0.3516 * ((xi / c)**2.) + 0.2843 * ((xi / c)**3.) 
- 0.1015 * ((xi / c)**4.)) xn = xi / c if xn <= m: yc = c * (k / 6.) * (xn**3. - 3. * m * xn * xn + m * m * (3. - m) * xn) dydx = (k / 6.) * (3. * xn * xn - 6. * m * xn + m * m * (3. - m)) else: yc = c * (k * (m**3.) / 6.) * (1. - xn) dydx = -(k * (m**3.) / 6.) theta = np.arctan(dydx) if yi >= 0.0: yu = yc + yt * np.cos(theta) if yi <= yu: xu = xi - yt * np.sin(theta) x_naca.append(xu) y_naca.append(yi) else: yl = yc - yt * np.cos(theta) if yi >= yl: xl = xi + yt * np.sin(theta) x_naca.append(xl) y_naca.append(yi) x_naca = np.array(x_naca) y_naca = np.array(y_naca) return x_naca, y_naca def get_naca_wing(dx=0.01, airfoil='0012', span=1.0, chord=1.0): """ Generates a wing using a NACA 4 or 5 digit series airfoil. This will generate only a rectangular wing. Parameters ---------- dx : a number which is the spacing required airfoil : a string of 4 or 5 characters which is the airfoil name span : a number which is the span of the wing c : a number which is the chord of the wing Returns ------- x : 1d numpy array with x coordinates of the airfoil particles y : 1d numpy array with y coordinates of the airfoil particles z : 1d numpy array with z coordinates of the airfoil particles """ if len(airfoil) == 4: x, y = get_4digit_naca_airfoil(dx, airfoil, chord) elif len(airfoil) == 5: x, y = get_5digit_naca_airfoil(dx, airfoil, chord) return extrude(x, y, dx, span) def find_overlap_particles(fluid_parray, solid_parray, dx_solid, dim=3): """This function will take 2 particle arrays as input and will find all the particles of the first particle array which are in the vicinity of the particles from second particle array. The function will find all the particles within the dx_solid vicinity so some particles may be identified at the outer surface of the particles from the second particle array. The particle arrays should atleast contain x, y and h values for a 2d case and atleast x, y, z and h values for a 3d case. 
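A brute-force sketch of the same criterion (illustration for small point
sets; the function itself uses a linked-list NNPS for efficiency, and
``overlap_indices`` is a hypothetical helper):

```python
import numpy as np

def overlap_indices(fluid, solid, dx_solid):
    # Pairwise distances; a fluid point is flagged when its nearest solid
    # neighbour lies closer than dx_solid (with a small tolerance).
    d = np.linalg.norm(fluid[:, None, :] - solid[None, :, :], axis=2)
    return np.where(d.min(axis=1) < dx_solid * (1.0 - 1.0e-07))[0].tolist()
```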
Parameters ---------- fluid_parray : a pysph particle array object solid_parray : a pysph particle array object dx_solid : a number which is the dx of the second particle array dim : dimensionality of the problem Returns ------- list of particle indices to remove from the first array. """ x = fluid_parray.x x1 = solid_parray.x y = fluid_parray.y y1 = solid_parray.y z = fluid_parray.z z1 = solid_parray.z if dim == 2: z = np.zeros_like(x) z1 = np.zeros_like(x1) to_remove = [] ll_nnps = LinkedListNNPS(dim, [fluid_parray, solid_parray]) for i in range(len(x)): nbrs = UIntArray() ll_nnps.get_nearest_particles(1, 0, i, nbrs) point_i = np.array([x[i], y[i], z[i]]) near_points = nbrs.get_npy_array() distances = [] for ind in near_points: dest = [x1[ind], y1[ind], z1[ind]] distances.append(distance(point_i, dest)) if len(distances) == 0: continue elif min(distances) < (dx_solid * (1.0 - 1.0e-07)): to_remove.append(i) return to_remove def remove_overlap_particles(fluid_parray, solid_parray, dx_solid, dim=3): """ This function will take 2 particle arrays as input and will remove all the particles of the first particle array which are in the vicinity of the particles from second particle array. The function will remove all the particles within the dx_solid vicinity so some particles are removed at the outer surface of the particles from the second particle array. The particle arrays should atleast contain x, y and h values for a 2d case and atleast x, y, z and h values for a 3d case Parameters ---------- fluid_parray : a pysph particle array object solid_parray : a pysph particle array object dx_solid : a number which is the dx of the second particle array dim : dimensionality of the problem Returns ------- None """ idx = find_overlap_particles(fluid_parray, solid_parray, dx_solid, dim) fluid_parray.remove_particles(idx) def show_2d(points, **kw): """Show two-dimensional geometry data. 
    The `points` are a tuple of x, y, z values, the extra keyword arguments
    are passed along to the scatter function.
    """
    import matplotlib.pyplot as plt
    plt.scatter(points[0], points[1], **kw)
    plt.xlabel('X')
    plt.ylabel('Y')


def show_3d(points, **kw):
    """Show three-dimensional geometry data.

    The `points` are a tuple of x, y, z values, the extra keyword arguments
    are passed along to the `mlab.points3d` function.
    """
    from mayavi import mlab
    mlab.points3d(points[0], points[1], points[2], **kw)
    mlab.axes(xlabel='X', ylabel='Y', zlabel='Z')
pysph-master/pysph/tools/geometry_stl.pyx000066400000000000000000000322431356347341600213140ustar00rootroot00000000000000from pysph.base.particle_array import ParticleArray
from pysph.base import nnps
import numpy as np
from stl import mesh
from numpy.linalg import norm
from cyarray.api import UIntArray

cimport cython
cimport numpy as np

DTYPE = np.float
ctypedef np.float_t DTYPE_t


class ZeroAreaTriangleException(Exception):
    pass


class PolygonMeshError(ValueError):
    pass


cpdef _point_sign(double x1, double y1, double x2, double y2,
                  double x3, double y3):
    return (x1 - x3) * (y2 - y3) - (x2 - x3) * (y1 - y3)


def _in_triangle(px, py, vx1, vy1, vx2, vy2, vx3, vy3):
    b1 = _point_sign(px, py, vx1, vy1, vx2, vy2) < 0
    b2 = _point_sign(px, py, vx2, vy2, vx3, vy3) < 0
    b3 = _point_sign(px, py, vx3, vy3, vx1, vy1) < 0
    return ((b1 == b2) and (b2 == b3))


def _interp_2d(v0, v1, dx):
    l = norm(v0 - v1)
    p = np.linspace(0., 1., int(l / dx + 2)).reshape(-1, 1)
    return v0.reshape(1, -1) * p + v1.reshape(1, -1) * (1.
- p) def _get_triangle_sides(triangle, dx): """Interpolate points on the sides of the triangle""" sides = np.vstack([_interp_2d(triangle[0], triangle[1], dx), _interp_2d(triangle[1], triangle[2], dx), _interp_2d(triangle[2], triangle[0], dx)]) return sides[:, 0], sides[:, 1], sides[:, 2] def _projection_parameters(x, a, b, p): """Parameters of projection of x on surface (sa + tb + p) """ return np.dot(x - p, a), np.dot(x - p, b) def _fill_triangle(triangle, h=0.1): EPS = np.finfo(float).eps if triangle.shape[0] != 3: raise PolygonMeshError("non-triangular meshes are not supported: " "dim({}) ! = 3.".format(triangle)) if norm(triangle[0] - triangle[1]) < EPS or \ norm(triangle[1] - triangle[2]) < EPS or \ norm(triangle[2] - triangle[0]) < EPS: raise ZeroAreaTriangleException( "unable to interpolate in zero area triangle: {}".format(triangle)) # Surface will be of the form v = sa + tb + p # p is (arbitrarily) taken as the centroid of the triangle p = (triangle[0] + triangle[1] + triangle[2]) / 3 a = (triangle[1] - triangle[0]) / norm(triangle[1] - triangle[0]) b = np.cross(np.cross(a, triangle[2] - triangle[1]), a) b /= norm(b) st = np.array([_projection_parameters(triangle[0], a, b, p), _projection_parameters(triangle[1], a, b, p), _projection_parameters(triangle[2], a, b, p)]) st_min, st_max = np.min(st, axis=0), np.max(st, axis=0) s_mesh, t_mesh = np.meshgrid(np.arange(st_min[0], st_max[0], h), np.arange(st_min[1], st_max[1], h)) s_mesh, t_mesh = s_mesh.ravel(), t_mesh.ravel() mask = np.empty(len(s_mesh), dtype='bool') for i in range(len(s_mesh)): mask[i] = _in_triangle(s_mesh[i], t_mesh[i], st[0, 0], st[0, 1], st[1, 0], st[1, 1], st[2, 0], st[2, 1]) s_mesh, t_mesh = s_mesh[mask], t_mesh[mask] # Final mesh coordinates generated from parameters s and t result = np.dot(s_mesh.reshape(-1, 1), a.reshape(-1, 1).T) + \ np.dot(t_mesh.reshape(-1, 1), b.reshape(-1, 1).T) + p return result[:, 0], result[:, 1], result[:, 2] def _get_stl_mesh(stl_fname, dx_triangle, uniform = 
False): """Interpolates points within triangles in the stl file""" m = mesh.Mesh.from_file(stl_fname) x_list, y_list, z_list = [], [], [] for i in range(len(m.vectors)): x1, y1, z1 = _fill_triangle(m.vectors[i], dx_triangle) x2, y2, z2 = _get_triangle_sides(m.vectors[i], dx_triangle) x_list.append(np.r_[x1, x2]) y_list.append(np.r_[y1, y2]) z_list.append(np.r_[z1, z2]) x = np.concatenate(x_list) y = np.concatenate(y_list) z = np.concatenate(z_list) if uniform: return x, y, z, x_list, y_list, z_list, m else: return x, y, z def remove_repeated_points(x, y, z, dx_triangle): EPS = np.finfo(float).eps pa_mesh = ParticleArray(name="mesh", x=x, y=y, z=z, h=EPS) pa_grid = ParticleArray(name="grid", x=x, y=y, z=z, h=EPS) pa_list = [pa_mesh, pa_grid] nps = nnps.LinkedListNNPS(dim=3, particles=pa_list, radius_scale=1) cdef int src_index = 1, dst_index = 0 nps.set_context(src_index=1, dst_index=0) nbrs = UIntArray() cdef list idx = [] cdef int i = 0 for i in range(len(x)): nps.get_nearest_particles(src_index, dst_index, i, nbrs) neighbours = nbrs.get_npy_array() idx.append(neighbours.min()) idx_set = set(idx) idx = list(idx_set) return pa_mesh.x[idx], pa_mesh.y[idx], pa_mesh.z[idx] def prism(tri_normal, tri_points, dx_sph): """ Parameters ---------- tri_normal : outward normal of triangle tri_points : points forming the triangle dx_sph : grid spacing Returns ------- prism_normals : 5X3 array containing the 5 outward normals of the prism prism _points : 6X3 array containing the 6 points used to define the prism prism_face_centres : 5X3 array containing the centres of the 5 faces of the prism """ cdef np.ndarray prism_normals = np.zeros((5, 3), dtype=DTYPE) cdef np.ndarray prism_points = np.zeros((6, 3), dtype=DTYPE) cdef np.ndarray prism_face_centres = np.zeros((5, 3), dtype=DTYPE) cdef int sign = 1 cdef int m, n # unit normals of the triangular faces of the prism. 
prism_normals[0] = tri_normal / norm(tri_normal) prism_normals[4] = -1 * prism_normals[0] # distance between triangular faces of prism cdef float h = 1.5 * dx_sph for m in range(3): prism_points[m] = tri_points[m] # second triangular face at a distance h from STL triangle. for m in range(3): for n in range(3): prism_points[m+3][n] = tri_points[m][n] - tri_normal[n]*h # need to determine if point orientation is clockwise or anticlockwise # to determine normals direction. normal_tri_cross = np.cross(prism_points[2]-prism_points[0], prism_points[1]-prism_points[0]) if np.dot(tri_normal, normal_tri_cross) < 0: sign = 1 # clockwise else: sign = -1 # anti-clockwise # Normals of the rectangular faces of the prism. prism_normals[1] = sign * np.cross(prism_points[1]-prism_points[0], prism_points[1]-prism_points[4]) prism_normals[2] = sign * np.cross(prism_points[2]-prism_points[0], prism_points[5]-prism_points[2]) prism_normals[3] = sign * np.cross(prism_points[1]-prism_points[2], prism_points[5]-prism_points[2]) # centroids of the triangles prism_face_centres[0] = (prism_points[0] + prism_points[1] + prism_points[2])/3 prism_face_centres[4] = (prism_points[3] + prism_points[4] + prism_points[5])/3 # centres of the rectangular faces prism_face_centres[1] = (prism_points[0] + prism_points[3] + prism_points[1] + prism_points[4])/4 prism_face_centres[2] = (prism_points[0] + prism_points[3] + prism_points[2] + prism_points[5])/4 prism_face_centres[3] = (prism_points[1] + prism_points[2] + prism_points[4] + prism_points[5])/4 return prism_normals, prism_points, prism_face_centres def get_points_from_mgrid(pa_grid, pa_mesh, x_list, y_list, z_list, float radius_scale, float dx_sph, my_mesh): """ Find the nearest neighbours for a given mesh on a given grid and return only those points which lie within the volume of the STL object. Parameters ---------- pa_grid : Source particle array pa_mesh : Destination particle array x_list, y_list, z_list : Coordinates of surface
points for each triangle """ pa_list = [pa_mesh, pa_grid] nps = nnps.LinkedListNNPS(dim=3, particles=pa_list, radius_scale=radius_scale) cdef int src_index = 1, dst_index = 0 nps.set_context(src_index=1, dst_index=0) nbrs = UIntArray() cdef np.ndarray prism_normals = np.zeros((5, 3), dtype=DTYPE) cdef np.ndarray prism_face_centres = np.zeros((5, 3), dtype=DTYPE) cdef np.ndarray prism_points = np.zeros((6, 3), dtype=DTYPE) cdef list selected_points = [] cdef int counter = 0, l = 0 # Iterating over each triangle for i in range(np.shape(x_list)[0]): prism_normals, prism_points, prism_face_centres = prism( my_mesh.normals[i], my_mesh.vectors[i], dx_sph ) # Iterating over surface points in triangle to find nearest # neighbour on grid. for j in range(len(x_list[i])): nps.get_nearest_particles(src_index, dst_index, counter, nbrs) neighbours = nbrs.get_npy_array() l = len(neighbours) for t in range(l): point = np.array([pa_grid.x[neighbours[t]], pa_grid.y[neighbours[t]], pa_grid.z[neighbours[t]]]) # determining whether point is within prism. 
if inside_prism(point, prism_normals, prism_points, prism_face_centres): selected_points.append(neighbours[t]) counter = counter + 1 idx_set = set(selected_points) idx = list(idx_set) return pa_grid.x[idx], pa_grid.y[idx], pa_grid.z[idx] cdef bint inside_prism(double[:] point, double[:,:] prism_normals, double[:,:] prism_points, double[:,:] prism_face_centres): """ Identifies whether a point is within the corresponding prism by checking if all dot products of the normals of the prism with the vector joining the point and a point on the corresponding side is negative """ if dot(prism_normals[0], point, prism_face_centres[0]) > 0: return False if dot(prism_normals[4], point, prism_face_centres[4]) > 0: return False if dot(prism_normals[1], point, prism_face_centres[1]) > 0: return False if dot(prism_normals[2], point, prism_face_centres[2]) > 0: return False if dot(prism_normals[3], point, prism_face_centres[3]) > 0: return False return True cdef double dot(double[:] normal, double[:] point, double[:] face_centre): return normal[0]*(point[0]-face_centre[0]) + \ normal[1]*(point[1]-face_centre[1]) + \ normal[2]*(point[2]-face_centre[2]) def get_stl_surface_uniform(stl_fname, dx_sph=1, h_sph=1, radius_scale=1.0, dx_triangle=None): """Generate points to cover surface described by stl file The function generates a grid with a spacing of dx_sph and keeps points on the grid which lie within the STL object. The algorithm for this is straightforward and consists of the following steps: 1. Interpolate a set of points over the STL triangles 2. Create a grid that covers the entire STL object 3. Remove grid points generated outside the given STL object by checking if the points lie within prisms formed by the triangles. 
Parameters ---------- stl_fname : str File name of STL file dx_sph : float Spacing in generated grid points h_sph : float Smoothing length radius_scale : float Kernel radius scale dx_triangle : float, optional By default, dx_triangle = 0.5 * dx_sph Returns ------- x : ndarray 1d numpy array with x coordinates of surface grid y : ndarray 1d numpy array with y coordinates of surface grid z : ndarray 1d numpy array with z coordinates of surface grid Raises ------ PolygonMeshError If polygons in STL file are not all triangles """ if dx_triangle is None: dx_triangle = 0.5 * dx_sph x, y, z, x_list, y_list, z_list, my_mesh = \ _get_stl_mesh(stl_fname, dx_triangle, uniform=True) pa_mesh = ParticleArray(name='mesh', x=x, y=y, z=z, h=h_sph) offset = radius_scale * h_sph x_grid, y_grid, z_grid = np.meshgrid( np.arange(x.min() - offset, x.max() + offset, dx_sph), np.arange(y.min() - offset, y.max() + offset, dx_sph), np.arange(z.min() - offset, z.max() + offset, dx_sph) ) pa_grid = ParticleArray(name='grid', x=x_grid, y=y_grid, z=z_grid, h=h_sph) xf, yf, zf = get_points_from_mgrid(pa_grid, pa_mesh, x_list, y_list, z_list, radius_scale, dx_sph, my_mesh) return xf, yf, zf def get_stl_surface(stl_fname, dx_triangle, radius_scale=1.0): """ Generate points to cover surface described by stl file Returns ------- x : ndarray 1d numpy array with x coordinates of surface grid y : ndarray 1d numpy array with y coordinates of surface grid z : ndarray 1d numpy array with z coordinates of surface grid """ x, y, z = _get_stl_mesh(stl_fname, dx_triangle, uniform=False) xf, yf, zf = remove_repeated_points(x, y, z, dx_triangle) return xf, yf, zf pysph-master/pysph/tools/geometry_utils.py000066400000000000000000000054571356347341600214670ustar00rootroot00000000000000""" Helper functions to generate commonly used geometries.
PySPH uses an axis convention as follows: Y | | | | | | /Z | / | / | / | / | / |/_________________X """ import numpy def create_2D_tank(x1,y1,x2,y2,dx): """ Generate an open rectangular tank. Parameters: ----------- x1,y1,x2,y2 : Coordinates defining the rectangle in 2D dx : The spacing to use """ yl = numpy.arange(y1, y2+dx/2, dx) xl = numpy.ones_like(yl) * x1 nl = len(xl) yr = numpy.arange(y1,y2+dx/2, dx) xr = numpy.ones_like(yr) * x2 nr = len(xr) xb = numpy.arange(x1+dx, x2-dx+dx/2, dx) yb = numpy.ones_like(xb) * y1 nb = len(xb) n = nb + nl + nr x = numpy.empty( shape=(n,) ) y = numpy.empty( shape=(n,) ) idx = 0 x[idx:nl] = xl; y[idx:nl] = yl idx += nl x[idx:idx+nb] = xb; y[idx:idx+nb] = yb idx += nb x[idx:idx+nr] = xr; y[idx:idx+nr] = yr return x, y def create_3D_tank(x1, y1, z1, x2, y2, z2, dx): """ Generate an open rectangular tank. Parameters: ----------- x1,y1,z1,x2,y2,z2 : Coordinates defining the cuboid in 3D dx : The spacing to use """ points = [] # create the base X-Y plane x, y = numpy.mgrid[x1:x2+dx/2:dx, y1:y2+dx/2:dx] x = x.ravel(); y = y.ravel() z = numpy.ones_like(x) * z1 for i in range(len(x)): points.append( (x[i], y[i], z[i]) ) # create the front X-Z plane x, z = numpy.mgrid[x1:x2+dx/2:dx, z1:z2+dx/2:dx] x = x.ravel(); z = z.ravel() y = numpy.ones_like(x) * y1 for i in range(len(x)): points.append( (x[i], y[i], z[i]) ) # create the Y-Z plane y, z = numpy.mgrid[y1:y2+dx/2:dx, z1:z2+dx/2:dx] y = y.ravel(); z = z.ravel() x = numpy.ones_like(y) * x1 for i in range(len(x)): points.append( (x[i], y[i], z[i]) ) # create the second X-Z plane x, z = numpy.mgrid[x1:x2+dx/2:dx, z1:z2+dx/2:dx] x = x.ravel(); z = z.ravel() y = numpy.ones_like(x) * y2 for i in range(len(x)): points.append( (x[i], y[i], z[i]) ) # create the second Y-Z plane y, z = numpy.mgrid[y1:y2+dx/2:dx, z1:z2+dx/2:dx] y = y.ravel(); z = z.ravel() x = numpy.ones_like(y) * x2 for i in range(len(x)): points.append( (x[i], y[i], z[i]) ) points = set(points) x = numpy.array( [i[0] for i in
points] ) y = numpy.array( [i[1] for i in points] ) z = numpy.array( [i[2] for i in points] ) return x, y, z def create_2D_filled_region(x1, y1, x2, y2, dx): x,y = numpy.mgrid[x1:x2+dx/2:dx, y1:y2+dx/2:dx] x = x.ravel(); y = y.ravel() return x, y def create_3D_filled_region(x1, y1, z1, x2, y2, z2, dx): x,y,z = numpy.mgrid[x1:x2+dx/2:dx, y1:y2+dx/2:dx, z1:z2+dx/2:dx] x = x.ravel() y = y.ravel() z = z.ravel() return x, y, z pysph-master/pysph/tools/gmsh.py000066400000000000000000000330061356347341600173430ustar00rootroot00000000000000"""Utility module to read input mesh files. This is primarily for meshes generated using Gmsh. This module also provides some simple classes that allow one to create extruded 3D surfaces by generating a gmsh file in Python. There is also a function to read VTK dataset and produce points from them. This is very useful as Gmsh can generate VTK datasets from its meshes and thus the meshes can be imported as point clouds that may be used in an SPH simulation. """ # Copyright (c) 2015 Prabhu Ramachandran import gzip import json import numpy as np import subprocess import tempfile from tvtk.api import tvtk import os from os.path import exists, expanduser, join import sys def _read_vtk_file(fname): """Given a .vtk file (or .vtk.gz), read it and return the output. """ if fname.endswith('.vtk.gz'): tmpfname = tempfile.mktemp(suffix='.vtk') with open(tmpfname, 'wb') as tmpf: data = gzip.open(fname).read() tmpf.write(data) r = tvtk.DataSetReader(file_name=tmpfname) else: tmpfname = None r = tvtk.DataSetReader(file_name=fname) r.update() if tmpfname is not None: os.remove(tmpfname) return r.output def _convert_to_points(dataset, vertices=True, cell_centers=True): """Given a VTK dataset, convert it to a set of points that can be used for simulation with SPH. Parameters ---------- dataset : tvtk.DataSet vertices: bool If True, it converts the vertices to points. cell_centers: bool If True, converts the cell centers to points. 
Returns ------- x, y, z of the points. """ pts = np.array([], dtype=float) if vertices: pts = np.append(pts, dataset.points.to_array()) if cell_centers: cell_centers = tvtk.CellCenters(input=dataset) cell_centers.update() p = cell_centers.output.points.to_array() pts = np.append(pts, p) pts.shape = len(pts)//3, 3 x, y, z = pts.T return x, y, z def vtk_file_to_points(fname, vertices=True, cell_centers=True): """Given a file containing a VTK dataset (currently only an old style .vtk file), convert it to a set of points that can be used for simulation with SPH. Parameters ---------- fname : str File name. vertices: bool If True, it converts the vertices to points. cell_centers: bool If True, converts the cell centers to points. Returns ------- x, y, z of the points. """ dataset = _read_vtk_file(fname) return _convert_to_points(dataset, vertices, cell_centers) def transform_points(x, y, z, transform): """Given the coordinates, x, y, z and the TVTK transform instance, return the transformed coordinates. """ assert isinstance(transform, tvtk.Transform) m = transform.matrix.to_array() xt, yt, zt, wt = np.dot(m, np.array((x, y, z, np.ones_like(x)))) return xt, yt, zt class Loop(object): """Create a Line Loop in Gmsh parlance but using a turtle-graphics like approach. Use this to create a 2D closed surface. The surface is always in the x-y plane. Examples -------- Here is a simple example:: >>> l1 = Loop((0.0, 0.0), mesh_size=0.1) >>> l1.move(1.0).turn(90).move(1.0).turn(90).move(1.0).turn(90).move(1.0) This will create a square shape. """ def __init__(self, start, mesh_size=0.1): self.mesh_size = mesh_size self.points = [start] self.elems = [] self.tolerance = 1e-12 self._last_angle = 0.0 self._last_position = start self._index = 1 ### Public Protocol ################################### def turn(self, angle): """Turn by angle (in degrees). """ self._last_angle += angle return self def move(self, dist): """Move by given distance at the current angle. 
""" x0, y0 = self._last_position angle = self._last_angle rad = np.pi*angle/180. x = x0 + dist*np.cos(rad) y = y0 + dist*np.sin(rad) p1 = self._index p2 = self._add_point(x, y) self._add_elem('line', (p1, p2)) return self def arc(self, radius, angle=180): """Create a circular arc given the radius and for an angle as specified. """ x0, y0 = self._last_position last_angle = self._last_angle rad = np.pi*last_angle/180. last = complex(x0, y0) dz = complex(np.cos(rad), np.sin(rad)) cen = last + radius*dz rad = np.pi*angle/180. end = cen + (last - cen)*complex(np.cos(rad), np.sin(rad)) s_idx = self._index c_idx = self._add_point(cen.real, cen.imag) e_idx = self._add_point(end.real, end.imag) self._add_elem('circle', (s_idx, c_idx, e_idx)) return self def write(self, fp, point_id_base=0, elem_id_base=0): """Write data to given file object `fp`. """ points = self.points for i in range(len(points)): idx = point_id_base + i + 1 self._write_point(fp, points[i], idx) elems = self.elems idx = 0 loop = [] for i in range(len(elems)): elem = elems[i] idx = elem_id_base + i + 1 loop.append(str(idx)) kind, data = elem[0], elem[1] if kind == 'line': self._write_line(fp, data, idx, point_id_base) elif kind == 'circle': self._write_circle(fp, data, idx, point_id_base) idx += 1 ids = ', '.join(loop) s = 'Line Loop({idx}) = {{{ids}}};\n'.format(idx=idx, ids=ids) fp.write(s) return len(points), len(elems) + 1 ### Private Protocol ################################# def _add_elem(self, kind, data): self.elems.append((kind, data)) def _check_for_existing_point(self, x, y): tol = self.tolerance for i, (xi, yi) in enumerate(self.points): if abs(xi - x) < tol and abs(yi - y) < tol: return i + 1 def _add_point(self, x, y): last_x, last_y = self._last_position pnt = (x, y) existing = self._check_for_existing_point(x, y) if existing is None: self.points.append(pnt) self._index += 1 index = self._index else: index = existing pnt = self.points[existing] self._last_position = pnt return index def 
_write_point(self, fp, pnt, idx): fp.write('Point({idx}) = {{{x}, {y}, 0.0, {mesh_size}}};\n'.\ format(x=pnt[0], y=pnt[1], idx=idx, mesh_size=self.mesh_size)) def _write_line(self, fp, data, e_idx, p_idx): s = 'Line({idx}) = {{{p1}, {p2}}};\n'.format( idx=e_idx, p1=data[0]+p_idx, p2=data[1] +p_idx ) fp.write(s) def _write_circle(self, fp, data, e_idx, p_idx): s = 'Circle({idx}) = {{{start}, {cen}, {end}}};\n'.format( idx=e_idx, start=data[0]+p_idx, cen=data[1]+p_idx, end=data[2] +p_idx ) fp.write(s) class Surface(object): def __init__(self, *loops): """Constructor. Parameters ---------- loops : tuple(Loop) Any additional positional arguments are treated as loop objects. """ self.loops = list(loops) self.idx = 0 def write(self, fp, point_id_base=0, elem_id_base=0): pid_base = point_id_base eid_base = elem_id_base loop_ids = [] for loop in self.loops: np, ne = loop.write(fp, pid_base, eid_base) pid_base += np eid_base += ne loop_ids.append(str(eid_base)) idx = eid_base + 1 loop_str = ', '.join(loop_ids) fp.write('Plane Surface({idx}) = {{{loop_str}}};\n'.format( idx=idx, loop_str=loop_str)) self.idx = idx return pid_base, eid_base class Extrude(object): def __init__(self, dx=0.0, dy=0.0, dz=1.0, surfaces=None): """Extrude a given set of surfaces by the displacements given along each directions. Parameters ---------- dx : float Extrusion along x. dy : float Extrusion along y. dz : float Extrusion along z. surfaces: list List of surfaces to extrude. 
""" self.dx, self.dy, self.dz = dx, dy, dz if surfaces is None: self.surfaces = [] else: self.surfaces = list(surfaces) def write(self, fp, point_id_base=0, elem_id_base=0): pid_base = point_id_base eid_base = elem_id_base surf_ids = [] for surf in self.surfaces: np, ne = surf.write(fp, pid_base, eid_base) pid_base += np eid_base += ne surf_ids.append(str(surf.idx)) s_ids = ', '.join(surf_ids) fp.write( 'Extrude {{{dx}, {dy}, {dz}}} {{\n' ' Surface{{{s_ids}}};\n}}\n'.format( dx=self.dx, dy=self.dy, dz=self.dz, s_ids=s_ids ) ) return pid_base, eid_base class Gmsh(object): def __init__(self, gmsh=None): """Construct a Gmsh helper object that can be used to mesh objects. Parameters ---------- gmsh: str Path to gmsh executable. """ self.config = expanduser(join('~', '.pysph', 'gmsh.json')) if gmsh is None: if exists(self.config): self._read_config() else: gmsh = self._ask_user_for_gmsh() self._set_gmsh(gmsh) else: self._set_gmsh(gmsh) #### Public Protocol ################################## def write_geo(self, entities, fp): """Write a list of given entities to the given file pointer. """ p_count, e_count = 0, 0 for entity in entities: np, ne = entity.write(fp, p_count, e_count) p_count += np e_count += ne def write_vtk_mesh(self, entities, fname): """Write a list of given entities to the given file name. """ tmp_geo = tempfile.mktemp(suffix='.geo') try: self.write_geo(entities, open(tmp_geo, 'w')) self._call_gmsh('-3', tmp_geo, '-o', fname) finally: os.remove(tmp_geo) def get_points(self, entities, vertices=True, cell_centers=False): """Given a list of entities, return x, y, z arrays for the position. Parameters ---------- entities : list List of entities. vertices: bool If True, it converts the vertices to points. cell_centers: bool If True, converts the cell centers to points. 
""" tmp_vtk = tempfile.mktemp(suffix='.vtk') try: self.write_vtk_mesh(entities, tmp_vtk) x, y, z = vtk_file_to_points( tmp_vtk, vertices=vertices, cell_centers=cell_centers ) return x, y, z finally: os.remove(tmp_vtk) def get_points_from_geo(self, geo_file_name, vertices=True, cell_centers=False): """Given a .geo file, generate a mesh and get the points from the mesh. Parameters ---------- geo_file_name: str Filename of the .geo file. vertices: bool If True, it converts the vertices to points. cell_centers: bool If True, converts the cell centers to points. """ tmp_vtk = tempfile.mktemp(suffix='.vtk') try: self._call_gmsh('-3', geo_file_name, '-o', tmp_vtk) return vtk_file_to_points( tmp_vtk, vertices=vertices, cell_centers=cell_centers ) finally: os.remove(tmp_vtk) #### Private Protocol ################################# def _ask_user_for_gmsh(self): gmsh = input('Please provide the path to gmsh executable: ') return gmsh def _read_config(self): if exists(self.config): data = json.load(open(self.config)) self.gmsh = data['path'] def _set_gmsh(self, gmsh): self.gmsh = gmsh data = dict(path=gmsh) json.dump(data, open(self.config, 'w')) def _call_gmsh(self, *args): if self.gmsh is None: raise RuntimeError('Gmsh is not configured, set the gmsh path.') cmd = [self.gmsh] + list(args) subprocess.check_call(cmd) def example_3d_p(fp=sys.stdout): """Creates a 3D "P" with a hole inside it. """ # The exterior of the "P" l1 = Loop((0.0, 0.0), mesh_size=0.1) l1.turn(-90).move(1.0).turn(90).move(0.2).turn(90).move(0.5)\ .arc(0.25, -180).turn(90).move(0.2) # The inner loop for the hole in the middle. l2 = Loop((0.1, -0.25)) l2.arc(0.1, 90).turn(90).arc(0.1, 90).turn(90)\ .arc(0.1, 90).turn(90).arc(0.1, 90) s = Surface(l1, l2) ex = Extrude(0.0, 0.0, 1.0, surfaces=[s]) ex.write(fp) return ex def example_cube(fp=sys.stdout): """Simple example of a cube. 
""" l1 = Loop((0.0, 0.0), mesh_size=0.1) l1.move(1.0).turn(90).move(1.0).turn(90).move(1.0).turn(90).move(1.0) s = Surface(l1) ex = Extrude(0.0, 0.0, 1.0, surfaces=[s]) ex.write(fp) return ex def example_plot_3d_p(gmsh): """Note: this will only work if you have gmsh installed. """ import io fp = io.StringIO() ex = example_3d_p(fp) g = Gmsh(gmsh) x, y, z = g.get_points([ex]) from mayavi import mlab mlab.points3d(x, y, z, color=(1, 0, 0)) pysph-master/pysph/tools/interpolator.py000066400000000000000000000423071356347341600211330ustar00rootroot00000000000000# Standard library imports from functools import reduce # Library imports. import numpy as np # Package imports. from pysph.base.utils import get_particle_array from pysph.base.kernels import Gaussian from pysph.base.nnps import LinkedListNNPS as NNPS from pysph.sph.equation import Equation, Group from pysph.sph.acceleration_eval import AccelerationEval from pysph.sph.sph_compiler import SPHCompiler from compyle.api import declare from pysph.sph.wc.linalg import gj_solve, augmented_matrix from pysph.sph.basic_equations import SummationDensity class InterpolateFunction(Equation): def initialize(self, d_idx, d_prop, d_number_density): d_prop[d_idx] = 0.0 d_number_density[d_idx] = 0.0 def loop(self, s_idx, d_idx, s_temp_prop, d_prop, d_number_density, WIJ): d_number_density[d_idx] += WIJ d_prop[d_idx] += WIJ*s_temp_prop[s_idx] def post_loop(self, d_idx, d_prop, d_number_density): if d_number_density[d_idx] > 1e-12: d_prop[d_idx] /= d_number_density[d_idx] class InterpolateSPH(Equation): def initialize(self, d_idx, d_prop): d_prop[d_idx] = 0.0 def loop(self, d_idx, s_idx, s_rho, s_m, s_temp_prop, d_prop, WIJ): d_prop[d_idx] += s_m[s_idx]/s_rho[s_idx]*WIJ*s_temp_prop[s_idx] class SPHFirstOrderApproximationPreStep(Equation): def __init__(self, dest, sources, dim=1): self.dim = dim super(SPHFirstOrderApproximationPreStep, self).__init__(dest, sources) def initialize(self, d_idx, d_moment): i, j = declare('int', 2) for i 
in range(4): for j in range(4): d_moment[16*d_idx + j+4*i] = 0.0 def loop(self, d_idx, s_idx, d_h, s_h, s_x, s_y, s_z, d_x, d_y, d_z, s_rho, s_m, WIJ, XIJ, DWIJ, d_moment): Vj = s_m[s_idx] / s_rho[s_idx] i16 = declare('int') i16 = 16*d_idx d_moment[i16+0] += WIJ * Vj d_moment[i16+1] += -XIJ[0] * WIJ * Vj d_moment[i16+2] += -XIJ[1] * WIJ * Vj d_moment[i16+3] += -XIJ[2] * WIJ * Vj d_moment[i16+4] += DWIJ[0] * Vj d_moment[i16+8] += DWIJ[1] * Vj d_moment[i16+12] += DWIJ[2] * Vj d_moment[i16+5] += -XIJ[0] * DWIJ[0] * Vj d_moment[i16+6] += -XIJ[1] * DWIJ[0] * Vj d_moment[i16+7] += -XIJ[2] * DWIJ[0] * Vj d_moment[i16+9] += - XIJ[0] * DWIJ[1] * Vj d_moment[i16+10] += -XIJ[1] * DWIJ[1] * Vj d_moment[i16+11] += -XIJ[2] * DWIJ[1] * Vj d_moment[i16+13] += -XIJ[0] * DWIJ[2] * Vj d_moment[i16+14] += -XIJ[1] * DWIJ[2] * Vj d_moment[i16+15] += -XIJ[2] * DWIJ[2] * Vj class SPHFirstOrderApproximation(Equation): """ First order SPH approximation ------------- The method used to solve the linear system in this function is not same as in the reference. In the function `Ax=b` is solved where `A := moment` (Moment matrix) and `b := p_sph` (Property calculated using basic SPH). The calculation need the `moment` to be evaluated before this step which is done in `SPHFirstOrderApproximationPreStep` References ---------- .. [Liu2006] M.B. Liu, G.R. 
Liu, "Restoring particle consistency in smoothed particle hydrodynamics", Applied Numerical Mathematics Volume 56, Issue 1 2006, Pages 19-36, ISSN 0168-9274 """ def _get_helpers_(self): return [gj_solve, augmented_matrix] def __init__(self, dest, sources, dim=1): self.dim = dim super(SPHFirstOrderApproximation, self).__init__(dest, sources) def initialize(self, d_idx, d_prop, d_p_sph): i = declare('int') for i in range(3): d_prop[4*d_idx+i] = 0.0 d_p_sph[4*d_idx+i] = 0.0 def loop(self, d_idx, d_h, s_h, s_x, s_y, s_z, d_x, d_y, d_z, s_rho, s_m, WIJ, DWIJ, s_temp_prop, d_p_sph, s_idx): i4 = declare('int') Vj = s_m[s_idx] / s_rho[s_idx] pj = s_temp_prop[s_idx] i4 = 4*d_idx d_p_sph[i4+0] += pj * WIJ * Vj d_p_sph[i4+1] += pj * DWIJ[0] * Vj d_p_sph[i4+2] += pj * DWIJ[1] * Vj d_p_sph[i4+3] += pj * DWIJ[2] * Vj def post_loop(self, d_idx, d_moment, d_prop, d_p_sph): a_mat = declare('matrix(16)') aug_mat = declare('matrix(20)') b = declare('matrix(4)') res = declare('matrix(4)') i, n, i16, i4 = declare('int', 4) i16 = 16*d_idx i4 = 4*d_idx for i in range(16): a_mat[i] = d_moment[i16+i] for i in range(20): aug_mat[i] = 0.0 for i in range(4): b[i] = d_p_sph[4*d_idx+i] res[i] = 0.0 n = self.dim + 1 augmented_matrix(a_mat, b, n, 1, 4, aug_mat) gj_solve(aug_mat, n, 1, res) for i in range(4): d_prop[i4+i] = res[i] def get_bounding_box(particle_arrays, tight=False, stretch=0.05): """Find the size of the domain given a sequence of particle arrays. If `tight` is True, the bounds are tight, if not the domain is stretched along each dimension by an amount `stretch` specified as a percentage of the length along that dimension is added in each dimension. 
""" xmin, xmax = 1e20, -1e20 ymin, ymax = 1e20, -1e20 zmin, zmax = 1e20, -1e20 for pa in particle_arrays: x, y, z = pa.x, pa.y, pa.z xmin = min(xmin, x.min()) xmax = max(xmax, x.max()) ymin = min(ymin, y.min()) ymax = max(ymax, y.max()) zmin = min(zmin, z.min()) zmax = max(zmax, z.max()) bounds = np.asarray((xmin, xmax, ymin, ymax, zmin, zmax)) if not tight: # Add the extra space. lengths = stretch*np.repeat(bounds[1::2] - bounds[::2], 2) lengths[::2] *= -1.0 bounds += lengths return bounds def get_nx_ny_nz(num_points, bounds): """Given a number of points to use and the bounds, return a triplet of integers for a uniform mesh with approximately that many points. """ bounds = np.asarray(bounds, dtype=float) length = bounds[1::2] - bounds[::2] total_length = length.sum() rel_length = length/total_length non_zero = rel_length > 1e-3 dim = int(non_zero.sum()) volume = np.prod(length[non_zero]) delta = pow(volume/num_points, 1.0/dim) dimensions = np.ones(3, dtype=int) for i in range(3): if rel_length[i] > 1e-4: dimensions[i] = int(round(length[i]/delta)) return dimensions class Interpolator(object): """Convenient class to interpolate particle properties onto a uniform grid or given set of particles. This is particularly handy for visualization. """ def __init__(self, particle_arrays, num_points=125000, kernel=None, x=None, y=None, z=None, domain_manager=None, equations=None, method='shepard'): """ The x, y, z coordinates need not be specified, and if they are not, the bounds of the interpolated domain is automatically computed and `num_points` number of points are used in this domain uniformly placed. Parameters ---------- particle_arrays: list A list of particle arrays. num_points: int the number of points to interpolate on to. kernel: Kernel the kernel to use for interpolation. x: ndarray the x-coordinate of points on which to interpolate. y: ndarray the y-coordinate of points on which to interpolate. z: ndarray the z-coordinate of points on which to interpolate. 
domain_manager: DomainManager An optional Domain manager for periodic domains. equations: sequence A sequence of equations or groups. Defaults to None. This is used only if the default interpolation equations are inadequate. method : str String with the following allowed methods: 'shepard', 'sph', 'order1' """ self._set_particle_arrays(particle_arrays) bounds = get_bounding_box(self.particle_arrays) shape = get_nx_ny_nz(num_points, bounds) self.dim = 3 - list(shape).count(1) if kernel is None: self.kernel = Gaussian(dim=self.dim) else: self.kernel = kernel self.pa = None self.nnps = None self.equations = equations self.func_eval = None self.domain_manager = domain_manager self.method = method if method not in ['sph', 'shepard', 'order1']: raise RuntimeError('%s method is not implemented' % (method)) if x is None and y is None and z is None: self.set_domain(bounds, shape) else: self.set_interpolation_points(x=x, y=y, z=z) # ## Interpolator protocol ############################################### def set_interpolation_points(self, x=None, y=None, z=None): """Set the points on which we must interpolate the arrays. If any of x, y, z is not passed it is assumed to be 0.0 and shaped like the other non-None arrays. Parameters ---------- x: ndarray the x-coordinate of points on which to interpolate. y: ndarray the y-coordinate of points on which to interpolate. z: ndarray the z-coordinate of points on which to interpolate. 
""" tmp = None for tmp in (x, y, z): if tmp is not None: break if tmp is None: raise RuntimeError('At least one non-None array must be given.') def _get_array(_t): return np.asarray(_t) if _t is not None else np.zeros_like(tmp) x, y, z = _get_array(x), _get_array(y), _get_array(z) self.shape = x.shape self.pa = self._create_particle_array(x, y, z) arrays = self.particle_arrays + [self.pa] if self.func_eval is None: self._compile_acceleration_eval(arrays) self.update_particle_arrays(self.particle_arrays) def set_domain(self, bounds, shape): """Set the domain to interpolate into. Parameters ---------- bounds: tuple (xmin, xmax, ymin, ymax, zmin, zmax) shape: tuple (nx, ny, nz) """ self.bounds = np.asarray(bounds) self.shape = np.asarray(shape) x, y, z = self._create_default_points(self.bounds, self.shape) self.set_interpolation_points(x, y, z) def interpolate(self, prop, comp=0): """Interpolate given property. Parameters ---------- prop: str The name of the property to interpolate. comp: int The component of the gradient required Returns ------- A numpy array suitably shaped with the property interpolated. """ assert isinstance(comp, int), 'Error: only interger value is allowed' for array in self.particle_arrays: data = array.get(prop, only_real_particles=False) array.get('temp_prop', only_real_particles=False)[:] = data self.func_eval.compute(0.0, 0.1) # These are junk arguments. if comp and (self.method in ['sph', 'shepard']): raise RuntimeError("Error: use 'order1' method to evaluate" "gradient") elif self.method in ['sph', 'shepard']: result = self.pa.prop.copy() else: if comp > 3: raise RuntimeError("Error: Only 0, 1, 2, 3 allowed") result = self.pa.prop[comp::4].copy() result.shape = self.shape return result.squeeze() def update(self, update_domain=True): """Update the NNPS when particles have moved. If the update_domain is False, the domain is not updated. Use this when the arrays are the same but the particles have themselves changed. 
If the particle arrays themselves change use the `update_particle_arrays` method instead. """ if update_domain: self.nnps.update_domain() self.nnps.update() def update_particle_arrays(self, particle_arrays): """Call this for a new set of particle arrays which have the same properties as before. For example, if you are reading the particle array data from files, each time you load a new file a new particle array is read with the same properties. Call this function to reset the arrays. """ self._set_particle_arrays(particle_arrays) arrays = self.particle_arrays + [self.pa] self._create_nnps(arrays) self.func_eval.update_particle_arrays(arrays) # ### Private protocol ################################################### def _create_nnps(self, arrays): # create the neighbor locator object self.nnps = NNPS(dim=self.kernel.dim, particles=arrays, radius_scale=self.kernel.radius_scale, domain=self.domain_manager, cache=True) self.func_eval.set_nnps(self.nnps) def _create_default_points(self, bounds, shape): b = bounds n = shape x, y, z = np.mgrid[ b[0]:b[1]:n[0]*1j, b[2]:b[3]:n[1]*1j, b[4]:b[5]:n[2]*1j, ] return x, y, z def _create_particle_array(self, x, y, z): xr = x.ravel() yr = y.ravel() zr = z.ravel() self.x, self.y, self.z = x.squeeze(), y.squeeze(), z.squeeze() hmax = self._get_max_h_in_arrays() h = hmax*np.ones_like(xr) pa = get_particle_array( name='interpolate', x=xr, y=yr, z=zr, h=h, number_density=np.zeros_like(xr) ) if self.method in ['sph', 'shepard']: pa.add_property('prop') else: pa.add_property('moment', stride=16) pa.add_property('p_sph', stride=4) pa.add_property('prop', stride=4) return pa def _compile_acceleration_eval(self, arrays): names = [x.name for x in self.particle_arrays] if self.equations is None: if self.method == 'shepard': equations = [ InterpolateFunction(dest='interpolate', sources=names) ] elif self.method == 'sph': equations = [ InterpolateSPH(dest='interpolate', sources=names) ] else: equations = [ Group(equations=[ 
SummationDensity(dest=name, sources=names) for name in names], real=False), Group(equations=[ SPHFirstOrderApproximationPreStep(dest='interpolate', sources=names, dim=self.dim)], real=True), Group(equations=[ SPHFirstOrderApproximation(dest='interpolate', sources=names, dim=self.dim)], real=True) ] else: equations = self.equations self.func_eval = AccelerationEval(arrays, equations, self.kernel) compiler = SPHCompiler(self.func_eval, None) compiler.compile() def _get_max_h_in_arrays(self): hmax = -1.0 for array in self.particle_arrays: hmax = max(array.h.max(), hmax) return hmax def _set_particle_arrays(self, particle_arrays): self.particle_arrays = particle_arrays self._make_all_arrays_have_same_props(particle_arrays) for array in self.particle_arrays: if 'temp_prop' not in array.properties: array.add_property('temp_prop') def _make_all_arrays_have_same_props(self, particle_arrays): """Make sure all arrays have the same props. """ all_props = reduce( set.union, [set(x.properties.keys()) for x in particle_arrays] ) for array in particle_arrays: all_props.update(array.properties.keys()) for array in particle_arrays: array_props = set(array.properties.keys()) for prop in (all_props - array_props): array.add_property(prop) def main(fname, prop, npoint): from pysph.solver.utils import load print("Loading", fname) data = load(fname) arrays = list(data['arrays'].values()) interp = Interpolator(arrays, num_points=npoint) print(interp.shape) print("Interpolating") prop = interp.interpolate(prop) print("Visualizing") from mayavi import mlab src = mlab.pipeline.scalar_field(interp.x, interp.y, interp.z, prop) if interp.dim == 3: mlab.pipeline.scalar_cut_plane(src) else: mlab.pipeline.surface(src) mlab.pipeline.outline(src) mlab.show() if __name__ == '__main__': import sys if len(sys.argv) < 4: print("Usage: interpolator.py filename property num_points") sys.exit(1) else: main(sys.argv[1], sys.argv[2], int(sys.argv[3])) 
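# For reference, the 'shepard' method used above amounts to a kernel-weighted
# average normalized by the summed weights (the number density). A minimal,
# standalone 1D sketch of that idea is given below; it uses a simple Gaussian
# weight and the hypothetical helper name `shepard_interpolate`, not PySPH's
# actual kernels or API.

```python
import numpy as np


def shepard_interpolate(xp, fp, xi, h=0.1):
    """Interpolate particle values fp located at xp onto the points xi
    using a normalized (Shepard) kernel-weighted average."""
    # Gaussian weights between every target point and every particle.
    w = np.exp(-((xi[:, None] - xp[None, :]) / h) ** 2)
    # Dividing by the summed weights normalizes the estimate, so a
    # constant field is reproduced exactly -- this is the same role the
    # number density plays in the SPH form.
    return (w * fp[None, :]).sum(axis=1) / w.sum(axis=1)


# A constant field is reproduced exactly by the normalized average.
xp = np.linspace(0.0, 1.0, 50)
fp = np.ones_like(xp)
xi = np.array([0.25, 0.5, 0.75])
print(shepard_interpolate(xp, fp, xi))
```

# Because of the normalization, this estimator is zeroth-order consistent,
# which is why 'shepard' is a reasonable default for visualization even on
# irregular particle distributions.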
pysph-master/pysph/tools/ipy_viewer.py

import json
import glob
from pysph.solver.utils import load, get_files, mkdir
from IPython.display import display, Image, clear_output, HTML
import ipywidgets as widgets
import numpy as np
import matplotlib as mpl
from decimal import Decimal

mpl.use('module://ipympl.backend_nbagg')
# Now the user does not have to use the IPython magic command
# '%matplotlib ipympl' in the notebook, this takes care of it.
# The matplotlib backend needs to be set before matplotlib.pyplot
# is imported and this ends up violating the PEP 8 style guide.

import matplotlib.pyplot as plt


class Viewer(object):
    '''
    Base class for viewers.
    '''

    def __init__(self, path, cache=True):
        self.path = path
        self.paths_list = get_files(path)

        # Caching #
        # Note : Caching is only used by get_frame and widget handlers.
        if cache:
            self.cache = {}
        else:
            self.cache = None

    def get_frame(self, frame):
        '''Return particle arrays for a given frame number with caching.

        Parameters
        ----------
        frame : int

        Returns
        -------
        A dictionary.

        Examples
        --------
        >>> sample = Viewer2D('/home/deep/pysph/trivial_inlet_outlet_output/')
        >>> sample.get_frame(12)
        {
            'arrays': {
                'fluid': <ParticleArray>,
                'inlet': <ParticleArray>,
                'outlet': <ParticleArray>
            },
            'solver_data': {'count': 240, 'dt': 0.01, 't': 2.399999999999993}
        }
        '''
        if self.cache is not None:
            if frame in self.cache:
                temp_data = self.cache[frame]
            else:
                self.cache[frame] = temp_data = load(self.paths_list[frame])
        else:
            temp_data = load(self.paths_list[frame])
        return temp_data

    def show_log(self):
        '''
        Print the contents of the log file.
        '''

        print("Printing log : \n\n")
        path = self.path + "/*.log"
        with open(glob.glob(path)[0], 'r') as logfile:
            for lines in logfile:
                print(lines)

    def show_results(self):
        '''
        Show if there are any png, jpeg, jpg, or bmp images.
        '''

        imgs = tuple()
        for extension in ['png', 'jpg', 'jpeg', 'bmp']:
            temppath = self.path + "/*." + extension
            for paths in glob.glob(temppath):
                imgs += (Image(paths),)
        if len(imgs) != 0:
            display(*imgs)
        else:
            print("No results to show.")

    def show_info(self):
        '''
        Print contents of the .info file present in the output directory,
        keys present in results.npz, number of files and information
        about particle arrays.
        '''

        # General Info #
        path = self.path + "/*.info"
        with open(glob.glob(path)[0], 'r') as infofile:
            data = json.load(infofile)

            print('Printing info : \n')
            for key in data.keys():
                if key == 'cpu_time':
                    print(key + " : " + str(data[key]) + " seconds")
                else:
                    print(key + " : " + str(data[key]))

            print('Number of files : {}'.format(len(self.paths_list)))

        # Particle Info #
        temp_data = load(self.paths_list[0])['arrays']

        for key in temp_data:
            print("  {} :".format(key))
            print("    Number of particles : {}".format(
                temp_data[key].get_number_of_particles())
            )
            print("    Output Property Arrays : {}".format(
                temp_data[key].output_property_arrays)
            )

        # keys in results.npz
        from numpy import load as npl
        path = self.path + "*results*"
        files = glob.glob(path)
        if len(files) != 0:
            data = npl(files[0])
            print("\nKeys in results.npz :")
            print(data.keys())

    def show_all(self):
        self.show_info()
        self.show_results()
        self.show_log()

    def _cmap_helper(self, data, array_name, for_plot_vectors=False):
        '''
        Helper function: takes in a numpy array and returns its maximum,
        minimum, subject to the constraints provided by the user in the
        legend_lower_lim and legend_upper_lim text boxes. Also returns the
        input array normalized by the maximum.
''' pa_widgets = self._widgets.particles[array_name] if for_plot_vectors is False: ulim = pa_widgets.legend_upper_lim.value llim = pa_widgets.legend_lower_lim.value elif for_plot_vectors is True: ulim = '' llim = '' if llim == '' and ulim == '': pass elif llim != '' and ulim == '': for i in range(len(data)): if data[i] < float(llim): data[i] = float(llim) elif llim == '' and ulim != '': for i in range(len(data)): if data[i] > float(ulim): data[i] = float(ulim) elif llim != '' and ulim != '': for i in range(len(data)): if data[i] > float(ulim): data[i] = float(ulim) elif data[i] < float(llim): data[i] = float(llim) actual_minm = np.min(data) if llim != '' and actual_minm > float(llim): actual_minm = float(llim) actual_maxm = np.max(data) if ulim != '' and actual_maxm < float(ulim): actual_maxm = float(ulim) if len(set(data)) == 1: # This takes care of the case when all the values are the same. # Use case is the initialization of some scalars (like density). if ulim == '' and llim == '': if actual_maxm != 0: return actual_minm, actual_maxm, np.ones_like(data) else: return actual_minm, actual_maxm, np.zeros_like(data) else: data_norm = (data-actual_minm)/(actual_maxm-actual_minm) return actual_minm, actual_maxm, data_norm else: data_norm = (data-actual_minm)/(actual_maxm-actual_minm) return actual_minm, actual_maxm, data_norm def _create_widgets(self): if self.viewer_type == 'Viewer1D': self._widgets = Viewer1DWidgets( file_name=self.paths_list[0], file_count=len(self.paths_list) - 1, ) elif self.viewer_type == 'Viewer2D': self._widgets = Viewer2DWidgets( file_name=self.paths_list[0], file_count=len(self.paths_list) - 1, ) elif self.viewer_type == 'Viewer3D': self._widgets = Viewer3DWidgets( file_name=self.paths_list[0], file_count=len(self.paths_list) - 1, ) if 'general_properties' in self.config.keys(): gen_prop = self.config['general_properties'] for widget_name in gen_prop.keys(): try: widget = getattr( self._widgets, widget_name ) widget.value = 
gen_prop[widget_name]
                except AttributeError:
                    continue
            if 'cull_factor' in gen_prop.keys():
                self.cull_factor = gen_prop['cull_factor']
                if self.cull_factor > 0:
                    self._widgets.frame.step = self.cull_factor
                    self._widgets.play_button.step = self.cull_factor
                else:
                    print('cull_factor must be a positive integer.')

        self._widgets.frame.observe(self._frame_handler, 'value')
        self._widgets.save_figure.on_submit(self._save_figure_handler)
        self._widgets.delay_box.observe(self._delay_box_handler, 'value')
        self._widgets.save_all_plots.observe(
            self._save_all_plots_handler, 'value'
        )
        self._widgets.print_config.on_click(
            self._print_present_config_dictionary
        )
        if self.viewer_type == 'Viewer2D' or self.viewer_type == 'Viewer1D':
            self._widgets.show_solver_time.observe(
                self._show_solver_time_handler, 'value'
            )

        # PLEASE NOTE:
        # All widget handlers take in 'change' as an argument. This is usually
        # a dictionary containing information about the widget and the change
        # in state. However, these functions are also used outside of the use
        # case of a user-triggered-event, and in these scenarios None should
        # be passed as the argument. This is of particular significance
        # because in some of these functions plt.figure.show() gets called
        # only if the argument passed is not None.

        for array_name in self._widgets.particles.keys():
            pa_widgets = self._widgets.particles[array_name]
            # Changing the properties as per the configuration dictionary.
if array_name in self.config.keys(): pa_config = self.config[array_name] for widget_name in pa_config.keys(): try: widget = getattr( pa_widgets, widget_name ) widget.value = pa_config[widget_name] except AttributeError: continue for widget_name in list(pa_widgets.__dict__.keys())[1:]: widget = getattr( pa_widgets, widget_name ) if (widget_name == 'legend_lower_lim' or widget_name == 'legend_upper_lim'): widget_handler = self._legend_lim_handler elif (widget_name == 'right_spine_lower_lim' or widget_name == 'right_spine_upper_lim'): widget_handler = self._right_spine_lim_handler else: widget_handler = getattr( self, '_' + widget_name + '_handler' ) widget.observe(widget_handler, 'value') def _legend_lim_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] if pa_widgets.scalar.value != 'None': temp_data = self.get_frame( self._widgets.frame.value )['arrays'] sct = self._scatters[array_name] n = pa_widgets.masking_factor.value stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) colormap = getattr( plt.cm, pa_widgets.scalar_cmap.value ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) if self.viewer_type == 'Viewer2D': sct.set_facecolors(colormap(c_norm[::n])) self._legend_handler(None) self.figure.show() elif self.viewer_type == 'Viewer3D': sct.color = colormap(c_norm[::n]) self._legend_handler(None) def _delay_box_handler(self, change): self._widgets.play_button.interval = change['new']*1000 def _save_all_plots_handler(self, change): if self.viewer_type == 'Viewer3D': import ipyvolume.pylab as p3 if change['new'] is True: mkdir('all_plots') self._widgets.frame.disabled = True self._widgets.play_button.disabled = True self._widgets.delay_box.disabled = True self._widgets.save_figure.disabled = True self._widgets.save_all_plots.disabled = True self._widgets.print_config.disabled = True if self.viewer_type 
== 'Viewer2D': self._widgets.show_solver_time.disabled = True for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] for widget_name in list(pa_widgets.__dict__.keys())[1:]: widget = getattr( pa_widgets, widget_name ) widget.disabled = True file_count = len(self.paths_list) - 1 for i in np.arange(0, file_count + 1, self.cull_factor): self._widgets.frame.value = i self._frame_handler(None) if self.viewer_type == 'Viewer2D': self.figure.savefig( 'all_plots/frame_%s.png' % i, dpi=300 ) elif self.viewer_type == 'Viewer3D': p3.savefig( 'all_plots/frame_%s.png' % i, width=600, height=600, fig=self.plot ) print( "Saved the plots in the folder 'all_plots'" + " in the present working directory" ) self._widgets.frame.disabled = False self._widgets.play_button.disabled = False self._widgets.delay_box.disabled = False self._widgets.save_figure.disabled = False self._widgets.save_all_plots.disabled = False self._widgets.print_config.disabled = False if self.viewer_type == 'Viewer2D': self._widgets.show_solver_time.disabled = False self._widgets.save_all_plots.value = False for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] for widget_name in list(pa_widgets.__dict__.keys())[1:]: widget = getattr( pa_widgets, widget_name ) widget.disabled = False def _print_present_config_dictionary(self, change): _widgets = self._widgets config = {'general_properties': {}} gen_prop = config['general_properties'] gen_prop['frame'] = _widgets.frame.value gen_prop['delay_box'] = _widgets.delay_box.value gen_prop['cull_factor'] = _widgets.frame.step if self.viewer_type == 'Viewer2D': gen_prop[ 'show_solver_time' ] = _widgets.show_solver_time.value for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] config[array_name] = {} pa_config = config[array_name] for widget_name in list(pa_widgets.__dict__.keys())[1:]: widget = getattr( pa_widgets, widget_name ) 
pa_config[widget_name] = widget.value print(config) def _masking_factor_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] if pa_widgets.is_visible.value is True: n = pa_widgets.masking_factor.value if n > 0: temp_data = self.get_frame(self._widgets.frame.value)['arrays'] stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) colormap = getattr( plt.cm, pa_widgets.scalar_cmap.value ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) if self.viewer_type == 'Viewer2D': self._scatters[array_name].remove() del self._scatters[array_name] self._scatters[array_name] = self._scatter_ax.scatter( temp_data[array_name].x[component::stride][::n], temp_data[array_name].y[component::stride][::n], s=pa_widgets.scalar_size.value, ) self._scatters[array_name].set_facecolors( colormap(c_norm[::n]) ) self.figure.show() elif self.viewer_type == 'Viewer3D': import ipyvolume.pylab as p3 copy = self.plot.scatters.copy() copy.remove(self._scatters[array_name]) del self._scatters[array_name] if array_name in self._vectors.keys(): copy.remove(self._vectors[array_name]) del self._vectors[array_name] self.plot.scatters = copy self._scatters[array_name] = p3.scatter( temp_data[array_name].x[component::stride][::n], temp_data[array_name].y[component::stride][::n], temp_data[array_name].z[component::stride][::n], color=colormap(c_norm[::n]), size=pa_widgets.scalar_size.value, marker=pa_widgets.scatter_plot_marker.value, ) self._plot_vectors( pa_widgets, temp_data[array_name], array_name ) else: print('Masking factor must be a positive integer.') def _get_c(self, pa_widgets, data, component=0, stride=1, need_vmag=False): c = [0] if pa_widgets.scalar.value == 'vmag' or need_vmag is True: u = getattr(data, 'u')[component::stride] v = getattr(data, 'v')[component::stride] c = u**2 + v**2 if self.viewer_type == 'Viewer3D': w = 
getattr(data, 'w')[component::stride] c += w**2 c = c**0.5 elif pa_widgets.scalar.value != 'None': c = getattr( data, pa_widgets.scalar.value )[component::stride] return c def _opacity_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] if pa_widgets.is_visible.value is True: alpha = pa_widgets.opacity.value n = pa_widgets.masking_factor.value temp_data = self.get_frame( self._widgets.frame.value )['arrays'] stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) colormap = getattr( plt.cm, pa_widgets.scalar_cmap.value ) sct = self._scatters[array_name] min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) cm[0:, 3] *= alpha if self.viewer_type == 'Viewer2D': sct.set_facecolors(cm) self._legend_handler(None) self.figure.show() def _components_handler(self, change, array_name=None): if array_name is None: array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] scalar = pa_widgets.scalar.value if pa_widgets.is_visible.value is True: temp_data = self.get_frame(self._widgets.frame.value)['arrays'] if scalar in temp_data[array_name].stride.keys(): stride = temp_data[array_name].stride[array_name] if pa_widgets.components.value > stride: pa_widgets.components.value = stride elif pa_widgets.components.value < 1: pa_widgets.components.value = 1 self._scalar_handler({'owner': pa_widgets.scalar}) def _stride_and_component(self, data, pa_widgets): if pa_widgets.scalar.value in data.stride.keys(): pa_widgets.components.disabled = False component = pa_widgets.components.value - 1 stride = data.stride[pa_widgets.scalar.value] else: pa_widgets.components.disabled = True stride = 1 component = 0 return stride, component class ParticleArrayWidgets1D(object): def __init__(self, particlearray): self.array_name = particlearray.name self.scalar = widgets.Dropdown( 
options=['None'] + particlearray.output_property_arrays + ['vmag'], value='rho', description="scalar", disabled=False, layout=widgets.Layout(width='240px', display='flex') ) self.scalar.owner = self.array_name self.is_visible = widgets.Checkbox( value=True, description='visible', disabled=False, layout=widgets.Layout(width='170px', display='flex') ) self.is_visible.owner = self.array_name self.masking_factor = widgets.IntText( value=1, description='masking', disabled=False, layout=widgets.Layout(width='160px', display='flex'), ) self.masking_factor.owner = self.array_name self.components = widgets.IntText( value=1, description='component', disabled=True, layout=widgets.Layout(width='160px', display='flex'), ) self.components.owner = self.array_name self.right_spine = widgets.Checkbox( value=False, description='scale', disabled=False, layout=widgets.Layout(width='170px', display='flex') ) self.right_spine.owner = self.array_name self.right_spine_lower_lim = widgets.Text( value='', placeholder='min', description='lower limit', disabled=False, layout=widgets.Layout(width='160px', display='flex'), continuous_update=False ) self.right_spine_lower_lim.owner = self.array_name self.right_spine_upper_lim = widgets.Text( value='', placeholder='max', description='upper limit', disabled=False, layout=widgets.Layout(width='160px', display='flex'), continuous_update=False ) self.right_spine_upper_lim.owner = self.array_name def _tab_config(self): VBox1 = widgets.VBox( [ self.scalar, self.components, ] ) VBox2 = widgets.VBox( [ self.masking_factor, self.is_visible, ] ) VBox3 = widgets.VBox( [ self.right_spine, widgets.HBox( [ self.right_spine_upper_lim, self.right_spine_lower_lim, ] ), ] ) hbox = widgets.HBox([VBox1, VBox2, VBox3]) return hbox class Viewer1DWidgets(object): def __init__(self, file_name, file_count): self.temp_data = load(file_name) self.time = str(self.temp_data['solver_data']['t']) self.temp_data = self.temp_data['arrays'] self.frame = widgets.IntSlider( min=0, 
max=file_count, step=1, value=0, description='frame', layout=widgets.Layout(width='500px'), continuous_update=False, ) self.play_button = widgets.Play( min=0, max=file_count, step=1, disabled=False, ) self.link = widgets.jslink( (self.frame, 'value'), (self.play_button, 'value'), ) self.delay_box = widgets.FloatText( value=0.2, description='Delay', disabled=False, layout=widgets.Layout(width='160px', display='flex'), ) self.save_figure = widgets.Text( value='', placeholder='example.pdf', description='Save figure', disabled=False, layout=widgets.Layout(width='240px', display='flex'), ) self.save_all_plots = widgets.ToggleButton( value=False, description='Save all plots!', disabled=False, tooltip='Saves the corresponding plots for all the' + ' frames in the presently set styling.', icon='', layout=widgets.Layout(display='flex'), ) self.solver_time = widgets.HTML( value=self.time, description='Solver time:' ) self.show_solver_time = widgets.Checkbox( value=False, description="Show solver time", disabled=False, layout=widgets.Layout(display='flex'), ) self.print_config = widgets.Button( description='print present config.', tooltip='Prints the configuration dictionary ' + 'for the current viewer state', disabled=False, layout=widgets.Layout(display='flex'), ) self.particles = {} for array_name in self.temp_data.keys(): self.particles[array_name] = ParticleArrayWidgets1D( self.temp_data[array_name], ) def _create_tabs(self): children = [] for array_name in self.particles.keys(): children.append(self.particles[array_name]._tab_config()) tab = widgets.Tab(children=children) for i in range(len(children)): tab.set_title(i, list(self.particles.keys())[i]) return widgets.VBox( [ tab, widgets.HBox( [ self.play_button, self.frame ] ), widgets.HBox( [ self.delay_box, self.save_figure, self.save_all_plots, ] ), widgets.HBox( [ self.print_config, self.show_solver_time, self.solver_time, ] ) ] ) class Viewer1D(Viewer): ''' Example ------- >>> from pysph.tools.ipy_viewer import 
Viewer1D >>> sample = Viewer1D( '/home/uname/pysph_files/blastwave_output' ) >>> sample.interactive_plot() >>> sample.show_log() >>> sample.show_info() ''' def _configure_plot(self): ''' Set attributes for plotting. ''' self.figure, temp = plt.subplots() self.add_axes = False self._scatters_ax = {'host': temp} self._scatters = {} self._solver_time_ax = {} self.figure.show() def interactive_plot(self, config={}): ''' Set plotting attributes, create widgets and display them along with the interactive plot. ''' self.config = config # The configuration dictionary. self.viewer_type = 'Viewer1D' self._configure_plot() self._create_widgets() display(self._widgets._create_tabs()) temp_data = self.get_frame(0) self.time = str(temp_data['solver_data']['t']) temp_data = temp_data['arrays'] self.xmin = None self.xmax = None for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] if (pa_widgets.scalar.value != 'None' and pa_widgets.is_visible.value is True): n = pa_widgets.masking_factor.value stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) if self.xmin is None: self.xmin = min(temp_data[array_name].x) elif min(temp_data[array_name].x) < self.xmin: self.xmin = min(temp_data[array_name].x) if self.xmax is None: self.xmax = max(temp_data[array_name].x) elif max(temp_data[array_name].x) > self.xmax: self.xmax = max(temp_data[array_name].x) min_c = min(c) max_c = max(c) llim = pa_widgets.right_spine_lower_lim.value ulim = pa_widgets.right_spine_upper_lim.value if llim != '': min_c = float(llim) if ulim != '': max_c = float(ulim) if max_c-min_c != 0: c_norm = (c - min_c)/(max_c - min_c) elif max_c != 0: c_norm = c/max_c else: c_norm = c color = 'C' + str( list( self._widgets.particles.keys() ).index(array_name) ) ax = self._scatters_ax['host'].twinx() self._scatters[array_name] = ax.scatter( 
temp_data[array_name].x[component::stride][::n],
                    c_norm[::n],
                    color=color,
                    label=array_name,
                )
                ax.set_ylim(-0.1, 1.1)
                self._make_patch_spines_invisible(ax)
                self._scatters_ax[array_name] = ax

        self._scatters_ax['host'].set_xlim(self.xmin-0.1, self.xmax+0.1)
        self._make_patch_spines_invisible(self._scatters_ax['host'])
        self.solver_time_textbox = None
        # So that _show_solver_time_handler does not glitch at initialization.
        self._frame_handler(None)
        self._show_solver_time_handler(None)
        self._right_spine_handler(None)

    def _make_patch_spines_invisible(self, ax, val=True):
        '''
        Helper function for making individual y-axes for
        different particle arrays.
        '''

        ax.set_frame_on(val)
        ax.patch.set_visible(False)
        for sp in ax.spines.values():
            sp.set_visible(False)

    def _frame_handler(self, change):
        temp_data = self.get_frame(self._widgets.frame.value)
        self.time = str(temp_data['solver_data']['t'])
        self._widgets.solver_time.value = self.time
        temp_data = temp_data['arrays']
        for array_name in self._widgets.particles.keys():
            pa_widgets = self._widgets.particles[array_name]
            if (pa_widgets.scalar.value != 'None' and
                    pa_widgets.is_visible.value is True):
                n = pa_widgets.masking_factor.value
                stride, component = self._stride_and_component(
                    temp_data[array_name], pa_widgets
                )
                sct = self._scatters[array_name]
                c = self._get_c(
                    pa_widgets,
                    temp_data[array_name],
                    component,
                    stride
                )
                if self.xmin is None:
                    self.xmin = min(temp_data[array_name].x)
                elif min(temp_data[array_name].x) < self.xmin:
                    self.xmin = min(temp_data[array_name].x)
                if self.xmax is None:
                    self.xmax = max(temp_data[array_name].x)
                if max(temp_data[array_name].x) > self.xmax:
                    self.xmax = max(temp_data[array_name].x)
                min_c = min(c)
                max_c = max(c)
                llim = pa_widgets.right_spine_lower_lim.value
                ulim = pa_widgets.right_spine_upper_lim.value
                if llim != '':
                    min_c = float(llim)
                if ulim != '':
                    max_c = float(ulim)
                if max_c - min_c != 0:
                    c_norm = (c - min_c)/(max_c - min_c)
                elif max_c != 0:
                    c_norm = c/max_c
                else:
                    c_norm = c
                sct.set_offsets(
                    np.vstack(
                        (
temp_data[array_name].x[component::stride][::n], c_norm[::n] ) ).T ) self._scatters_ax[array_name].set_ylim(-0.1, 1.1) self._scatters_ax['host'].set_xlim(self.xmin-0.1, self.xmax+0.1) self._right_spine_handler(None) self._show_solver_time_handler(None) self.figure.show() def _scalar_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] if pa_widgets.is_visible.value is True: n = pa_widgets.masking_factor.value temp_data = self.get_frame( self._widgets.frame.value )['arrays'] stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) if self.xmin is None: self.xmin = min(temp_data[array_name].x) elif min(temp_data[array_name].x) < self.xmin: self.xmin = min(temp_data[array_name].x) if self.xmax is None: self.xmax = max(temp_data[array_name].x) elif max(temp_data[array_name].x) > self.xmax: self.xmax = max(temp_data[array_name].x) min_c = min(c) max_c = max(c) llim = pa_widgets.right_spine_lower_lim.value ulim = pa_widgets.right_spine_upper_lim.value if llim != '': min_c = float(llim) if ulim != '': max_c = float(ulim) if max_c-min_c != 0: c_norm = (c - min_c)/(max_c - min_c) elif max_c != 0: c_norm = c/max_c else: c_norm = c new = change['new'] old = change['old'] if (new == 'None' and old == 'None'): pass elif (new == 'None' and old != 'None'): sct = self._scatters[array_name] copy = self._scatters_ax[array_name].collections.copy() copy.remove(sct) self._scatters_ax[array_name].collections = copy del self._scatters[array_name] elif (new != 'None' and old == 'None'): color = 'C' + str( list( self._widgets.particles.keys() ).index(array_name) ) self._scatters[array_name] = self._scatters_ax[ array_name ].scatter( temp_data[array_name].x[component::stride][::n], c_norm[::n], color=color ) self._scatters_ax[array_name].set_ylim(-0.1, 1.1) elif (new != 'None' and old != 'None'): sct = self._scatters[array_name] 
sct.set_offsets( np.vstack( ( temp_data[array_name].x[component::stride][::n], c_norm[::n] ) ).T ) self._scatters_ax[array_name].set_ylim(-0.1, 1.1) self._scatters_ax['host'].set_xlim(self.xmin-0.1, self.xmax+0.1) self._right_spine_handler(None) self.figure.show() def _save_figure_handler(self, change): file_was_saved = False for extension in [ '.eps', '.pdf', '.pgf', '.png', '.ps', '.raw', '.rgba', '.svg', '.svgz' ]: if self._widgets.save_figure.value.endswith(extension): self.figure.savefig(self._widgets.save_figure.value) print( "Saved figure as {} in the present working directory" .format( self._widgets.save_figure.value ) ) file_was_saved = True break self._widgets.save_figure.value = "" if file_was_saved is False: print( "Please use a valid extension, that is, one of the following" + ": '.eps', '.pdf', '.pgf', '.png', '.ps', '.raw', '.rgba'," + " '.svg' or '.svgz'." ) def _is_visible_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] temp_data = self.get_frame(self._widgets.frame.value)['arrays'] if pa_widgets.scalar.value != 'None': if change['new'] is False: sct = self._scatters[array_name] copy = self._scatters_ax[array_name].collections.copy() copy.remove(sct) self._scatters_ax[array_name].collections = copy del self._scatters[array_name] elif change['new'] is True: n = pa_widgets.masking_factor.value stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) if self.xmin is None: self.xmin = min(temp_data[array_name].x) elif min(temp_data[array_name].x) < self.xmin: self.xmin = min(temp_data[array_name].x) if self.xmax is None: self.xmax = max(temp_data[array_name].x) elif max(temp_data[array_name].x) > self.xmax: self.xmax = max(temp_data[array_name].x) min_c = min(c) max_c = max(c) llim = pa_widgets.right_spine_lower_lim.value ulim = pa_widgets.right_spine_upper_lim.value if llim != '': min_c = 
float(llim) if ulim != '': max_c = float(ulim) if max_c-min_c != 0: c_norm = (c - min_c)/(max_c - min_c) elif max_c != 0: c_norm = c/max_c else: c_norm = c color = 'C' + str( list( self._widgets.particles.keys() ).index(array_name) ) self._scatters[array_name] = self._scatters_ax[ array_name ].scatter( temp_data[array_name].x[component::stride][::n], c_norm[::n], color=color ) self._scatters_ax[array_name].set_ylim(-0.1, 1.1) self._scatters_ax['host'].set_xlim(self.xmin-0.1, self.xmax+0.1) self._right_spine_handler(None) self.figure.show() def _show_solver_time_handler(self, change): if self._widgets.show_solver_time.value is True: if self.solver_time_textbox is not None: self.solver_time_textbox.remove() self.solver_time_textbox = self._scatters_ax['host'].text( x=0.02, y=0.02, s='Solver time: ' + self.time, verticalalignment='bottom', horizontalalignment='left', transform=self._scatters_ax['host'].transAxes, fontsize=12, bbox={'facecolor': 'white', 'alpha': 0.7, 'pad': 3}, ) elif self._widgets.show_solver_time.value is False: if self.solver_time_textbox is not None: self.solver_time_textbox.remove() self.solver_time_textbox = None if change is not None: self.figure.show() def _right_spine_handler(self, change): temp_data = self.get_frame( self._widgets.frame.value )['arrays'] number_of_spines = 0 self._make_patch_spines_invisible( self._scatters_ax['host'], False ) for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] ax = self._scatters_ax[array_name] self._make_patch_spines_invisible(ax, False) if (pa_widgets.right_spine.value is True and pa_widgets.is_visible.value is True and pa_widgets.scalar.value != 'None'): number_of_spines += 1 stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) self._scatters_ax['host'].set_position( [0, 0, 1. 
- 0.2*number_of_spines, 1] ) min_c = min(c) max_c = max(c) llim = pa_widgets.right_spine_lower_lim.value ulim = pa_widgets.right_spine_upper_lim.value if llim != '': min_c = float(llim) if ulim != '': max_c = float(ulim) locs = np.linspace(0, 1, 20) if min_c - max_c != 0: labels = [ '%.2E' % Decimal(str(val)) for val in np.linspace( min_c, max_c, 20 ) ] elif max_c == 0: labels = ['%.2E' % Decimal(str(val)) for val in locs] else: labels = [ '%.2E' % Decimal(str(val)) for val in np.linspace( 0, max_c, 20 ) ] color = 'C' + str( list( self._widgets.particles.keys() ).index(array_name) ) ax.set_frame_on(True) ax.spines["right"].set_position( ( "axes", 1.+0.2*(number_of_spines-1)/(1.-0.2*number_of_spines) ) ) ax.spines["right"].set_visible(True) ax.set_ylabel( array_name + " : " + pa_widgets.scalar.value ) ax.set_yticks(ticks=locs) ax.set_yticklabels(labels=labels) ax.tick_params(axis='y', colors=color) ax.yaxis.label.set_color(color) ax.spines["right"].set_color(color) else: ax.spines["right"].set_color('none') ax.yaxis.set_ticks([]) ax.set_ylabel('') if number_of_spines == 0: self._scatters_ax['host'].set_frame_on(True) self._scatters_ax['host'].set_position( [0, 0, 1, 1] ) if change is not None: self.figure.show() def _right_spine_lim_handler(self, change): self._frame_handler(None) class ParticleArrayWidgets2D(object): def __init__(self, particlearray): self.array_name = particlearray.name self.scalar = widgets.Dropdown( options=['None'] + particlearray.output_property_arrays + ['vmag'], value='rho', description="scalar", disabled=False, layout=widgets.Layout(width='240px', display='flex') ) self.scalar.owner = self.array_name self.scalar_cmap = widgets.Dropdown( options=list(map(str, plt.colormaps())), value='viridis', description="Colormap", disabled=False, layout=widgets.Layout(width='240px', display='flex') ) self.scalar_cmap.owner = self.array_name self.legend = widgets.Checkbox( value=False, description='legend', disabled=False, 
layout=widgets.Layout(width='170px', display='flex') ) self.legend.owner = self.array_name self.legend_lower_lim = widgets.Text( value='', placeholder='min', description='lower limit', disabled=False, layout=widgets.Layout(width='160px', display='flex'), continuous_update=False ) self.legend_lower_lim.owner = self.array_name self.legend_upper_lim = widgets.Text( value='', placeholder='max', description='upper limit', disabled=False, layout=widgets.Layout(width='160px', display='flex'), continuous_update=False ) self.legend_upper_lim.owner = self.array_name self.vector = widgets.Text( value='', placeholder='variable1,variable2', description='vector', disabled=False, layout=widgets.Layout(width='240px', display='flex'), continuous_update=False ) self.vector.owner = self.array_name self.vector_width = widgets.FloatSlider( min=1, max=100, step=1, value=25, description='vector width', layout=widgets.Layout(width='300px'), continuous_update=False, ) self.vector_width.owner = self.array_name self.vector_scale = widgets.FloatSlider( min=1, max=100, step=1, value=55, description='vector scale', layout=widgets.Layout(width='300px'), continuous_update=False, ) self.vector_scale.owner = self.array_name self.scalar_size = widgets.FloatSlider( min=0, max=50, step=1, value=10, description='scalar size', layout=widgets.Layout(width='300px'), continuous_update=False, ) self.scalar_size.owner = self.array_name self.is_visible = widgets.Checkbox( value=True, description='visible', disabled=False, layout=widgets.Layout(width='170px', display='flex') ) self.is_visible.owner = self.array_name self.masking_factor = widgets.IntText( value=1, description='masking', disabled=False, layout=widgets.Layout(width='160px', display='flex'), ) self.masking_factor.owner = self.array_name self.opacity = widgets.FloatSlider( min=0, max=1, step=0.01, value=1, description='opacity', layout=widgets.Layout(width='300px'), continuous_update=False, ) self.opacity.owner = self.array_name self.components = 
widgets.IntText( value=1, description='component', disabled=True, layout=widgets.Layout(width='160px', display='flex'), ) self.components.owner = self.array_name def _tab_config(self): VBox1 = widgets.VBox( [ self.scalar, self.components, self.scalar_size, self.scalar_cmap, ] ) VBox2 = widgets.VBox( [ self.vector, self.vector_scale, self.vector_width, self.opacity, ] ) VBox3 = widgets.VBox( [ self.legend, widgets.HBox( [ self.legend_upper_lim, self.legend_lower_lim, ] ), self.masking_factor, self.is_visible, ] ) hbox = widgets.HBox([VBox1, VBox2, VBox3]) return hbox class Viewer2DWidgets(object): def __init__(self, file_name, file_count): self.temp_data = load(file_name) self.time = str(self.temp_data['solver_data']['t']) self.temp_data = self.temp_data['arrays'] self.frame = widgets.IntSlider( min=0, max=file_count, step=1, value=0, description='frame', layout=widgets.Layout(width='500px'), continuous_update=False, ) self.play_button = widgets.Play( min=0, max=file_count, step=1, disabled=False, ) self.link = widgets.jslink( (self.frame, 'value'), (self.play_button, 'value'), ) self.delay_box = widgets.FloatText( value=0.2, description='Delay', disabled=False, layout=widgets.Layout(width='160px', display='flex'), ) self.save_figure = widgets.Text( value='', placeholder='example.pdf', description='Save figure', disabled=False, layout=widgets.Layout(width='240px', display='flex'), ) self.save_all_plots = widgets.ToggleButton( value=False, description='Save all plots!', disabled=False, tooltip='Saves the corresponding plots for all the' + ' frames in the presently set styling.', icon='', layout=widgets.Layout(display='flex'), ) self.solver_time = widgets.HTML( value=self.time, description='Solver time:' ) self.show_solver_time = widgets.Checkbox( value=False, description="Show solver time", disabled=False, layout=widgets.Layout(display='flex'), ) self.print_config = widgets.Button( description='print present config.', tooltip='Prints the configuration dictionary ' + 
'for the current viewer state', disabled=False, layout=widgets.Layout(display='flex'), ) self.particles = {} for array_name in self.temp_data.keys(): self.particles[array_name] = ParticleArrayWidgets2D( self.temp_data[array_name], ) def _create_tabs(self): children = [] for array_name in self.particles.keys(): children.append(self.particles[array_name]._tab_config()) tab = widgets.Tab(children=children) for i in range(len(children)): tab.set_title(i, list(self.particles.keys())[i]) return widgets.VBox( [ tab, widgets.HBox( [ self.play_button, self.frame ] ), widgets.HBox( [ self.delay_box, self.save_figure, self.save_all_plots, ] ), widgets.HBox( [ self.print_config, self.show_solver_time, self.solver_time, ] ) ] ) class Viewer2D(Viewer): ''' Example ------- >>> from pysph.tools.ipy_viewer import Viewer2D >>> sample = Viewer2D( '/home/uname/pysph_files/dam_Break_2d_output' ) >>> sample.interactive_plot() >>> sample.show_log() >>> sample.show_info() ''' def _configure_plot(self): ''' Set attributes for plotting. ''' self.figure = plt.figure() self._scatter_ax = self.figure.add_axes([0, 0, 1, 1]) self._vector_ax = self.figure.add_axes( self._scatter_ax.get_position(), frameon=False ) self._vector_ax.get_xaxis().set_visible(False) self._vector_ax.get_yaxis().set_visible(False) self._scatters = {} self._cbar_ax = {} self._cbars = {} self._vectors = {} self._solver_time_ax = {} self.figure.show() def interactive_plot(self, config={}): ''' Set plotting attributes, create widgets and display them along with the interactive plot. ''' self.config = config # The configuration dictionary. 
self.viewer_type = 'Viewer2D' self._configure_plot() self._create_widgets() display(self._widgets._create_tabs()) temp_data = self.get_frame(0) self.time = str(temp_data['solver_data']['t']) temp_data = temp_data['arrays'] for sct in self._scatters.values(): if sct in self._scatter_ax.collections: self._scatter_ax.collections.remove(sct) self._scatters = {} for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] if pa_widgets.scalar.value != 'None': n = pa_widgets.masking_factor.value alpha = pa_widgets.opacity.value stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) sct = self._scatters[array_name] = self._scatter_ax.scatter( temp_data[array_name].x[component::stride][::n], temp_data[array_name].y[component::stride][::n], s=pa_widgets.scalar_size.value, ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) colormap = getattr( plt.cm, pa_widgets.scalar_cmap.value ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) cm[0:, 3] *= alpha sct.set_facecolors(cm) if pa_widgets.is_visible.value is False: sct.set_offsets(None) self._scatter_ax.axis('equal') self.solver_time_textbox = None # So that _show_solver_time_handler does not glitch at initialization.
self._frame_handler(None) self._show_solver_time_handler(None) self._legend_handler(None) self._plot_vectors() def _plot_vectors(self): temp_data = self.get_frame(self._widgets.frame.value) temp_data = temp_data['arrays'] self.figure.delaxes(self._vector_ax) self._vector_ax = self.figure.add_axes( self._scatter_ax.get_position(), frameon=False ) self._vector_ax.get_xaxis().set_visible(False) self._vector_ax.get_yaxis().set_visible(False) self._vectors = {} for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] if (pa_widgets.vector.value != '' and pa_widgets.is_visible.value is True): n = pa_widgets.masking_factor.value stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) temp_data_arr = temp_data[array_name] x = temp_data_arr.x[component::stride][::n] y = temp_data_arr.y[component::stride][::n] try: v1 = getattr( temp_data_arr, pa_widgets.vector.value.split(",")[0] )[component::stride][::n] v2 = getattr( temp_data_arr, pa_widgets.vector.value.split(",")[1] )[component::stride][::n] except AttributeError: continue vmag = (v1**2 + v2**2)**0.5 self._vectors[array_name] = self._vector_ax.quiver( x, y, v1, v2, vmag, scale=pa_widgets.vector_scale.value, width=(pa_widgets.vector_width.value)/10000, units='xy' ) self._vector_ax.set_xlim(self._scatter_ax.get_xlim()) self._vector_ax.set_ylim(self._scatter_ax.get_ylim()) def _frame_handler(self, change): temp_data = self.get_frame(self._widgets.frame.value) self.time = str(temp_data['solver_data']['t']) self._widgets.solver_time.value = self.time temp_data = temp_data['arrays'] for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] if (pa_widgets.scalar.value != 'None' and pa_widgets.is_visible.value is True): n = pa_widgets.masking_factor.value alpha = pa_widgets.opacity.value stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) sct = self._scatters[array_name] 
sct.set_offsets( np.vstack( ( temp_data[array_name].x[component::stride][::n], temp_data[array_name].y[component::stride][::n] ) ).T ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) colormap = getattr( plt.cm, pa_widgets.scalar_cmap.value ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) cm[0:, 3] *= alpha sct.set_facecolors(cm) self._legend_handler(None) self._vector_handler(None) self._show_solver_time_handler(None) self._adjust_axes() self.figure.show() def _scalar_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] if pa_widgets.is_visible.value is True: n = pa_widgets.masking_factor.value alpha = pa_widgets.opacity.value temp_data = self.get_frame( self._widgets.frame.value )['arrays'] stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) sct = self._scatters[array_name] c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) colormap = getattr( plt.cm, pa_widgets.scalar_cmap.value ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) cm[0:, 3] *= alpha new = change['new'] old = change['old'] if (new == 'None' and old == 'None'): pass elif (new == 'None' and old != 'None'): sct.set_offsets(None) elif (new != 'None' and old == 'None'): sct.set_offsets( np.vstack( ( temp_data[array_name].x[component::stride][::n], temp_data[array_name].y[component::stride][::n] ) ).T ) sct.set_facecolors(cm) elif (new != 'None' and old != 'None'): sct.set_facecolors(cm) self._plot_vectors() self._legend_handler(None) self.figure.show() def _vector_handler(self, change): ''' Bug : Arrows go out of the figure ''' self._plot_vectors() if change is not None: pa_widgets = self._widgets.particles[change['owner'].owner] if pa_widgets.is_visible.value is True: self.figure.show() def _vector_scale_handler(self, change): self._plot_vectors() pa_widgets = 
self._widgets.particles[change['owner'].owner] if pa_widgets.is_visible.value is True: self.figure.show() def _adjust_axes(self): if hasattr(self, '_vector_ax'): self._vector_ax.set_xlim(self._scatter_ax.get_xlim()) self._vector_ax.set_ylim(self._scatter_ax.get_ylim()) else: pass def _scalar_size_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] if pa_widgets.is_visible.value is True: self._scatters[array_name].set_sizes([change['new']]) self.figure.show() def _vector_width_handler(self, change): self._plot_vectors() pa_widgets = self._widgets.particles[change['owner'].owner] if pa_widgets.is_visible.value is True: self.figure.show() def _scalar_cmap_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] if pa_widgets.is_visible.value is True: n = pa_widgets.masking_factor.value alpha = pa_widgets.opacity.value temp_data = self.get_frame( self._widgets.frame.value )['arrays'] stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) colormap = getattr( plt.cm, pa_widgets.scalar_cmap.value ) sct = self._scatters[array_name] min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) cm[0:, 3] *= alpha sct.set_facecolors(cm) self._legend_handler(None) self.figure.show() def _legend_handler(self, change): temp_data = self.get_frame( self._widgets.frame.value )['arrays'] for _cbar_ax in self._cbar_ax.values(): self.figure.delaxes(_cbar_ax) self._cbar_ax = {} self._cbars = {} for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] if (pa_widgets.legend.value is True and pa_widgets.is_visible.value is True): if pa_widgets.scalar.value != 'None': stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) 
cmap = pa_widgets.scalar_cmap.value colormap = getattr(mpl.cm, cmap) self._scatter_ax.set_position( [0, 0, 0.84 - 0.15*len(self._cbars.keys()), 1] ) self._cbar_ax[array_name] = self.figure.add_axes( [ 0.85 - 0.15*len(self._cbars.keys()), 0.02, 0.02, 0.82 ] ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) if max_c == 0: ticks = np.linspace(0, 1, 26) norm = mpl.colors.Normalize(vmin=0, vmax=1) elif max_c == min_c: # this occurs at initialization for some properties # like pressure, and stays true throughout for # others like mass of the particles ticks = np.linspace(0, max_c, 26) norm = mpl.colors.Normalize(vmin=0, vmax=max_c) else: ticks = np.linspace(min_c, max_c, 26) norm = mpl.colors.Normalize(vmin=min_c, vmax=max_c) self._cbars[array_name] = mpl.colorbar.ColorbarBase( ax=self._cbar_ax[array_name], cmap=colormap, norm=norm, ticks=ticks, ) self._cbars[array_name].set_label( array_name + " : " + pa_widgets.scalar.value ) if len(self._cbars.keys()) == 0: self._scatter_ax.set_position( [0, 0, 1, 1] ) self._plot_vectors() if change is not None: self.figure.show() def _save_figure_handler(self, change): file_was_saved = False for extension in [ '.eps', '.pdf', '.pgf', '.png', '.ps', '.raw', '.rgba', '.svg', '.svgz' ]: if self._widgets.save_figure.value.endswith(extension): self.figure.savefig(self._widgets.save_figure.value) print( "Saved figure as {} in the present working directory" .format( self._widgets.save_figure.value ) ) file_was_saved = True break self._widgets.save_figure.value = "" if file_was_saved is False: print( "Please use a valid extension, that is, one of the following" + ": '.eps', '.pdf', '.pgf', '.png', '.ps', '.raw', '.rgba'," + " '.svg' or '.svgz'." 
) def _is_visible_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] temp_data = self.get_frame(self._widgets.frame.value)['arrays'] sct = self._scatters[array_name] if pa_widgets.scalar.value != 'None': if change['new'] is False: sct.set_offsets(None) elif change['new'] is True: n = pa_widgets.masking_factor.value alpha = pa_widgets.opacity.value stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) sct.set_offsets( np.vstack( ( temp_data[array_name].x[component::stride][::n], temp_data[array_name].y[component::stride][::n] ) ).T ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) colormap = getattr( plt.cm, pa_widgets.scalar_cmap.value ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) cm[0:, 3] *= alpha sct.set_facecolors(cm) self._legend_handler(None) self._plot_vectors() self.figure.show() def _show_solver_time_handler(self, change): if self._widgets.show_solver_time.value is True: if self.solver_time_textbox is not None: self.solver_time_textbox.remove() self.solver_time_textbox = self._scatter_ax.text( x=0.02, y=0.02, s='Solver time: ' + self.time, verticalalignment='bottom', horizontalalignment='left', transform=self._scatter_ax.transAxes, fontsize=12, bbox={'facecolor': 'white', 'alpha': 0.7, 'pad': 3}, ) elif self._widgets.show_solver_time.value is False: if self.solver_time_textbox is not None: self.solver_time_textbox.remove() self.solver_time_textbox = None if change is not None: self.figure.show() class ParticleArrayWidgets3D(object): def __init__(self, particlearray): self.array_name = particlearray.name self.scalar = widgets.Dropdown( options=['None'] + particlearray.output_property_arrays + ['vmag'], value='rho', description="scalar", disabled=False, layout=widgets.Layout(width='240px', display='flex') ) self.scalar.owner = self.array_name self.scalar_cmap = widgets.Dropdown( options=list(map(str,
plt.colormaps())), value='viridis', description="Colormap", disabled=False, layout=widgets.Layout(width='240px', display='flex') ) self.scalar_cmap.owner = self.array_name self.legend = widgets.Checkbox( value=False, description="legend", disabled=False, layout=widgets.Layout(width='200px', display='flex') ) self.legend.owner = self.array_name self.legend_lower_lim = widgets.Text( value='', description='lower limit', placeholder='min', disabled=False, continuous_update=False, layout=widgets.Layout(width='160px', display='flex'), ) self.legend_lower_lim.owner = self.array_name self.legend_upper_lim = widgets.Text( value='', description='upper limit', placeholder='max', disabled=False, continuous_update=False, layout=widgets.Layout(width='160px', display='flex') ) self.legend_upper_lim.owner = self.array_name self.velocity_vectors = widgets.Checkbox( value=False, description="Velocity Vectors", disabled=False, layout=widgets.Layout(width='300px', display='flex') ) self.velocity_vectors.owner = self.array_name self.vector_size = widgets.FloatSlider( min=1, max=10, step=0.01, value=5.5, description='vector size', layout=widgets.Layout(width='300px'), ) self.vector_size.owner = self.array_name self.scalar_size = widgets.FloatSlider( min=0, max=3, step=0.02, value=1, description='scalar size', layout=widgets.Layout(width='300px'), ) self.scalar_size.owner = self.array_name self.is_visible = widgets.Checkbox( value=True, description="visible", disabled=False, layout=widgets.Layout(width='200px', display='flex') ) self.is_visible.owner = self.array_name self.masking_factor = widgets.IntText( value=1, description='masking', disabled=False, layout=widgets.Layout(width='160px', display='flex') ) self.masking_factor.owner = self.array_name self.scatter_plot_marker = widgets.Dropdown( options=[ 'arrow', 'box', 'diamond', 'sphere', 'point_2d', 'square_2d', 'triangle_2d', 'circle_2d' ], value='circle_2d', description="Marker", disabled=False, layout=widgets.Layout(width='240px', 
display='flex') ) self.scatter_plot_marker.owner = self.array_name self.components = widgets.IntText( value=1, description='component', disabled=True, layout=widgets.Layout(width='160px', display='flex'), ) self.components.owner = self.array_name def _tab_config(self): VBox1 = widgets.VBox( [ self.scalar, self.components, self.scalar_size, self.scalar_cmap, ] ) VBox2 = widgets.VBox( [ self.velocity_vectors, self.vector_size, self.is_visible, ] ) VBox3 = widgets.VBox( [ self.legend, widgets.HBox( [ self.legend_upper_lim, self.legend_lower_lim, ] ), self.masking_factor, self.scatter_plot_marker, ] ) hbox = widgets.HBox([VBox1, VBox2, VBox3]) return hbox class Viewer3DWidgets(object): def __init__(self, file_name, file_count): self.temp_data = load(file_name) self.time = str(self.temp_data['solver_data']['t']) self.temp_data = self.temp_data['arrays'] self.frame = widgets.IntSlider( min=0, max=file_count, step=1, value=0, description='frame', layout=widgets.Layout(width='500px'), continuous_update=False ) self.play_button = widgets.Play( min=0, max=file_count, step=1, disabled=False, interval=1000, ) self.link = widgets.jslink( (self.frame, 'value'), (self.play_button, 'value') ) self.delay_box = widgets.FloatText( value=1.0, description='Delay', disabled=False, layout=widgets.Layout(width='160px', display='flex') ) self.save_figure = widgets.Text( value='', placeholder='example.png', description='Save figure', disabled=False, layout=widgets.Layout(width='240px', display='flex') ) self.save_all_plots = widgets.ToggleButton( value=False, description='Save all plots!', disabled=False, tooltip='Saves the corresponding plots for all the' + ' frames in the presently set styling.', icon='', layout=widgets.Layout(display='flex'), ) self.solver_time = widgets.HTML( value=self.time, description='Solver time:', ) self.print_config = widgets.Button( description='print present config.', tooltip='Prints the configuration dictionary ' + 'for the current viewer state', 
disabled=False, layout=widgets.Layout(display='flex'), ) self.particles = {} for array_name in self.temp_data.keys(): self.particles[array_name] = ParticleArrayWidgets3D( self.temp_data[array_name], ) def _create_tabs(self): children = [] for array_name in self.particles.keys(): children.append(self.particles[array_name]._tab_config()) tab = widgets.Tab(children=children) for i in range(len(children)): tab.set_title(i, list(self.particles.keys())[i]) return widgets.VBox( [ tab, widgets.HBox( [ self.play_button, self.frame ] ), widgets.HBox( [ self.delay_box, self.save_figure, self.save_all_plots, ] ), widgets.HBox( [ self.print_config, self.solver_time, ] ) ] ) class Viewer3D(Viewer): ''' Example ------- >>> from pysph.tools.ipy_viewer import Viewer3D >>> sample = Viewer3D( '/home/uname/pysph_files/dam_Break_3d_output' ) >>> sample.interactive_plot() >>> sample.show_log() >>> sample.show_info() ''' def interactive_plot(self, config={}): import ipyvolume.pylab as p3 self.config = config # The configuration dictionary. self.viewer_type = 'Viewer3D' self._create_widgets() self._scatters = {} self._vectors = {} self._cbars = {} self._cbar_ax = {} self._cbar_labels = {} self.pltfigure = plt.figure(figsize=(9, 1), dpi=100) self._initial_ax = self.pltfigure.add_axes([0, 0, 1, 1]) self._initial_ax.axis('off') # Creating a dummy axes element, that prevents the plot # from glitching and showing random noise when no legends are # being displayed. 
self.plot = p3.figure(width=800) temp_data = self.get_frame(0)['arrays'] for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] if pa_widgets.scalar.value != 'None': n = pa_widgets.masking_factor.value colormap = getattr(mpl.cm, pa_widgets.scalar_cmap.value) stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) self._scatters[array_name] = p3.scatter( temp_data[array_name].x[component::stride][::n], temp_data[array_name].y[component::stride][::n], temp_data[array_name].z[component::stride][::n], color=cm, size=pa_widgets.scalar_size.value, marker=pa_widgets.scatter_plot_marker.value, ) self._plot_vectors( pa_widgets, temp_data[array_name], array_name ) if pa_widgets.is_visible.value is False: self._scatters[array_name].visible = False if array_name in self._vectors.keys(): copy = self.plot.scatters.copy() copy.remove(self._vectors[array_name]) self.plot.scatters = copy del self._vectors[array_name] p3.squarelim() # Makes sure the figure doesn't appear distorted. 
self._frame_handler(None) self._legend_handler(None) self.pltfigure.show() display(self.plot) display(self._widgets._create_tabs()) def _plot_vectors(self, pa_widgets, data, array_name): if pa_widgets.velocity_vectors.value is True: n = pa_widgets.masking_factor.value colormap = getattr(mpl.cm, pa_widgets.scalar_cmap.value) stride, component = self._stride_and_component( data, pa_widgets ) c = self._get_c( pa_widgets, data, component, stride, need_vmag=True ) min_c, max_c, c_norm = self._cmap_helper( c, array_name, for_plot_vectors=True ) if array_name in self._vectors.keys(): vectors = self._vectors[array_name] vectors.x = data.x[component::stride][::n] vectors.y = data.y[component::stride][::n] vectors.z = data.z[component::stride][::n] vectors.vx = getattr(data, 'u')[component::stride][::n] vectors.vy = getattr(data, 'v')[component::stride][::n] vectors.vz = getattr(data, 'w')[component::stride][::n] vectors.size = pa_widgets.vector_size.value vectors.color = colormap(c_norm)[::n] else: import ipyvolume.pylab as p3 self._vectors[array_name] = p3.quiver( x=data.x[component::stride][::n], y=data.y[component::stride][::n], z=data.z[component::stride][::n], u=getattr(data, 'u')[component::stride][::n], v=getattr(data, 'v')[component::stride][::n], w=getattr(data, 'w')[component::stride][::n], size=pa_widgets.vector_size.value, color=colormap(c_norm)[::n] ) else: pass def _frame_handler(self, change): temp_data = self.get_frame(self._widgets.frame.value) self.time = str(temp_data['solver_data']['t']) self._widgets.solver_time.value = self.time temp_data = temp_data['arrays'] for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] if pa_widgets.is_visible.value is True: if pa_widgets.scalar.value != 'None': n = pa_widgets.masking_factor.value colormap = getattr(mpl.cm, pa_widgets.scalar_cmap.value) stride, comp = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, 
temp_data[array_name], comp, stride ) scatters = self._scatters[array_name] min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) scatters.x = temp_data[array_name].x[comp::stride][::n] scatters.y = temp_data[array_name].y[comp::stride][::n] scatters.z = temp_data[array_name].z[comp::stride][::n] scatters.color = cm self._plot_vectors( pa_widgets, temp_data[array_name], array_name ) self._legend_handler(None) def _scalar_handler(self, change): import ipyvolume.pylab as p3 array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] new = change['new'] old = change['old'] if pa_widgets.is_visible.value is True: if old == 'None' and new == 'None': pass elif old != 'None' and new == 'None': copy = self.plot.scatters.copy() copy.remove(self._scatters[array_name]) self.plot.scatters = copy del self._scatters[array_name] else: n = pa_widgets.masking_factor.value colormap = getattr(mpl.cm, pa_widgets.scalar_cmap.value) temp_data = self.get_frame(self._widgets.frame.value)['arrays'] stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) if old != 'None' and new != 'None': self._scatters[array_name].color = cm else: self._scatters[array_name] = p3.scatter( temp_data[array_name].x[component::stride][::n], temp_data[array_name].y[component::stride][::n], temp_data[array_name].z[component::stride][::n], color=cm, size=pa_widgets.scalar_size.value, marker=pa_widgets.scatter_plot_marker.value, ) self._legend_handler(None) def _velocity_vectors_handler(self, change): import ipyvolume.pylab as p3 temp_data = self.get_frame(self._widgets.frame.value)['arrays'] array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] if pa_widgets.is_visible.value is True: if change['new'] is False: copy = self.plot.scatters.copy() 
copy.remove(self._vectors[array_name]) self.plot.scatters = copy del self._vectors[array_name] else: self._plot_vectors( pa_widgets, temp_data[array_name], array_name ) def _scalar_size_handler(self, change): array_name = change['owner'].owner if (array_name in self._scatters.keys() and self._widgets.particles[array_name].is_visible.value is True): self._scatters[array_name].size = change['new'] def _vector_size_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] if array_name in self._vectors.keys(): if (pa_widgets.velocity_vectors.value is True and pa_widgets.is_visible.value is True): self._vectors[array_name].size = change['new'] def _scalar_cmap_handler(self, change): array_name = change['owner'].owner pa_widgets = self._widgets.particles[array_name] temp_data = self.get_frame(self._widgets.frame.value)['arrays'] if pa_widgets.is_visible.value is True: n = pa_widgets.masking_factor.value colormap = getattr(mpl.cm, change['new']) stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) cm = colormap(c_norm[::n]) self._scatters[array_name].color = cm self._legend_handler(None) def _legend_handler(self, change): if len(self._cbar_ax) == 4: print( 'Four legends are already being displayed. This is the ' + 'maximum number of legends that can be displayed at once. ' + 'Please deactivate one of them if you wish to display another.' 
) else: temp_data = self.get_frame(self._widgets.frame.value)['arrays'] for array_name in self._cbar_ax.keys(): self.pltfigure.delaxes(self._cbar_ax[array_name]) self._cbar_labels[array_name].remove() self._cbar_labels = {} self._cbar_ax = {} self._cbars = {} for array_name in self._widgets.particles.keys(): pa_widgets = self._widgets.particles[array_name] if (pa_widgets.scalar.value != 'None' and pa_widgets.legend.value is True): if pa_widgets.is_visible.value is True: cmap = getattr(mpl.cm, pa_widgets.scalar_cmap.value) stride, component = self._stride_and_component( temp_data[array_name], pa_widgets ) c = self._get_c( pa_widgets, temp_data[array_name], component, stride ) min_c, max_c, c_norm = self._cmap_helper( c, array_name ) if max_c == 0: ticks = np.linspace(0, 1, 11) norm = mpl.colors.Normalize(vmin=0, vmax=1) elif min_c == max_c: # This occurs at initialization for some properties # like pressure, and stays true throughout for # others like mass of the particles. ticks = np.linspace(0, max_c, 11) norm = mpl.colors.Normalize(vmin=0, vmax=max_c) else: ticks = np.linspace(min_c, max_c, 11) norm = mpl.colors.Normalize(vmin=min_c, vmax=max_c) self._cbar_ax[array_name] = self.pltfigure.add_axes( [ 0.05, 0.75 - 0.25*len(self._cbars.keys()), 0.75, 0.09 ] ) self._cbars[array_name] = mpl.colorbar.ColorbarBase( ax=self._cbar_ax[array_name], cmap=cmap, norm=norm, ticks=ticks, orientation='horizontal' ) self._cbar_ax[array_name].tick_params( direction='in', pad=0, bottom=False, top=True, labelbottom=False, labeltop=True, ) self._cbar_labels[array_name] = self._initial_ax.text( x=0.82, y=1 - 0.25*len(self._cbars.keys()), s=array_name + " : " + pa_widgets.scalar.value, ) self.pltfigure.show() def _save_figure_handler(self, change): import ipyvolume.pylab as p3 file_was_saved = False for extension in [ '.jpg', '.jpeg', '.png', '.svg' ]: if self._widgets.save_figure.value.endswith(extension): p3.savefig( self._widgets.save_figure.value, width=600, height=600, 
                    fig=self.plot
                )
                print(
                    "Saved figure as {} in the present working directory"
                    .format(self._widgets.save_figure.value)
                )
                file_was_saved = True
                break
        if file_was_saved is False:
            print(
                "Please use '.jpg', '.jpeg', '.png' or "
                "'.svg' as the file extension."
            )
        self._widgets.save_figure.value = ""

    def _is_visible_handler(self, change):
        import ipyvolume.pylab as p3
        array_name = change['owner'].owner
        pa_widgets = self._widgets.particles[array_name]
        if pa_widgets.scalar.value != 'None':
            if change['new'] is False:
                copy = self.plot.scatters.copy()
                copy.remove(self._scatters[array_name])
                self.plot.scatters = copy
                del self._scatters[array_name]
                if array_name in self._vectors.keys():
                    copy = self.plot.scatters.copy()
                    copy.remove(self._vectors[array_name])
                    self.plot.scatters = copy
                    del self._vectors[array_name]
            elif change['new'] is True:
                n = pa_widgets.masking_factor.value
                colormap = getattr(mpl.cm, pa_widgets.scalar_cmap.value)
                temp_data = self.get_frame(self._widgets.frame.value)['arrays']
                stride, component = self._stride_and_component(
                    temp_data[array_name], pa_widgets
                )
                c = self._get_c(
                    pa_widgets, temp_data[array_name], component, stride
                )
                min_c, max_c, c_norm = self._cmap_helper(c, array_name)
                cm = colormap(c_norm[::n])
                self._scatters[array_name] = p3.scatter(
                    temp_data[array_name].x[component::stride][::n],
                    temp_data[array_name].y[component::stride][::n],
                    temp_data[array_name].z[component::stride][::n],
                    color=cm,
                    size=pa_widgets.scalar_size.value,
                    marker=pa_widgets.scatter_plot_marker.value,
                )
                self._velocity_vectors_handler(change)
        self._legend_handler(None)

    def _scatter_plot_marker_handler(self, change):
        change['new'] = False
        self._is_visible_handler(change)
        change['new'] = True
        self._is_visible_handler(change)

pysph-master/pysph/tools/mayavi_viewer.py
"""A particle viewer using Mayavi.
This code uses the :py:class:`MultiprocessingClient` solver interface to communicate with a running solver and displays the particles using Mayavi. It can also display a list of supplied files or a directory. """ from functools import reduce import glob import sys import math import numpy import os import os.path import time if not os.environ.get('ETS_TOOLKIT'): # Set the default toolkit to qt4 unless the user has explicitly # set the default manually via the env var. from traits.etsconfig.api import ETSConfig ETSConfig.toolkit = 'qt4' from traits.api import (Any, Array, Dict, HasTraits, Instance, # noqa: E402 on_trait_change, List, Str, Int, Range, Float, Bool, Button, Directory, Event, Password, Property, cached_property) from traitsui.api import (View, Item, Group, Handler, HSplit, ListEditor, EnumEditor, HGroup, ShellEditor) # noqa: E402 from mayavi.core.api import PipelineBase # noqa: E402 from mayavi.core.ui.api import ( MayaviScene, SceneEditor, MlabSceneModel) # noqa: E402 from pyface.timer.api import Timer, do_later, do_after # noqa: E402 from tvtk.api import tvtk # noqa: E402 from tvtk.array_handler import array2vtk # noqa: E402 from pysph.base.particle_array import ParticleArray # noqa: E402 from pysph.solver.solver_interfaces import MultiprocessingClient # noqa: E402 from pysph.solver.utils import load, dump, output_formats # noqa: E402 from pysph.solver.utils import remove_irrelevant_files, _sort_key # noqa: E402 from pysph.tools.interpolator import ( get_bounding_box, get_nx_ny_nz, Interpolator) # noqa: E402 import logging # noqa: E402 logger = logging.getLogger() def set_arrays(dataset, particle_array): """ Code to add all the arrays to a dataset given a particle array.""" props = set(particle_array.properties.keys()) # Add the vector data. 
    vec = numpy.empty((len(particle_array.x), 3), dtype=float)
    vec[:, 0] = particle_array.u
    vec[:, 1] = particle_array.v
    vec[:, 2] = particle_array.w
    # Register the named tvtk array (not the raw numpy array).
    va = tvtk.to_tvtk(array2vtk(vec))
    va.name = 'velocity'
    dataset.data.point_data.add_array(va)

    # Now add the scalar data.
    scalars = props - set(('u', 'v', 'w'))
    for sc in scalars:
        arr = particle_array.get(sc)
        va = tvtk.to_tvtk(array2vtk(arr))
        va.name = sc
        dataset.data.point_data.add_array(va)
    dataset._update_data()


def get_files_in_dir(pth):
    '''Get the files in a given directory.
    '''
    _files = glob.glob(os.path.join(pth, '*.hdf5'))
    if len(_files) == 0:
        _files = glob.glob(os.path.join(pth, '*.npz'))
        _files = [x for x in _files if os.path.basename(x) != 'results.npz']
    return _files


def glob_files(fname):
    """Glob for all similar files given one of them.

    This assumes that the files are of the form *_[0-9]*.*.
    """
    fbase = fname[:fname.rfind('_') + 1]
    ext = fname[fname.rfind('.'):]
    return glob.glob("%s*%s" % (fbase, ext))


def sort_file_list(files):
    """Given a list of input files, sort them in serial order, in-place.
    """
    files[:] = remove_irrelevant_files(files)
    files.sort(key=_sort_key)
    return files


def is_running(timer):
    '''Backward compatible timer check.'''
    if hasattr(timer, 'active'):
        return timer.active
    else:
        return timer.IsRunning()


##############################################################################
# `InterpolatorView` class.
##############################################################################
class InterpolatorView(HasTraits):

    # The bounds on which to interpolate.
    bounds = Array(cols=3, dtype=float,
                   desc='spatial bounds for the interpolation '
                        '(xmin, xmax, ymin, ymax, zmin, zmax)')

    # The number of points to interpolate onto.
    num_points = Int(100000, enter_set=True, auto_set=False,
                     desc='number of points on which to interpolate')

    # The particle arrays to interpolate from.
    particle_arrays = List

    # The scalar to interpolate.
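`glob_files` above slices a single saved filename into a series prefix and an extension, assuming output names of the form `*_[0-9]*.*`, and then globs for siblings. The slicing on its own can be sketched as a small standalone helper (the function name here is hypothetical, not part of PySPH):

```python
def split_series_name(fname):
    """Mirror the slicing in glob_files: everything up to and including
    the last underscore is the series prefix, and the last dot starts
    the extension."""
    fbase = fname[:fname.rfind('_') + 1]
    ext = fname[fname.rfind('.'):]
    return fbase, ext
```

For example, `'dam_break_000123.npz'` splits into `('dam_break_', '.npz')`, so the resulting glob pattern is `dam_break_*.npz`, which matches every file in that output series.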
scalar = Str('rho', desc='name of the active scalar to view') # Sync'd trait with the scalar lut manager. show_legend = Bool(False, desc='if the scalar legend is to be displayed') # Enable/disable the interpolation visible = Bool(False, desc='if the interpolation is to be displayed') # A button to use the set bounds. set_bounds = Button('Set Bounds') # A button to recompute the bounds. recompute_bounds = Button('Recompute Bounds') # Private traits. ###################################################### # The interpolator we are a view for. interpolator = Instance(Interpolator) # The mlab plot for this particle array. plot = Instance(PipelineBase) scalar_list = List scene = Instance(MlabSceneModel) source = Instance(PipelineBase) _arrays_changed = Bool(False) # View definition ###################################################### view = View(Item(name='visible'), Item(name='scalar', editor=EnumEditor(name='scalar_list')), Item(name='num_points'), Item(name='bounds'), Item(name='set_bounds', show_label=False), Item(name='recompute_bounds', show_label=False), Item(name='show_legend'), ) # Private protocol ################################################### def _change_bounds(self): interp = self.interpolator if interp is not None: interp.set_domain(self.bounds, self.interpolator.shape) self._update_plot() def _setup_interpolator(self): if self.interpolator is None: interpolator = Interpolator( self.particle_arrays, num_points=self.num_points, method='shepard' ) self.bounds = interpolator.bounds self.interpolator = interpolator else: if self._arrays_changed: self.interpolator.update_particle_arrays(self.particle_arrays) self._arrays_changed = False # Trait handlers ##################################################### def _particle_arrays_changed(self, pas): if len(pas) > 0: all_props = reduce(set.union, [set(x.properties.keys()) for x in pas]) else: all_props = set() self.scalar_list = list(all_props) self._arrays_changed = True self._update_plot() def 
_num_points_changed(self, value): interp = self.interpolator if interp is not None: bounds = self.interpolator.bounds shape = get_nx_ny_nz(value, bounds) interp.set_domain(bounds, shape) self._update_plot() def _recompute_bounds_fired(self): bounds = get_bounding_box(self.particle_arrays) self.bounds = bounds self._change_bounds() def _set_bounds_fired(self): self._change_bounds() def _bounds_default(self): return [0, 1, 0, 1, 0, 1] @on_trait_change('scalar, visible') def _update_plot(self): if self.visible: mlab = self.scene.mlab self._setup_interpolator() interp = self.interpolator prop = interp.interpolate(self.scalar) if self.source is None: src = mlab.pipeline.scalar_field( interp.x, interp.y, interp.z, prop ) self.source = src else: self.source.mlab_source.reset( x=interp.x, y=interp.y, z=interp.z, scalars=prop ) src = self.source if self.plot is None: if interp.dim == 3: plot = mlab.pipeline.scalar_cut_plane( src, colormap='viridis' ) else: plot = mlab.pipeline.surface(src, colormap='viridis') self.plot = plot scm = plot.module_manager.scalar_lut_manager scm.trait_set(show_legend=self.show_legend, use_default_name=False, data_name=self.scalar) self.sync_trait('show_legend', scm, mutual=True) else: self.plot.visible = True scm = self.plot.module_manager.scalar_lut_manager scm.data_name = self.scalar else: if self.plot is not None: self.plot.visible = False ############################################################################## # `ParticleArrayHelper` class. ############################################################################## class ParticleArrayHelper(HasTraits): """ This class manages a particle array and sets up the necessary plotting related information for it. """ # The particle array we manage. particle_array = Instance(ParticleArray) # The name of the particle array. name = Str # Current time. time = Float(0.0) # The active scalar to view. scalar = Str('rho', desc='name of the active scalar to view') # Formula to use for scalar. 
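The `_num_points_changed` handler above recomputes the interpolation grid shape from a total point budget using PySPH's `get_nx_ny_nz`. The sketch below is only an illustrative approximation of that idea, not PySPH's implementation: distribute `num_points` over the bounding box so that the grid spacing is roughly equal along each non-degenerate axis.

```python
import numpy as np


def grid_shape(num_points, bounds):
    """Roughly distribute num_points over (xmin, xmax, ymin, ymax,
    zmin, zmax) with near-equal spacing per active axis.  Degenerate
    axes (zero extent) get a single point."""
    xmin, xmax, ymin, ymax, zmin, zmax = bounds
    sides = np.array([xmax - xmin, ymax - ymin, zmax - zmin], dtype=float)
    active = sides > 0
    dim = int(active.sum())
    if dim == 0:
        return (1, 1, 1)
    # Spacing h chosen so that prod(side_i / h) ~= num_points.
    h = (np.prod(sides[active]) / num_points) ** (1.0 / dim)
    shape = np.ones(3, dtype=int)
    shape[active] = np.maximum(1, np.round(sides[active] / h)).astype(int)
    return tuple(shape)
```

A unit cube with a budget of 1000 points yields a 10 x 10 x 10 grid, while a unit square (zero z-extent) with 100 points yields 10 x 10 x 1.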
formula = Str('', enter_set=True, auto_set=False, desc='formula to use for scalar (you may use np/numpy)') # Scalar range to use. range = Str('', enter_set=True, auto_set=False, desc='scalar range to display (enter a tuple)') # The mlab scalar plot for this particle array. plot = Instance(PipelineBase) # The mlab vectors plot for this particle array. plot_vectors = Instance(PipelineBase) # List of available scalars in the particle array. scalar_list = List(Str) scene = Instance(MlabSceneModel) # Sync'd trait with the scalar lut manager. show_legend = Bool(False, desc='if the scalar legend is to be displayed') # Show all scalars. list_all_scalars = Bool(False, desc='if all scalars should be listed') # Sync'd trait with the dataset to turn on/off visibility. visible = Bool(True, desc='if the particle array is to be displayed') # Sync'd trait for point size point_size = Float(6.0, enter_set=True, auto_set=False, desc='the point size of the particles') # Sync'd trait for point size opacity = Float(1.0, enter_set=True, auto_set=False, desc='the opacity of the particles') # Show the time of the simulation on screen. show_time = Bool(False, desc='if the current time is displayed') # Edit the colors and legends edit_colors = Button('Edit colors ...') # Edit the scalars. edit_scalars = Button('More options ...') # Show vectors. show_vectors = Bool(False, desc='if vectors should be displayed') vectors = Str('u, v, w', enter_set=True, auto_set=False, desc='the vectors to display') mask_on_ratio = Int(3, desc='mask one in specified points') scale_factor = Float(1.0, desc='scale factor for vectors', enter_set=True, auto_set=False) edit_vectors = Button('More options ...') stride = Int(1, desc='stride value for property') component = Int(0) # Private attribute to store the Text module. _text = Instance(PipelineBase) # Extra scalars to show. These will be added and saved to the data if # needed. 
extra_scalars = List(Str) # Set to True when the particle array is updated with a new property say. updated = Event # Private attribute to store old value of visibility in case of empty # arrays. _old_visible = Bool(True) # The namespace in which we evaluate any formulae. _eval_ns = Dict() ######################################## # View related code. view = View( Group( Group( Group( Item(name='visible'), Item(name='show_legend', label='Legend'), Item(name='scalar', enabled_when='len(formula) == 0', editor=EnumEditor(name='scalar_list')), Item(name='list_all_scalars', label='All scalars'), Item(name='show_time', label='Time'), Item(name='component', enabled_when='stride > 1'), Item(name='formula'), Item(name='range'), Item(name='point_size'), Item(name='opacity'), columns=2, ), Group( Item(name='edit_scalars', show_label=False), Item(name='edit_colors', show_label=False), columns=2, ), label='Scalars', ), Group( Item(name='show_vectors'), Item(name='vectors'), Item(name='mask_on_ratio'), Item(name='scale_factor'), Item(name='edit_vectors', show_label=False), label='Vectors', ), layout='tabbed' ) ) # Private protocol ############################################ def _add_vmag(self, pa): if 'vmag' not in pa.properties: if 'vmag2' in pa.output_property_arrays: vmag = numpy.sqrt(pa.get('vmag2', only_real_particles=False)) else: u, v, w = pa.get('u', 'v', 'w', only_real_particles=False) vmag = numpy.sqrt(u**2 + v**2 + w**2) pa.add_property(name='vmag', data=vmag) if len(pa.output_property_arrays) > 0: # We do not call add_output_arrays when the default is empty # as if it is empty, all arrays are saved anyway. However, # adding just vmag in this case will mean that when the # particle array is saved it will only save vmag! This is # not what we want, hence we add vmag *only* if the # output_property_arrays is non-zero length. 
pa.add_output_arrays(['vmag']) self.updated = True def _eval_formula(self): if len(self.formula) > 0: try: array = eval(self.formula, self._eval_ns) except Exception: return None else: return array else: return None def _get_scalar(self, pa, scalar): """Return the requested scalar from the given particle array. """ array = self._eval_formula() if array is not None: return array if scalar in self.extra_scalars: method_name = '_add_' + scalar method = getattr(self, method_name) method(pa) self.stride = stride = pa.stride.get(scalar, 1) component = max(0, min(self.component, stride - 1)) array = pa.get(scalar, only_real_particles=False) if stride > 1: return array[component::stride] else: return array # Traits handlers ############################################# def _edit_scalars_fired(self): self.plot.edit_traits() def _edit_colors_fired(self): self.plot.module_manager.scalar_lut_manager.edit_traits() def _edit_vectors_fired(self): self.plot_vectors.edit_traits() def _particle_array_changed(self, old, pa): self.name = pa.name self._eval_ns = {k: v.get_npy_array() for k, v in pa.properties.items()} self._eval_ns.update(dict(np=numpy, numpy=numpy)) self._list_all_scalars_changed(self.list_all_scalars) # Update the plot. 
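Two pieces of the scalar machinery above are easy to show in isolation: `_add_vmag` derives the velocity magnitude from `u, v, w` when no precomputed `vmag2` is saved, and `_get_scalar` extracts one component of a strided property via `array[component::stride]` after clamping the component index. A minimal standalone sketch (helper names here are hypothetical):

```python
import numpy as np


def velocity_magnitude(u, v, w):
    # |v| = sqrt(u^2 + v^2 + w^2) per particle, as _add_vmag computes
    # when no precomputed vmag2 array is available.
    u, v, w = (np.asarray(a, dtype=float) for a in (u, v, w))
    return np.sqrt(u * u + v * v + w * w)


def pick_component(array, component, stride):
    # Clamp the requested component into [0, stride - 1] as the viewer
    # does, then take every stride-th entry starting at that offset.
    component = max(0, min(component, stride - 1))
    array = np.asarray(array)
    return array[component::stride] if stride > 1 else array
```

For a per-particle 3-vector stored flat as `[x0, y0, z0, x1, y1, z1, ...]` (stride 3), `pick_component(a, 1, 3)` returns the y components.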
x, y, z = pa.get('x', 'y', 'z', only_real_particles=False) s = self._get_scalar(pa, self.scalar) p = self.plot mlab = self.scene.mlab empty = len(x) == 0 if old is None: old_empty = True else: old_x = old.get('x', only_real_particles=False) old_empty = len(old_x) == 0 if p is None and not empty: src = mlab.pipeline.scalar_scatter(x, y, z, s) p = mlab.pipeline.glyph( src, mode='point', scale_mode='none', colormap='viridis' ) p.actor.property.point_size = 6 scm = p.module_manager.scalar_lut_manager scm.trait_set(show_legend=self.show_legend, use_default_name=False, data_name=self.scalar) self.sync_trait('visible', p, mutual=True) self.sync_trait('show_legend', scm, mutual=True) self.sync_trait('point_size', p.actor.property, mutual=True) self.sync_trait('opacity', p.actor.property, mutual=True) # set_arrays(p.mlab_source.m_data, pa) self.plot = p elif not empty: if len(x) == len(p.mlab_source.x): p.mlab_source.set(x=x, y=y, z=z, scalars=s) if self.plot_vectors: self._vectors_changed(self.vectors) else: if self.plot_vectors: u, v, w = self._get_vectors_for_plot(self.vectors) p.mlab_source.reset( x=x, y=y, z=z, scalars=s, u=u, v=v, w=w ) else: p.mlab_source.reset(x=x, y=y, z=z, scalars=s) p.mlab_source.update() if empty and not old_empty: if p is not None: src = p.parent.parent self._old_visible = src.visible src.visible = False if old_empty and not empty: if p is not None: p.parent.parent.visible = self._old_visible self._show_vectors_changed(self.show_vectors) # Setup the time. 
self._show_time_changed(self.show_time) def _range_changed(self, value): scm = self.plot.module_manager.scalar_lut_manager try: rng = eval(value) len(rng) except Exception: rng = None if rng is not None and len(rng) == 2: scm.use_default_range = False scm.data_range = rng else: scm.use_default_range = True def _formula_changed(self, value): self._scalar_changed(self.scalar) def _scalar_changed(self, value): p = self.plot if p is not None: p.mlab_source.scalars = self._get_scalar( self.particle_array, value ) if len(self.formula) > 0: name = self.formula else: name = value p.module_manager.scalar_lut_manager.data_name = name def _component_changed(self, value): self._scalar_changed(self.scalar) def _list_all_scalars_changed(self, list_all_scalars): pa = self.particle_array if list_all_scalars: sc_list = list(pa.properties.keys()) self.scalar_list = sorted(set(sc_list + self.extra_scalars)) else: if len(pa.output_property_arrays) > 0: self.scalar_list = sorted( set(pa.output_property_arrays + self.extra_scalars) ) else: sc_list = list(pa.properties.keys()) self.scalar_list = sorted(set(sc_list + self.extra_scalars)) def _show_time_changed(self, value): txt = self._text mlab = self.scene.mlab if value: if txt is not None: txt.visible = True elif self.plot is not None: mlab.get_engine().current_object = self.plot txt = mlab.text(0.01, 0.01, 'Time = 0.0', width=0.35) self._text = txt self._time_changed(self.time) else: if txt is not None: txt.visible = False def _get_vectors_for_plot(self, vectors): comps = vectors.split(',') namespace = self._eval_ns if len(comps) == 3: try: vec = eval(vectors, namespace) except Exception: return None else: return vec def _set_vector_plot_data(self, vectors): vec = self._get_vectors_for_plot(vectors) if vec is not None: self.plot.mlab_source.vectors = numpy.column_stack(vec) def _vectors_changed(self, value): if self.plot_vectors is None: return self._set_vector_plot_data(value) def _show_vectors_changed(self, value): pv = 
self.plot_vectors if pv is not None: pv.visible = value elif self.plot is not None and value: self._set_vector_plot_data(self.vectors) pv = self.scene.mlab.pipeline.vectors( self.plot.mlab_source.m_data, mask_points=self.mask_on_ratio, scale_factor=self.scale_factor, colormap='viridis', reset_zoom=False ) self.plot_vectors = pv def _mask_on_ratio_changed(self, value): pv = self.plot_vectors if pv is not None: pv.glyph.mask_points.on_ratio = value def _scale_factor_changed(self, value): pv = self.plot_vectors if pv is not None: pv.glyph.glyph.scale_factor = value def _time_changed(self, value): txt = self._text if txt is not None: txt.text = 'Time = %.3e' % (value) def _extra_scalars_default(self): return ['vmag'] class PythonShellView(HasTraits): ns = Dict() view = View(Item('ns', editor=ShellEditor(), show_label=False)) class ViewerHandler(Handler): def closed(self, info, is_ok): """Call the viewer's on_close method when the UI is closed. """ info.object.on_close() ############################################################################## # `MayaviViewer` class. ############################################################################## class MayaviViewer(HasTraits): """ This class represents a Mayavi based viewer for the particles. They are queried from a running solver. """ particle_arrays = List(Instance(ParticleArrayHelper), []) pa_names = List(Str, []) interpolator = Instance(InterpolatorView) # The default scalar to load up when running the viewer. scalar = Str("rho") scene = Instance(MlabSceneModel, ()) ######################################## # Traits to pull data from a live solver. 
live_mode = Bool(False, desc='if data is obtained from a running solver ' 'or from saved files') shell = Button('Launch Python Shell') host = Str('localhost', enter_set=True, auto_set=False, desc='machine to connect to') port = Int(8800, enter_set=True, auto_set=False, desc='port to use to connect to solver') authkey = Password('pysph', enter_set=True, auto_set=False, desc='authorization key') host_changed = Bool(True) client = Instance(MultiprocessingClient) controller = Property(depends_on='live_mode, host_changed') ######################################## # Traits to view saved solver output. files = List(Str, []) directory = Directory() current_file = Str('', desc='the file being viewed currently') update_files = Button('Refresh') file_count = Range(low='_low', high='_n_files', value=0, desc='the file counter') play = Bool(False, desc='if all files are played automatically') play_delay = Float(0.2, enter_set=True, auto_set=False, desc='the delay between loading files') play_step = Int(1, enter_set=True, auto_set=False, desc='steps between files played') loop = Bool(False, desc='if the animation is looped') # This is len(files) - 1. _n_files = Int(0) _low = Int(0) ######################################## # Timer traits. timer = Instance(Timer) interval = Float( 5.0, enter_set=True, auto_set=False, desc='suggested frequency in seconds with which plot is updated' ) ######################################## # Solver info/control. current_time = Float(0.0, desc='the current time in the simulation') time_step = Float(0.0, desc='the time-step of the solver') iteration = Int(0, desc='the current iteration number') pause_solver = Bool(False, desc='if the solver should be paused') ######################################## # Movie. record = Bool(False, desc='if PNG files are to be saved for animation') frame_interval = Range(1, 100, 5, desc='the interval between screenshots') movie_directory = Str # internal counters. 
_count = Int(0) _frame_count = Int(0) _last_time = Float _solver_data = Any _file_name = Str _particle_array_updated = Bool _doing_update = Bool(False) _poll_interval = Float(5.0) ######################################## # The layout of the dialog created view = View(HSplit( Group( Group( Group( Group( Item(name='directory'), Item(name='current_file'), Item(name='file_count'), padding=0, ), HGroup( Item(name='play'), Item(name='play_step', label='Step'), Item(name='play_delay', label='Delay'), Item(name='loop'), Item(name='update_files', show_label=False), padding=0, ), padding=0, label='Saved Data', selected=True, enabled_when='not live_mode', ), Group( Group( Item(name='live_mode'), ), Group( Item(name='host'), Item(name='port'), Item(name='authkey'), enabled_when='live_mode', ), label='Connection', ), layout='tabbed', ), Group( Group( Item(name='current_time', style='readonly', format_str='%.4e'), Item(name='pause_solver', enabled_when='live_mode' ), Item(name='iteration', style='readonly'), Item(name='interval', enabled_when='live_mode' ), Item(name='time_step', style='readonly', format_str='%.4e'), columns=2, label='Solver', ), Group( Item(name='record'), Item(name='frame_interval'), Item(name='movie_directory'), label='Movie', ), layout='tabbed', ), Group( Item(name='particle_arrays', style='custom', show_label=False, editor=ListEditor(use_notebook=True, deletable=False, page_name='.name' ) ), Item(name='interpolator', style='custom', show_label=False), layout='tabbed' ), Item(name='shell', show_label=False), ), Group( Item('scene', editor=SceneEditor(scene_class=MayaviScene), height=400, width=600, show_label=False), ) ), resizable=True, title='PySPH Particle Viewer', height=640, width=1024, handler=ViewerHandler ) ###################################################################### # `MayaviViewer` interface. 
###################################################################### def on_close(self): self._handle_particle_array_updates() @on_trait_change('scene:activated') def start_timer(self): if not self.live_mode: # No need for the timer if we are rendering files. return # Just accessing the timer will start it. t = self.timer if not is_running(t): t.Start(int(self._poll_interval*1000)) @on_trait_change('scene:activated') def update_plot(self): # No need to do this if files are being used. if self._doing_update or not self.live_mode: return # do not update if solver is paused if self.pause_solver: return if self.client is None: self.host_changed = True controller = self.controller if controller is None: return try: start = time.time() self._doing_update = True self.current_time = t = controller.get_t() self.time_step = controller.get_dt() self.iteration = controller.get_count() arrays = [] for idx, name in enumerate(self.pa_names): pa = controller.get_named_particle_array(name) arrays.append(pa) pah = self.particle_arrays[idx] pah.trait_set(particle_array=pa, time=t) self.interpolator.particle_arrays = arrays total = time.time() - start if total*3 > self._poll_interval or total*5 < self._poll_interval: self._poll_interval = max(3*total, self.interval) self._interval_changed(self._poll_interval) if self.record: self._do_snap() finally: self._doing_update = False def run_script(self, path): """Execute a script in the namespace of the viewer. """ pas = self.particle_arrays if len(pas) == 0 or pas[0].plot is None: do_after(2000, self.run_script, path) return with open(path) as fp: data = fp.read() ns = self._get_shell_namespace() exec(compile(data, path, 'exec'), ns) ###################################################################### # Private interface. 
###################################################################### def _do_snap(self): """Generate the animation.""" p_arrays = self.particle_arrays if len(p_arrays) == 0: return if self.current_time == self._last_time: return if len(self.movie_directory) == 0: controller = self.controller output_dir = controller.get_output_directory() movie_dir = os.path.join(output_dir, 'movie') self.movie_directory = movie_dir else: movie_dir = self.movie_directory if not os.path.exists(movie_dir): os.mkdir(movie_dir) interval = self.frame_interval count = self._count if count % interval == 0: fname = 'frame%06d.png' % (self._frame_count) p_arrays[0].scene.save_png(os.path.join(movie_dir, fname)) self._frame_count += 1 self._last_time = self.current_time self._count += 1 @on_trait_change('host,port,authkey') def _mark_reconnect(self): if self.live_mode: self.host_changed = True @cached_property def _get_controller(self): ''' get the controller, also sets the iteration count ''' if not self.live_mode: return None reconnect = self.host_changed if not reconnect: try: c = self.client.controller except Exception as e: logger.info('Error: no connection or connection closed: ' 'reconnecting: %s' % e) reconnect = True self.client = None else: try: self.client.controller.get_count() except IOError: self.client = None reconnect = True if reconnect: self.host_changed = False try: if MultiprocessingClient.is_available((self.host, self.port)): self.client = MultiprocessingClient( address=(self.host, self.port), authkey=self.authkey ) else: logger.info( 'Could not connect: Multiprocessing Interface' ' not available on %s:%s' % (self.host, self.port) ) return None except Exception as e: logger.info('Could not connect: check if solver is ' 'running:%s' % e) return None c = self.client.controller self.iteration = c.get_count() if self.client is None: return None else: return self.client.controller def _client_changed(self, old, new): if not self.live_mode: return self._clear() if new is 
None: return else: self.pa_names = self.client.controller.get_particle_array_names() self.particle_arrays = [ self._make_particle_array_helper(self.scene, x) for x in self.pa_names ] self.interpolator = InterpolatorView(scene=self.scene) do_later(self.update_plot) output_dir = self.client.controller.get_output_directory() config_file = os.path.join(output_dir, 'mayavi_config.py') if os.path.exists(config_file): do_later(self.run_script, config_file) else: # Turn on the legend for the first particle array. if len(self.particle_arrays) > 0: self.particle_arrays[0].trait_set( show_legend=True, show_time=True ) def _timer_event(self): # catch all Exceptions else timer will stop try: self.update_plot() except Exception as e: logger.info('Exception: %s caught in timer_event' % e) def _interval_changed(self, value): t = self.timer if t is None: return if is_running(t): t.Stop() interval = max(value, self._poll_interval) t.Start(int(interval*1000)) def _timer_default(self): return Timer(int(self._poll_interval*1000), self._timer_event) def _pause_solver_changed(self, value): if self.live_mode: c = self.controller if c is None: return if value: c.pause_on_next() else: c.cont() def _record_changed(self, value): if value: self._do_snap() def _files_changed(self, value): if len(value) == 0: self._n_files = 0 return else: d = os.path.dirname(os.path.abspath(value[0])) self.movie_directory = os.path.join(d, 'movie') self.trait_set(directory=d, trait_change_notify=False) self._n_files = len(value) - 1 self._frame_count = 0 self._count = 0 self.frame_interval = 1 fc = self.file_count self.file_count = 0 if fc == 0: # Force an update when our original file count is 0. self._file_count_changed(fc) t = self.timer if not self.live_mode: if is_running(t): t.Stop() else: if not is_running(t): t.Stop() t.Start(self._poll_interval*1000) def _file_count_changed(self, value): # Save out any updates for the previous file if needed. 
self._handle_particle_array_updates() if not self.files: return # Load the new file. value = min(value, len(self.files) - 1) fname = self.files[value] if not os.path.exists(fname): print("File %s is missing, ignoring!" % fname) return self._file_name = fname self.current_file = os.path.basename(fname) # Code to read the file, create particle array and setup the helper. data = load(fname) solver_data = data["solver_data"] arrays = data["arrays"] self._solver_data = solver_data self.current_time = t = float(solver_data['t']) self.time_step = float(solver_data['dt']) self.iteration = int(solver_data['count']) names = list(arrays.keys()) pa_names = self.pa_names if len(pa_names) == 0: self.interpolator = InterpolatorView(scene=self.scene) self.pa_names = names pas = [] for name in names: pa = arrays[name] pah = self._make_particle_array_helper(self.scene, name) # Must set this after setting the scene. pah.trait_set(particle_array=pa, time=t) pas.append(pah) self.particle_arrays = pas else: for idx, name in enumerate(pa_names): pa = arrays[name] pah = self.particle_arrays[idx] pah.trait_set(particle_array=pa, time=t) self.interpolator.particle_arrays = list(arrays.values()) if self.record: self._do_snap() def _loop_changed(self, value): if value and self.play: self._play_changed(self.play) def _play_changed(self, value): t = self.timer if value: t.Stop() if hasattr(t, 'callback'): t.callback = self._play_event else: t.callable = self._play_event t.Start(1000*self.play_delay) else: t.Stop() if hasattr(t, 'callback'): t.callback = self._timer_event else: t.callable = self._timer_event def _clear(self): self.pa_names = [] self.scene.mayavi_scene.children[:] = [] def _play_event(self): nf = self._n_files pc = self.file_count pc += self.play_step if pc > nf: if self.loop: pc = 0 else: self.timer.Stop() pc = nf elif pc < 0: if self.loop: pc = nf else: self.timer.Stop() pc = 0 self.file_count = pc self._handle_particle_array_updates() def _play_delay_changed(self): if 
self.play: self._play_changed(self.play) def _scalar_changed(self, value): for pa in self.particle_arrays: pa.scalar = value def _update_files_fired(self): fc = self.file_count if len(self.files) == 0: files = get_files_in_dir(self.directory) else: files = glob_files(self.files[fc]) sort_file_list(files) self.files = files if len(files) > 0: fc = min(len(files) - 1, fc) self.file_count = fc if self.play: self._play_changed(self.play) def _shell_fired(self): ns = self._get_shell_namespace() obj = PythonShellView(ns=ns) obj.edit_traits() def _get_shell_namespace(self): pas = {} for i, x in enumerate(self.particle_arrays): pas[i] = x pas[x.name] = x return dict(viewer=self, particle_arrays=pas, interpolator=self.interpolator, scene=self.scene, mlab=self.scene.mlab) def _directory_changed(self, d): files = get_files_in_dir(d) if len(files) > 0: self._clear() sort_file_list(files) self.files = files self.file_count = min(self.file_count, len(files) - 1) else: pass config_file = os.path.join(d, 'mayavi_config.py') if os.path.exists(config_file): self.run_script(config_file) def _live_mode_changed(self, value): if value: self._file_name = '' self.client = None self._clear() self._mark_reconnect() self.start_timer() else: self.client = None self._clear() self.timer.Stop() def _particle_array_helper_updated(self, value): self._particle_array_updated = True def _handle_particle_array_updates(self): # Called when the particle array helper fires an updated event. 
        if self._particle_array_updated and self._file_name:
            sd = self._solver_data
            arrays = [x.particle_array for x in self.particle_arrays]
            detailed = self._requires_detailed_output(arrays)
            dump(self._file_name, arrays, sd,
                 detailed_output=detailed, only_real=False)
            self._particle_array_updated = False

    def _requires_detailed_output(self, arrays):
        detailed = False
        for pa in arrays:
            props = set(pa.properties.keys())
            output = set(pa.output_property_arrays)
            diff = props - output
            for prop in diff:
                array = pa.get(prop)
                if (array.max() - array.min()) > 0:
                    detailed = True
                    break
            if detailed:
                break
        return detailed

    def _make_particle_array_helper(self, scene, name):
        pah = ParticleArrayHelper(scene=scene, name=name, scalar=self.scalar)
        pah.on_trait_change(self._particle_array_helper_updated, 'updated')
        return pah


######################################################################
def usage():
    print("""Usage: pysph view [-v] [directory or fl.npz or sc.py]

If a directory or *.npz files are not supplied it will connect to a running
solver, if not it will display the given files.

The arguments are optional settings like host, port and authkey etc.

The following traits are available:

  scalar -- the default scalar to display on the view.
  host -- hostname/IP address to connect to.
  port -- Port to connect to
  authkey -- authorization key to use.
  interval -- time interval to refresh display
  pause_solver -- Set True/False, will pause running solver
  movie_directory -- directory to dump movie files (automatically set if
                     not supplied)
  record -- True/False: record movie, i.e. store screenshots of display.
  play -- True/False: Play all stored data files.
  loop -- True/False: Loop over data files.

If a Python script is supplied, the code is executed in the same namespace
as provided by the embedded Python shell, i.e. the following names are
available in the namespace of the script:

  viewer: The MayaviViewer instance.
  particle_arrays: A list of ParticleArrayHelper instances corresponding
                   to the data.
  interpolator: An InterpolatorView instance.
  scene: The active scene
  mlab: The scene's mlab attribute.  See the Mayavi documentation.

Options:
--------

  -h/--help  prints this message.

  -v  sets verbose mode which will print solver connection status
      failures on stdout.

Examples::
----------

  $ pysph view scalar=u play=True loop=True elliptical_drop_output/

  $ pysph view elliptical_drop_100.npz

  $ pysph view interval=10 host=localhost port=8900
""")


def error(msg):
    print(msg)
    sys.exit()


def main(args=None):
    if args is None:
        args = sys.argv[1:]

    if '-h' in args or '--help' in args:
        usage()
        sys.exit(0)

    if '-v' in args:
        logger.addHandler(logging.StreamHandler())
        logger.setLevel(logging.INFO)
        args.remove('-v')

    kw = {}
    files = []
    scripts = []
    directory = None
    for arg in args:
        if '=' not in arg:
            if arg.endswith('.py'):
                scripts.append(arg)
                continue
            elif arg.endswith(output_formats):
                try:
                    _sort_key(arg)
                except ValueError:
                    print("Error: file name is not supported")
                    print("filename format accepted is *_number.npz"
                          " or *_number.hdf5")
                    sys.exit(1)
                files.extend(glob.glob(arg))
                continue
            elif os.path.isdir(arg):
                directory = arg
                continue
            else:
                usage()
                sys.exit(1)

        key, arg = [x.strip() for x in arg.split('=')]
        try:
            # this will fail if arg is a string.
            val = eval(arg, math.__dict__)
        except Exception:
            val = arg
        kw[key] = val

    sort_file_list(files)
    live_mode = (len(files) == 0 and directory is None)

    # If we set the particle arrays before the scene is activated, the arrays
    # are not displayed on screen so we use do_later to set the files.
    m = MayaviViewer(live_mode=live_mode)
    if files:
        kw['files'] = files
    if directory:
        kw['directory'] = directory
    do_later(m.trait_set, **kw)
    for script in scripts:
        do_later(m.run_script, script)
    m.configure_traits()


if __name__ == '__main__':
    main()
pysph-master/pysph/tools/ndspmhd.py000066400000000000000000000055271356347341600200510ustar00rootroot00000000000000"""Utility functions to read Daniel Price's NDSPMHD solution files"""

import struct

from pysph.base.utils import get_particle_array_gasd as gpa
from .fortranfile import FortranFile


def ndspmhd2pysph(fname, dim=2, read_type=False):
    """Read output data file from NDSPMHD

    Parameters:

    fname : str
        NDSPMHD data filename

    dim : int
        Problem dimension

    read_type : bint
        Flag to read the `type` property for particles

    Returns the ParticleArray representation of the data that can be used
    in PySPH.

    """
    f = FortranFile(fname)

    # get the header length
    header_length = f._header_length
    endian = f.ENDIAN

    # get the length of the record to be read
    length = f._read_check()

    # now read the individual entries:

    # current time : double
    t = f._read_exactly(8)
    t = struct.unpack(endian+"1d", t)[0]

    # number of particles and number printed : int
    npart = f._read_exactly(4)
    nprint = f._read_exactly(4)
    npart = struct.unpack(endian+"1i", npart)[0]
    nprint = struct.unpack(endian+"1i", nprint)[0]

    # gamma and hfact : double
    gamma = f._read_exactly(8)
    hfact = f._read_exactly(8)
    gamma = struct.unpack(endian+"1d", gamma)[0]
    hfact = struct.unpack(endian+"1d", hfact)[0]

    # ndim, ndimV : int
    ndim = f._read_exactly(4)
    ndimV = f._read_exactly(4)

    # ncolumns, iformat, ibound : int
    nc = f._read_exactly(4)
    ifmt = f._read_exactly(4)
    ib1 = f._read_exactly(4)
    ib2 = f._read_exactly(4)
    nc = struct.unpack(endian+"1i", nc)[0]

    # xmin, xmax : double
    xmin1 = f._read_exactly(8)
    xmin2 = f._read_exactly(8)
    xmax1 = f._read_exactly(8)
    xmax2 = f._read_exactly(8)

    # n : int
    n = f._read_exactly(4)
    n = struct.unpack(endian+"1i", n)[0]

    # geometry type
    geom = f._read_exactly(n)

    # end reading this header
    f._read_check()

    # Now go on to the arrays. Remember, there are 16 entries
    # corresponding to the columns
    x = f.readReals(prec="d")
    y = f.readReals(prec="d")
    u = f.readReals(prec="d")
    v = f.readReals(prec="d")
    w = f.readReals(prec="d")
    h = f.readReals(prec="d")
    rho = f.readReals(prec="d")
    e = f.readReals(prec="d")
    m = f.readReals(prec="d")
    alpha1 = f.readReals(prec="d")
    alpha2 = f.readReals(prec="d")
    p = f.readReals(prec="d")

    drhobdtbrho = f.readReals("d")
    gradh = f.readReals("d")
    au = f.readReals("d")
    av = f.readReals("d")
    aw = f.readReals("d")

    # By default, NDSPMHD does not output the type array. You need to
    # add this to the output routine if you want it.
    if read_type:
        type = f.readInts(prec="i")

    # now create the particle array
    pa = gpa(name='fluid', x=x, y=y, m=m, h=h, rho=rho, e=e, p=p,
             u=u, v=v, w=w, au=au, av=av, aw=aw, div=drhobdtbrho)

    return pa
pysph-master/pysph/tools/pprocess.py000066400000000000000000000122211356347341600202370ustar00rootroot00000000000000"""General post-processing utility for solution data"""

TVTK = True
try:
    from tvtk.api import tvtk, write_data
except (ImportError, SystemExit):
    TVTK = False

if TVTK:
    from tvtk.array_handler import array2vtk

from os import path
import numpy as np

import pysph.solver.utils as utils


def get_ke_history(files, array_name):
    t, ke = [], []
    for sd, array in utils.iter_output(files, array_name):
        t.append(sd['t'])
        m, u, v, w = array.get('m', 'u', 'v', 'w')
        _ke = 0.5 * np.sum( m * (u**2 + v**2 + w**2) )
        ke.append(_ke)
    return np.asarray(t), np.asarray(ke)


class Results(object):
    def __init__(self, dirname=None, fname=None, endswith=".npz"):
        self.dirname = dirname
        self.fname = fname
        self.endswith = endswith

        # the starting file number
        self.start = 0

        if ( (dirname is not None) and (fname is not None) ):
            self.load()

    def set_dirname(self, dirname):
        self.dirname = dirname

    def set_fname(self, fname):
        self.fname = fname

    def load(self):
        self.files = files = utils.get_files(
            self.dirname, self.fname,
            self.endswith)
        self.nfiles = len(files)

    def reload(self):
        self.start = self.nfiles
        self.load()

    def get_ke_history(self, array_name):
        self.t, self.ke = get_ke_history(self.files, array_name)

    def _write_vtk_snapshot(self, mesh, directory, _fname):
        fname = path.join(directory, _fname)
        write_data( mesh, fname )

    def write_vtk(self, array_name, props):
        if not TVTK:
            return

        # create a list of props
        if type(props) != list:
            props = [ props ]

        # create an output folder for the vtk files
        dirname = path.join(self.dirname, 'vtk')
        utils.mkdir(dirname)

        nfiles = self.nfiles
        for i in range(self.start, nfiles):
            f = self.files[i]
            data = utils.load(f)

            array = data['arrays'][array_name]
            num_particles = array.num_real_particles

            # save the points
            points = np.zeros( shape=(num_particles,3) )
            points[:, 0] = array.z
            points[:, 1] = array.y
            points[:, 2] = array.x

            mesh = tvtk.PolyData(points=points)

            # add the scalar props
            for prop in props:
                if prop == 'vmag':
                    u, v, w = array.get('u','v','w')
                    numpy_array = np.sqrt(u**2 + v**2 + w**2)
                else:
                    numpy_array = array.get(prop)

                vtkarray = array2vtk(numpy_array)
                vtkarray.SetName(prop)

                # add the array as point data
                mesh.point_data.add_array(vtkarray)

            # set the last prop as the active scalar
            mesh.point_data.set_active_scalars(props[-1])

            # spit it out
            fileno = data['solver_data']['count']
            _fname = self.fname + '_%s_%s'%(array_name, fileno)
            self._write_vtk_snapshot(mesh, dirname, _fname)


class PySPH2VTK(object):
    """Convert PySPH array data to Paraview legible VTK data"""
    def __init__(self, arrays, dirname='.', fileno=None):
        self.arrays = arrays
        self.dirname = dirname
        self.fileno = fileno

        array_dict = {}
        for array in arrays:
            array_dict[ array.name ] = array

        self.array_dict = array_dict

    def _write_vtk_snapshot(self, mesh, directory, _fname):
        fname = path.join(directory, _fname)
        write_data( mesh, fname )

    def write_vtk(self, array_name, props):
        # check if it is possible
        if not TVTK:
            raise RuntimeError('Cannot generate VTK output!')

        # check if the array is legal
        if array_name not in list(self.array_dict.keys()):
            raise RuntimeError('Array %s not defined'%array_name)

        # create a list of props
        if type(props) != list:
            props = [ props ]

        # create an output folder for the vtk files
        dirname = path.join(self.dirname, 'vtk')
        utils.mkdir(dirname)

        array = self.array_dict[array_name]
        num_particles = array.num_real_particles

        # save the points
        points = np.zeros( shape=(num_particles,3) )
        points[:, 0] = array.z
        points[:, 1] = array.y
        points[:, 2] = array.x

        mesh = tvtk.PolyData(points=points)

        # add the scalar props
        for prop in props:
            if prop == 'vmag':
                u, v, w = array.get('u','v','w')
                numpy_array = np.sqrt(u**2 + v**2 + w**2)
            else:
                numpy_array = array.get(prop)

            vtkarray = array2vtk(numpy_array)
            vtkarray.SetName(prop)

            # add the array as point data
            mesh.point_data.add_array(vtkarray)

        # set the last prop as the active scalar
        mesh.point_data.set_active_scalars(props[-1])

        # spit it out
        if self.fileno is None:
            _fname = '%s'%(array_name)
        else:
            _fname = '%s_%03d'%(array_name, self.fileno)

        self._write_vtk_snapshot(mesh, dirname, _fname)
pysph-master/pysph/tools/pysph_to_vtk.py000066400000000000000000000245671356347341600211500ustar00rootroot00000000000000''' convert pysph .npz output to vtk file format '''

from __future__ import print_function
import os
import re

from enthought.tvtk.api import tvtk, write_data
from numpy import array, c_, ravel, load, zeros_like


def write_vtk(data, filename, scalars=None, vectors={'V':('u','v','w')},
              tensors={}, coords=('x','y','z'), dims=None, **kwargs):
    ''' write data in to vtk file

    Parameters
    ----------

    data : dict
        mapping of variable name to their numpy array
    filename : str
        the file to write to (can be any recognized vtk extension)
        if extension is missing .vts extension is appended
    scalars : list
        list of arrays to write as scalars (defaults to data.keys())
    vectors : dict
        mapping of vector name to vector component names to take from data
    tensors : dict
        mapping of tensor name to tensor component names to take from data
    coords : list
        the name of coordinate data arrays (default=('x','y','z'))
    dims : 3 tuple
        the size along the dimensions for (None means x.shape)
    **kwargs : extra arguments for the file writer
        example file_type=binary/ascii

    '''
    x = data[coords[0]]
    y = data.get(coords[1], zeros_like(x))
    z = data.get(coords[2], zeros_like(x))

    if dims is None:
        dims = array([1,1,1])
        dims[:x.ndim] = x.shape
    else:
        dims = array(dims)

    sg = tvtk.StructuredGrid(points=c_[x.flat,y.flat,z.flat],
                             dimensions=array(dims))
    pd = tvtk.PointData()

    if scalars is None:
        scalars = [i for i in data.keys() if i not in coords]
    for v in scalars:
        pd.scalars = ravel(data[v])
        pd.scalars.name = v
        sg.point_data.add_array(pd.scalars)

    for vec,vec_vars in vectors.items():
        u,v,w = [data[i] for i in vec_vars]
        pd.vectors = c_[ravel(u),ravel(v),ravel(w)]
        pd.vectors.name = vec
        sg.point_data.add_array(pd.vectors)

    for ten,ten_vars in tensors.items():
        vars = [data[i] for i in ten_vars]
        tensors = c_[[ravel(i) for i in vars]].T
        pd.tensors = tensors
        pd.tensors.name = ten
        sg.point_data.add_array(pd.tensors)

    write_data(sg, filename, **kwargs)


def detect_vectors_tensors(keys):
    ''' detect the vectors and tensors from given array names

    Vectors are identified as the arrays with common prefix followed by
    0,1 and 2 in their names

    Tensors are identified as the arrays with common prefix followed by
    two character codes representing ij indices
    (00,01,02,11,12,22) for a symmetric tensor
    (00,01,02,10,11,12,20,21,22) for a tensor

    Arrays not belonging to vectors or tensors are returned as scalars

    Returns scalars,vectors,tensors in a format suitable to be used as
    arguments for :py:func:`write_vtk`

    '''
    d = {}
    for k in keys:
        d[len(k)] = d.get(len(k), [])
        d[len(k)].append(k)

    scalars = []
    vectors = {}
    tensors = {}

    for n,l in d.items():
        if n<2:
            continue
        l.sort()
        idx = -1
        while idx
.+)_(?P\d+)_(?P.+)_(?P
nM U566~s_[eXN8/"g"Tz@͸>sǀ&e@T3 B xu'(Ni6kͧq%_%k1t@Uc@5p0Joz=;zuM' 2ABcli#t.lrwoϢY~T*5l7ej ~q8*FNr9i=4V(p' vR {3 [Fղuc[ppQ6ȃ=ofٞ>^a#<*zDXALvfM 2<" J{= CW{'G)URBƮ?_w G@G@#y@X.P^v۲_D8_V24ǩA29ihЄyr,, uTG֙ و;oN,xW]Y=R_{m:yGIyf,"m:dVwtW/nr|$d\$J?RIF-N,C53P~qǵ cF-tW;dž zwȝ34GcfTI$Di_EQf\3Xu5yr7tw8NRů{i=n`J!zdx4bFi=SrDR)i??^ ٥^`^H$_u+|m*W[wn/=ew (Bir]xlcȎW^Q;w~F,6|1rHC(ԓ/| #q-5ـ`V\z:-KEyŞpi2I-%K֬0mc\5 fAu(|gq+W-{ >jW-"~91ʹ{{uӂUs B2 /f௙25˖E"ի/=zcB8'\Zj3{0>fzu'ҥWJ/1^QnY>; +>`?rz٫<<ɢm`#-!23ɵdw60OYMY:5u]\\1:!D BCW^yci's{Eɬ 1M 1Sq"3k#yM;TJ+\`tBݲB]JO)2{,%"0u{g7p,| aLWJ}w` i6r""RRv 1$ d##2[Ӧk;CM՘4wMs׌^rOR.\.eU3pK%4r9]p75.3 j/L73pq  \~nYQo1yLb&plmo9,urMSI]7upцИD{\(uc5%4-fgTiR@cn׀*]M׿ϖ{_;xָ(*I8LT @NRݜKai{rȊZ%Ă130eqyW\C7M3ۡ S((Mwvuvuu-_|xp<dpE Aqc cO|@TЋE2}c:Kz؟W >#^>%:Jp43|&xrw9Ozp/{7x b<-{:սi\$,"X)6>b`ւ_Zo;;cRTHcםu$aiFM%: CӅ4e~w~. .Rx޻[(VQ-̑,^yN_ֽ;<`1{y-S5+'6qa*7} hBY:B47[кu2ߵ/[B]c λ鼓|2B@  7Xŋ8@R>T3#ܽxO]l>+@05ݎFEXxhn'!`R{VgekP=&gJ}E"I' |e3KBEk۴|qFFF֬y7R2މG,K/^ŮR+LPcߒK-wb{%`1 8iJNGW;:SF}E;6Cu[8om=+V xg8CMm%"x?< ̜ gc\\( Ncŗ~~Hc$} oQm'jٝ:]+оD){Yg Ja<"PD_CLFQxpgfJ1| 5zDh+i]_pY8 :$uSur*A1. {3ǜ-.ߵ/ KpZ]bN&y&,p_oW 65?-5T0*S0[11/ |w,;ǘME k:u C]F\H:_7|ŋ7}L,&u}(=Xu.fzuS !qwWAHir{F"aV[5GF$LPp\9\*0Qr~gcw?XvB-g65:WK-~[__600eɃV5*@2L9~$iBP*9LG=O4]_3n 4PZ?y?%g.~r~[/U,8`- J`Ժ[~+dY!y uq=b29&eH/w0`H)/I&75-/P|@s}BYD.1͞;\3iP+z{wڦi>."H#zȲfz9O`n>vcXwՎs?sg9Q@uZ=歆qG{۶]D 8܇$PD+yeĵU@J#-78YXDMt 8hs-f0L"On-mkKzけ nSR&^SQyPp@M0.fRM>_ B$lrD̽JGU|āO&xR"!Zˀ& @~ (CEpG被ٻw%ܼkWJ)ӶӮL)_A `0: >_żv}uCr`. qgȖUi[4䭤Yva/ЄeJ}T2p p xy;B!@Pr-h .ٌn9ՋaOP(J`"G U<79ݎ~nϱ ơѕtj` (}*A䄎=߀#@ *03{΄|@ P E-qG;sMW)Q%@UA` Jf ,wݺҵ<'OB*Pj ,S8";왧09[yy/L5݇ÕRҜg  31W<>3 5nlLmŗ M.2" ?.92@,y젱D+B=Ş}k?ҖVrء9A=Ѷ;;Ű;.vv{t[  ق Yo}E_G[T\-GG@ ʐdi~]s'Q\3DVL-4fUKx~ć ]0r*&\\ TH pobaPpQЀվH諲38egXhR̞Fs3/Y2zu>WREal78"#"k==hm42|Vߐ t\{U51>wnd7Jx ;T] T5PC[=:U>~N='ipi/nܸa-/Jrrx{WSHSL/w̙]cG v^aJ[ߖeN袟5{n9d_oo9ىY"S6r>7Ǿ*X]Lz͏lǧvj Du]Os"SB  joO?/; Ʊ tfϓັB&F">@+o N?厃Hc {Bg+gmjMrԑ+Ʒl Y&M? 
su'qkP%~~KѨiەO0TY_1,@"jҞNkÆ#6: 򱢭 g|z> Њqc?=>dy2uf褔r{7w^-q3v}oe{~0_:G, IDATCW^yN]݉˪Oƒɑ|a\v\5S+3vfkSQ\4OMpAIow. .7H8Jy#ge~EX1.RMѱd7 iS}wD#" οK4$];tZW D8S5ʕ<ؿ;$Xz"Sr6\^Fg?qTm)>Bñ5m0)HƙzR^?rJ[<bˉѳ\ꋒǪT=+NɢjM%8MH$ .Ԍwkg76 *xj=R& w_it4T󕿽X.go8kK/=ooXTדbEaz=q׭!LջN9}'f㲿_wJS6dfWwW5z)ǒ!˟9gPi;dk֮^+1_UDxyoGu] @e(Ú]j kҨuSa璸sjb:=5'\_iE;nUCOw(hWJt֯w [^[AQ}xs|! , l'SޞL%xpXJcM4)=]o]yVggia6 BSŸ0@} e~tvZ5mJB ()`s8 R^Zw^WGjTGx XSc:Hex d3(+\VdcqC0#0p ۿ jM7fw;vy|e=bfNRS^L[~󋳥Cw^度:Q%45OOkuF$Q3Wbpnф`Vib0ҭB'K@Z(tKq &smw4Ӱ0rѼ@GXT}s 2BgM+3. 8`DJY3M1U<'  ؽlURbwQQi@&qgy- Jao'N[a۶QxtYtudGp(c8P;XU]mIڲT{[C!m+61rHm k|`KMޢvA5Ts;L;Rp :{Ts0pm Y,"ȕ^NDyzஷO6?\ CI`%MqJ_)ƶm۶gϞ hD"]]]mmmg '4 g_L.N,rޮ?ya螷hJWXƿxo`ݴi{UC7z놇#%BDFHӴL 1ӧ7+Ƽ9lt󝀦T/N*1Aɜ} P[]\SrrkP& Biy -{IU`-:*;drUDB ø|RQFj /҄RV7i{gu?eʌH$[6n`LI `H1K 'BhƄ\rC1B(ntĴ@ t#nE}$͌c4ǽ[b//W[~ۖb2@^)hm}ZCt!v|ͥO޽ v1Y\x 8`s@/SR^aY@+c/r3ט捽?#:Ht>40`,w Q@?Qoe=>{SOf̓;ujeZ7feZn{!0P|]m QQYBYYTVTwR21J'lfw ps;ppl"O"[~ 8Xp?P2?crv^æzX_R˵<(pr[?[݀  d,Qqesq$P^wR C$ MIJ4ͧNA 'Ѕ:R}B)D O :PC hȹy88u@9KkskuMu݈;o6[;T3 a]CߦLlSfqrI S@8BNㄜDqIaR29:<{\UPEi,G+ӝiG9ҔRXEEQ5DEOpٯ|B\|?_ve'%*NY 9NJ7p >8m1/,>Hj} Qk~ۼjڐ7M6Ppڗ02qsuoq%B7صZ{/h쁺Z%P*ݕzzI/{cմԿ^gJŖF`rLa[tV9${ @2Sbr{o~Y KSdzV9FmbNre{bnض^QZjdooR9|sKknk. Jq1w*uȨy@= c|Ġbjypӏ巏cT۶q6ڵKHiaE(u~O֖54=lDPm`$1\;ҐVwn듏21eg/r9~K948թHflUkB%U^o@.zu|])VĈ}gH猧8Rceg`eKkS\ϟiޒ`u n:  6LJ`2uװGBzɄʝ1댛iz[dmBW_ gK ﷷ\rU{2RzS\|M;3o ~w0PUU}O>g-?iH&]`NXlH8kpe:eY\u\ɜQk͚ǟ[Vmhہ@8 `<%u=`0t=F-~V,"ɿȐ:[/r7 m`;7oV` aW'f:-jFҎ57VW7@:e?Vtz+0XT`mEE'> !@at65ݱg#>PpSƁ~`1U (P4,  :kځB49 q M/M?# T{p~훔)wݝ[l$23Q×7X}teN9~,? o. D0%H:ve(ЁH)_H0J?oٔ'^8SB; *dY: \4|iPyh'#2?nu(T@p:k, L 4c # 3b"T@9W >91 8 q3kVP.iD h4l)͖L6ihRȘF c#~`$)۶]$.u]cݻ{<z /ƌп/v]x=cƌs9s/yW?h21|_udWk-u98iS\w XWTW_S`OQt>͆r3}XьU ,+nVw~GK ۚ7BsWY"6›ke2RͭhIJ{9t2=?kZUkU%.ϸ!K̜c!.䫁DMCq:Mj6P{]=;Uw~Veٲ==;:nqDA(+m,NSX-Jx{ZVi-+HfH0RdOV.:+ Cvm|W6;EH{Rb/L=~vzzUaw$?A1Daq6tCي3tںP>3O9&gJ`^$eF:*ݪ`J淚ύGkGՆ/]bt؅FTYntrx* C0JvA%@ B(|hK 4AހYIӔ4tkV.n>en%푶uTD3%m;vцm15gGÒqϛ~Zm L$5qܴ&mW1u@a8EGHGC&-}w\%ʫ^Q-0R\Z:d?q6SDB.$xL/l gwc xtkzѩc+|ȍ[w05-N? 
~4;\ce,:s\K)MkPec=˴gG ҽX((tmܶM~Ib-U5!؟ŨAuV"(r1ͻ4dsZ^5tCrD̚2V2J0U 5mk@|~øqħ':|kZY{˖˼:zf$RWj%%SJJ; zy*% 8\nMH X {R7Ul|F]x'OE"_w=SqժpyuoRG!38es|R/+Y"#ߎuЎnuzfѢdR:H̊(] \5|yTl@r ؅:lC'era ͣvM+C7ADhc[mCTB GMw_ w==Er h%J},Y(Ѓ XPyU"PR9m,8yGhBR4҈].|a!g,8]9J CGx BQWp.SSkWh x?}/OTKX`uBXQ)TqT DgUG=*sC>bGyXZP@bP1E.iNseZR  X>FE狵EzyhBՒKXntYTCjTp?{BVlGII˞_Vx.s{j=$Us IDAT_Ot&rfD5)ם;d!gd fTȂ,L&ee2^z)[5gD"Q,D0|7;KU5UsswM/~6qa4tks((8g>sa-O msNwlRBLG|(93ZFpC,w[~\%|kF]\_>!=b.M2;PI]~vGM2Y*G?r2LQس'ܟcgDF5Wh{Q@TĖDZsKkW%tbR zae˴߱$X{iԘ93dkCCл( Kbi<՝wi֡u #xSU4y;vН8 @ש5n6zr?^+kO0)Rrg燽[]! Yy /r*k<ޙ"8qaݱS:9튩^~>ϲ,"Zӣc3e|>&?cncÐz hOT97 fYfTQ3+{ۂ@ '2:\ "b0\jv"۶--C .3Q&V~JsB]H"Ht^6&]@V {;oM -ee]$5X{DF|leNTK鱜|<.s2,AC5["SP(ڽ{vAΓNjzɫW\T (enu,&}&WW:` QyY_x\BB2_YE"]p]~ (~|1 |*}L_UV z%[2CCsG0pЛgVREKSY9*қLf7Z88Gy@D6f27Zl_nٲ7y-@J@ e6h6p $kj`JqPhj_$ɠjOC6uAV…O|iL[޳[" d?c`|@tcҬ} ,=D:[:z\u-34ԠTAtݻof  @b>+-i3^ `Tv.4P s}Ѿ%U\'UIDN|pemۉlB:j:( .uiii\Jv57]AwH?R9Iq18 Ad` LaWJ#53[Lq dZB|Q9`[*M >Q-Z<+K.wp#;˶O`By8z}7 `$hPX| ;Pd1 :=oXq}e^L*q|u!4Csq?)rIʹH "D` 0JSX"a膄(`GU:Zv#~G̟ = -{ٽܻ.rj~۫ ·7}u @a#r ˖-+XdoWG|.а;vUOk9hPhDqWб`f N}?d;^ 4 ż!C8/!VoYkg5+p 5C(/!PZfL9{^EQ%BHDO)B$&؎2[&^{&;]x{S瞌M]fAtu 4*<tc˛<zy-u9SO>!D1\5HweyPb@Xi2zײہ䯰"LHo#Lg(MU4^D*tTjcG^dx\[{Yg&a"^ݳ17uWHk<+gbժ'--z*m8GmuUOo0ͪ}~w^OlV4z]W .Ï~$b!w5CL8Tkv<[K~vݱL?T]LFVQa[8cɅ!_mYR]=eڴ9snpF[Zj:O>+.:eP ׍orq 3Qy+ fC1Dy)z5+;+g͐M \tݑ2_Q?xӲy/7$G)$N|״z$ٛjjN;$Rg##6d3j} ypT 0l;ʎqg{*Z:ֲx׭ [V6tm&]hD" \\lhVqT5ͻPH (%Nsؤ|>,1̍}5=zǣG/- 3ÚYiN7f3,%'5I!E+y+z2I&&X1Vh_ hHAHZz߯+//?Gdu^kb`8VP`0ݠ@)VÝvpfCA8ݙ :TL07H Ad0@P()d 2 AbM B J)ak #>1Be/K)qM Ba@5E *)DTHBKi" kx<p>s ++m_9.ml ֌S>2}ty4i̝;H?iМ T]$qVWK>Cم6x' j" RӺ^@~G9g9Ai<w SZ:1hD}2iPnA(u3/0Z[$(eQeC8\a|rƛe@`7ZCIDh(Q^&],jns maZ D 0˂!Шǘ6%k}B!xW&͓'SZڣI/w8nre%I(?Z1}1Ǎ3gξr J}\~Yo() 7oh)*UWϹ /~}ױ-]f|#J\(wܻhtf.rɡl!,4uм.vH)Q\/xDxTo~x5DqHF Ԅl=)Vm0WX><ơ sP}H'5ls=~AEh؊Kr/:E+8pgM-ms8+)!={Kyf%~/q@cqCV> ;shVCy(+Oa{S2{v.RB* .^2>z -fTN # xPu&;) |#QZ#kڻQMCl'WBwmgoݱe0}vdjcו4%j/-M[NxSO//ͷ6g^cH] ْB/d*@#' v"/:Srygyu!'B>'`RHSw9-xnٸK0LO,4r / N\^Z fNgO r駟~w|2?AN" 
^e\B'l5JtUzmw)4i(BH,~k|MDz}; p 8 x 7NNQ 9488H( wUCHr<%ۇF7Ya+IH' ?%@LQ.Q4P jUnˮNz|g<jim2Txd ޻E7F!I@a\ȶg-BTP@9è  RNT T1qEOwp[3# (6=\/u h͍N̋suڰ29C.~S$Ѷ~lܙP(|ʕԍ4 B VϢZȒΝj\s͹ӧG/R;?PL[,3JA_Oij($WAj(30{ˑ/P(d|逸YhĊ0t1g-@3kֿ1*TIe>h5LA0 {y^zBrPb֢G?={OS$q]3r6bq@#0PSOZpa`ģG0HZ[ T MA Fϥ yKPW=E+٠fDuG2{:TΛ3 ZضM;`ڬg-_tݻwuwz&Uw)> #ve2b ڎCm)=Β%uXRNe{H)w򩓽^Ȉ{0?pstP)ս{W6/$ě՞<5WVVI)57'ڵ%P 4*q(<^ *3;Q?et̀Gh&%ƌf̐#i f$ hARǺ\e ݴcO׿n$Ӭ1 i,+069/J&ǂ5VdcǕh, Ao3p4fwG>ߝtdf$==\^I7k8*~KYah^GHͻ>S 1`k23β{|yvZ5bףюs[&J!ݾ=PN]/j#PaPs#{:"u&\*eN9sQX,v̙TtL)r:,&ϝ3gGH1o?~\kpcg}VAFo2bvd߻7yhɒ,E|@Su˻W'̟k㏿i$*HJdMd p.gg/Qs%%F"}[!z0 da@ `pipp]M4xR [g϶lv! 咀`ax~ \ O޼|>3Ϝydz띜P:o#'uJ$|~ϪU;Ma8\X|~hBj ¼ fl}@iVL%RuЇ~ m(>kf5-[,\q:)œ+EHWw&8q5@Az%p7p0uu@Hv"!S@ڮ޶5z]Y fltf5z{ݱ."4oSjmi+k IDATDXlS^%PgWO& 1fÖDeGc/j/b!'v,) )6 b4xXt@׶˔u<  oĔ#f:k8W WCv[(2 : Wn3N8c0cdAe5ȢC T9 ġV 4de78^u-; ;R_7 :FZsBSgn[('O3UTD H t a^őYa 0RaZ4̂`E-`#PVԕ( PV,\`X dlXp8 ꀏÀJ @$ Ҋ,_ S3jw+gO~[xǒ;-__ _۷O? p6{ozN'sッsXӐB *D Pԩ*9^ǎmؤt>F,a9[q*=jּZ*"VٽᑶO35ۭ;~lc8׭[q챦P.6x>`+a?jOԴR%ӏHE4jd. ͕M|RةBfW>k/7:L; @`@  0=de+LRx@൲`(Ԍfr~T%.nT1gsN660f}@UU6Zy0&z6BRB%vi<,wktAou54ׄm[QD80n:ӪI)B"K[Z4Mǝ[7~;#]_|GFMӔ›>ayu`(?fҸ O#\ h b ß#78k|[ c5=~ɞAo]#.G~M[;~_8z:;1weҹ4ؕph/lvx 꺚RvLӬoӳkW \X#|o3^ M/Xw1-9Yb`zAmq,;lJZ %bЕrZy>Ͷ6x(|2: T^a@}˲OΌNO&k?;Nβ\M2[f{M%$@BP)"( ABB5"*EBR$" QC( d7lٝ;μyYbD;ߑf}ߧu_uMTy#H]b*(3"J0syVm;[!F$uv2cpWqڃf? 
OѬ8Љa`Ju+MMa/A`ųR@@R1ϫW޷-#kR+{HEڶsX8ƶaT Uf<'M 7Fz!&Hiw\ .XdӥK/K$~d1MF*E7^l _hhh!'E}lMr⊻5k( B}KOO'|Ygr-.c ݈3brvu֍7eWiZU:)vg?#7ݴgߐ*M Nرf00p~wE'[kqG1  <Y@ "Zμa̚}˄XIB\nk՟Lfڦ%pp0+%] j'  -R-+@-DRfe#MTy5]] 1.ϔZ6:ZiDm{ + ׉Y)ǔb R"˹BbۏBޏ@q`^D((_p"R@'86jZadT(2QF;pJ@;M̳rh.7F4|,p-yQ#_p|}rwK5 0dEc4pjV \IQsϽ1m{Q'%F;/cI}q]n?kMFXZBkR|"S< IR)S7 ey`#0ZUEi@ Q.v>;Ch>3f H xy_?8&htt=lg@-dQ 8#:NC@ Ѐr` RJC7tlɝK|Ͻ] HY0o0F xdZJ`U@lolcz=u¡ P~ϭɽa̜MȠ$XD<`8A@7J֒\6x!ެU8e x@9MD5g"3TO {fbb9`,mw0d}0V`(Kx )$^PZH).ƀ<SœajƊGH?E 0@Sh'h9~;y7/|)?fwvvvA/p'y7ojJ$J^šrЩCgTHBP^Q4fAP0$bjM -P4% Pg;Wx#p^03mNig=' oN f0 0PSnD)Ӓyb @9EMi۶1!f0M\`% H:Ya` E<.񼕦y9,*KE)EүPoz-=_yV[\|T*ͮ૬PU׃nAWN7uiwvn;.o#R@~>w#P?#AWl3pp8 npH}M=QE Pvx宕=緡SB0MӦOo7@m Tۼl3@8x8M1lspu \+^{ԖT"P* 2x`I'.P ? ' J)'`뻾tꗞ|6抝+3cڀ-Q$6`@ahdФE}].W9sU}@ O@a'`H@ͼ$.# {ԖHGEڢtYX5yD;", +!:*z+rscJ4A&[h0HP"beYffΐS^ Yd֛ˤI-SMA3ʂn(>HVo6j9((V~_ugggee]::&z'>>4H&g-:J%Hj!5Eoڴٺ:rJl'OB['|>л+pR+25M 0"iLX `Y9쳋,]}_7Hz^ U̔ U;{h)Ud|TF|`Dv 15bsol|&57v*uAqǃן A8cp8 XP@HamltZO[D{΅R,_N%*gZDa)GO+]vuN@)麶ǘ^k@308@8 X < 1s UyF?=i=D3b 9]BIe'vN X@U3JnV @Л yƏFɟ[?1 ?Qъ~_>pK̓F`!TV.>lG:@`@x~ڟ譙ʁX陛::\eVKMӪ*R)s9쩥E!   /X*N>ᵗ}{Fǖ>vw)x. P(cÀ JS{bqZ3.pkz*{' $VG f T6AПrʽklj¡\x/("@Ё< hGLH<a` 8Zd1#T]" TQ C&-LP9.gƜ9;xg.g%6Yߡ[[򓒇82i<kV7T>o켯i/H.؊UP@Ձ_%oy%_~' >Z@3;?N쟘 I7FQA s/&ED ^{hmj2>Xn_QǮR  _4M`4Q .뭇Da ,RH"Y55E~ԜЛz5\[FGFXU0)j )LwO=5wn''P[$Ǔ2z?sS3TCH#g7~ƲG4JI3DulUI{X4tykA)m`uf^?O7\ BDEo,P>J F'1={δ;/jVB{Ly@ף޶ߩTչꚋ IDATfM D=. 
2E v*q jW`lÆͮ6'r>`&YAKGMeo$hlU!pp#X+.lvK|d!Vz$]6†'<R%.@@@ !]ID%f q~Qe(`} ſ.]Ytsd!# 8xЁO@f+?ߓ7?w~/p N<ĵk}ÇO*qcҟKk*("eh4蝏}= "qX1ȗ7޳-619ƠK=(ߧXy`eD&&BDyR 0<l(nCׁ_a\( [6o>lK2>okh,]vPR*>O$3DLXUaa F3˟RSw>72ft-<_vMR|>ieM̳DžB!]vƌhP( q_&K,;"pw_嗯I}4s#u[YX~ΏVewYP&P ټT?Q׵?_ؚ{w"x˖[ -C0|g,cڴy]]Nzgg@^yŢ ) Mf1sPP;)`fP QH8}}NCs؃OCEPv9ՈtBR BpBD"eAl{&/͎ΎZPY_=|dAII@.532Z7 :UܴPK=A\^/'jF7[2Tkii5 u?Hm߹O^xV~N4N2o`e|\!Z[g2UO-KfwO{F*h :HN$翛σYß?'?r M+F3\ohoo6_xamѴ5k`$S13Z@s~k~z-`n\p@"=0dXu ؏ ~ӧшNNO4x[qnY'ë`gƲR&8s8 -ɒ.¾7Je=@w4?G9"\^Kbd5T>LQ/)ğ*ݻKvEWK֟|z L"طr`>YrY #DE"\fl]iF2r`R;BH6ۣpRA?KqcrݰA5QqrN طA; 0[~@ 4&C44qt:T~~l3! KLrJ8һ^(T !s;w ñb u4wd]][l 1tDt l,yLꁓ=ͯ{y4j҂u(gL'b$NR Dq}Ugw{6JCXx}>2`Lڈ(+/\adžt(o X'N 1L.YV/)Q!8 @p<(]BKD`Y1|%Lh!M>P4]-4xf䎹 ?)d* _(C.+J ]4!~2{_漙| 2+1ȭF#bvZ7۶-[9^pL樣ZdWY/d\|œ(؇_{Җ0R^}0M0Ӧ;Ћ/Vpi6ݰ֬$3^MG]Q쨮7~#LGEEknb8jk>{Eiζm)TYl3-No_Aӹ a f!*ܱX$/f<cٍh=ΎϼC:30`4~4T٘I$+EL#Dnj!9L&ˈl"VJDSJ&(6E\".CQG=M!bx(?"9vaGta{|P##K3P0P˾iJ&5˼aZ?쮆UeJI U^-,44Q5[//}v>Hr0 L)N=:pۘFP\ ɱ)ї.'{z^klքԾ'ϟď Jax0+1#R~s X9=)ax=W6 nRuWʌi(_NY_%;؏lЙJ$Rp8?`>/&r:/=;ґ^eIH7=NB(ڿRe;iс@Z5"bHp X7*+_hp/L.}8,8ڲ 9,xόd;㮭[2*tzꆧ4RD1'H9iE,RlxU$gNV 14 Z@<|ܰ(=i;D4"eԷ@Tb$`NF <։tcʼn̶۬{@:f >P+o1g4Z_i|uTVo3~6ݻи7CW4zaa(l<#0:pRYX_%+m [YK_WS{pS\°NuXtQ _;^1k Mv^o*7@8izLND9^n[D;Zl|?1w9,7fxR:k/zט4@0'UsTNVϘW܆2\d]ë|2!0J)J"< 18P\L*`<˪ڱcG6{?Pl^ZQЁ93LA\D\,ܒB'xm}ƼҴoW(umѓB]@vw]ہCoU41)$'jcX +S*MV[ż0:-M<\6`@-PS2x<8ujm+ lJy L`ںY;w.bc"@ 5%[3l6h`3S`) 00 Tg-R7E;TrN.{mommvmWr ",=0 {c4M@{`}4Nn_ͻE@3htGlٵg}vrΖiK-G&x=+1@YT4>:21HA e@58~O{lkMloF8l_ \Z>!@x0DBx4/ϸ~Gu% X]l)d'+H9fw=N8JPr {ZbnEvJ\V\ͣ+㕷pwE|lo6%REݨ7OsF'~[ QjLaTL.PAίʒ5 (F ̌CVBZ*xۊZHZ[ZzdHR,XzR&eO0bYǟ˔.p'1b@Ga PyeRS3;| ̜RH/ԢE;{>_8Y/K&aN/nDBXw75zHR^{{qGgOq#>x船l:0;;|W~w®5o K^: wt^\Q=RwۘH#grMB'-SSj?kF9Ԝ)7 RqUX́6fFGVo-hdIaJX*?J~2Kt6!~qsgK MeE|~Ŋj[.4͡Ee9VЫ鍍͍hh`Ͼ)/*ma?&@BR2QnK *<}zc 0iBXFx K3g2!,bj >BHt9 l Azܧ:c+Gblj|{4}Hb/C! 
%l2WC瘶ʹ^/@$V}# 0Dl-JQ-7_ʼnQnkk9$Vz,tsPHOO zfe$U!<٥T@XlNv6ϟpu<":OR S@8<EڑˍJɤY4K" "~5+./_arԼyҶ몾>b>xd<&l 0R#N5,("1>懍B#AAb mM槴Q,ʴPY`˘})n{o1vK ڿK7 ՀL*ЦޞVCk覟[=|g,P6k= guʨkW>|/3߲`6ZР7 Kkjӗ~{׮]K "^ttMU_)M{&' jBX`;ʫK IK#ӜCyPQΖ@"jtwXK C}g=|cyi(%s%b,Ph7os|]Ѱ }ϗJ)u.|ӣ M)<Y5ch Puu/ ED BAױo_2l޼ӟ%KQ:@DŽǹRrq2k(Nh4zsv옟9$B!J?6vi0X i~:IǢ5kDW(ŎrCXVy,ʀXDY/9)d63JP=@'Q9Bt`BTt;Kb질(YS02Iʏ\ dLĎÕ}F~|?nwqF5L[3߷NMSRJ"(ayJ] 4 tM yRVT/e3_H47EYuեx-6(JZ\(]$4gTЬ͜86uD&iTzu+| \phN9!`1(p0p=ϭ{ZۛꃓXxTe3؛x"d;EUS؁Htànk6L g|~鉶DןFK+aӏ<}C3RZ*9D6 @9`?*S~wΖl!Лpg8'zv5S8YNP@%0 X |(f?E40㢰"F#T'?9 8* W c xmvrKRH(8B)V "glY&@C@宧vFQVtxzX{Ql$&>/\ ` &v (ؘԏoͽoIUaz! Lc0X1, RЁ:k sD`ov{VBH0YVUdo/ϗWZ9)S$M4;wB@(D EEfsv{ ''{C}447+O3DSAֽ*f[~2JthenpO?S[ug~Mfg-Z-]ᰦi}#g]fOh+!n-'.^lO&=xoj98(|W;t:柪ɐHXQL-ВyZL-Й49[^ʏ CP(Z[+4L&_5]o`Nͯ{_ӧca/9yCㅁ߽@YaDJAz`Hfh~fT(Euh47yc-}My#k]"SLt}e h1%4AՆ:E }(( * y==D.TAa+<~5AcQY;?kʈ0+dvtI~˥72uq}HbEK= +̆>Yhs}wOY)ïH/@& b mj+|GF lV&;>Z~>ufFeT-`lBo 4S 1CB $t5 `JZBBm dʌi}80$e9seg=yA$"ę u+eN`kڇ*^QQXrM Rzm9tF--%y[zz+pH "uuE16^ZMv1aժ\7 l{; * T #nq).: :P lN!*oYiiOZ0x5sP7꘏nNʀ0|< }:MMġ۲RXQ,)Uf&L8d|yM S<`/E^t``GW نQM/0yO2\|%}}yèerV)`ϛobcpkY{M@÷li0pp?1&;y80`/`)ppOHH Oe/o$q"vɈ?})6`h^7[Hދ!WdRYnPArVnן7 ؀s:d˶-T-$"u>NAqdzE.j>yִY6f?uT E'pv 4[spx>;j/*D.Cxl15h q-~#Sq}gVSbǝ;} YVѶTRU qO>_}iSk֬|.%:a/z'3Q1X%3*Ƥ?[Z=>@I ݷĜJyC첛,kTye…G>\*m"l"Im8;yN 餓L6o^%Mir "[oG:6NNq }LK@ư{Ѹ_Y&%e!v̇45)S"/ܑs%qRNDW @ sPYapu]+ϥ]G0|bfԢ\YҠp+u\'=Dƒ*W\of%oZgo1泥\2-os~O]Zl^ JIPk]ܑPŒmNߨDl^- Ph&Ѷ 1U,ǂν.8?}MuIg&,3s8_B;*: lIdHFUBYSp`HA*=2%%z5k6UUUe.2-׶*yrk߭K.84U]HG|j3ū+\ܳ,|FNvK1}65lV]:R,b$l㎻ߘlΨl`!ЀP_ =*}<'SiY']WNپZ 8e52G\Mz1ם,>|(ӈgHcekjsc{I`cb<#] Pe[5?袭[WwUgM<;мXp_ o!R4RyɳNgS»N\V^|6JED.:g~k[Id/Sb$}Mȇ*'l23.VǀO& oWCE.}ݠ.L$lo owܵͱXl޼Y6TVVzX)tt.TV:ͧbح.EozG{KsUud\C9ho@05㭭7qetLӴm{7(?>c?Kߋ?|YV>Z b_xϞ]⋟]qm.M}?&Q\Yl`R 7u̦瘛2эDO0/wŘӈ3D*u+0 Ѐ7r@Ql8ljt/* < \D"@MaYߡiZccuOOmm"}֬~ ~ysH"rP裦7s0SDJx^CJ]R` 3s~'QHJT#P\DD¶ۯ6uG"  A4Lf{!00ͧQ%TSĤImR!.q@;"ț28c ;Zȴy+ ]3Lɮ#:iZMS zcGgC( >;N Pfhc#+ #m~OpHG7LO4&mWms&ySeOgfNڹ3G^C 2!)%ږK(Ȓug?CD, p4&Z3|-J`,P`'G]2\ndW jnp\~+wWƧͧڈ.Or{ L'ѹf7/,2ND 
gNs:]m&хw DDPЃ?i7vpK/z^{5C`QiEEN/p!q㍿{;Wx=<#䒳 Cd2yZiI -o6wa{J!'~o)(~,zQJi[Ѵ+JD0",ͧ>b! 3Othٳm}J9 !?:W1,/memkڴDbu; u1/OƤ"ǻ4;u8zw3KYG C)%¢Gn╟ }d0K06g~I&\m,c-sg?_恗7o^7{/E@n!f˼WJ;$ղĪWt$L} R幓sW傘_PU duӄΠ%.-hDBp30$pOG-)#:Ғv)nKβe0.n""/N:O`@)!R?Oj:HH`sTm$|1*P<Й9/rÈ|adOkv'}RN焘Yq 끀'spq9.rO$ٶ6Ғ}]/(BBzSvwygjjJm-YR9n&0M T0S$P zՍDp3aiJ- ͪwUDuv&'N=++E55ᣎ[ݧ})ٵ{?5{0Yd jHjé2#4locb|8VfbYQ\RDqq䓋.}4}˴<ϕ #n<Ɠp~|7^Mxm lP˖ݳ}"ظT\\QT&:>X߷侵Y+!u%5&퇗&Բd0{zQޜ~~OPիW/ϟ%p/YO򣏞#x'g}2_.n-I?VP cl@Bj/Jiެi&MR[vH7ʫj0⮻y+0HykEJ-J!"RƔ hڭ1UJh4e:suPjrusU(tU,vH@ p7 d+ cYp/m4J,sLmL$,Zxhؕ/뻁|!9NQ9s9PW qx3'0`!@90$|0 |714 Ú4f4=$ׅ0?-J8G' ۇ ptٱeP:Z;[K>*9S\64.L)W\0tMw=ZƊ-=t.Pn|ҍHA%Foz@0԰ʿǁ*3^1P4zUЅJ* l.\2k"0[JEN8wa#qNq[]p s=LUFd=ֿ?;νKf#L$Vq }W}xM JQ~EBvH/W}Fqґޞ:;4ʹ`օvMF RV]~{{ꃆ>цO)--&[ӴPjkig=AHfx.[|˲޳kDai)ض-bEG[ ϝXD>3{.4Yғҗ`!0@n&Jg68q$3WU/_y뭟:Ѩ~\[[V'yP8"J?)M۹nt턁HC=YALֈ:RO>>8~wJкu]='<[D𷙡E^z=[ AWx3ycҥqֻ=E`Df*MdWD\J&jBO>I9.`B DV!J5B/u{{E;;ŋ3'Pe#eޖ,| :ȘDQx tdRYߍ>^z3"Du賏N?3 ҧ 4CәY U,,~isze]iGشiޢ}V>^OqG1N 7|̇q740*ڵRj4k ? w.XpmEvno@pݺL&C`a뺖e7UVmvg2a$@ IDAT=읧ru"qƸp.A󟟗fKJJv7O>8N RHx~ʔSsffQRA51(NF_w==N / dmYy?e)%?`&UW.@?`=>`[O`Twu4MO$8׵=JlDbΟ4w x {XeWOyX/bxxIڲ1 wT3~O6$ qmZRL.v)0{<*}+|&8:)[zު*۴sn2diȷQD l@:ЇM@0P  &? )>H`(He[ 07eF2>pA9XdЄ2\ 0x# ~ۖo; >! E>/ ,y7:MXȺԫ)#jP"聶R;3X:lDs[I#+fyihiYw'p0m>BY-=,V [XI`Dnrh>4fΑs:zm32}JȧaA0ݞZv^o|)%o<ƅ)5ۦ];VaY9fgcW/<-M?>adsu=cԫz=Zúiޕ[[׮BHɓgzks,K&Aqx!v둲nFv e`S#0%Ůu};./"$sW@וmu穧g ׄ}deLw?#a4IB;b"l[c[٨q XiMde+c߮~禳g?8c{] ;:ǖ%dn}~phh)w{ ѹ]R"ø 0`n~8~^ĥ\8W:;{kQ\v-- 75GxE @Guu!˴ԩe;j*Bu稱~㍍-ۖRP7P|O!hk?⅝smJ5k~5tMϲ4"!]'5~s1Rʧ~YquV:KRJFC1`[1Z-]d@7s?8oYӚs/2BhUU;Z[|BRqMc㯫K:;_fo"t 5 6s8+HFf8fR Gt=k_q6nUnN&#^)c<4_-L2s Q440𳆆5"RjWM cd \xeD%ڇ ~f?^zCt=\XP^kuYs.[G۾1lVUuϥ\ |a8C׿P*ϼtBq/m{Ey=wkne⽕:9XlAo%hq \h` ](87|O'^Vh~WvQ@oǒ&Q"I)/!^t2?'w {m9Ȍl}O㾏d FQ\N!&]h0+@p!4=l|hTP nBM(qFS{V 9A}`pg{P :ia գ*-g! 
U* jt~G=O b0HeZͻ[7mͮrR@h1MP\ʘ '23)J5WCfw2M"BC^9%ZBD| š5*W ݴLA.CZ({HiϦqu3d~qiqjpsSc[fGR$Į0aU6Biyp shw*n|aqiT4k[}H"(ۥNpd]w݇Nejia( ]Cђ\_TTDDAe2w-hC K^K7%̮T'|_WBH_m۱n=<aU۬M:D #dFE E#[#bh 3V~G5 ZACRjA4ޛ[ӟ`5- _ՠGxu`Ѡ6l4 lU3L8ƲeeNw]mK_&7CG){ )' qS?.&d;eBo2җ,@ a`C9lnϬ*:T2S Ӫ"mOtboooo]c۶KH4?,njI)Q\9ᄢ'ٹ3ݛ˴sxܱ s8lU( y. s6QY*?Z]啭o^ij|]` YQQF__X卙VhcL e)N 3OĄRX .%uv_.t7{.Я: myJ?lTZg~8f.#T]>z ;#4MruBg.V7SnSg]lucc$(^TP [a֓Js5f%mXڴ'_Hgۊ娮L=:A(Jݕ {Da##Pj's08c5?iZ߽bŝX\.F/~Wr9 `p3 L1M'oHYhBHЀees|Lf^}TPWYo"\۸9 *&]˪q.b H4TT4hZE2yq(/[. 9,ϫZqHdfؼtj3d!t|DR>Q *p2Jtg8<+9j]50xvrmL{&sR6W.Ni! 1!?RH0⒆Csy=&!p8ƍ|yT~g1p g2@!0Hb;r)2+ٺJ 4 BD-sĈW4LY96n2/U~P4Oz^' &h:}tv5DtɿIBdb8/:(lF9w"M P#`%B&OBI?Ǿ؀UiYV׫ A_Y~_#ӣUbp3j-|!T`.⃊<276 f @P<ޏ1PP񖰾e9{h=#cxA6銔\ kzo_}]>cl'_=ʋk'ڦ=%Ν*+KȺ|bpb$7<꨾/=^oEy3/4GOlym.,)Se&C^{xR&qUws-suْ-ۀM1%B1HtL P HbLI 0`bp7.,443s?$ Cw}}*suh/Lt7i@0" .R۶=u=ϛw3ϼn?RfXDm/9D?XU5̟_o2 )' a2v7s4-]r*>TYMP(8MQ(eleL3=̙ygc~TlfY`.RJ| lsf MĀ@Xؿ2fS˅0\)D! 3˘  @Ƥtqp:C ~8 KY%et=^Ss;5qe|l"4f6i 2,QN @1eRƿ˾Wpl)@/{PQƔzq. +(>\oLך{IebghC4h:.\iO,:rؾ|r7I~1I4>|r -zI+c|!Ta y@ >ீ+eHyN31rk{bQu.!u799V錕3 }xx-AO43H1"Py3=VŌRO_"$>WqltrFzW3 06񡏭kt tBu?qV#W}:Jg[,H)aw=q78 G)h05{*uw}@DߪB_!䈋0u;g}g ?T*?|opMxlå[B%{ƧIBh4iGy䣏>Zz^k@;4a``ٶm c>< F>'Q"ラ=rpE!|bo9 4n4 ّHE`UW4Z(4M[8eRGpbܱN02+~'Qࡑ* ?lդ8z5a;"83 p3*[tzrGtv@`h,4H4a,}75Z4\g҄IFw{ qܕ ^(1Xߘrʏ/c+c}QK$|?[QAC\{j`AdH:\w- Zd [|œ }9Nq4¾rjkxg8"`IFiKٹ V ]J;kkDZx8;''׮]X1GyӗHAZ:Ǘ&:)Y]](v y?k\UUc>˔z;3>uQ_͆G6JQTjLm,ƊoyBL|2͐:41;GSNLHBn4jýW?e/r]_U6Y ?ܙ Ճʵ[K7_XWk"=d` Dpj  O\p^"r|w[C fҖrYCCbSW~G?PBę3b554z22؀Mkliq|rPԛ(sh1XĚ5]ͳ0gdlN=ZGfi/OO7 IDAT[gl4Vz{k?F4ɲ8YD?N*80Dr^+/I0U(G2oc~dlR!{N3;@}i^=v]wOƲ`l"OFCL&-R`>SCյ="3Llq6tfUYXamTeYk"zZ ΘbڐwF' )2Όjc`@Q#/| }[(/}!y ge )$L|zI;dfF%P0/3&V5#T26doDѬeafn&׻ xf,.^}')*R;8J49,AO43b ,Y\\lj&cLd۳mAovp3= x<^RR}O/|xw憢!Dd^,Rl׆fhz.=ɍ +VX#E:LBoֿU|5 3+M9v.Ɔ߼c9뺁/ohh(By륥{^S#\YsZZV\wݪ;|s43g7oR'{w4!Y/{9Rc,trly89\sĊB=> 8abUNM[>Mj.GMUfL#QkkY凾K5[-cBt 纫-;'7Tl+ij򼍟iʽٟ(CB DLL?mC= ࿜3C#3o X'{Y5k^BbQ u;D!(lܮۄ(u}G3!B5^@ D( K@ XٲfG򗿜s9=ٳ{%͎啖\裏;:ưØ yO= ϔ,Z5לg& 1Ȓ 
566XoNs-BR|_ym]Hmm]{{Qgصb]3Ř6zb,c@2 .`mn݅P&*D'%DRøFJN 7[KKWܓJ]aY߷m-. 8ؑB%0%+4E xȗaƃ5:錱|wޫ(D#q1U|X r4h4 6Zƺ_-;w!IGBAm9&$Qeee7]w7۷7 v(03YQu1d]Cf$R*'rc`a3+**nc5ni#nְ L)v*.bqԉ_׋-?<(:꨽m/?t \~^J.zɓNH==\so~<~CToP`DrzLFo]@KؚyÂDw>uG^?礔6B*lB!mKod=l߶Ye'j3<{Lz ,/s:x汵ݶ[lt 'W^ ޽E'y4x|RЈN:[?8L$f*r"7=f8嗫aۿm޸LFp8gHoMc\ס6>kt>6ݾm^4 5MK&)S<+e<b}*֬3p8#>$Ҡ>I)zHr$/dj".h`@(Mq{XX$/g 5cqƠiiB W]_4yą 5\]wlҴ_2v c\inO&̉Ч^k55WmY!33.b'јʯ* eyc~h}(NۆBG e2-ٸЕr1JN+b;iYmkP灵$ 0 (~y-ȵ]Z&CjL''ϘRV* #qUg[6Ƙxpaf!ٺ"p,`Cc#$U;^WFΈD`8'g={ʬdq;'1&NW;ݱb FeA8#@ǀ0 "ր$CS;AViZc]cצ.jL/Q_[_.K047KM5뚟>!;V)% O,޾ގ}OV2Qi^~ŧnOh=~ժf:1W3]&Qp0Hȼ$dEC~ NܔjUf&>"dZG ׼=$ T>WJx%p m?ógzcuD" Z.ÑG=`)Eم ss6Md\Ecq_ߟm6k *>`m>/.--='LX1y`hHb82o=x<D6Ѳe{H$ 5ădmBqW 뇔pʏGyi5wѾ+beMYd HlMo!.WjCօKq7ܱ%OUc}&C4g2n#YoCȳHV$rwI moWEޜym#/߹}UZ:?X3vEnN!X^V= M$uj no i9?2*4junqN(-'5tHюu?*~ݨ ȋFF̙sǺutd<]FlB]V[[Ed"=VWe<R=YuߊJ رPfB]FE46TOD{Gko'W\rnwٶ`\V$vgͺhTH;'CSd`&d; (Z@*ά =@ اN[gYH .#j2T4$ȈBF$:Hb|p ""rU `v#IJ^]pbĊvRdʋ6-2KC" %BT54A)r TR|x|WV%S)sOvX̷L_."ӰgM+1*3r, rIFϥ"!CXsmuuݼj˗؄ÁڟscݓEccM8yt*@>8m͝{֪U7[]G<=Bd> 9`OϥWp@RҶx9swex.%K|{,4҉˰vERCĔR 55|qu|G%DJ^x<׬YPΟ8SC!z{RVrn)$Ix-vuUDO?ްaQ+0utpinf J) 8pׁ'[t22870 hIWWW8κkkYoe-=pXXD:NL[xmO%Tu[~)/?9L:?r10ڱep"p M;Ra xe@'Ra:M[%@D8 (bLf] ,iXM6/~c;+*f+zT~iRڑH9|ow_ns׿p˖x>"Q/#ާ(\mTF̃A VFe&@HѲ|81*B -x^-^eҬ3 eںڰ5mf2#O;M+߱*,"hc;/7@|+'v;Jͨ9V f6 @G:IQ$Pm\wܳ^⼝E\DH/p?T* bŌN?U*J:'׼5XJS<8i3AuAw/?'M4iҤ~x|^^X7f&g0N9Ebꬳj?vEƱ(8TS,,#tcm^xLSSgU{ō75+UQ)!*sWyScY6qQ10S]Q4%o[*O(h7vc)+VS@2̵|4y+S1ƟzFY E]㏟x{G<؈0!Բ|_G c'< c2 dCxLgqhn1!>qŊ ^J-LLƊHr^oOp-f̷ IDATʀCP:P ܱJM ~a lƶ2%øGr2딺-HHAy08**]v/Wt@~ R <4Mkni,G(# a!2;wU>Q^vy n[/O36Z6NBq(. ,<5p 0Jm2zYt&꺦 x4p~8> 1YƲr?3͝{|c0 H OKYhBmJJ@6BKPd",aA?e~)dAwA ~=㶪YUe%e+;ӝH; b%\5tn(ơ b HTjXp$\}zuCeC?Dy|4 !6:Ul`A%ׁb@ 6xzPQefS&'slEHEK@%f_x˃H2e=c7>VJ]7紐L{c+"ͺY jOIup*1Y,?GKŧ_:M3Dypg ׁ"ER@xObxdO <lc;y?O+R+|0);0 ҉Ř*SS_x͂jKMvwyoSZFq<u͛7W_݋25708kE2R^)"2?ȸǦrmZe{ŌZZ^Y`Yyyw,_tq~X| X)c0ׁ?t]wr^fN=so@hH~ϻ8C5y͇6(kj16"i^`pߢi9\LX'x|̘VTjYJ1&Gna'׬i~Fֽ>50p? 
˜t%ÒV y^lT[ASn&F"C]LMz4hS̏oէvmaCrs J)dNΑi(@D`0~.N2Ύ7@[CCCs̙?!_]O>7>c,8NvOoPmIhQCC_RHsO,KLj!sH(`=1Ld͚ZWWaTH3@ GXQdBR*PP q6U]ҍVd颗TQeG&Zrfs #C3shVey~&tʷ4>nDsI-[vO4:x,q(K \$ˍ- zz^xU"`3ϬRc;v"zgc6~A o!WMmb5k +P!0I}ޛ7_)qBlH&_$x]im]aCxjc Π I{큳vZЧV$h44Tuj'&UHeIeex/px;e3+i>/)j{f \5=w?lY5MNuG2dZ&L$ìeb5l}}"j50hܯK2/%vO 50rA!8Xe#sn$jD-c,`ՎW)zcc'˂#Bhx٥C+M;Z 84<xsw[*M+Fט뭯ZqWi@D\fg43ƥ݅1Ӄ҉R"l?A5zoԌ;Gz1RW'xSC*|0m HP&2=zj0d{ovڙ HiܶK3S|e}pex+%]aF4\0ּs^{[:VMQz+EvҲ=li2ۦ|-.74'ؙ=0%m͚qnYY(CPB|&/^r%O??r555L&o#F"f6kAkP5~ 7XWjռ@DL7}&;|~f`.N"PrHJh'4-dGKYs6= McFzmU*~i*3\P8Y3NL&zz* :"Jd$2%q"H\3[[,uDe@(l$!$^u͜aY3gԩF˚Q(16}teR:γ`n/9.^-[lP\K.ղe?%25M,tDk ؾDD uƔR閕d ~"eEӦ2M\@4U L+7ow~c?\!;wT+{3=\xTJ˚9#R/w-[64=#nm`0/F 0Ռk#XČ|Y5f$[1kR(`!ISV:T/dF ߬7仾t|#r3x`Ti2JRezC 0/U^{Q(E5k6n(QR`PJqb$=+dx-i2t}unF"6=Ovǜn/j( 0v/ +04XA!c!K$O4b}K+gWZ_TBBa71@#+84rM,`4szybĉa3jv&_8GJOVȀЊ|!Q>mJ@r203FcE?|;? 0y߻ox E,\p"" :UW]u\qǽ{AdQEEr׮ B!A©c;ݩ?rc.hm{}4R,f1S<)e Vo-RjM:DT]ao8EmA4@Fݙ4D?Ou55lTowK>2Rw^]xS N1*A*!D7UWWi'ǰ1efGojHaIj:v=//RJ;[iXfhN}4Cȉ R^ ""MRzHV؎]݋&3fΜ8wg+-416 5[I8oh4!HdCXz%*L5zӬ92Թs|sObPQٓ$ɞGUժֽ"QAm]Z|w*.VKҺ" ;Bg&I2YqRg/+29sy繟oRb~XO' '@Cß!eʺ,_+i]XJ(FNG^~|X8r`S]p8V9y'+L-==|X:~5T>Ś?f7OfyE&K Z`;ش/|W#Cfg-[v틽(-AN6~.NTjG#+VlմW돁Rg UU3Ĉ & lذcsu1VPQo&[ B ׫= m?.]g_94Q3PRٷ/Px,chiBo,i̥d"(-Tه SY$~ǎyiZ`Z+> "=_&q"Y߰m^9U0H[[ӎ訨:Q©>~gMv,[+~0 錧byF]eVRRCÚn[߮0"/ ӟnW_}Zʾx<?ӏf}4F{ %-b@Jaa?/0} x qz P/29 vq~@ӂ5z mݺSQ̋D~ #Mu!bm.RsM!nYTJI&35;.{YFf^ow^^A8K$2.D[] NڳgO2 cc@7c& :? 
CY졃'kD1FdfTi3w"^xDU_4qbՎ.HUۅj zWF[w 91)kW4w&(RRNob,ƘeYv DmE/N07wuoY38O}cO P[۴S?ظ34/{_O{m;`ZEt>)[8 lnLKflaDh1`4HZZUc2<X_x{ĞqͩpL,i?j6DU;1:1 t{<36gH~aXlWW>RR6;>QTf粃/ 22q΋ mA0.K{޸g ;ڜιs'fwsz՜qC^$ 7pؖ<*Gi)f%-WE>KB)$zp<Y>nB%%`A8t SZisEXw;o}&sd92eSBN4][@-OӮ]I >7_y5c֗/ 8xc@)ka E6d2Bwh_dɆ 6lO(u(?sP_UVV{{S; er"==^E9*]& "pn[VCC )Dhh`%%;:;,UՑ6#3LYfn{{(d0MJ&'g܊Sܜ3!̦(]+tɝ /PTW}8pqDp(<9勖Oyq-C*SU)s-+-e`'`,s3cƔNeE,=*+Moe]Fԕt/X0nڐز<}P]C' R.TWy =odyZAsRO*7P )"s⇾aPGQwh'g_|!8;ㅿ@Q]ow\kiBт+/r aWjW\?iR*peX^S@Oe MjNߑ9*%fWpN)t ffV+Y>Di1sM73L8{) hn3ˊ`u驔gVzaO} ϰxOFsWTW%ς)"5ݢn-r396aUQY)ihgsYYԩUw"RJM`ڐ$IۦrnkAmPE=Lk[KM@HiμɅΝ1~g*t#/8a޽lyW]i' oI*܈/#u t#mﺶ`.}P("mm]KsF̽Lp*5Vꞎ7XCC̯h 5ǙWHyamY=a92>}c+W3B묳qͅE72{Sm:A$T W1c믿~i_;\wuwqǫ:g_4o]êp3LKs>v_ 67=f6uMR[N;,;;crh:0උBa8|1ƲbӅcFK3O–j(kˊ54i7gR-B p)DPa1Gp)ʕs.\m,HT`;0(M3Ig|pvuDpN,MTudmȃf2o۞4f2:PpKLp{26FkdHCv4`*N| qf0LcdI䋆*/T8ū=k' |gNf,ハBHW+9n•DӏnFR+FHڲ{K6c!9S#i969;/>@p{ GuwӾGN,Ȩ(_9_aUkLcttzQ S5li!Vlr]wI3> 4Ol֜9~puXJW<X;,*:*@ܝ7o"LZt:ଳ<Y] 9Iͷ sɤ!DN1hnΓ;Z--qc=WUkw[ߪ|iKU6(JgJW7 2c75STYJue 뵩ءάzKJpi 8vC2d`k` 2E|re'aanoR0nƼXcERE +:p`mo^,&0mO,UU#Hċ$t']H pA{ 2j qp)}YM(qjn 2d6pZlW>ՌC9c?fMg2ԩ7Lw*qàG^3=;I%0YqKRŰD()'-춀f0#|BxԴHG.YHo*իf{M`X݀!L6I_PЦT`+b0/zuPe`@^(++V Y҃1m& 7DJZ}\>ݙߺSbƬJfϧٹ}vo=!:_ S.gʔ*B;[_2ڒ9H:ܹ/Lfۈ\rO$2 @XDOdLOEr\IV';@e"PtSQx]:}g;mcsiyimկuppg4覶\hQZsd4a&l[f m̰ dR1D^{m֬YW_4ֶΦMnj~;q@Ü9s~g}Nk|ɣi7+=,.YJ}!ND◕oܸOLBO̙s& vA\&b@aL0.ca?b$r94;ﱬl (2 0Op~W(t7xŴAV2Ũ~.zs#;W9 BDin2;诨āMnͽW!gW5:10@ h A`@aeY4*jj5]~W0/hJXd 0p[āİa*S\Q[w6$ qF$e߁}bY YPNS!$D"4ʗ]r^_zzMNmm3V=>:J&KY'&8SlK%Du}Vګ~vr Ӵ,8܄<7) S-Ǘfq3cRZ@*:JŖdE36mi>E  LX֞`RU].m?V{2S^U8؇yF"?EqIiṗ< =xAEp9%F~6]pٍ7.oTQM#/ Y6v "3̽OIq6@ R>}~ &M]AA-BSjOxeej-@"nb\\oaѐdYc+ݴ,dD1op6-=nO<۷$AaY]{+^Ƚi>m0畷^eA!1bO˜+SFYh8gfS t:8kۛ4")~uKsvovoߗW_vbJtuwvf ¶mmN˴kaE:G@0ƺ"ey>phO'z=[A`U =>B>$%ӬǤf<=?q*r'>߸x<&򒠽`,QD iδaRIHrMg[׼Z*dO3<72/r-k֬9C<뮻ne}]=퍴epD^W}SO|vZs}Ĉg7>e q?u:8MU((IU}:F4 Ps^Um'=ݚc'*,gOi P lO]3-O}b]8z4hADkdL2t{7bBqe\%I q9;r;ah̝y3L#zhSuDiЉJŌ3ZZnDXcO%==.ܚ6S@:edyX(d'7vmXЀ82g̮>/D6oߝLj`pa $"[NY!ҹ# ?xʍhH"M\cX2i{zY@r͟~Sӑ1L5-@$ɍMvIB/IY{}9eE 
E/?@߸CXlov4_ZO )f#V(F(!-egeDzԤ*%e} =`c_ʆu B@ ,94dnonwP0$D#eZϾ|6}@Îx%Olڵi })Dmv\Z|\g6Q T6^mrW3V)y9qũȾ\1+}sR+@K}j`V`=0ʻ _Ĺ`W*ɍwZS?ԊH %rn f7n܌3:R~<[#0._Jm۶mٵg\rUFaK!]ȸB8~1Z:'a77oNrꚫ#U%nƸ?,/xPHTaimyMUKJ3yR8-%#]шoqKJX Nop6^;N?b2Ow8ZZ̼THɄ0u}oee͹((0^Xd͘FӖB~nᚶ5/})SBPA\SK|eI-ٗ4jĶRp?Ap~ rbםug&kο<xp}^~ySIIP DHT bRd'ntX)th{h%{uqdB6=!; ՀZz;JcuqSS2`cL|ek İ\<۶wQgh0R;y3g^ziq2/azK~y645gTw`<0rš>!TҶ9`q#v3CTT$D"]b2DZc Hg3FRiD-_~$^CCݧT;󧈢Ls%f5*;4u dh^AD`c\wxCjY x.k% 35z.;wɲE&L6c=@f`pZ2DGn>Labjs;+j&+,g+)<*o,mqY'՚k1UO*}2v[r䄞J:t|j㿴![r…_=1sdaժUs}7O<ģ(CT*uwDACll2XP?ьq"ݽ\rstv!z0a^PצoUPm%v)]R^kDN!>p~(~/9x(QUi)R^L. D[88zI)SnfY(L ŀ_ӥrrd0RMӭi~AoƌݳTj-a ƖfjHZr흥Ʀ+BTղBDuO0αiOx N d}{au>xUCD0"zcHE2.{`o~~vaaPU--wmc6JkomIy ^"IU[*w>Se|})d~.+H!2 YR,qr@UvJS5M$ :Q@`ak%Y\3aNi27SAdӴ s:FfGM&b`&2Z5P3f@sP ˲ZZZu֪V1V=:e(q%;NDt:UUXVn0كgמs\ͫݪJ>+JQcH|D;-fU{{Ǟ= zzL ~uYo]|>|&!TE=/[ŊN(-u) 򓟔OP0'CrSeBq|G<)J|Pme NqLժ*Ι3gTpvjˎ}rB:SIf5~rB}\!^oqIɻ'AL_`LN4K00.L;\QfHB[iQReY tg.j–5k+z4|+U@<0nDVgC:=]=}1 M ߥ F::>Q<6eYzϼlj'~NG$rq5>F'SJ--Ze?ҙ6|df b@Jux' ܔ<Ȭh{G?{UU׿YSI4Qm7Js<ʰbŊ_o 'p4x]pY:Pu2]߾{wy:؁bop:HIpg21 az#Ev_T4.}/i-噌1)o #pܗ<,eLeL(J}eRU eT0=&<1 zX<dvW+)g[Vitwo8>UD`f2Ky=~!.T3m+z)=ȉ&28yyW=0 xg>ߝz4ٜRdՐ$c 0]#p 45$i,ow{ccBU?=ǫ}h#44߳bYGqfOfyϾQDToY"s9K2KX@7 I ^ zUFpn+R+)`ԴZ0 /HBj1 4M}4֭[εk{Z]Gơ´EyaւP[7?9)yER&I4G4R_QsFqs/Pَ{|Aq&O. IDATܵABR]\LެWxJOzʑmlp*WiTղ,5=;(66XJI#0-XTj;cBSn%4H}*cҺnhTeJ cLJ-C)~\QgjW\riΘC+Yޱ.ڑbezY}Qy-#BD;݁\),n1ifJsOi?( c F;xL,_J65-NTW^_fVv >L8OsNiή-ΰ ={6VLzk% 2̘1.6)KKK=>4BcʇfvyG#ќc G9r|yx\ňy}uH ^ؿwZ1世wN|.@UT3&z+3oTq'9e 7PbVm+<@9tK^Ɲ ^6no-ktYȆ.Z h١C ~9'}0r_2o>;:.Ϯ7jYv(ë/pW).2:xo#Y9p}9<=% 2`5 $4CGDөsl; .O9'ZYII|lLԺvx盜^yEK)SѲe뮷~?!Q`07hcn 뿔οWG"l"++*ʴn)D,cR)7 L[`eW]uO.31! 
ZV/8/4HjZ > {t؎J,^)s Yy)?߽rUGn4u1v 0Xp rŊ 1h6VB?ʕ۬4aӬp2uuYYX0xY*ui^d[)SUp[LNfBJ ت99=`}aߣFIIۭf\MB 0 4?ܹ᷾xEy:}$󩕕%'d2Ln!eO3F&uE23JšVJIpc &)lHsYЙTC5R|ǧ;$e-*K+7~/Ն7f7oޤIwCQ@3a 7Gsj Ezîg̛;0d2 9Elɇ]:@)F| O>J]y4ƌ!"׫RU`׈5R!0ȯqYƦc#G=l"r;vݻ=Ev?xՕ^/WUYXHW^Ų˱truϧVӴwo˒%Óɛ8')S8b3g޿p᪃>&3iiK9{7+@"rJ͓Uba[[8W!D ߽⻗/kLqGLGۣJl3-zi}v $UpNڛs4\u#w/6blapNaa ,f֦Mرӻ`aeiuLKt٭h<O=;X{S뛺20R>>[>*cj6Ϗ};Z3H%UO[6Sd*%܌A,ehSQ\T"$ i=Vg-FcWr),+aQ:6Ms۶ViT+V,4)?,[a I]Κs.ZpA$jk'\[ E?[)i S:AI`DSVxV֚5>;͎ kЯ|Y0UEUݒNV8O6廴d~k>Q7kD퀻yj@bu28W5\Q]0-Š8.ޱ#mz(q1bL@!(p|NI]VXԜ9rrr~eݍRNwdX`Avw:L&s8! {(?];Z1)_?lڝUUT*,!NQv1G|噌uB8@vM IT-D/cX(躮m.PH#Ubrnڀ>]];(/2y%U9S;LV"p8t(JB|}D9@`K9Mc!e=cb12a8`q=e,ӓ*Ѵ@QѥԽ$F=`>]ieg?BTݝQU'CU-& Bo|/LOf6EH Sb@;iAxSn. or̙l`"AдXIWcutFs:vߒNCr #cbc`7D!W^L6홗lKd R"ш,n}]Kzj~aʸÈyB ׉1blinrBzDzӻ{{\=no]l慡ŽH 2DK̜3pZsqjtOl/lO΀LE_[`}d6XKq+ՑRr6XI1)$ 1| lT``^@P$ɫ,8DFP3J-rQv)wL=s-XƏ708ۇ>hK H3N5k֘1cfΜ.p(𵌆Ckg8ϻ)@B(b׆6$ 04Mz8~L%ɡ $sqy-|3gڃIs~aak*UjY. $3Sq%;8;+80n0[@]aaq@Dz7>;60r$c{ N;@ p렐t+޾tiKQeAz9_IKI8άH&Nzmp= }c[qmW~x<̊g6mΘ $Ej^"W4t1OIzAq hWW^i֎痸S{E+\|]r9"ؒ%%YYU|Υ\xZ3%G K\?BO֦n%%HUTNz M ٬d .:[+6t(%jYfC~mq@H )%0A. sK k^udc߫bp0k-x՜.ظ={ԴHaaId1/t\͖l.sׁ2L&6N͘t:B1EQX"t)g(JvmmYcciFšC"I?S_(-QU9w;K2$3Y -.UZ` w*"(VEj~"EŰd!d%d{qaD|8fΝ{y<] [ zoRz(A!V QCSYCTƏ(}҇C~{fXT(}_ ;vEq^MM/Xf;vciT2/>#]FF aK^ (DS$,ᆟ>9p#YTUt,jF/^hkꖶ4Ri}\s #;]e$ſx<iT1ZW Lh0TCky'k dD}Geg83OE!!dC}H\#,l:EA W; P*{,4Cpzjz8;>1, xQhE@\0DG4q4E'v#_fҼO\;H ]v{YIY k9uhU?sB{,[oVf DOx*mVUUظB1](Ahj^2%iݠ0ljRβG|ثtU3"]\'O/N'O;T}0(쥜?|?Dq$D=: 32ɎdžLǧr_|1a3dݯ9o}oO9[zAޛˢ 4ށPD [9آ{o!H2ޤ )V"pŖ\_j<&37?YA@]G9Ր j͑ul$s+SmiQr|ۭT衻{΂X@'KY*dYVsvf*]!CE|dUUlٺ$Q)mmHߍ9Bb'74M6.)ºXzttQR"$(_EtgBt4ZCʄq>>4GИAۭO2aQ~7k\r?XX: Rl; Wk}qIA@]UEDDю'5|„CINUL2Ĥ4Yrzo>;tRP(X,G)섐zCGۣQÎTc8;xMMmg_{c{޽{ϩ h_i+w/|FcsML)"2߳gO꟟s9?֮]kXN'GVVVj${="S΀aYl6X`8&c$ʔ;8*6wRs !\\ƞ|%#KTQ8.sRWnm'r}(8,Dl8L]g@@#@=@9!c8.;m|ln68x0 .[})s1nhdbk4*ePX?D Jebj;tZ6;Y [Q:ƍ:aByי={ʕXq(罄X Jyeecdž2GyK#=MK0v\O+@]yQ7域]6d6a!A@:EiH0&u^/ϝo ? 
Hd\-U BTw:Ctt oǫ  Q$|Iix}ޘ !؉ 5#X,پ||a4$&3|;k^}[6e:SNR)In3+#I=cfC9ۭh8 ', Xʚʽ채 šE @܄RN2 !q!HQ<0Fᦢ"yrdt;S78p. #iM/֪^x텗Yu.X=#}3$I2zD,++SU5x$\rɿh8?}hhHH(YrKTr:ig_rU_~M=mM5Ԓa1;B͕D`ʔڻtr]P966&=E#xTc+z`ArUX86 YBwgP9p==8!R.@[EEQ*֐,;Ҡ3Ql̘~_[,s jjvpr96ƷW6 1yy]cvⱢr?4 TQ1oYC;뇏{Ak)=u4-fXC2d!,`ڡwoz&5СC lئCQasGnvo' : B5vI@A JU*[hRt H uiȑOnz2(% ˖3vK~(;i"9P<i9lI/]c 6jjJwյģ6rlg{148JWgy1k.1Bkuu V;p F +]!⨊\4IL%.=} 0])SF\.'LH>==J ¶!CI CJC="r@xj?ޗt+JXknBAU?P Nm 1KC]ePJP(iS0k-DQt rP` uLlٲEӴ_|7߼rssCGp8<7uCC?~/^ kNd<ǐCqwnV䜛!D~]w!Jfdo ЊX`jx|Wn Ӧcc F.\KErohf{q{8_w`OO =X,wMWM+l2|+@uD@xyhՔ>9yZb(Fĉwmm6[iNU-!9kejZڔ)?"Cǜpk}sUU.**ϧ55jkN{@ ysc<~%F_ IDATg״q~+!96-//`l/>+|m&!cذF0+Mzճ&Nk8+81 ̄Ef-b^d7WO}= )q"ߨU.?Q=JDcuRK@hXZ\qG]؆( LL:i#Ap"VjNJhY#/'(!98bu;2O+*mv5nj^̈1#jIyxAUU5MS%)O0nx#^8In( Y60hHbME鉾UǪ`ftG$}NX d@"[eQ0 m* +̇ 9t3)=1`1dYNm8s93 45W[ -It1Mʚ[IIɪe?_xwRR!/R@>Ī[WG)\4,F;4scwnIA0N;YE>Eգcj}Gz^Zֲ} |>٘jRmmS̙ni(,iB#/'?z,l`γο;M=Α @+x(w}~B`}T>^4c c{G&{iB7"\nx">iڵ|-IHj ڱ3 [|K5}O.*(aƘ Ymra3$)^a}{Oe6*8ׄ"ٞ(}^XBt5?&wV7`m ̝*#";pp;Q*%o{I$+UT{X% gyD36z1GcsO2I!Br GZ,+|X,H>zG(TAEB8)dsa 7@z+ċƙ/s%FLK-~bj1[t̒=;Zno} 4bI󛎤/HCu^z?QFW^yRj2ԩS<'馛233 p o2?.+ի-1I)6`ҲñX'T:.Bsr:.臷PvvvJzG~ }@mݵ{o<%h!_Èyqc]ZQ\g񱌵zQ(;Q3c"x1dy%[ZƂ>ߊXLmX.[qAEuȘ r- ]φB_5y? x-`w+7R:n$c::^q=rñ5F"aYn}YYYe ,=.K}EF]t9&'@P n"n=z(;;ޭ *XLS%)O6{r^V7i!CedwNkKPB0I[nKtE| Bb$!(8!֗h!fp'`1OiRHM$_4 'YRjy/S'NN.ms'Nݧe$0$ qfDmSoKj_~@~5FpR$(x|YC7KY0-*DXviWkb;n-+O?薳,,@$HL OUd` !Oː Q:VswnI _rȑ#gΜݚ8 4GǬfmyS;kАG/sYuḾs(//=9o5[-٥`TWrIMS gF>oaQ}LgF Tqɗ)-?tv~D)C++k&Od<)0:F46Z P(*ȑ#ۆ o/ K,`(:G՗ݦv:fpH^s2U*_V:;^R0@˞K/_~{inn6K/>$t:Mi 1 l5?p(`/'ky|T{oDk0+^jQhȦh`=n),|0%ٹouPK8&Y~Z_ceYnUW=sExٔfX p5@pcxUU̙/|3w{6n|nWڂi== Um(/w">i[al׻]G'b7🧞:(vuu5vvNK"^to/"f$]c#~`wKa~]ྃ{sR_h4saY-Ś !49[ zz "33ZE 0k"h7fB9@v/lrH=Z.&Q~#.Gl!Ĉ;ٙek8T >l$HoMP% HRgw%- (pn ƐN4Ax0|u3@GVZ` gfel ""9`雚W\tz4{wC>ŖL#4G<`Ϛru{aA] <y6.rȽdq[/ϹsNmr=wʕW\uI[[br2OVZZ26ncݞ1xD?aG+}2tI#( `(8IMM5o^!3\趸%@S$z^ߺ9ӗoT#[#,aٚeq Bim]]RpE ! [[ogwQ]C)镟Y@4:zřb1C䀔P2rXW LV)CEˆC\(]z_JҥK? 
[binary archive data: PNG image stream omitted]

pysph-master/docs/Images/pysph-examples-common-steps.graphml
  (flowchart of the common steps in PySPH examples; recoverable node labels:
  "Create particles", "Construct solver", "Add operations",
  "pysph.base ParticleArray", "pysph.solver Application / Integrator",
  "pysph.sph Equation list", "Numpy", "Solve!";
  the file also embeds an SVG document icon used by the diagram)

pysph-master/docs/Images/pysph-examples-common-steps.png
  [binary PNG data omitted]

pysph-master/docs/Images/pysph_viewer.png
  [binary PNG data omitted]
9vݦVٷV%hݐVCdjU@C&YsƯEFZ&U!B!ZL6V4s3}CZ6qߘ]ST,kgݶ bzy~}l{u= vWquUǮEC*Fm9AlS/$ Sfwq3z5΂1:.-F^##lY>}DeqSX_~Ƹx{ڽAc A 1@rOw-R:>6_wkt\7Y|RHNRkd=+FtVz/~!#l.e# 8gh o6Ir+.jIQ=h^Vpگ;`)0;.Hz {nCg +ᘽƋ_.e; hЉ&{6ASv-?vNYz%DB!BaUEH3H 4/߾,37qyqF3L:vّB@Uw( PEAUآ>낝= vuq.a|8LnMdfoEKsm|.[ЃCpB46K)< ۮ8v|jTHzUO]&\e'.|wrp+ͥҬBG*]r8&9 ;}ظtD %´=(MY J4<+Hj@S!bٻuh~8Q╿mlt!ư=pU#^~r~>ӥYc^0K6du 5o=ScJm9ϪB!Bu~8p8T ,u<ҎRz6EW޲+{,P)**&._-?/oVձ׺ 6beʶWVWuTVR/aߘg,\W_[軵 n`ۼ*6nu9oXE]t>uޡ`_q֖V/PӸVk+e\sbK&}[VE24b۲KUl,LˢiGmcQ԰0(=dUXbAsaۻR`ާqm386~/E{SQ,-:pv䏙@Dk-߯ 8Yg@k[6 !B!DUjm|.um(n`J4+ي,M )Q7V:ǁSa^˽3qr:߼iq8defVWTZaUֵ.MERH9pLKfbtk0I 6=vDt܀bc,!H8@حR@Can֞&GYGj,~nO Ci\UoyT,BML^Pon fjӉNʣ?d"!1j[PGZ}S+/щFfo"d3!t Q.tRh6>eS :pwrH F59[JމI ,;j."6߳c0]-I!B!Qā nE 6h40ikXe1 .|Jxj(a ﵮ>}TYOaAAuU% ?.`6N5\x;~ηi}a$Y=%,bG%wSi=ﺽ}w_I A\syĻk(hv~^gӭM啐;<6YJz=윅;OK`:B]"߻гǥ'~_; Nƾqt:o [.\\ ,w9`雙UǝZt=nR ;d~I@/,#:v-Q0?p,l5^|}Cs?S!B!Ju-,{aabD73) %2x=W9˛Of ciOX8My=}:4#=o?g懿ChqvYұ UlߊN7\0oFh6Ζ#ll wyO=K3]2>,fF7Ē7\80[s;4dk[3Y=< B:.\EI2-]g OX9"::!G,*4qǾ~>)v646eɱU+糎g+Jp=`{%J)B!UkF >^S=(i$I -ZfPXoy2/@Ç㪲.˄ &v=th4֭[QԵ/A4&g4p FF5".;~&%^7Ar aVT*~7芊e{ Vm;().A3LnۣBu%#RR:dP8!5`t[aB/3jy X{S^ {c.B.@"˃VRa4JprzG0JO*T +#J3v`u+$&R4ѾS( XYJ$7 Q!B!eY|Ӏ0! 
X04q0xоni}!i4wD7֛,_BYYE,8~{&c E*;Nmh{•6RB'ۇ9k1hٙcF+P p)O]0qygcB e3jt?ڴlK&ٔb~N kB!B!N.:^槼LU/0xF+F;j>=yWqS/xn`J2aVO ^{ =s\(__"T.{F=-sOLWfhݣK(bu IOi K.xwɬx*W_5M>33D|˯a'QugG!v7eƣ8ade?⨑Ǐ_as#zFflfܡ p7G'%;]vdpӫ14lP0,5,Āʍ^u#G8ҟ8:Ъs ͖~bz1<1y0@ ~[QIx0t|s,WlNz>/#:lny;okO.ə/OٿJn5wNMFr%O1{&AgL;% *t=:=pTbͲ!UxUyE޺6+ڷTf൛x2 lKmcAݞٸ\.9o^ w{/__{zPOa}/ G W{f$HNcD4P0IK Ъ$G-I<|rF b݋?-A\1҉miaH;oOl;э;>4P܉8uhAњf u[pաSŏCMe%qZ.:zSG!0)Zm* /Ը5[9˙>p7 +Onۡu)]kg߰8ǁߥu)Sb534$5a(]Qn 'uH$e0B:3cȠjtq*4Mr Mr `ik5~_>>{`U1#nFuqy |ҥZM c:?A4,B7-"hTEE4Ml۪\uBQ~}3ufC.-;hpI}ug.!Qft>k?conP~*~81Z~ ,TIoO3A&E&I$~ee!!4o?>gN)<\wهY K@q;<سÍp(.l u&1kzBmntc'9Sv~z"oI;k7`ѕs$|Kt+ߣ%}5E51˔%%\xqGlL )qĆ#d@-(xjz}7y%= lKG-{h6rb埜6T^!ŗkI ̽}ObZ}ZlɴiӸ(^:`ٽxeI!Mx(,"M1?Nbq`he8Pcw`Kao0m |՗8;uòq>s:VB#Rdl'oB=m<5k./259X*jMv8]7 Qkmb+NbG?q;];ey,l}9#BؘVfc%*<4'~ 76XGSL˶y/D nÉǵ;*b]P/ L2scuF,HC;ݪ2Rc 8/ԉEDU$ i[h[K04/xPذaywU">>M6<-[g8^y%oNY)pb'NjX8z,K_z؀k(,u;ٞſu3 XEjJ7 a~_~uꤑ@ @odgPTTD=_.neK?hܸA;v!8Tdv!ٟQ[t'ez4mb͚l&[[HnSe{k#kݛ3c1M?9ѰK=8e2NXGS̷?[5I- .ocGlL^}W%B}aBZrl$Hwu½ZVoX9LN~ FXF va[,Xy䚫.]-&:|[۷/ڵcʽp<Ê -Hߋ_Du@ryz'-BHqlvڠiz2yyy蚆eQXXHAAC!4-eZ5X2=ۆrrؾ=P(LZZ ڵ&%ѨmC0AAF4b8B8K~fz4mUHj^\2[vҼA ui ǖܶ=ҷV Lعl` Rਉ{_°K٥K92cnZ,7=;{@l KVLxsrwyª^VaXe~nN_l@:u*hWvoQP1.< X~?=3@ѣav80t˶5p8L0$=iY4 -Ezzhe˖dg琟Kf}\NQ0Q SQϛʅ$wWMs$hCu+Yܫn'O\QV:a[/{5pǴXo_x17-?GK̯ 7SŘ=н;wArlBZ"D4,@h}4()\xE5:-bhl+ejXaD5f\ irwYAK.qFP+$h"t;q~y]u;{A ج];8$7'-&p݁aDam۸nrwUoCO?ԠaF7oq&k׮g?ر bΜoh߶A9^!8$q 5=~Yqh",ZxA3Lg~U}HLYuӸ`!9g{Qncf'ICyq F '?y%= lKG-{h6R ϛSO=u-5z{-f8x0|hy^=KڶR.itlEmZ8Q'q|>?s~z1oI.-6tH !RY?9W~v `Sqsf=vn,Z`qMoN#ahq}L+;;V:UkGp8?/栅5?@Zjj=WL V6ז ?1Fm;xʾ>L} ع!֬qʌ݌' :-G@!B#M _a4Cٖ-[g+wS>&L8GYUA!B#>f&݉{Ğ2l8MG?~<3gdҤwSNz!B^eY B =B!nCՌ%[qk:?ӎCd[GIMz!Bއ*Oj:S >$nyNFl}hD ҶK(@քNǝĤkv}U^u/w<[ 8f#_DJek}?Jz!BA~Κx'4Yeca!v fW;im|\?j9_e41a1Y$tv,x%m$WIYpe;&y_Yr/3_'mtGq B!v{nW?(ciI 7il4~x;J&ic"*s5eǰN#ys{\>/-msxhX|ˇIcxыVb(*Ӂ+JNWz[vX6Vp%_8mZѬ 4lXt%i#އ]qGU߿ϹeC'}gN>>ŝ̛<};X7r\Zeꞻ]{/X_p9=7=gC8ӛsԵ<@R fqۀ\8uۣ#Wsqބq r:Wb²i`F<&^~>cGĐEVߠ`ś4a$Ӈ>cyhem mG`~tяgܭY#4~.्!rHߑcZXV?L㢓 <_\YaUMq0d.!B!6{oWk}9sҾU/F>kB6. 
9A-(.R9Иs: t7Og.h̞#M5i~kt;wҒw2]Och?LFMvYq)xR 5gA']}>`KBg>Jo$k>ˢ?um,CxU4}WeaY6f>S)xn>0f=ǻCRdYY_r7q9s>坷v,'டr㫟d̼e([r㣓hQ8 ~&5tO,whptΛ<1Al$ġ&=B! .$׹\uyϿһݵM,f\"ķ.mڝOJ^[{3w'|+9O2%BWMaD)3Z]L8e@_Rܥ Z,9~yh6$lV>z (h߽ªWs&ThE5QlOqb'.MWt+]YlmzO_ N̘W3cu =M Q2ǁB!8=qǥw5)NR?ƙ 2 _+7V;JX| ,nZq 6M"[GABDRwN y4Lac[#o[3iʎl潲dJn0^u3h(i[L>{c ؾ=;Xdq IpOxEEmxSW-l UAqDǶTl[WσjYqQ"F" W?1ʓ:f"es Zl8*Ud-ua<}GM~K,l~>!jLVUB!G$w㼴XFL4+fa^YG6 S-"EyX-Kչa<(IRb ~1?9ӆD˗@7pL.gFab+- ]'j*-jMXl HkE,~uօz)~duF;/,"{+ =sUgڱtb :%]m9 ʨT`LwGl8XvjEsꖱ(/o*q- f>ӇݗP QYyƤte~,&vضeyѾNlGVdxPlk/mġ'=B!ns|(xə M=tջt{äADp=zPYYl;vG)w2/|q9whG#`ڏdkq WzvGg[8i`X6v9oьF1 _&.4{78bHQz|4f҉\{p)iժc!co}|rDߑkȆn6uS\{wp7fboO5{r!v uKI vA^~YA%xZ!_31"l\29) Qי-a\ %7ّ1KvmxFX+YXM=hq =@!B㠺(:C_E=n<֛ b6LY){ve"|0G*iDxbsyz+WqĞwX˯+$}@7|[*C턷y|\>m3J2Mw%Qs飏qg)pգxtlgᆪ?w{BHB ޫtJSitP &#_ *bџPAA&ғI`M6$d|^Ν;{Ly&3R"F9y /ec^L%|/&ş 0򩶘^D*O}\׋i3ЂI=f0z o0̸Vϫ㹷 jv`ikSlFȵz9 ҇5s)];o]:+׍ׯgH.G\VrRA2KΣ^DtT9ISG^4 """"r+$59ȕASEDDDDDD$ʘtvbQo.KHHQ}GHQH)-~ :՝ڹӬl!""""Rf $_H(~H3'8 "׵Դ4gwADDDDL1:"""""""Rv)p """""""v)p """""""v)9H:t])U o8 """""JSDDDDDDD.DDDDDDD.DDDDDDD.DDDDDDD.DDDDDDD.DDDDDDD.DDDDDDD.DDDDDDD.DDDDDDD.DDDDDDD.DDDDDDD.DDDDDDD.Wgw@s ))=)iN$ʸ$6[nKȋv(pP 쮈júUt""""""%H2,7h쮈*""% g3waa/`4\?IT ̺=P@DJERs@Jf\pHYl8lZ@rzCD _CN IDAT H)M2tRR0 9j"`6Ah'7DDDDDDTgi]ۏV `6d)[k妋؛LzfZ]VDDDDʸ){9+ވHLLzտ9&@ F5)@ >L; *iJxn}h n%| :"'}wRGe8_*nAz SSDDDDD.)|iG>GR?m3_H=sXr]hۿ}dfK>8ĜE.U`t??ϒNbj&FR`\]=XVLfނfH˞FY?~ļMi2cٮuu㟲7),-A=iјznfAt}Avzv1zkIӆMiN8B C;yzq0~7bq9gw1wt^htǗŠSGUVn*$b{ޅg9?^rMR^}5 &OsaP=` ZO*1lc#'絺+x@ Kx洃ݿh )?YF =\RGXsO֣wQI_epkݴ ;N_p45S7|*h@97#n5h2fNԣXܞqsw'h;rQRpJ#;mě!&ޕ4ջVq_ueO뇋р 4I"""""׋nYLr.fW+S?ɦq}Yþ)i%v7bT@DDDD䊹)WAFgWExULKH25Y`4`κ-Rs. 
jsn_2^5/P0X=UJsnz/:II U@SFx.7vJDDDDQEfffrϘI,ֶS{GL,Nf$\p( |#<FܖHʉxVD__xvoĦe1?ywH8^Bq9K[aL J(܍xyF\QeHƎ)mؔwo%>fGx|3M:`$Xl'XZtta/;\?Fpw5b,?W#F""""""WJٝ^?arAm5,gM 5z< AmFQ7|htlFmL`γk5w&r1 Ru\\(n """"r5!o;#T ]0JvS1 ж"""""Rt \)ޠLǫ"_nF0a\F0kȁȕZQ+bcud6e~/AYF4@DDDDQ@DJ̜M(٤IifMR8(údúUDDD8+"Zp6]zD^; $g촂fBBȕAF,Y8])T= #Y>>ZDDDDD QࠌKJJbúUHեG<ʰܠAv]2kúUt"""""rMR DDD8+"eVDDKΦg5IEDDDDDD.DDDDDDD.DDDDDDD."0wM6yH2Ggw2jd`"vEG;$ $ "GeX)LĢ헰5ZH[DDDDDDiMӆjօGİxg#־]/|m?FcxtT- bqĠiO3[M̠i |#M&ÍZoI1űxx]lymj'JSn5',"""""RJ)p5+>Ň6wɆwp>hG3e+YY\8|*v'ڷ0oeWب#yhPG4yql?|CO.~qCng~tӀQ=:Ѳp|%^Nz]ueN4oRz9Sen{Ӷiۉ v1wTԈO˃R;450vg>mѶI8ЧԽuzqu,#DXֿq/pgj7980}Թ5 7vhk8z*P!/暶!"""""T6[j5F # yc޳n#('Fpk?E,?D~o6Xj5'?KW}͞@]xoOl[ݕ*a6~mlۘx|GY4v2_?盷;xa<&ta/;/p֌b`4dҺsI_seo4C7~#ٷkYH~w#|,1 5ٵ<7F3q:?=]K<̬{Yu(`[gudŦ]u7|˨cO3!6FuҎ~ݺ=3t/F3x/ys+81_#pANo`>߰='Xӿk 7|ɾS8u;+qMT,|ƉHJ #a?.T*|\,NǟU[kDV0we?PH=;Ϧ%><ȧOFoO*C$z*~yO~0̜z3|i,N f}-7;Uo{*Fw?ETf{Xӝx3z]1zҨS3]3|0C$͈gyZ^DG$<D]{w_Sƶ28r>GS}Lqӹm oX.OF-jGҏ9R@ e<8;g;d}f&q<ŕK\<4q2C`+U!>ҁk~njsp|g\.o{/gN~/:Jx K'nOտGR@Wi3zWk/d"h|7 Fn!$ZdcW Ϟ)yaLɿHjo@qV|-kPiAܹ\zDsY=!VLpNx`"Mee,Zř0xa8z>۲"""""L2UׂlĒ_tfW;~gLބw;/ Lb$,T54cJ;_‡޻xW'8g! 
Z,̦dQ^2&d  YMɜ+hG=mH=η/pDz[ߴr:vjaTOGu̘''v>ϒOEY 7}.C(RZх~X5)+e؍i9gH6xPs̳ IAA !;PV]+;FÒִ ʸٛ,|\{^n[y]}᝛SQgN9*;9GX_O`kql ˛Ĵ.Ԯ܊Gv;Kw㹻3cE*VϺ<=n;"jT5cEԙӍ-{g[Q!.oϨѵu"0UQ$kgٻdG%yrL&'p]xO=*5lJ}KK\ Arx]%0k!?`bVZTi??"ٳ[Խ`(r66H>ޛ>DlPXJ7.D!Af:+fúUt"""""LK"אܠADDRj"""Xp6=#ovvWDDDD #N>]ʽlԹ-z{̉#) %8HMKcoxgΚ[" [.dzyYl`gdY:^~L ^᠁ҥKz;zE>:'|RfFgw C֭aUj\ٕ6Ƀ Gѭ=v2`JdǼݡu[f؋s:Dd40yLKf3YYYL4ol׵d0";Qz=;I[3|3cѩy*ջ%o)s}?mkSfxcM\ ՛fbccqPuԡf͚ fرW=z(c] epā/mAٙ F\pm\L5袵Ō1|=*nvJII)˨0LyfO#;v#++kٔ^ZuňGV@1jp^tv)3δ3hv퟼Ȥ (gr/EDDDDgϞ%3/;7v3f̘y:v9vX$UZ`?݂.v d OOm֏_BpoΔ̞/^9Z0tllnD3{>ιU2nTtK:߿ +k?K`"Fdۉt;~;zn<1 AY;r{Tc1.hoѨt*`yY҇헲ᚸcw#>%0u~us{'tƧY·-IOtsLdQ,#&ȋ#gLe$S>{VL=`ƚwh:߽|gd Yp etP'724 r<46=f;G8goC[ax|'e֮z‰T=/0zÔ,/"~LffuѤuGq`;v Rzȧ9?&}"ެ(ϴVu_ԍ[c;`cb^!:jE6YԽa8wlӢf캼rM>nV!8J2iumȵ_cccYx1.OWGeٲe6ܳn:nsKM4f;wÇYjr6uQp4z#vG`ʧf&\"獠܇,RR L, 7RIP-F*U# >3M\}pɈ'\K2#Gg9~Zǔ3ksww<<8p,їC^b(iZ@ĒָR|y233YlYA(8pfA\+4XJHHWJzBvA6m쾇5kqFUZ`{P*Vq`?u;a)Q7O3GF{<gE\a.GI7Lə>Os5E-a&VKz_1q6şçܦy4Ky6v|*AzE#{Iּw5@DDDD $$/X?c[СC8q>ݖm۶l~Bϙl6yfos8S !VwnCc/lK=nݶ >ϢV/rw Oɟ hRq3O0$[/Jz݉hϯ/pjC]tOl;Nkv9X Y~Y3Q }/K:QIkRR.h,̣eG͙ddelJ'--w7\ 230c"#=t7.o մ|8F:w/+?j{WDDYQϝU6lFB>&g6ڼ@w/B:cmp+X}F֟!5VzPP~ݻwi{q=[80 hNEMfX E:Vi Y>z K" 7hOyiSМag.ԟvQx8Yu*XpW8n[GSn 4u|68rzL߆׌g`rDu~]x:=_bb% W]D4j29bF-/NI^r듖PzAk߯[Iv,wĕ.q]l֮N y̯)7+3c!M~|n~^$""WKatȵ8Ν;W=P8 \983g:%[r!֙V&؊onXiTNϭ_siћ:dI' ?чC!טmBgME6}[ k91c46MpjP>>_ADF\q/\ts{m|ѶIJZu5ڠp}{=ngs`'>\bgs ,㉈\.{Gu{{dΞ=P >}:OAvݻסYsu(n+ǂ8U:ٳw/}źojklerIIKy%WA &Zdآ-YUQS/c^ouoYpEy(/DGM(&~QZgɻ_4\6uyeVǶ>~FWg_kbs5U=عG%[) v{ A'V\nۂE||eJR@^իU+ݷ_A'(hKF,8?>q rPp K H%=@AU28pss;Z ԩSF-4: ##sZWm$,m-z{̉#)f쮔% g39VjZkW}ۇ>s9st-_ xW ݿrʌ;Yf9ÃSܶe@j똝DG[ls515ˮmEe71bllf+h7Af?:/@۶m]m4)p r IM@pHpvWJUjgwADC#-/>~쬘pEyV*qyq^ɩ^i1^1DG(/iˢnurq8rl>Y9u y?#Au+/غ׬WްgEJ[I'(nL&ĄGRJ6Y&whYf͚ѫW/ =3(p r IOK%=-ɧ)LU( Gll,YYYGo|>|v777vZ1; (p """Rž3=ԃ غ,Na=++s@@698((`/xqqN}hժUSN1wnƪWC7ڌ3HNN.<6Z_~涚5kr]wzJKKc6H """"""e•ZQ!11ѡ=1bC믿:tG?;5h(MU="ܜ[$Av{ԢnF Q.coCuo5բrN,޴E9u,~":EyKNuk{EʶjQ*urB[=cy{vggm},Mkk}L! 
߽f}/{z?{_D~+ `^@ի9r~))))>ViЈqwwK1D >>ԂL&>S6o%%%rJ/^| @#DDDDDD +~G݋cAptdb߿ݻSjUgǎl޼t[V(9%GG *UDݺuQABB >}ݻwsIfl#5@DDDDDDf3'Nĉ"""""""bFuV-޲:umgv$VLxҢbN1u cQ&M/9u+ nQ^&o]tT6֫*(oiТ/[洩mQw9m,e(lɮ{,se'>gl3rskZ_6y+/l`략^UhāإإUDD,hU8F]J($'%9 WogwADD.u)WsJXhX.:jEr{MYiw?(/i3آYic+au2BwrzNuQ~Ĝ6-ieQBtT)}jlQXnO}ƑVFazfuy*m{y5{1սg+Y{;d"3|gUe qv7DDVCȵM\X֬uvW*3_}{ݤ\:x2ݕJbbc؝w9+R”Qڢ_AEJk"W/G#jāɌdrv/*DDz)\8f&SqUQ@DDDDDd`mޢɕURӌ\]n){GZ ^!~Eœu0R]9u,upńNra%9mװ(ickZiwV,0ZM9mZԕ':E6-ʵrdQ׉訵6d=6ykfk=& y66}UC`f;)"W+8HǴޓ0YIL}+z*k t՞ǡ$ާxx8Aݘ5]8nځW4s7C^=Q^?p~"0T WUAaL{v =JΒ~"լd3䵣#q_[~p 7`➗^`b צѻ}[ LII=5 iæ4~'V! |}'zG}tVyי~ߝTgNU(`+kdofs&qgMmQ&u: ugH:˪71zi~' [Cgƴ>GoM}?SQV }TՊ.da6Hڿz j *hB{Vp2#g9CL!e]*Խ;g"1լ"""""W|AqPT >?@ŊJds.gժ3U.2e!1|>jci|}b^];FiG>糔ܛا?bQ\;& g?d&ϓn`@3xn&>T Gǰ[^Î98)x7>/gpOFAl6gtOr(pT3f#[=!'W0T}qp{=T[a]yqH?ߋ/^3ÿ hw̝/4d㳕{G5!g?Kgc? 9j/[cqn+ڸvmGq.9UܨgC]/>o1vC*fmL- eS—+ dΪwo{4f1mq orF00$e \:a@7&F fO8\39OvAk=) /ibD7؁q/lgv~آpDUl|glvv$ƁsYUԩSv-88ڵk… ILL,_{bW֔P/|̢[yJVgdX8q Hp, h„qEG0X$04g/ |q͙aO]XH)IYxn6>L.fpBG3E|jfG hār@RRIIIίaKKnٲbW֔P[@0!%L`S |3U [}_K]|v9F^,]2+I.72e2sK-_WjᕲsiP shS7 +(GSweW,P LU~3nG3F硏쮏V?r-bnA/b ;a¥ ^FϏDm*fY=bJwI4鴒"rWU0r(p "rղZ;=!o6Z7hܪ6~F;R6Ml)@pڸ[Mu,snSށ6MmimUoU6_G*Crm6Z'}hz5;pX.gBݺu/޿~򎏏/~eM=2I7߽6;xNofZT&^~BMO ]>%) 9iD \h1^z!.gw}م=M`"b~x~%Scakl}y?#0*_::nXak兼=ֽ;ʼnLۥE}@wdW9qb}~ĿxzO/ŰjkޢENd%>K`uF\s݁+d2{[2;%r+L>l+a~b͡ڃ* ""O?{E5pPBUzҥ4AHQQΪ4A(t)RE@THN, )'gfܹ;̜+BAi@F7K"E+6duȬSy%.nGe|s/ !.E-^}鮈;,>! 
ҥ[4͹B!D~`٦t2PPޓO>I*PweTC !((I&9}/W}ĉ,X [muXOsk89fSly !Bn/awdw)B rn&9ydˍnK/™d%B!DnM```Z;+RRRҊ+G@@uMk;v,Fr׺#|6gݟ]SV\M6t7"(?իTbh#7&Ҭ;9<4K7ck֠Y*(zNi%vR_is+&|pBPGX.e5zVHX [5YNsWbŭvi9Ӿ2yf9Nq 鏽#x\V^H?֌cё~Ǵ#rd3s}q;e k|||B,YΝ;n:M69J9r$]ŬnɅ{\HZy%_/)Wcf<0ӏǚyqČao,r?׌#'9kE!B!*)( )('(#K]k]8B!r=" #yr@!B!|;.B!]ebB?%6ߊ6Ris-+ zqń'zN%#ީTSbq?WC0Ă,vOBJ1 eJ[߮;mLW,0lP_itՕXU#PiGȁ;bedUg8}+(835[8Fcȑ~̚`6ӟ#sȑ\3_y!9k27 .8pA# !B!%@!B!|;.ɪ BUBqt2Ś[edUyTA!B!.(B!qNTG4V믴9m*zNC%MUWb,)=fITz%f٥0])~7ECca!@;䢈sQ@zf96%nVډzNA%vݪ/xf}6e|-O>sQCژc h<އ̊*Ljq Nc89ǬqL;rҏ}1;ҟkIӟN YTks64BhϖB3]dY׾ʅFUB!B!KRQ!RQ;K8Birt2tGk7!"1 \,Bd" oɅ!B!P$CC!w6B]xQJla ތY'YPg\4K/Hi֠Y+zN%vD*(+&D*`4 ]^٢X~S~KճjD*`+H+dtJOF[iĶY5vFs+Vͩ՜nwއ32~gU&2q4֌cp11p䘝C5IiGNs8790"A9[IqD!B!BdzAj|6CZ}BomX*rB!w܏.\0]USǏȡwCBDTBx^VUB!D~e)t2z^. Y^UǗjՉVnU!B!܌"B!B# B!gVJ{s &+<ʹY(9 팫*UmoĶ.1?%fQz>%VJze JUcҮBY}cP}QϩĎXi"ޫTWb]4JmjtUKJ sJ{3XMjNmi_d38`|gd61;fckqakƱ8Yƾp䘝C51zbBs878rL\c#B!BȷC !B!ptUzO"UB_Yk9݅Lr BWUH>}RNwA!B!rq \py+RFٗB fǴJlf=ũ*y_Y*zN/%ȼ@MJ4evy3ۥ‡iFe}un;*. Rb[5,o9~Tb5i~ 1duѰ{B#35{sc֬q G9\3v?sC5ƹ,a  ~X=u\F̟?/qWZ=B!B[`]}QgϞe͚]իB!q \0-=zWzrٳXf5=zU_}ngHqD!BWZNw!SBvccn$B!B#nsntFipmK=wv],B!ByTAr rp0L@rjדR޷|anEY!&mR#yJlujQ7c,t=I%SۑcnVXݬ{ c+ɛU7V?9JfTQ.VL9 6F FǺ*VJ{3DMj S9KJC=U%6ɪҞbu78=HYl5c}[F>g8}cxlp#f+/ǚq,Ncle7[aJhb\le."/9MzT!Ssᥕ5h *ql~c۹c)$W&d왿prq%]ڌppVy*lۡlzOwƯr?^_ؐ"B!wq \p//3Sghw0~> ]##~ H)][ϹK7js$;ע+ذxW&?Ӗ*UL6)vDs-NOIL:wm*JDfǒ .G!lWS];1B!B0/w-WUvbw]i>ћ=CFY) ԃifKp9ُ1y꿚+ɵhP~Y 'ia;ЏR^|͟8̄͟oftzE&hD#G_gY9El\kNXGQ=^B<{lU *!"m eG[vBZ ؾBnƬ-Y(9?=Y+-zN%vRZis̊Rbpv h\+lVډzNjSڧJlU⢈J{W}+,jhVTJ][Jm=3nY+|f9w}?Y4 0?fckV@8FcpkƱ8YƾYPsx&MiGNs878r!9Ƒc2Y ^8hѢN3zw}ztܙtpÉ`ܹN]nq YYʏ=C׉+5lJqQ|1E|Yc{a N[+9~HiĬ,TA8NqpsP s6v QW9&Yhm?(o/vk{ O-Go$<)ա /@tu*ZĉμGZ@Җ@׈>;~vWEJ37H+]!)/__yibŊ|/zV.(P޽{S`A9ŋI&l޼9&w莃Mʯ<#s#bɥ"ʅΝ;}z*7nܹs,XmRD *U@VMnY1ҹ &Ӌ9 Z(خoYx~ט< 6"Bi/lܜN}ynFRRVf[&0?KHt9ͽѭ%ovY3]$0+Y֭ZfrK]ZKKp.p~BH<:./]e7RήG㣧 NK$~ }ה@%_0!̓y_ .e 6g yh 8 ŽLuNȜ\.du2eЯ_IAAA 2XfΜ~HNNfرnvqČ8TϷ-q x~, j?ڪYwsqs?<b vzhd'BLncyX6;70v8iq/ʴ~y<*$EfL\;<܆Eb< 
QZ3y0KB!2Bb-OdN3bbb2uZji?ܹ3ݗ+Wr;W1JqD۲pm[+̪6l6.՚צ55n#{XWkjoH]ᦳXfn3#cDB!">Jp*UѣG~stttmns_\8B!E-S*ݕ+m*9V7MGQ c%J{>f^=ZFS5XTWߥY5N+zM[5vun=ɪNisz)EV=gX>pbq[7ַ?[(9Yh3Rלf9Ӿ8lUcNVc̏=4Fcpkf+zǬqLNcxn9d<׌#ǸIFsV!9\E,ss-m{ƥKrœr# +B!"˧*QXvf|%%.duH.de9F!B! xy\>_3rnrUK1׮fj”{q- Dӥ[Ӹ B]ZVJ#ƍ3W7oNfXn۷ow>|8Enؐlp/)\ zXxٜ"{"+]B!GFFfꢁq .87˿bCVˍL/}/e!p]"Oz+$+w3f=fyNis(#zNA%vJ4e [c͟cv5IiGNs߼8b98ǀ\$2 8 !!!i?_|ŋOY@r#@!N~_)]V4Km׹@E(ت᭴/FJU#Bis*mVVJ{[n裴9,KJCUdddJ#=gT[i/s+%'zNg%LsjN2'c>3gck<1!i"4fcps#/3Y9 >eAB~w}r7/^bs7)mhtʑsxEGоnsb3>#' #pcsҥEFFRdI:w̺u눊"<<6m#G=r#UN /B{N|B֭UBqQ Ry:uо}{8q"v)o߾iٳ+WaÆi斯nj3~nwedUyTA!B!D?z<`\좁aaaim] ;v̙3._#::;}v< BJAVJ{I/݌Y?D{Sy'53O Rb,-OzN%vܼJ%QyY"vYJ)1wFҎs*(.=Z5)oJlbxÔVW$=J]=f,g'9/*i~J{c,j 3 IDATNJ{yfE 3>k”fxle4֌cw44CslE9k<9UX sqqE98NsqnyOF #,X0p^f1|4h@DDɡCعs'IIINv\8B!B9݁;o޼y͹vV/ l6n֭[3Y. B!B!d=Gż}|VMLp^?H# !^et2źP. )(wr111q]NwC|pO!B\8R/DDDtW"""XtZ},"DS*ϷPRWL3f݉mjj}4K}DϩYvsZFlQڍؠTvoe28W/O9NF*z%65+&U7]1aD2PisRb_XEي:(UzN[%NZisVb4Y/kN}2]U!#ոʄ3>kpl3q NclE5ii웭0"tiGNs878r!9Ƒc2,8}ƹ1ߓc.ȅ!B!pIB!B!\Ur 뾗GC2 B{e]ZNw!_Ȫ rǁ+qfX~6!B!qܡI\2w>Y#-x gxݙzM{~u2no[h?/W1ڿ49{ UM1sWy|!EA ?ig,siOq+RF0r,&Yl) +#ҝۧ"~n{Sϳ\L 4jN:v w21軥Qɗ4ao~}RӛW|H"ʇA޽0?_EBϷ}g= !D*Y,n;oyUMB RڳJ =Ffw9ƂhvsDUaR|m^|jHTzNtQάh\+%Qis(VAJ{j^ 2f:EJLW7u v 5줴W9ƢfE Rb[5,zJTUb5YNm蓱`RdѬȣ3PEñXpXq Nc81k\ԬP9d<׌"ti7/~11(e9>(Ľ|WI!y~%^ǰ`Vv/?ITJfKe( ;QHn,-zݯu`̉v{3[oD{kkd0WӌS$f`_nEh}ݷoi:6~?|TI܃ iɐVϳ=VqKo.e_GٺUZ-7md ./qo|v½%p|NPKd~Ϸ׭,w%BsG#l_8_øqqߘ9Lx! 
Sx%y.gwp$v~K][R=&<#I=+MZ1<ף=̣}é  ű?ӿec4ͬwNNi>w{T:z>D1jWCqXy5C(|#~~Μ3%tUp|6v+CfuT2pʗ/C!tܘ R&(ARlQ $d͔c:ԤLH봤E1Ǖyk/^=zԿвh֊rnmtf!CWѵ Pĸ{enlg!B!rRN?z *Z.pl2?-CL^%uz/FeY€AsVA驴|d6/` cɗWѯޘ[Ԛ</GByE7W)m;ʌ.,|4[VU7UX$g3zݐ{sՏѮxF-IOWYe[fN'{Kyl]f\҃^ĒMosUqӳ`>~?ɜא3۶>Pbt6!4vܧBۙqpel_)群aݏSyT6n^H8Z2`$Ѷ/YÄ6!iȤ\vh{1Ǚ(i^؄_dߍjw{ic{RwzOoŢ;P|jSH'[̯4Zds/G]qRd=G&_ɴ)rBdF~C-/j9݅LNr ])t(=Q/ʫ=Kb)\\?^gd F!Y_콊 #R|Ъ?gFE3ˋ/-+Ӿk1c:£e)]][pn %|~6iA߃//Ni$ǜ!KH-b9C.y**R2/reв' Gߛx{2ެ׆|͹$H>kǙ(䖉0"qpO'=QmƎ+FE NaN<2];;Y!m5BshNA wZ U`%eG"\=n$Ť`ܱp:v%E+hX),bܺnnSo= U"Ńv5.,R p.na@,g6^ƻje xQU?_ q!|qV 7/xgb_Fٷ;>^ȀQwnJ055F~t3~> l$&n#)!N;t-ɱ W ' kpA ӋÜrG71핷9]MB<*/=vbmwSs{O޼Tocd.ovЪ[c޽ݻmKzҔgteۼ[6{B!"'Bu>nB;Mcvĭx^R.}ϾMtjە/{ gX=:0c3h(-j<@ynV.'{цVyקR|tx_#xZ j?k^ze0g֢Mi?K~חU?y3q%JPDLX%&]ȧӿ8w|a6/OɆ#v6+;FZPYv_  ڒ;NEa6WB[:.a=0ũb0o_7/UFfjx(Qw8#U~}A3DT{j%Yʄq'840ǟРxPX0=ooӯ;[Y!B!rl ;VUBd BqkZNw!SBU]@!B!D> B!Z+ž[ VrZ,Ք[(z;v(1Ӣij=vUZj*(=;*mVGZ=[@Y;1{J#=gayJi<ľc*Ej(}zNU%vXRVi VzNF Мf9ӾL (dZ@ތpL JN8fck<1!p.iV8fcps!f<sxNsTȜb6s#Px07 q B!=FExEX!2#_ϡrǁpAB!B!\∹XRr ?o\GgtW}6f-ye/8-BX~C`GuDHq<.($f-۲9![ʵȜB~BMzх r APH(]z.!B'Ʌ\,1!ĄBTWbX+%Ui/s+yVaJc=g{ߪasm⢊~o%PYi/s*uz۬ayPisj+5,+zNy%vBRRis*duc)]9ŕhxo8}F1|c.=aCX3Ei44Csx.N}I?G\~2]y0Fi5αy(p B!Y?"GVUB!B!ȵQ!B!B$wb>pOdqrZ1!Ji}3fMD,L Wr!Jl*%XiG9髛pBGP/c~jjTPJl{\-jQ JlUU=IyKmհvma駴9Jf(_aE*).V04`q~dsV^0W:01W2^LWcV0;fcka4֌cp1 8}\39k27  f+(s#dN3}ƹpCV^ȫ\8bbby㺜yZm.!B!2G.b j6h]"Oy:l,N,Bq \Ur 뾧fDDDtW}6Vmm B{իylUIU8B!B!\GB|NlEO-|XfzEdž),J;J)NYN+z%֊ ʿzE+nհ)mw='\jTWVJlĖZ5UsaJc(cs3fayYiOs(O5,=b=[ayXiosTb;Xm%Bf+bJST]ѰVzװ-JWsjN2'cf|`R@ѬȣXcf8c8ƚq,Nc8o<72ksp:g4ti11sqri87ؼLUȅ!B!!C Jx"3gݩ[.˗'44___"##9r۶m#%%n 7mjKB!B'xyyQhQŋP:wLhhhxxx8DDD0w\Y.;V-ǎf)ȃ_ndWoKDQfӦ&A݀.#/IE:_tZȣxq-v?Z{a~>F y,7qxƬO_3&ԩ80{ċL^r OEO 3+eW %ua̬I,xeO gsà GS?71lL a}=Ž+iV \U?/ lѻq7bp)ݕ,\jaްA_t3z z8O1\$ä8B!ĭ c?@qp _~=[n(Po ~g6oޜqkxq:}}76 !.tʡWA,9Ml\qqqĜz 5ê~M<Ŝ/l9ߛG/$?*ɲ{0t`d0WӌS$m -..س+xDIy" ǘH'fyfƦ?k|Ы2. vց1'1ovlyklZw!캒ڟ>@.!Bϖd˗6msM;wNU.]̼y*UtGMrwq/o||| Զ7^7m@Z;C)\X8_bK׮M.5N,E\3#W] I߷=,?V jSf :\‘af'׌܌T~׏Y;3[6N;M<9=ǧD]iѲ+/? 
eO-`'?p!9;޾닏#,d ?6jb%\ـ/Mx/^&~!X>ZJmyql lǦK6hBx{dg\!B!R%&&Gd_p!S B gq8}4˗/O vMq0I=c :fZ+nNx4BYFUx n0*ެvͣ>Hǹz p5AQXz^@m|㻄e_GelUO[TG<[b㕆AHF5-mr/OiQtlM}xu܎07.vv;3Lkh!Mly i)J@mSqlUܪ_|Ъ%Fm`5w>\)Ts!*/|>O`T7}yi<+S+e IDATyyİڞw0Eƹʑc2>8͡96/˯tbbbժUKyΝ\Y+Wr<~ ,Aڂ1 ߄p?J4E JcL_DG4@& ϟx^N!!ݻ(k~R*pϐ& nΦ/N i~&7"^Ų%ٴ lq)1PI+FE NaW.}x )_dH"}~:yoagYy1ndWItp(M C/8i'ǜ!P_1gI[e'|0K&B!yz +)=z7EG߬Er=z%.ecx3)ίP,ˢeH8E|'hU >HQy{CNy =>GYnΘm%o$pwt=G"O{~ϻՠFDU|!E\ lWüze%>]w73^Ԯ86E(\0^-P+֣i<㱖q;{S/7_"0a~-J Z4s k;/!zm:1bNsZ*UPJjvg;ϕ 7R}ORp'2/)_ͿҌnzŊh,?/&B!BdBJٹp斶fҥKN9aaaNvQXU!M~|Q3nu9+-O60zmu#3H*}WdXZ?'Gxg}rv M<93twtbcc< Tw*#YӲ:#wމpt,ϥx$_߫) {A~F:[\Hg ;m ;w ;_Wtyc9+r,-j6nmPe'j}eFNT?jb'Xp"K[r3TW=B͑ 7U;Țw s|>ςfԙЉ_{ @>>ОdQY Ke_nߗeUƶ7E:F˝|t`epAVUB!ت s28=//G ˗o߾ NѢE:t(਍0k5kFXn۷ovw[YU?_aq)\_cؿw/{eے4e]T<۹h#>o|I'DHNnW]FwhX |BiR.?ZQIĜbD5$_̿WO|vxU n+t-ɱ W ' kpAp mCW9o'|PS88ooŸyamGhs~tLZYP+C@5^b's㟽0 ="c}6jb޿}]? E!B! fwdu(OG0㧽]p"*Њ4&uZwq6_o1`D"1H!fmVT$DIJvJJ MI=dg~ι̝y?{Ι{|Ϝ~Ob{ٝJL$z!e̍0%\{}İnH|9wut@zq܃rƾ/hO)hK߯s.cڤh=l bPxChB#ZC*<{$@cY5ia}#?3lc:_L,~e s^^: >y⾫3<;6z#ˋς{Gg Z=|FɌ3^C r}ƤgG2 KZMis@}wӿOxY4E涅;W=ѳH,W r,j0*V3c۞Jl&Ln4PbXsc)ʸ r]2٥kZQr+`I2c$) .˕VNo%ݮfL*VCJ91p^11+iVMJ2^dtPbߙ~B}NEBa+G/O' Ub>#:ŊSџ}t*G%N)({v`;s9 }kHk]5 خ}}nls>xr"{Nļ>Km8/n}{{_$wҨtzӌ~^ Viqj> OWJztS^nq0("#'}Iwh<.<*\V?}n6@PS$s𢏯G7>ι毱e !h01?yo_u-O̿'ِ?}m{>~E]0V_Wg>T獩|id]ze8~ٞݘAӿe+Իw.?눳oÆ?IIM|d !BpJ^^*$l 22*U]iTiwyWԢB!%T;*|tB͚5 >W]iT#wqD!I8RQ!e+ Fll,K.OG)OXҥ :t`ɒ%\;)(w'I8B!lP; U*T/.oڴ锷+d@!|X28B!DYVkT^kk ..♙^ SSS]6}a ~6 z Nz NL;l~sM?9Ӏׯ pkM5_`猪 mOUs'杧ύ9Tcy.eq@oc*ի/… aÆh>##EyŊ]i$ X\|&3wƤ!ZN]O"m'w!B6A~~-'((ȫ-~Wիiܸ1s8|0sѣeHJc'p)!Dd;.q BhM ?*VX;nf͚E6mHJJZjʆ Xf 999-vtUBtUB!DyucuUϮ ĤB@!ByTA!|B!(ObqDqzB w@dV*y?kU\pa+ ǮU.L#Xi<&`Q8iR%V߱ .ʸu+jp4PƛKU3+%6׊ݭĦLTǔ.cTs%6诌X9*MNxZ51+?Jl+WUCTb>:hUʸ\u2mcZtcp켠w~v9vkǎNL;l~sM?9Ӏׯ pkM5_Ӏp=xr"{Nļ>}nxϡ sq (5! Bq B!,!OrǁB!yTA"]JHJxOl111dge4W*!&*, tU(f !ZǮE!w!<Y8(Ŏ/hөwEͲtLXhH"㔂\Ox r=i5] ͦv]F)2fCDL2lPbk'ʸ*j2IS1_**>#Ub\_ώej{NJ Qb|G4+71P/X9w)&F?eS-21:(﬜JlW!Dp(ӬP%^0p2nlWbYѩa1rN/3` A"Cq׎)({v`;s9 א~"؋#4`!\=gs'Gѓ=s,8Ł@UA)v| ))wEܳHB!BB$w! 
}*K&sq`b;/B)Z4>n e]ظzrٷ ɊCN^xs4W-ZEy!;@"SUJu.V(?^GDŽJeL#\g[96+]4U{W2Ku4Sƿ[9؊S2~WWKV`%6ĸI`$+&ƥxXibFu%fuUUǫv*D+tuUY4mc_۟ AB2ԉ"Tht̴c{v`;sN]s]k]5 خ} scǹ{4OyAϱױBY88̓АDu.G|85ܭkqL r%l#~!kBM8B!Dy"w_Kz#抶x6d@ξzOo.m~!^MyyߛWa[o3n\Ԕ/JqesF>dĐ)l wmO3L/}? _ĭɗ(5miާ١c~;oncԇl˲~|5emH#Bxx)2{uص[5E&鿹nBG+]O!,s?qNW=7oؗ<ͬm Y?gJΓ ر{_er댏xXB!OYBMpۃ|sG|:UqPo\LvauZWoNU}v4IZ[q= Xӯ).I-8#iva~gZ˘3v(ͤB!ĉ +iOWIxkOKڙёԼfF[J]v<Wӧm*Q=w_}_lIoe(FtMTX,2Bc_w dCPd]]?+Sho~b  N(Xz9D˛oD׷ Rh?Eg  zb4g`~[}Lf ķ%9ׂ#3x!ĉHW!B!8"ٙD{3ckv4|wW寇Wܹ9RԈ &?=&o,x!$61YlC٭/6O{{/㮆'S*B?gW~u-l@fX5ZTFpxl+ǎnfu\]+`׉(.O!,+GQ+@#B^o#'~$W1FfEP~kz?$:Vf9g4_ #O痠:ԯu̬y7›I :`p6 ((ͿK_>b?96ݏ gdԘDlx-6>W^Hi(1lyb8"@.<ΠfbGuҋQ_)a^_KJj"BQf∿JqŸ∁pPp tUB!81Y8(Y8{ׅq BDnB ]D 2*ս[ս#X R8v|,Uai+cSMZuJ9h ZOW%eLntVb_LWAJltUOfܦf}|+h.F}%ĨZ9J,ӊ*V!1ޱ?)M)cS~V˄S' vht̴c{v`;st ך~-kV 1k.9͓uUFOϱ< d@!|;BQY8B!B'p |∥XNn,[-w 9y§3&ѱk2a',"]BQ^]`∧G pq tJzWh;u%.>#Sq BD8A)~y`I/3X"z* IDATҵWUwaQcWc声Jl8V(w(:>#Qb;\&OVN7%er20#ĶU+'TYL%ibUV>[UЩ8^p2+=ftqΩ?XN%Z?iV?kM5_`/ T{_swL<9s>7:6ǂ\d@" XvV&Y%B[rǁB!p >BB!@"BGt"BQV ;8i!B!BE8B!BrKzD% B!D3)paޯ_{WWb{]F±w\F#%'>Q<\e[J51YU!QtPƿbWJ3U[nb4W묜3Uak裫i N] )}tUк(8vUӂ?]~V#9g~m@*׬~M]9Dc<9NsU{Nx}C9\-D?yZ<{"B!(Q,K۶;wgpOB!e,_/1PWݕ_~ rǁ#]BQ^XW~vU:t(qqq^={osS~}$55VZE^Lq;nW쭼|n$((4x D*!!B!8bqİ0VjٳX@>}HHH'&&HRRӧO-w"\.m[/ ͟sc4wwA!B!lr?WNPF,DGGӿ/5kҡCӲ]irkAzύ5y5|L^#Azm8v<%lN~W/$)W_51_ONJь{Gv[ޜ /L gV&)}k!dlG"/=[ !4_Od36ʐѿi׊JUܙlpO = >}kdqsvZF?WՉ gD;E>Ƃ ylvAI1ϋ!aR ^%V !YY4)9Jl NJ=[aXU2i:Ů ÔثgB%Rlɮ ەع>*ZUc 9uug ]pW!8s]kUAfk몠!"u9bQ2믿Ndd$ *gO>TXCl2vIŊINNVZ4lؐ˗vi㠐F/KX6K|Ooz5GO&Ȃ8l 7wFqe|1pY省tġ <_Yox잷B^*R% kޟswA!B!μlRSS ~|~~G\\ 4 ==3f~z<Ȏ;X`AAn||)oWڜ慃4~z.j2%}J(BB+{=#Y'}Hn`n0iEI sG]\GL۫GjڝN~xUoq.1Zňn [pCwL+M{45_| w7KMA|^]AfDT<# Y;dz}Lc׌ xQF4`7?3r#B!14NmפI׬YѣG~ϳOyuUʋ7d{̺,*T t6}"}哶|-BOBce&ﷆpx5_ruǑZ )3M޻k&=;C.!xO 9Y|S25Lϒb]~cd|Q}z5.Ir1ʫxX<"/䑑U1.b:o_D"tU쑮 B!(Ϯ z;֭[={vILL`ԩڵ+WfĈڵSvg@a["lLb `Lx0/NEJhz̺y,9 b3͸ԮR.o-'ݫRgپY~frih?E)@!8LhB8"1 |~g|qIl0U؊4]>hDV H+B!BnU(ewAPPPn{rjԨa{nW√d'rA*3tWX>Y# G[piaV.2=vd9v]HP)(Bà iȐqh@'c_-]U1 \OK/a\UkԠBȟR[_j6W7c޳|}F;$ќ?3AyC Pﻌc3A 
:/I]sx%rGBqc\"]J,˅il){>vRr0JNeGci+cTTqkXq8b2ir:+]&+yTbbTWV>#RY\%jbUJ,G?*Tp޴r*|%L;(_kخY{qD}ns ,}Nx}C9bQr^zy޽-]it"cy{wկCm鞸Cc6z"z v1N=ȿ^‹4$*(ןdl^?3g5i&aGN!B!D)hwKXXXs@q+NAyz-#mߡ͡8%`x]z<|}We">r2mP \\NS.Kc yl_<Gs~G7+UqIv]z"d !B!D /n}Ơ.$ەF8BRQ!UZG8bNر#|E>}QF[l߾둑=tl0a)mw@qD!B!(UשF/n5k|YPJ#)/B!:ՅJ*|izz >ߺu)oWB!DbtL3ە0JNkJZ_]*F2rj(Vde|_WjJ,e7X9+o]&+v%6T* WƯX9w(& #+ %hZ9 F2eTVbXQ!@"Q[fFԱ@j lA1AL(:Z*ckL;l~sM?9[ܮ Nא~"`fk몠!s\=gs'G몠͍mNXO}.VpWP\Zj^QB.MNyHB!BɅ #^8#&&*LMMvӇŋsA޽;QQQx=PJ#Y8B!BQ&)dddp1Ǽс_~ |mÆ  8ж}FF-wH*!B*!`]UgϞnye۶m۶ FUp_|ի m9=>ܹsm&]B!B,>OaĊ+,Áf֬Yiӆ$UFPPlذ5k֐c]i$ B3Wlyqy,{5[!∫]m]PxJ?#Pb/c8ʩĶY18ʸ*eV>#6Sƿ[9 5x3@cņ(I.sNj#>Vs=nbܯ_r+i&M+'Y-61.UƫJO\el3+V,F(З"}۫ĪN NEbvq j?;8Cp(L q̴c{v`;s9 }kHk]Nk_\i.9͓GF6su8bY4s̓9ruʕ+Yreq+md@r(--o-.8k:vM.X<B!βO")Y89hТMޕe5А!BRK/RQrfEhӉޕjItKy<)(Bj]GlGqDqrRQ!B!_KpI(_RL)]B!BQrǁ8?So ơ!!ߩBƍN~ !0)սT~,=j+7U[4RϜmCDj(VLe wUHS1֭QIeiVBW_]P/X9w)&F?eS-21:(﬜JlQ_orj*]V;ࣲ!@"ituU(FUA eBDw;Zh|3#95\lSW/nWZl׬~Mk߹<_9EZmNh67zrP}8dRK 7oÒv%ΩD)U"+ly [7o{~GY<B!—~T!{+/vn5_$dsy7rqi޸% 3s1'j#:3~c?SpE-[v 瓗?w~n7lI~,nyǼ+/ky締e;yjѿdy%n4@򝯳;<30wT_ڵoa:{siTJ2v+ìyfڜW9Sv{ !B!J/sS cc1S\"ܹCr wdo㕫oN/3C \_HJ cfoQq #3##>N[|=?/L^^޾P9M2oD`rҺg'TKyS7k?qڸ#aݓN:'nnjP{ d'< hw]Z$b{$FZdn;UV=4 B!'0*\#]NrU!i7roX؝uOv2>>ܱ]4,Ə[Wq9OŲ?Z5>ͰK_ߙޏW-걞kd[] κGzrwk,{6ۖf_8pcst hOWMi,8ksB!dR -x4|T ʱS\k{?޻fQ7GU}6Igrc% ~Ys rv͋g@'zT cM0]',"O^ql}8hWIɼy6'\8'coU¹:%/nzА}ctbk]_c~?o-@‰>ϷOp,[aΛHѪtM!%k]a-h&?Ϣhmld[n<<FQC9oohޅqOokH<=Dۿ .5`*5ʇ`w&p D 2G+t}z0cW{ǪJxqxPM4qHX-Jcq_ڻL+_X∗(5.gV-Jl[MQ1%U8v=ܸGM61+9VεJc2^nVbkMO+Ǐ"X2>t*S%Yq2nk,WbL)_}BN"H0۱Տ=`;GskY{qDkHkk]񞋜+9ѓ=s,8B2pNӪ!@VS;6ssϯyY2INa ?&(]MGkHdCï*!BQ,V[8E;KPHp8;}0ϵ/n@J[@[kU&wb9"t6}"}哶;`jFu@)xK<"G;`*5AsG9Ro']r VqD<,DQNfAL{^\EٟUX2/jE^HtM؟SMfjAQ@dnf4KNĄpNT.eۼ̽dWF,!B$.R GĞsh-o'v|n#W%Oyq?oţXDzF&?@ yA_\,O9 =/n{&|>+o_c@X2>.frbGLOxftYx@~71W½e;ג222kp-wmWBٷw֫glv8 k?糇7s^ˎrǃ}9^98;/pa4p]5|G s+n놙0K9pa^0.秱 ;O6,RGB!81Ә\һP$랒ޅ2G<GkE>Ư?#WOCߑuTkǠ:ϑ^xbD.[0ػb&)Bqn}yjj.1Ymx*9{d;{zȭ=~"ys΋ 
":э_G*n76:]ƴIzF}}Wgxv"mO!B!QqFi3CRӦMت'ۡSWlqQm_NB&*¼_faQӅiT+qz>vR[4•lGc#lv(:]U*sR5JJ2wX9-د.K+F%ĦLF(㗭GS.±i}J56e<W(%VN;%hY9 =8WmQٿ;C@ 51*Vލ`2n+ck_m^˱cO!MCG Xc[sD?۹iv8_C_կivs8uUc<9s?s'杧ύ9Tcy.IIB^% e< N7ϔ>zXnn.6aUqFddeɢB!N{Xp Nܺu9WcTQ B!"ȣ B3‰ɣ B!(LcEIB% e< B!O'ORpI5_X^һqV͝1]Kz7B!HrǁL\|&3wƤޕ}'p`jIgyR5w[[9Z4]o{>+F3eU!X1eJk|tUp+`I2c$) .˕VNo%ݮfL*VCJ91p^11+iVMJ2^dtPbߙ-VNS%ފiU+:]: d*H6 (cs#T嚶S`{ӶN]&NN*Nvlcl~.sV?۹_| ך~-kVUA1|=ybs_6ǂ\p& B3LjOປ< L?Vһ B!D@!ʙL2Kz7B!D#53  B4._һP&UA# !B!'yTA!(J9.KjQ0\dѨZ؇F2iTTbG1Te\YĺV)*vXUv*VN%eVrz(ϭX_%nexr"}8iܧύmV; J)Y8B!BtUȣ B!B!I# !B# !R wƟB!BƁEUB!B!OrǁBQƙ(_paT?u>p̉Rr20%*Z9uLc2cxWN_TVfU[ߥjL8ITb;]&͔VN7%Ԋ]L)V0%d2~xW1x~e3XM31)w^J2ʹTbZ~2ʩĶ?VE2k*+Vޱy;SLA'ǎ eB{׎vlcl~.sV?۹_| ך~-kV۵U{O\U9M;69CL8B!B!O$B!q | B B!(L]һP$Kn?B!BrmСyoQ /~$$$Ijj*)))ZӺ]i B!DgU wwaTƱ \٘9sVliTSƩVN+%í-J>1YhZI2 *.x^be3DMr<'X9)]&ƨ±yƽJu6e<xA}gZ+&FSei6(t*xn^ :폞T)cj ~zAIUXc[sD?۹iv8_C_կivs'{O\U9M͡Ó!FժUm={;&$$ЧO≉$&&mݮ{;B!BY8'WzuOg ::~WլY:J@[RB!B!o>^u"##4hPA8 }bEOCl2vIŊINNVZ4lؐ˗vGBGB!DyeQ% Eb2έ[.v[x„ }\\CzyIOO7ѣ_gȐ!2~Sl*vqFIM1e%8[*ş(IIB!8ҊhФIY|}ە6  xZ!ʓFIM @!B>ɮ.1 |qF+TPÇOy$I34;#BT~ޅGϷ:&W/00z諌߷rSb|tUء`+q{*ػIJ]&xLnź).<+v%6e2LjQbϸLcxJ%.e:]: TbL:/8u>ǩ:&zW.zkXcwЎ=`;GskYl~m5_k(5 خ}}n]O\=gs`:&s>6Y8pTf޽5j^ەFUA!B!DD E=66PϢ޽{[&:-wҨ.) 
!B!(ށ3"66qQAnnW9vU8oW %H*!2'O*ELn7nM7xxꩧKNر#/~ >*UOWyTA!B!D?55Hw+"## ~W;6wHUB!&^K!ή=GNB|||}f͚ ݮ4:u#_>[!N_Qһ "W(!ąi4*R<+9s\F%G*F%6)Lc2n J~.?qc"{q*VN%يUb\&=V-Jlne<^tb'&FeN41+uVNq:TVƇH%ib*\+%i;ҿ`ۧbw~#=l~,1ӎc !v"`;gsא~"`fk]gNq,}Nx}XJ*|izz=nzەF>2_v6EF,B!B'66`\ժUz *p7mtەF _|H L1k̒ލrk@ϪvRU!JwkB=e{-{wmA\\11Z233ٻwoRSS]6}a>\һR.CD+VM:^4•X6[װrS 5 Rb5]&m$(=V,ImptV_[9esN"v±ks~%1XOr+9Vob$+V^ϩ_K%Tr*&F}e (VU (:>^˱O>MSI֏8f4v9TtS?לw~Ns_6ү5Zl׬STOVPc<9NsiOy-;^= #VXw8p̚56mڐDj "55 6frrrl[J,8쿧U,.pyeY4!B!(a3gܑ#GpNvYr%+W,/v͉rpHDz~s7%({/ȷ$ϖ'-$yeXپpԻ.9UB!BS{TA',x|=-7cGi{xO&e}5&A-{:>{1bss`zN>{3UOvck0.kϟ/5)(B4.wL8b^ #(bhy\o P^s.Immq}.oP߫*NݱY7%y{nx7O岋ng1V,|?[4ֹGjn } w$samѝ'PӞgs#n߂uϣ^w|ߦ AEiT.EA* EZ(*T@ H"'zV}FK'"Τ|='VH2(W ^3:}>;ӱqe耞IJ !)}!B!DNtӊ#t]GoǧFi߾">'X;;V'm щТJGُ.ʫ]_u]GwFӖYkfӨ3FL.Я*yzx:k}_?1F>x$[ ?ˆpyVw /U @ʁ8Z|~zVc{Y%uuF4$Vbڏ ,ɛ%栃V_iR+oSfQgs:qFvcՔ wsj͉Kst Z4! WIz͙I9w?qX^ۖ }?6v~G }}Ѹy~ b۞K*Ψ,:_b@1+tm+NN'N%v3)~ewբ<[77fg9s#; WjqHv?Ge#?i\[WBJ}׸+Սf ŇtϫHCԩ]y/=̈́WS@G2zǁSw}}h#sS>SOBtч!vzAފχJq\*S h[\BL%L"}GY>DehV$NtrƧBgipߞV'{|5kjwgLՌ]qď-zѰo~Ckl !B;q ]{ _t  #,0y$άد1<@O3~>'ArkR$X Oźr']&gWDt] *B=cU0!D q\$6*3Iyh7$WQ;m N':1bo,C1$t.@t1\Hrؠ(v@ bL3]#OBS[kP8֮9sy+;j"R*'rb^+ZmWKSڞN CxJ^egGq ubgVv}~k?1&~R+8}Z kP7ԤyGy]gV=<>Jx"㧨4^ GQqU $Ji9\ώe MȎ$NclЌ>gcϒE`;7>I ? OM-IĶ]ףʒ w=[F%uvJ܇FqѬiY?2W瓫3VzT!j?#;33tǺ!'xS{r:> }/ma tgxdVMdlR/eT\ {hi7w^XXѥt~{2+/0uAOxugޢ^Y1žw:-]y|+K[dV;̪ BJMH}| IDATͱ;7!GYtX-K/5kE#<+6 $._#=F5h4I)@0i5bbXVoQE?9Ͽ4~NGU)].͇-dm^^Rݓ7+ۈisJ)i]'MBlUatX\FyOu([o;*oH~?=?G>$Trt䂄?\w 8Ы &1)3ٱo"%^L!8 \>N%X'$/b<1d;Goۀ¾N㣺"H@8eǙ]nF~\{$oAxc>rOxhҶNcn;?B!BdoG.שqB~;=ŨE_3JMB e? ¦VS}wu*M i3 >ĠV3ベ]dilF3 ﰎ_;ZlU|9[q[>#|! 
VhIb))mate_Hw.:5hƬ :|:?Z̶sM&,|7|̱KI.t7[z:nN}?ҧ׌4cy t>b2_[czbG\Н:w=ɈwJV& e]GwR[7h0^g;xLL!"MFrȔ;B Z/{ hՔ/rտDih9eF)%^Ii4r*(vBBPت_k8ڜJLj*i3k79g>J{3D0r/~TbӍ޳J{C-ְ?9BY**LmJJkͣmE ME)jhYMȼ}mYꘙULs1!"gs|E5ӮsTk+bƸr,Ekx\ƜO k78>_joI1M} ;3j%'噴r._J>uNPs N]^0 NA_ӊu'ɊUƺMsNt, t0NxljZӖf`6Kʶmǔ)СdF߄ѦtyY9B!Bc4аZf^P|nѾ;2{kB!.~]ܹY1 EVte֯_G2{S+"zTc6w;8@# !"d&ؖٛ#xSQUa22_)>Y"5B!B!ErCd&ܱO2{3Mҡs!BL# kשq |yCޭM"[; _Xٛ!mT*Ow='Ϙ0<5xmrrNo%g>p66S^NNcvx%fPaF2%֍JeziT*V)J{\}FeJlCfx9xV跬?Xa若9Yf#0Ϫ`5yoViasKj >2C1j& ofL0{xaG5iaěs|.v帟k+bƸr,Ekx\F!TёW];-s+|2e@fomu!ظ!BL"qulT{vl-u;n@!B!OU,gU7grmߨ^}?ݻuBdqqle5dV!BTyfoBh/3{rU`\x+Y~foB!Bdq򨂰eqD]A!h` ‡Fj۸t*fU01B94vS\thTQ(>Xio5r)4 Wo9cT#6I.gĦhm%6M>LicUbs5O*폍sE^0‡VE Mdf㳙?;xL|,ULs1!Y4Ys<s\j,5 }wyTAXj@eB!7'8ys~JC#ہf9!J*=Ji9'X0B :v帟k+b9 ŵHq^݋9 (WJM"E(\p^oY'ַbMʍ;λjtmB!Bodq\8bu)"ڕ]=ypĽ6*I!1ױjtjS@yq7,)֯_'^B@rAve&Xٛ#ܔ∐G|b)KGuY:TK,X&t]B!(@O9;L9WM`,"+:v;,r>4z`AIaͻG_<'uwٗգIDO]Õ] L0"Q2._8%o✮/^E*1IoMB!Bx#=Gxǁ%z,fmR  >y8U|P0 厄ܸc 9{~ x}\3sֻQki]ܯYkX_1_m '4!W>_B!B/erAVuk)3 Е ;;RF0ut=+c*[6#U_8B!B[@jk^ͪp)┗KߏZb|6!$qzEh4;4"4j/}KP®eX|b9~َF=uӂ6: saa:fU8w.BfUBxMfUB!ĝJwMHͱ47!GfV#꺗?x1H@sӰ%68M$Zs ۬ :GѦ)t`_C8=6I2ݡp!B!GQʫG R@7u-B|w- ]Ѽ~!$8Iڻ%C QR}:'SG'Bqi=ah"{k2.yY%?_;뱑jn,3 6[ͯ͌\rQˬn@:G!VM_K9-sٛ*9)Ө^IDR'sfQbTEiW3_VbJ; SbiT_Шw9MfF;AJlCc~Hc J,yJlChg4n0#>]9\ ܯ!k q-2]4!Dڼq}Bn(\tfo" ; _Xٛ!BrC `n3g0{ LJ5kRlY $""k.nrY5#=J2j֪kgO}"vb2{B!2YΛUߟ xϜ9Ӿ}{ŋxTRyy dtzggϮ<+qϞ,B+(bfͥ8B!Hc AnRxq߲e ;wLΝrͼo۷rM+(B!"y*;w@I5"#wo>Kꫯ8y$!!!4k֌bŊPB.伞!B!BDDDӧӵ R\9?>ŋ8qիS*T膗jn莃s[2l2vBls/Ư_➠ ,}?oPg' _>{WѹH{F!ȎgӨZnv|,y慞J"4{GĖQ\ݪL 'X)JjV#XC#@i9aJ,¡Ϥ1BEsوPb?94)FN%ơOilqm1JlѶ̱Z)f.6c5;y!౯Z͘`#>x5s_ϙAf1}ʱ:3Xzbv帟ku qXbqf9jm(ҵ|ժUS~߽{7W^u{… )rY Dsu;|{ubt :w?܋ׇOhnb^?˂ B!B!n;c #)TP!]9ʱ5RQ/NsexfcPIrYm~3.$_#1*&o*Vc+B!Nwܚ;/$%%yyy@ؐ岢[:ppEΨ콐! 
s^<H'B!BCd@8 JnEdt(I:k6Qk71UjYJqJJ i5[ES(y8>#ӮK5.oZWGI#GB!ĝNMH1իC ֭[>ǏO+xܱ裏R~}lΝ;ohG_VBk㷯x$K_J;}PJe\c!B!7AfAp8H4q~ s{-O<ԬY3}С^.+pqD| ~~+LfjʽJh|o^Mi{֖ZSbfx3B!Bܙ .Nk`ƺ*DoߞM6qE/NӦM r=x# ].+B!6Rd>݁ҞcTDR}\%}Tj}ҞlgU讴9-gZUbJ;ȉPbah#JCcRžQ~F|s}J!Jj:thP?1Ӏլ gi #9.w>y!x΄a5[1f9s1!"g}ڕ+r?׬g=q?ggUp?Hb֘E[֏ggj}"##-ZlI2eطok֬Iy)c6lrY-}TA!B!DvُG346mZD] ?Oqe.\իWorYq B!"GfFs… SUT!,, FDD`$$x"gUdV!DFɬ B!iOB1Y@ !B!D Y< B[Ʋ8b_%6WZ$HkյFqB!26FieoE/{+FN%)ȩv{oh2JĮ1i%V;GiTXw(z(lRb'X%vY)F0rgTb>^"Ϥ9*f^7LlVڜcއVh>i}=G}k1Ϙާ]9}|nr!s|.rY9 xk8B!B!"M2p B[aybCo{5iWUw?G+\\t `Q(tm喒" >w?ب LK8ȏ@~Fok,2ԉ~KΝK~2ͅB!f?ⶑ[lUZ{0_u.u_´>E: хRrko\~ٛ"B!&∷XĦy|eUs^d͙Pz^J4z?#Ȃ5P.E'7=Aٲ1 Zo4UP4u֮"[B!iòYqwQr-V~Ԉ̼SbI߃iۣ,UЉ9)#7JjTa%gM*ʾiр*vj~6!Nf lǃRݶ Ib.w7r:1}kܹl6Ν6!B!~@UȲ8-fWu ;Wf"-Ў^:d6t%r'^+މ+f;O{rhSq:SX9.W2u+{r Vy)ސ:S7ۯ3-[r a~XϘ~EWif9 2H" 2ppWI}+n`R(} hO Md\27 2T,XأD&6p Жo;#qu$RB!BL2pp;{tu>|65vVb (hk֣)F8bm)Ď#vc.hU0G94Z,gU ʹ`>\9^{s|NGtRQADcB!BP2p *!B!"MRQ!B!ZlVq^iO6rRbKX%Gi3r*h0aWb'UFyY(Z1N%eJlUbYF͘%V͡qDiM#X\oql`aއ?>y_[͘G}ȕ}ѕg}ڕc+ӹޟ"IqD@!B!i;B!B"M2H}{{W["%*!N=fUX. 2%un!B!nA?HǰmE] /z?#Ȃ5P. 
'xL@7P94*ԛg^(O9 A#*f^jJR3&VE,V9 rX@D6<ډ{a)=,[^Ż+r.:͈NL._rͰWxMic2s^Rgq=qvmW/eQcH?1hb>]-g)N_?=6]y& j#h&-{iѷ .qm;o|l &uR; 5~[6IȎX#.A2u Wխ{ױ?fg6Gt?:'S9xO2+izYF먕L5/D_4G5'|=g17G*2BV[Sx\3 'd HW*9MF_unH+S*A"i=m8i{/7'SZ>ŷeͦ׀u[+'oK&= CmXKOX9=}[bo=vsC~F<[b}鳇;_ֲyNu^iދÂg+ ZB\lP+kdJQ^d3L8w,I~Ҟ3^q,k&VK%];K_W1T4?&t=_%CTMԌ@4F[8ﯞFb\~?J5䡻puBeR%'{Gdh2"A$NY۟MKjc6b˟D >pb&80SvYת3&MwqDUAd2B!tZl62u3x3'` 3x*74ATBJNPތlSTh wEp㟱6.~uϠ=ڥlj!7eۆsa\9 ̐0'Y8?X'"|Byh|/*H@Z<,;#7q1G5B(֠?/Tzyxd ~!B %a!fIi $wm8J]Oȋ/ёJqO |RPZζqF?% 6˘?᏶QP|)XdzNj$LޏoC@_|JP׆_@h4>@r'6x1h B!ُm,h#W xЉ;swSvA;'Er$҇숀U/bgV|3+Ct"r#7\ۃ͡oҢ<MM'<<~ 1s}yy!%BGu_?|kXv>^O m1KaaDƵ/C< "z)ZϲUbsh#t#ǛqJe#[fw9͕ؗhJ{XmEQG"zyȹWF>s1>G4QeJ}nl㊹˻‡V9ͺX>t3cq#>qkq^5#s8,Iɣ " #\Ek}pE|ùԸe X'Ftw^z߬J&,RǃX5e1i7yB/7G \/5@囇yt.I9G"Oe^DRhY  3`A;wd7 Z}b12u\>6mf~.ty{5D{<°>f=]M/bB!Bqm^ $~ήCg˥߱pAlP$׵ #K\|ӫ:Qolc_)?0k6j -ψf#)` /{m4qY~¸q*|7N@`9~&o@BaWoc4)`C;ɖ_a=DpxyuAUZ}mW!1S?KXR\B[s/u5Ck]&\ o hJz klS\cFė_ٹ'H<ÚÙwN\ NfRR䊓BGB!Кd[{fȐ!,X-vfϞf͚-[pڵ$Q.wySѫ'ݮm7Jʮ!-]j_%&DM22p 8B!ĝ.'3n8l6+y3<<nS7o @F88\bLW! 
B!Y8!/-#Zo/\Ǡ8HܹsOvER~}o~e%2pcv̇gжB!@ fUsLv;FɛHjP%6˙Qbk="#fX͘.v!J>CuN \9d=9Ǜ?p?f5{Wy ofLpS3iWE7Ϫ`:DgU8w>'%ۧ|t_}'O$$$f͚QX1*T6岒,=(@*}rXTB!B;J||<ai,a`+Whϟx"'N`) @!92-!ZUH"***]WZ5ݻwsU/\{|| /HqD!@# !BhYqjݺ5Սzm7/N{ۗŋ0gN:z|>|8*t8gΜZvB!B!sq6-eyٳg=rI.B!hV޴ػˊ+hmR׿Jmbt>Jl^o4Ubbgj#\ȩ~Oa9%v^Fi5r}2s_\9悅ql*wժȡ1s?cq#>qkiYS6}>GDCl6 Ni zhh(~~gϞ2j eERQ!B!DJ;_н[j eErǁHtɮrg&!Bq{ ܹs^>G2\V$Ν=ǪU+2{3n:Qp֝E|RQ!w:z6+׻Gg5jDÆ חYf}tBJG>#;z`` cƌ\367&Ms9lK);_Ie@!BlF #*nEY"8^ٲy#CGřٛr[, ΡS=2{SB!=rhq8ț7oﱱ)Z"8sά_ʽ{sj0d)0WUj0%NˍVb;Sێ=f^9C 4{f)hJ{cUjJuLVoA›1լ ȼ]9Vڼncq?3&sL}d:3KFR+#O<ԬY3}С^.+5]wtfaț}2p B!8Y?3D8(X FXB(Q۳i&.^Hiڴ)AAACdde^˖-Sطok֬Iy)c6lrY̪99 ȖUQh$K,fQ$%]3/tu<;gQ6hѣTb#/]3/($ȫWYh8Ȭ B!i٬ \q-ZPV-=ʢEÔxA5~!"+e6;^Ji<`";wP+I7qrr3߾SS>v.zG LЅ~Y>2XE܏-'wOY|Gmla\fۆ)T N{!; R; Vm@ypcfmKq|O<șx-ñ_>e`'@BBPBhwȂ`A}u,uQр+(ҋ '30I& LH~+s<9B[+&8.9:HyQ⠒x{r q l@Cqu~#yL^:2azʋC'&81!{2 =Cc IDAT4FgS1QMԃJ8(cn K^ WׅӼpkn>=PYD@R9..QEDDD]|)ĵv~Uŗ6_/x/|v=?4hȗ<$o# ?^t[my5]qݖY羒#شZp a]qMC\N ⎓x,'ϧā89@* ?eGHsv(=Od$^8S.KKMŋB UDDDspIrʃVUre11T4dyg20Xāld≃r\ߧT1S&PwϤ0}GO`)Rjs_gOĕ>EDffP]Ī {k&1coi1ju^7ubf8XUg/"R(q 2hP|nP*Ŋ+4x١H96C,)s4V~ܙKy3{ݝYa`ʔ)NFDDDJQ{k}[ʼQle޻23K.0ҔyAJ'HMRj&LߋƵl@neBZ7i9wND0B!thӁ7 ~'dyې3MUq$ 0q~mt{.Эu=:"j7?iJY\Ø]hs$w_^؏)?X7\>;>}F+y6f}#ض>͒YX\5|>oYeޱ bccm֤b)TEz UVIAW8f8%?Dr}Mcٶ-,H&qw&dGݽލ]1Sy`x7dD,6oIH'VƷi:?N$t.'hD^ՕWVl/18]+i0j)/*U]S6fYh2I=9`IfˋNmĺz]$.?|oϑ}=:::_@CDDDψ4$ rG}|L;`+({x|T }i9|2Mҷs$L%HɌzx}213X 2!P.䌳=\#fOzwiqYnd̻Q }-Fά `j`jq r)Eݼ^C{0*eB>ؿ+M1&x!- eSL%  """"rʾRi`a?c!<11~{w F-Sә>dlI+ "r&K79lnx9 'ϲ9z$߻ZwfB[sr0I] yy֦X7w+ZOKL3o`LmkZÍ:v?ǚ$ """""ՀRj8g~7&LkBhB;^+\̂pl.J;{h-fH Nz! 
X*|Ż>6-v#'¼pL xD.產.d_(^ qKd &O].@JHyYEDj3Txf9-qdet'{C~+WK vr!; 2s#7!RX:;LN,{at}[H셅l:b"-;BٝP|5cN?{[y-E" _#_jW=9= S9{聆*TY&qɥ<pZwۘ~h0Bk #x,9<ÍQ]-txq#}r_7|TC҃ \7<ZU >tx|貇edzÒ .>щO ];x'fnm⍗ĺş7Sg̈۹1z~rD+% DDDDDD'GMOsɞ,O&?(O^qlw2u l4pv42ةSD"R2X[ƺ5rm5{^%G&"""R9b\[Ц3iv3j:5ƔqH}KlwxU+iJT[fTkJT,N] MH`;/ """"""R8 2 wv"""""Rse8;8_G\\\ҥ ᄆEBBd˖-L:fnݺ={]vq*S<)q """"""pPƌChhh 6aÆDFF22%7n]w]aaasNV\zMwgGP~|||8q"E_>}eÆ :'N,ץK>L\\U%DDDDDDZ3f-iӧgȐ!4h-[:qп[#>)) 6{]"(q e\cNcf:;rDHOOg\;#11|6g@I|}}ڵ+)))|Ǥp%L&_ qZāِn)Lll,SLa͊*!"\mڴ=߾}-i`u%Rk׮yf[#=vZRg8t4nB0{=T'QDGG;; Zli{~B[$'װaCsG|}}m](q] yl kA6=3aܕd~q0Ӷ˵v~;PX}C`00͜?PzٞǗRS mܸ'Vā%e;}`n|ۦiȮpoͷ^OHXb`ʔ)NFDDDDDjڵkp:å&q`?҈\_i*JOvq؄{Ѹށ h{íLh_vq,9IԞxcs_? ËСM:p/,L qsui| oL_qo5s,9y&HdUHyLї֝r?xq3{";yl==iEѼcƼW37[͹]X mkbc.,*)p5$#GҨQ#<== cʔ)pi=zîxu{+/&tjM7 ve.˲m _t㞝ǨOrw&dX0r϶nȈY%cDmN8ol~ܝc}9,ʗqۯ7̌`u:}m\< ޒaSR* d('Ν;簌}PG?n[vL_p`K-DSL%  """""RRv>--RwӦMdee[fÆ 9sƩ*JOφyڸqy1=yw7HrcQ2|ZRz,#|8WjDMny4VrffdX՗fA0\ޏ{+ld[2Inw}<}Čd*G}@DDDDD٬w.\P۞#ϟϱc"!!!ߝÇqF׫(5~B~SǀtK.-r@e׫5ǁr^{o)[I$O,| Y|]&c齕Bv6/dF/Fo7WCFtv75=ޝRY<" tt|i#tp{i}e0I9Y=rS\=4LADDDDD~|ѥKv\\U#""ֈ7|ήWj|;ߓKyDH11`*;*{GXsDmyrS[~!CZKII/pv'tj8tZO>,Z(7`߾}m۶E 0 Exx8YYY|7iӆi۶-߰D9|D׮]g׮]ezOe*DQߙe:n85joߒ%K8qD #Gˋ?;vP߃5+\zb>64J>wfܛti?R{ >Lrl+Ot_g8,/"ηiӦ|I{NX۴iSws"t!k... 6 _~~ȹ, c ://2'5ӧ9smL?# }vmۖ-[b2XlY~W]5+Ap6ڻ#+ [BiәE\vՋ Zgzr(rǣ鏼&!rnOuHaLXx un#gz\'Sp `ȴֽQdqA98-v0/ xq n˴zIDAT>S6e9z&2 {H1̈eݯm€i }(L6Nv.z[x .Ө<>緜ЉSl\2`4~~?cV|~[/m^Y'mK&=3WP̽=~.`Nǿ}Y =&>ϻOdycw3nV|w̴ek/6e7;ҤhHA]XÒy?xf F"""w|"##l6lٲR-YPy~>}zexws5Šqhl| <|;;qnoyۼ.=e?3sJ6/F}kϓc`<iB,[r||ꏋdpï3} {}+y]8HnG?'ATMGCܲ)ޕS?'4iʤnM,9 yd s7eGs|t>#9MOͫG`oZ,aaʋt~3䳕ض>g?[@u{>6ͱl%_^ m?v xu/+?Ÿ8IgyW|||'uKg?@FFU_~} >?]߃橰./ZU8VeAR/o˳]輀']R(BHy4c kyn $ah"NhC Nn); oy:Ivb]$dwy2€,.y`l6s6\CIeư4̋"Gcikc+S֝djq_M:Pg OhIWVrixf|| @ތ ~5g28O CC>zw!=9OШ=<믌lQ|Aq[MP/`ϸܵϷ}V""V! 
[n˪ iii>>>[0X,L4 ĬYJ)`k7|p|||S{̙3K,WU8Vr*X`&y^x"Yx VXm`=6vw2\}hb; SY-F?CsZF@?ny5xjL"\x@^yrLlG@ZxXLi-x83z8ghiQ>w\n&3s0欩77 8[;Zns!93}φxx g#2-:_'8ROq<4?95,R@1> U?Nhժk׮-<<;6}+Ow޻A8Gѷg]m, <ڡi7v}/cY0?gfN053k^:TVȊ+ؽ{7:uq`2tqqql߾=*TE8:2j=6;AA%W!RSr"8S"""""Uٟq`11TFp=Kv۰Ԝ] jc)E&f3&5Y&5Ya`Q,hnWHb7 9a8L2|(/Z\H3|(BBC8ICBBCto4E,[VFCer&=.x^Os+6v@F {d=2~fBnDnr%`/"""""""צrY@6WzX{XPkws^׼ISuEDDDDDD`(Mn~؂mBQkoք\m~0Cz8J]DDDDDDD*o}\U.(q """"""r-`m,SsW'  """""""k{>`hJ\,f lJSDDDDDDD]~?;6A VIENDB`pysph-master/docs/Images/rings-collision.png000066400000000000000000007007221356347341600215140ustar00rootroot00000000000000PNG  IHDR XvpsBIT|d pHYsaa?i IDATxw|u̖d;I !jAA (]P~(** z zJD(E) "Mt$@HB ?<,|<2|3uw{e RJ)R k}J)RJCRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)FRJ)Rk4(RJ)lJٲeq\ԩS u߿?ӴiS֮]Oaȑ$''KHH-Z`WtRJ)*Qogر$''3w\zwr|ZnMVVoQQQL4vڱzjZhqylZjũSxꩧU999lܸ `tԉ+WҶmt>3p\)Rꚸx"[n!""߿?.\(RRb۾*}JU?| 4oޜd^z@>3cUJ)3gNs$''~ۧ!D]5*Z˿uYg/IVXpG ?US:t('Nևuzޥw]zOqov PH͚5?>~ ;vFrݔo/Db"/G.URnw:*!!!Aϻ.]K"kw3z׮]fѢEWϚ5e^1?[wϞ=l޼rYQQs̡qDGGCӦM/xeXvC)Ra+ITՀk׎6mCEŊ?>9s0 fϞ`L<ݻ3f"##2e gWgܸqjՊ:0|p ^#-->答RJ?7V(m*Uj@>Cȑ#i߾=}OAxcGO֭7 **I&Ѯ];V^M-XbŊ̝;tuBRJ)T &$$pY222>}3صk7nQFUQV-η~{. WǯOK)R3qwD?:%KPJfѷo_6oɓ'RJ=Z;wf͚)OII`W8pPN'III<Ñ)o&XJ)Rk233 M/en&Gjgʕ|W[;Q J)R%pXe6W#G-[| 4?n_h,RJjh !pdff];`ӦMWWh RJ)T WScjԨQ {UQJ)*b]g6o|9sиqcwٳgФIb8: ZRJ)U]+WÅ صk-cǎ\.fϞ`L<ݻ3f"##2e g_{lذ'ҥKʗ/Ihݺ5:u*3VޠD)R+ڊ_9a,\ b">>ǃN5k0|pLnn.uaʕ4or111SOqv;Ɍ=aÆ*oRJ)U]rС̙39soʣ5kﮛʕ+>TɣD)R+IЕRJ)U][@RJ)J8Q%RJ v4(RJp6禮8jURJ)Ui *I4(RJpvQLUjӾEJ)RJ)`RJ)UiUhQJ)* (\m*D)R^ &oWn@RJ)J8(I4(RJt bZ\m*D)R3 k**uiQJ)*Lj@*@RJ)J8Ӵ] hQJ)*4DRJ)T g3^ O iQ@RJ)T Wlj'tU 4(RJtP MA 8hQJ)* ',hQ@RJ)TIgxAhQ:x8u. 
ͧ~Jaa!ʕcڵJzz:{ ξ0 بHN>C\ʗ+Kag)MxX(gH*U8G=z|Լ 恻gۏ` > 5Sox{-t>ѫ99g69xsc5Yf][YيqR*54uTr q.g(dxgCpcf kpY!+}J h͟;:'ot+YBr;Y˭ I| l{#x֬%ثU"IoJd-^'+[g;/D>܍'9'OAFFw_`U" |R*}n7bsp8 _>y+睷={!- g`vavdWpK!1±uΦYrҎBd{un~ZvP%T ?wFÑ#Ȋ8^~ [[1l6 <ӭ?Z(ILR77BaȩqRr5=mE۬rɐzȺju3wR`rk>1\(WGk`H=a ;  (9r{x.dSOja_;,s à̈)((?.RPL~-"AhɜL fXE`VY& I {/\8gL9+Dʀ~Z.,v}e͇ԟYʎ>>~pҾE3NCxFDg֡)"Т9v&//jJ=iQ@R^Ɯ9s1cO>$6|T9u [&ַς w#̟u/|.Z~V?2aӰXf7@xF0^(,ܪfX>M[7_P>dfvFmفᰃi>{z|!o9VXO?ķ~KjjW_{u}JMM%YoHk݆`sr"fj_Zew XR *Pok5ržM@B͛Zn>N,}˪AM (Ĕӧ{0c/cђoܜ'Mv 6#Gxb Z?LJ]eȥm5uVիǖ-[[>?(,,={yvzΜ>'0)eթENBU̦MqN61Yس 6,XT67* |(<vN+ Gê;N⁠00/.^f[jma#A0e<,A 8{C7lNЍx]ϒt=%mu !!H(AԫWԿz&b[la\S9}rSfhv=6'tNy=d#^gCGɭP oN~s|@ jBl8sl@ /jVkYѕKfA@8}"5a'j`|wnmn9g69(2wJ}R8LpgOppOHSn]bccWvX/!N0 ֋P0zo.Q׍-[ [lևKn[^~e @bY;_nw\`)wX.NreWk)'؝O0z+w})(6xa!*QhExP!:ykpk3&~ABׇ; ~BZBAvk{rb ҧD>K;b+ q„3/ )P u8"CøɕdTK=R*Q&f\.1ZZͅ5*5/z eF-1}PČ-#>]Z˃]Ya?FhwYe0}!$L?"4z3*5?t0z9i7DxmWS%?$lo-nQK&To) 1V*Kd}*V[o7X)K0 1;vbz> WM6]R}'"RO[{uWJ]E>(<,:Lj֠.{7!'N-B}V;k#àWfR PV83<4[ Rj&cQP Tj [WZ6i՞}2zfۣV/c!7͇{| Mwtl1r Hv.[3o,iX=N>a‹c"5Q*_Jڷ9]6;?\7S&MD-X;ggr6/[-jUw[}0*kL iӎ1 j*Mn[E,>dZgpӽ[? zkض,5 Nj ˆ'YX5F0jأV>s;ހpd8Ƴz5RTc$|wb~6m8|BUi,UeUr>iGqr<g[zjnaݰkc`+qn׻ `3`tb5Մ>(fh30gjtt&߭/p\pki ޽nsO` ;NUXsh*T C{s~I[X_Iqj@ٕW7P]ǎcСD<҃iOQx,JsF_;݇!!Ⱥ5a.8?FS<=1q3g!$F*&n5b~bÂ׭QɃ~khůAV ¬!Vب ςOB[>q?j"z`L+.O>úV-.H=m(gPKhǨWOFPD}["kt*Q\X W+^Klj[Xfuo;w.'gBЈQFւaECv IDAT9Q߀ӁψG]h0UjuqZ{×v^̆5aLkӞGCQ T3! 
_̓H`}O״ξ/xȻ p.gX"' =\HyOAb}˂}Ya|y aF\9̦Mx- cGV[WaxU Dѣ`p`VQטe"1ֆ㬎Q·omovo[MGG0Ѻ;zb,Y T cЩ+`m2VV`$V_ibdFbX|lb"Mޫ;fX_l!¹ l/h}[@MAANO0|6A1qp;?B }>qxm[ohoؼ.b`݈}FpFR%GͶFjւ YS+?E15ų{8wa=< gq:Si8#$Ǽ!:F M[0n 6>- ж3wgAh MhVip8&s|=9UlAo0zqԪU21ѴjՊtRܿE.!f] ) ㉉A&L83gPPP]]Xi۶-ɨQ𡦥ѿ"##iӦ]w׹x"ɘkC׺u[bA"ʔnO2т%5[)+8|~5 Aֈ6~~BB^i" ~I5z]"$NjM~S1&MFj\j%G1mP5ΐbem“ f6bs`bY?1[ S;Ihn1B֩=!FD1wG91:u*UǙb}ߤ®L!8D'یY>k/n^]1B/|+i'8MO>#ÐG2&\0ħ~uQIiѲdgg_ˢT9t萴l}5T(/@wե @ºh-#=9}R)Ƈ+$1&KP1xkV-14gy1 CoPkd7)WMl͛Ϝ5!7\ޯmHs!C|}:_!p=rQIu(B&GpshtFa1KYG&l61u^ڽQ煻 6SVxuFrGngϞk}Yz(X)4Ӗ> ֡C$$$DZl)w!FCǜ'5jԐx7o^Zt"Cްaälٲbk릮R@.\ C Xڵk=}}!~~~ҤIYfﮓ+*U0d񿻬K/dlթ!yskn5hdB&y0RM!$FW(̺xb0/4Mvk{Oe͗-+ƀUJ~_ 7v qּiZV` #*f`k>]0M1"ĈLJb4oߥ) ^duέ[$0H-ęq^H t'ʋ}#{P}0blaI+> eZ7{|X*N(;%ftСCQjdeeI| PV_~D$|7Ǻ~^Y(||Ԛ |9KrV@׭y)% U* vto@LSňb/ƥwg #abp g:b >NW$5)t7鯂5uqb$[GV%IѭX[劗6YiB@\S\6" .p_ʥ>qELׅ/ ObE! \V1e]kqM]|>/&{֗Gv&WWGdb|ˊzҨQΦMG-ZO6mHhhL:U˩}޼y7Һ̓߳%Ϳ 6%EX7[7zBۄ(ac0ܭÄ9VnaՆ6!,[70 o+ַAM$`PؾB*O盥ۭ̅ $"o=$_-$|uC:{7^` x+fXTO$Gѡ∳#L'mKlPJ]  aN!j| t~?!$ kDxqs۷:c/_ Ͽ֗H0i$1l6I8LB)RY& _M0GXL5Fq͞"YpV FbTco=Ƙ@cB`Q3fu  AVkKոm ;L$xyeˈ߀;l.iXl5JN +8"γ>[Kւ>$fl[qq^&>ZSFVNŬ^:NY>+Zequ%" ]vR[a[Ag61wRS%8?UNG8ݧϵ|Xȟ@b V^͛o}Ǎ7ԩSiӦ Ow޴nݚE:7ofҤIuJKΝ=8p#&] Z-|:j+~84ak08rAưp$^ݱu Dud{$![>p؉ ?Xoʗ{KWNV-LÌs8El^:pqMFUl>NW}ߝ1]=Lਖ*,+m w jB@$CLV4oEE>ؓi/]8cym̆v fطaH;fP |۬YYdddV^ߍpV}:gry E ea7|pԪCķ0>阏>gO6o\#wD:.OSHY*!H9SX%cd%xb)ܱ{b<75!6;P>o qC`f- Ŗ\a&VM(p9ע`v /.7#g2ٳ+.ѡ['}I@aHb}^i 9<sê \aӬ7>*9<OG7B1ŋ/\UFڹs'5kMyJJ v|/u'QȒ%K {W0'OiӦ]J*4jrfo߾l޼+/((` 4_Q^l43,r8r fz_w!XcX8;fv+ -l݆1ø"A`d? 6O`vY MBÉR8s悏z6>n a`q~<o}Y%'ZT!`3l0`Ο??'!''t:'"Ϊ\~7|#DA0iRd7B6x<ݫ;x<}(|9x`*F[PXD3i=o p9kp[&kL< p;s;?>Zcpc<9d|7k7fhkRx% GxB7X*V:[`O->#,|%F|yG}7PR‚e\!p(Mx6^٧ahpYo|0B|. (QJ;333 M/eҶm7no6.6q W5n-ܹs!ӦMFGGKϞ=Sl21 CVZuE3<#+&X%cǎ˥zJ`~+Qp:I|O*VPEWͅY.| 3/i57Y_(#HeOKOIL)H=İ$n? t Cj%H_cJTFS!Fq nu$WK16iI ){lh/frGBVVs}شE ߡVH! 
z=Mjֵ`b?J"޲PBb+:D_֗OWPP /tlo+}.lЩ$l~eWznPzfbk>!}(O !JD1E[$ѳClޮ\X|"$eMi* |)Lo@Sʽ3Bb'A8onzi1C]eB.!*NJ-: 1]>V/CW\cfĥ6!"Lq )O<$fͪv;n]Blya[xuyluֶ^'zv`5m+_Ah D;eU^qcu0C{͛\KT.`C՟ԻMNe1 nAz-(rp{=P&i=xg߲Duj@_R,Au;\ hn:1u-M&v&`5@tf_@`f5ν8& O=,nw#0f(Tb=xt$30 /}E0|`C!M|[Sp9y_ UYb9aÆTMI򕣮P2 tyWPM/eL8Ç3rHΝ;ǹs!yϝ;5%T PTTĀի7|>_xWyH_ #2i#;Jfx2p=c{E|ذh=+`@82Na{] 8u O㺩>N%_{[<?Khv>+Qm9eflzOHHjzo{QA@"`AT ]@ M^C)v;q|,3>ׇRx,W))]j9fa %8ñW U(f&U8e&sqf/;"JQ\ ni\DGo_*9KUM"GQp*xV[s)(+Ciފ`[%#fA߱ b)Z *8_>CY=e&R2I5|n6Cn&xX'r'%Loq6Ax ?@MPaĎlOhN+BN %~|l \\)bd 0)\rRRZr:J| +Sx"'.9s3o=bֱgU!T?*^8<ńwFᬧ`C#5{^5|.Ju (dCˇq/w/0k$|^4{փ1/V+5ԩSbs_QGp2㍤{UT_~O@jj*`hAX"nMz|q߿`98V H@@)7WY#h׮~d9O/c ,F\e%xi>`dxz>4C=!?Kp)[8)#>rE wB\d%.H{O]ҧܫ!NP,Qyh&6U5#/-y88Fz$ϭ&akc~}ܭO4[sY>7a]SXL9?we7nr5x Eg{{Ix ErD<^v6ǕwLrje mF1iN3N\aSx IDATOQ1B}X.<9yq8jX2q|5Z?q:'[m9{']7@n ua*8 O{tG GkmR4ӧ߯02#Fݺ' ȿ;r!o~w]YYs%33PCSOaÆߍ· † ;X34h狢(um˖q.\(ػw(׀-[fx!w6PP6s{]KVcԗ ύoJՂIGI{뺢'Hx.}{*bT(Vh>0/(!STILF;^ŤOX !f?("5cgPWB3&sNAWU5E *^MDDY$v8&xQ]%,~IBG&qaq3dg5hAn*Y+R!NHHj’ 4گZ+arB䄄"w'fh^I޽Ś,rT\=)l¡+[+MM} o#df (VCG['I!,Y'Ȯ"U^{LTU筒EkDmwj/{E".hh>vq6iEUժ!&_6J~7'j@Z|8b =G^J6|F2vXAAҞl$`QtUzfH@q]C}DU>bvYi% XL{<1ի)K wftj)arBLES~5]¨ O t`Y_!G 1b˻ֻU~߂$V~UVŋ?EQ[nxbYxȀDuɹ]qq ֬Y#;v,6m/oU7߈(p߭oѢDFFYf(m۶JKKrʒuwݡCdƍ ,EQ7Kxb#E|7+f$ c_&]4"(f.VJklG8$rc""G56߱2ĥ5j i[Y>brZ ͥJd#^ٲJwn.ZuE,&uE}|᧐V7\~@-<;M5n/m(n|$ TU~~*jkoH>}Rj(wb:d* ;8'st"4n"XC:|]X-}{6%pi]"яSԺJ*~WߑJgV&CKMz {hUWk0 !qH\*4I\Ro+;=B+0DIޑUĕ"3%HKKi]ETyttyj(cx4_>P9˦n7KdgSC%o5qإҀg/1P|DJx$߽5=PLU]kڳ( {ZB^R˃BT0yo` I5 BB@X_xX\֬)iժ/A;$66UUU|i1nb$;;5w)O@.[`lْf͚1d>֯_ϠAXz5'O[}L&Μ9swPrevYv-ݺuѣL4璒_?[S~}ǟ/=… y}4PJ>]Z#<gAi)8]Mx6x-ZqԉSzM^}|V5lJǡ 1Hm6~ON|DF7aRCGG h?AYs];BL-܃[}Ȉ|>g=LdDrƣ7KN} \X ݪPtZۡ*76P$ԝޙh w"^θ@D+HCĿ yGAQ,&>zh<[!W磏~P%,m%gCj-8Ty+ىDB*’Sy(U;!NVM_a&ܺuO>?q 3WȰ`p@n N%υ^#?uߣVǶi5ZF>HQ1}SSx<l%vx<l#9|8&|5-˳V/~gÔs?<:{-,"v4-ȁ(@ܾTjmR;xE6U(8vs$BTN-Ah$}E` vrqDush} EUp!U%0= ?R;/){ FT8bhz4QmfRxvJAZ.8;FkQگ (G4[Ew98Y3V=&ְo\gb=y$^׋rtt4}^`>c^Jaa!?#7ofll,^#Gr ͛2|p "˟RZ;w+5!*ILTU,OQ=! 
;~¶ ³SϾ%!:^uQf61O\>D5%F,~k lSCiotfM2^$}=qDKXvtU [C:Bb[%%ٓZK`0Q4E%Jt$QM+H`z&U1P>f1X$$;V4&&5NUWE]5BLNXm=(*ɜx"ߏլIx7N4.iIE3:jҤͥwE5iһdd'4*Z&cqM+E)'ٻ\!%OuqƸ塲%zĴ$T4(S5oQF5 ꅈ -_.YiJPIT*UefV%['APz4di8?#%ݢh]fAA5*.>/$kK` 1ɛdz֊ob,y$xڨ;ɈEH= ՍU'q3BH`2}wKώR @܇։rK~Q8gvB@F☘ j|eb1bxFSl׺X+F"8U @4H"vj s7C:0TIL!4Ez+Yp9ݸlUIL.C{_%܈_EQQ4oě):ZvAARň"HT@YDѐt( T6&,I&)j#Y )>Cj`hKΒ_g W"oZy$rbƒܠh;H.8l7~ɯwu#/ fWUL򣠪wl#=u>)z[5lXrK$ xF H9rL|$2 ? %>"`91&28\0L s&GAKcZyOJ)rN@F(̢xJ0_Ǝ:l E(]F 8 b'OAlf0/e& ۠Y4LnC4N9M w`vhNs|]t^b蹮1Mct*g~؃BE,l3}o+$6.ֶSl&~7\GsX8gh+Όx. D0Dqx8a( P"P_|wNЃcs l]pƴι78+s1AIkFbHH5 -M0Xl"B4xa98eS#~O,Z%T#tO BHxJ5jW<žA&]rd.~VqV4}oC>o6{=ۥ+x6=&aG|8*yHOHmKz VF7k㝱IĐ}p-7%k?߅ngk(¥osQ4'Q_b1q2dGzt D-j0 -̝vzFXߨ7h|15)h)(N176P9#?`S@yRο-.(VQ-Fn?a3#ZۣV>u`{C֊uXk7?q{>L*4!ťx VkR@/ >?ckz q ˶O(P?sQVrH"#{Hh/P\PLdV8{Di'qxE!8={/W]ov)]*QsDm>VjܚƟfV"n4n8*]J@Pv\K 6ʮpsa4Cc?͕8hD_`3#ap?%ȍOC MZόs8c4#oP0ē:#S rxꩧs/­[a׮]\,41™`CL|0& BA.t H.(W[+̽oMᆟVM$t'-rrKsVQT/qn N̞XGqe~B ɯnݞghQZ0=2 i4*ܾRH09uGdвFIe%\ιG0LTgAI~Y|J GΖsW !ZEf#^WnqۣG7M ?Pty\?ǥ*U,zż }<_i"TJK/#u @:o"\{QT&Ԓo5!HECT/Zf+$HomEABS\wfPQTcԀiIeEAj\O+1fQ$5~{j76t&&"M4"8& LƆR)uۭZt;Mš$Klҷm #ΞbtMTMliF{0 Q/j>$T34!`8Lb:Fp m'nLLVU7JI*={%U-(C"LPY)I\U|-Mr}?l/Tm+Z i.q>Ro|\QzEE>DLVULM6 _Iw'*aIv=L4*%aƦ[ 7&eH$M#QIe*V*Jۢ5ɚVڬz"Uf>,hdNJbE}q0\1&eqQT;|7 Bf #FO Bz8I,gK;Bw.@͛wL"cbѹJ IDATD8v.~i,aiˆW_ʄ_2o &&^q،K.COHI^,!߫fvyt#aRwԚKP1lSl&iӳ,&m&:& 1V!w_4F_pUi'C>1:܍DzM.<2mk5H;QW46>QZ5s_&o- @. &mbES$>x_ib3O"ZU$z?S~`;RWMqU d4*~dI3v;hz._Qv"}C!-SXy\|7'= 9p TJ7og20=ܝ77sc{\^1qŰo(-$e 'ngUMLIiBh3Oc)M2&\~*ۣ.N*ul[}iVz̑b+vK^Xɟ{or|2].ė_ʙ"*g H n\(j)x4_juvn)rH,6 1dc'b놑4> @)9BTT 0i>"nĖ.3P-&vNX׊g惪Wn^UE&2bV9is`[`y;/8?y6zdbR֮]'Eݻ9{4tXP,6uåMǮ0m~R2u᩾inŵLy jt 5)CԔGsj>Sè:9mWH~#sX]n Κ &),%]? 
oVSZh,ZtB<3 +_NKbjtdU9*Y}04vLbO+B7+l^pS'4WocM2;8ju2R^+i&,OBHB8_l.??AykMxnƧNp|>JKQui wٓ3 ^8B!$VEۡfc t O'I؟r+-:k׮?5.˹rl|.x baKFMuJ!m^ \1 {L/Cq >O=Jƒ_⨞șֳwl<(QȒ\Bp8{ϰ礨_5`7 ),b2<ͫ 9s}|ì Ow8;čn~ob=S+m{n?Sp¼Rh Kr?/Or(+P|S%"͟S{9#Jٻ7K upLKTD8l M'u~Ӊ!׎UPt eEB\|Qge%\f7&-.jdfʎ6:Bϱi),ٸD=:^/0~q~} 7@Րņs{x!S\\ rb2;7ug#?oGڏv)` .}GkPtCbFhM(¶VXxX&E 4[U{1ltR#n^/!U1a|.t2qjmBf( CQ~Qj`kVNp_O Gض>AfLa+D|1$7 YFCYG\6A/aub8:vZjf"ey(ظ[` bGOW\:VuœIs} p4Cz`2Cΰg O^ܾتW"b4Lѡ>u}r\R~So`1|GHbF >V0_n wжpD4( +VUqM)J[w$X(tHBEO14픰FEW3 $bJD0yQJ)D32bw2ty}y~g Q5?$G#Aa[ }PQiƈHtJ*!uZ٥ig(*R]RHvTUE*V;|! 4NnXLe'"f*@$ES$v3\^S W]F-I- vwT*/(#IkEc2U&= "nO/1<+1=jߣ>61;D a7}^1|Cgu(=y d6JVrP(K?9Qw%SN#>}Zw`xq<~1H5J <2θ&@H1ʑ^?AQepK#Ŝ!O,=>JTIٕſS}G"Qzp4AaE}}:S~v04Qxz''愛o_dϒ"~7cK`E;ۖ`s)REy&OlEг9l ^/ZlC١<'Qr4nŚB8o}ĵ]);x5r37Pg_~Mi˹,+"^ 鴉Y]~?H%>Yqg/dց/?I@ |Lh򅽧WTH._Zu2PtBu. J-+afߒHp+~H"*JJMs@V|p" Vu aWiy|,Hz #&ݖKV[[r vOo3<3®_S7 gy=+a]BiA1ME`'i276 '0އ}GoPܑps|SݟGu؁x ={+ _3"7۟Kq0n Q'X Vތ53fܧH\rzzNO&tN_́uat7>ԯ7f옙lj1qxQݝgx=Gs]Auhs13:hb6@i<Iud7q`4ʼ1M|>־}kgnaQ`! (+\z1VVs Y=)-aB*(+άعx=o\$/Hh{(/|g~ucl-6F3eFTNXΊTFA}&8l^W"߿+kC3lb,. =;9դaScr'RB^5)Xx6nC(Wƿg*%mpczj,듡 0U@`` E1"ee=B,(u-z?)<)oMAA6mEƻK$`IO6gqބs<;uBrGL8-6Ora[x.\%OMUl'q2ξ_0rڇf1Qyx#8ȡ@"E Y5t 2!LYT?MۤV7نP ޱ3y29y'Ӧ2 oAN,؁2군nMnxƖzH`xIeg<63n[7u WϖҰ{ 04v>ޛ@<2%=x=BVHnŮ_z8|].x*r\DT#2<%xnx#p;oNo%Cңx _ 9sC`  m@T"<9x?x6Wm/>TʀL<&NZS1c^& EO(ޱ괆ˍŗ/BbeXݔ&mU*scZy KHl\D7l#M% =MSǵ'X[suiB$OO,/(<ǩ gX9d5a~f$"/'_6"N EyTb\>z$O$/٦2׶e`u[ɱ 8ރ_+{js#rʹǔU9[JJJhҼ9,^ \yz:xA4X8n`C,'FQbcPk [bKEn~z]#6_{cWɸbeS!] 9Oc||@z];cϥ`KOGQaRV;~%4@0K ]T{GU{{$3齒H{JHJyQAT;AT@Hd}9=9;;*~k]Lvfzf}Z=/:\DUNi/}.]̏3ٵ6F积H˪7nUN|͟dcZ[ 2ʱI!>Nν^,/ruĎ}FlMkpL*㞿P>0{wt͎'^#yAۜ:u׮ӱGAY)zp\>Sس귁7Ս{vo ϖ4k\w+,u)F;IX'ooE5H݋ Y|y=AMQz}O`vZp&m62Zڊۮqe5X| In诚asIm{cܹZBaDTs~]^~^qȪvbS}ae!m0?S4ૹ7pZ>6;W%2—/ X=62{R.6c2+=&3L|H}tDIM XNNU=K Ҹ~J64<<;<74 $ʀ=L)58. &|6|sYd|{h#9Gc /A}RZq֤z(}@v@R2<6ܼV;;rdԮiZL?|GY=bWbR_VQmYdb:%JexC„YW[YԶm|wz(bT& -2DlU"uDW1Mtiq 풥Uՠg2dqcI|Z2c@U*p( 19R#9덛+b^Qz/s}5!5o UbT5KTIT R&! 
4]UCcǛ,m8 bDU6 sbĘG_g0ny]Q5E̮'QIh&>a6tI/C"MC$oUMod筤k E5i<=DPK i(-X*(d9$Ƅh?;Z}/C*A/f!$D4M޽th{k3^ዝ Ǐ7!19Yp%}V3sjJxo>"ߜ"M&6쓨%3RgJco" HoF "A *I%#`K%s5*M*d4yl?VK4&f5I7se4}4I&V+i9"RyRsAA|ZK/!Tn'4@ɪ_I}ǭ!qɖ}&ʯZ5Mxq`~-* b f3kp AͮH䙷B幷OYVEڏ D$2OޓZ!"vY vV KŠJoFj1JäOb"sG `£E]L@f~(=kvۻE.Œ#{w7? =|R)4 yo{Wh@*V/˚5k06VрBЍ_p|r'Pxo,:cXvsW ~)>Id.B=әlx֦Lxִ"85P?V^|;d&$$ iet Eׄbxw*Ti=T؛B6&+Xu(z-Ti3q$V7Qg6P5 Тo _\N/T♯Rq׫0jAK&fq{Y. a›d]+Ig>| Ħ9Xq-|7uKo[*/7 K fܓ)+pquU[ǡj 9e4CDxEQHڑKGPx)TcrUH[ʷX*jpl Ih/k'!:Ք?7E [й: j@>}9zg\.'NҹsJȿGAI E#l`Lk"?A׳PnXS(ڼ {@SQ4cb _>cV?YETSBjsf >/lfHYoAD>».7J))d.~Ys IX)B{81ʼۓK<4 .zMOPZ(fO^<>;? uxƉ+vu,Ԩд&L%A~>̝M~VD@מ)Y,x˹yeevKhUGc(,$KO1M"-uwgc8q2qkEZюıo305:GyJvMcu(G8)6TJv rޗPxengRȼM7 dy‰} 5Y8jX9I%3 2obj DQ V9:`hjv e[ɣs_6=|dFGZ{v[u F!%\= Uגq$|?f #9e.l\G<^+|$ G.ʼoAS ٘u1c6ޜ-k?#i*W;Mֳ R|>qc(ϺGKI ESIy1CLphnf;.D>K a)~|:n'05츁xcF:~s?z8%*6zLΕR*7a؃ĥ9{15x&qx19(+9N@'KHoǴbjERHk`Kg=TfN`0@j*%PذLn<ZIq?w0q\ ,q5_hVyfuEo`%q׎!y| k;oQ o}N~E:kT&JGahwk<7ތxϟ@U+%([_;L>}W޾L =ymQ[HY960Gdu!s13yro~9rj|K !*PlkǰT$(% ?; Nś-6pqmZv`3tF&oԘu<ærL!:P;batMZ*)0.6̘- SgӮ2a[t5sˠN.䳙7pۿawi?<dpZ):+lw3ŮL"lGv{\PNX2!f/ ݟK~pS|yfE*}QpV nIZڐ3_g1/uyex7w]!~ ? 
<55uF|:~[%i])r7->MAq]xΝnc35i|>?~q-Bƿk<<(xQgvݼCݻ'zg\"33EKb !É(}-2u?f1Rp՟@T4,ڍf/%?e=Z/yEѩN­Sy,䓺sÃ˫S/rl>&S6.|7_Dd#^ upr[6R/s)AH$J1MEmm(06,ͥEʷyfX43GHkdrHn"0De|PTfSزZ׃e{ ~v_/p/C63P(* }.Q/Pa8F7gDQF8ݳ9>>s ES;1WBU'gTw:q{>s+v/B iD`Y>FD8IiΔUO=a"h/A PNQm8{8+͢$TˊυXQUAUܸiqLyTI"7'27 f(6VbVIHd幷|%SꐾgK &(__zvDY^.E c21[}'JF>*ݪ&kl1viȼ" S)# CRdB6Ej54f@ F$& h$( ͊8tݡ82"v&Tk'F*&i[N( {)2_MdqY7_yHdtNGQ |RVzRz P|,ki|]ʧ4-N$su1،b$]T{~6&CCTY0$$(Z 1 }_k ߊШR!ԔmGz8Y|-k_f")\fѵ?=+AFr_٫hv(FMY),t:8Mz\xYQ5i F_QTE#鐰Q Qk* Z.otV@0]",pp|,Ӻ'%e׶Ր7N@&~, @?,,*+#$Q|O%0*[YhQ 1 i-vJmL4ǹ:INӏQӯo qiRXth%6E?$o~+Ȩ%?닢 f$:5` +ݥx7w% V$qJOSqo]E|<&%'tyN!"B5?J&(UE[X ?nɢZ,ҫO=2)4 yoTh@*1n3 *x@4s&>>vnǝ\ )uQo(*Jt')KJ'M-8&?Bh?c$(sڵg-uL4Y?Q83eWLB1h6e>9x9u.QLXǞ/oRvA&O>sG;f1y/F;GAzfqG`qa sNz8O%.·H Ν( .Nl6EEEᠨF#wޥEQȺsʉ \bpaOyz*ΗSO!^P54&~Z_-hf>LlM'^ !iHAaL«ٿ6nM_RU6{gQdԸ,f Xo7°W1W '~r㽙!!fC T yrf0Z!3 ܺR?hv KVs0a0b&8McW_Lm,]KS<u>κI\6NĞϖvoԨ_3+[Xœ-=Wj}k/ .`桹^r]EWuie_a*>g>3=yki Q`6;/[+#[F2Ջx^N'6{CI5zIp:/l6 C:nrlyrw)* m'$1gy<U~"aݲ޹Cq棻4hVь nt fu9#>Bl0MT,Nc/RΖSHg cЉQ/By9 >ܹJޟva_!ύP j㉋g9r5ka㷂{|ЀT{g@; V] Ӧ]&M6Ń90q"&ÚI7R_XzM1 >+FUSj-"`,vNjoQ>m#\k!t&3ŮJ@&c_ jsZ3x^MFX̩)l,Oom% .]?,^W.^(.\ׯ˲edٲeh"3f 4H|RcTiF=Q eMՠHp[w MN6*fT$q YIt~<~Pihy[K $S/7`Ժ{mߩWaZNԗg "$kZ|\%W24[9ZabseYyg IKyQeO(3bSyH+镢 ͤ#"=^JEERE+NXf JݺuM62rHyd̙|r9zl۶M222\~{-OJXS>n(Z&֐hh,W̪YWJIIpI&(<:]m)K]k)2b;1F_x'Iϗ⨝ #{J礡Aءb|{( :Z\F)?4l|}eٿyS A߾\RoOHRee>^A'px5J=^//p`2-(&ⳗz.y!ħpb笏{{\9K!к[t~2m]eZ[Aa{81w^O߂|, {>Ue\\?G~Я? ::VERJ0`?SF5+oL=.h^X Ӌ( `몛XX|yۗKx*|2 8#c[Vdz:n '^҅#r{R<A][Uސ lZ౏ϔ… Yp!! = ݻw3zXN= BD|&d݀ldpǗPd՗}{TMK6nI؆[XO \%DuM÷Rw\Oc^&Om|.e6pcbYi{U!,=BW\RP.//ܹTD۱zOg־tBݰpsv":p-#WM4k#^4Z*Y(o/!F"Ex|zt40jMFDLR #Y7,wUSX;f`ߜApe?|BM=L-5#r]p1n;25= S8b磽p~E180`!ggF<^jOZ<5d'kĂؽ:6R8\zMhzMOH=o޾M>4g}~Ͷw.ãn]qa)nT# /T|#|ugQC KTU 2c7(+t}8+%qd/㌰h*>v5Xx݉'V)jl8L@[,x_s rUo/[ }^;Jce1oH/ܹ yx\'Nu۶\*svf!z"0z*M?aZ߾@ ;( ߚ^/59ܟ+䝸AbTሴS{4,~EUaEoXx)04O<[@.aEj3֓foHpR6"yXj.viIII09t$Rvɣr-;vBY^U*״Hc%?G`6xq ݎ٪qs9GbE3}&x"uƗ?sgqwqD2);|)wcL!p<UF MV"yHY/JN]; oq1;vŋTPA_T̙3 >/ڵ 9~! 
:xܘ`J$oosdB12+Dmq6j6MlgĝTKxZ +GnSU3Pxֽ|Xu"U{\_}RLa>S5yLrvC)-eJZ:b gƌ05&Liw%yFr& 5{)-tqysX^ ` z;Q"Id0pvUVzyZUmd/(+W{ h5> oQ[ _i-3_? ށ;~<@G2F+ܭ +ܲ!ulay~ \.׃ 9s|)h=B|3 p {HyffarcķV"b2-)im:p'DxtUn]͙MxZ $<Ma0lLH%F Ǻcױ9u c{Q5Qy'.>x׷]`?һOot ___|- p>|L<%]ɁBVöMn\¾ͅծp7C&xnie s\ S֣8Aa)) ;e6x$؟_P RVΝ3u Ha F%*Wpulc;ci_-Fkވ$11P^xp+/ăSs!5:DTML:cz=9(ת0}Jr~stѢB%z hkC|8%J= I N(^%e#']OgPLX∲K\SUCFΊUwH~A dgN*M%G+f@쾊 cTݝ98 ҫVkvxx4lP  +WCrܥ(rfh5,DT&(HIQd]{N+]rHvO隌T?8^:O!A"JXtQ&EJ;_9:Xm/M.E>*jHа`0Aaf4>؏w/ >p-tED$04T|H9}w4+(<=G~Ҥ~T]0J|Esؤi>l|+R3ڈtҿƯU?"-F*(4.fY2iy]P_/+t~*QHl~F 5#Gq ݻwt[7ސÇKf$$TUaX**!Xm8Tz#V))LK JIb FYT[ݑG$}HoJ7ŨI >, ؠh%{N [ 5sږ}(kbhPY'ĥS%l\LFi߱x<? HD H=; 0 ߟ7 (&0|qr.ƛߤxne܍{s?JapRx b=rDHzd.QCq+-z g~N3|ד{X5,ݧV&1i[;.ޙ~FO>1j9:3{yz_٪ IDAT5)0$&&%(($xꩧ;w.F 8-jS8t-H`Ml (TeuP#f;4Az NΝKx\^ vҨxSN:y,k<oq)|AWu1z5Ho?Ho+v?z w ^/޼|ͭmba߾}<'~ժU(t5޼||o>i}SO+k ܆)\ ?0n,ęIP~질Kv$ w]3k/T-swB3an_*<> P *>y dI~Uס>tZ9F4lԈ ٹc'K,7ޠcǎIgnEQZSZPNygJ F_:}k!"@ ~gqo^>oa0"EMfGBO]?U?_3˯/pe瓷b@U/kz _({?~[- # BT ` ħNeP4 ˳cݷu3x<<(R;u1{lz5 g1E:;"/3F/w9IT~|Y^KHI(gռx~Va&foWc˼Џrvxz%dѦMTUGpS<ª!PծЦ/`RݘMu r]o`im{w&[(MT45ۯˬ};=a6 <Ⱥ;o M{œ{?awjwHA0]<-:2vX|O?>}7YPN]T/ed~`Vp4: >T2?AGpMd.ތVOW0(Bo%KuL9HA\p_f[Zz:|S*G6f36ak]&~ >L:Z+sm*xV{Bʰ$Fɍ&? aSo0X6ٍ)ԟQf80 TE٫eodxб d߆P^7/A͖V:tz'<: |{s̙?}*$#L줶g+ 4Up+emo?f1R=Wոx+Lv'ṑbrcH9von7ƚɳo9L6LiȽPUYEH%B?СC dE_-j0HR9cCR$ߡ]E1U5!ΖRZav80]] IֱqՅ]wr L_VC6Ii3@ORC+YE5(SUޣ8qAwrɔ)S᫋h5E &U"usB $0DD PAsD9#d-O ȢiaIb3$ݾ+IESE5y0I?.x2q6IB,)bZIޙ n6w/"( zm+ᔗ^zaFLs.1;- : ?o"{b- "MK|Ahk!%YUAU$iTS1:-^Q:v/WhX͢jP$r^Xl4#$ծHӎy3RuѢJgjJE1$E% oYnjR|b0)E4 7|~ħڤ(P5q[-"(nESR"b 4UQ,j,R9'%JT1nߛ>bKMkȒ^E,> "zCO!BȬ߾W!B෧b^[_a!xBZ< D" "~!K_Ř@ִQI.Ǵd~uIy1l;Gpߦmx=3x^2v͘)-#wo3ˠ}V=pcPӧ 0$ ~&!bqhU~rz ES1$r-wQgoz#=BB ziҋ 4E("*.H.@(޷3w< 1wxy,4~ w+fQ'FP ; JD#;^ؗOGܿefsxEN?V],Jes W^eĈ6Zbdfdŋ(FAi _̆0{9vH >6*Ah('sc.MG8%)|~^ /ɀزs}bc[ˍb-x'czqv / :m[f͚)9/r̙3\rO@:sD_Nwx`=ҡ1GI܌@Ʀ}|-U 8B=9^:Zoΰ2+8+? XP7HHJ___^QsBUVn:wFBJ^:cƌaݺu,X@Pc0.ωC6J)Uȹ0,vybrrfye}HB l6xesošDb-LB#/֣/HOܺh n Lfh8:#D6A&P)Kc233ԩ(vs9 VMZbv @@?|$=2_\*St q? 
kՏ i,?5JCq'D5 †a^(1Y?$#{ Hzb4ƠzKOZb3o\t 8ΥKٲe 3f̠t/ ...ۗ5j^HwȕZ>&7fv>O۸8vl+FZnzrXƳhZdz:2cm>%)M<lv|"k2@I5(Z-r%q9 E&Ub up_˖(w(ϱ9q81檥AQ;yx(y{l;vB(Z 4Gv$W]}&N[pDވFI|<:| #jNVj39-IITJ7JV(˻ iFAUUBCC_z/+VÕKsw6&C#p/bvG?&WG73`ɵmZ4׶'#Eak<9%OɡjF!cen&Jnţ%i0I|eRhG  l^QTBW`:uf)Z 4qC`| |xz G~ Q&=Ƣz*x2BH*m_"xF{9TRsopneCQxnhK zT%y_a|7=e%66P\\~kpO\r\>GN YEoP/5{dǼj^gJbnXjI{Dp - ?gp[.ݡ_[g{s8/9q)Z} c7mۘ+OsnR$"v!f:K:EqF%"Й4t5^ eO;KwdRmڴ~)8h\\\x睱MOF(u{˴vv[{jb [J}p7bP-;N<ʅ3_Ɩw@Woѽp|"C&n)`fdFm%=<eиz¼/!76킃?2f|V;Or!?YE~c%PD iĭ5Kƽb~u+/wӳt/#w:ܽ&sT,_K\޼hk׮]" fQ}Ғ\;LJB)x;(WƕEMxrZ_y /(ņk>hT#"i4fF&zG/`MJC@*hULUJXr0.{>ϐh⊶!c.)s_g_{ҾCTo"MB y{z|@ؤT Xyލaߜ03LJ_:jkz1o)K2e GNz+GURص1YV~?ͧʫ]4/Bt*qa)z/37m'vcϷ12FGjhh@pH{9Aа7xA)4nrM\5ڸq#Z_P>7FЬdga4k6f᭯f98?bE찡,܊ά.xnoJ?4:,P79nJ FEݎ`؉m̧32ޤ}~ڑŪUk|: cӦ-\kw\)g8 㭰X؊I;{kFWFbݸ `kwrssQQZ ܻ_ Kxh9 t:mTӼʭ) h[E%~~6*ġT"NtډbApLǩ)y+sxTw`-Q—oGh *9qKCNSNӦM_͛3v8tF-u¦j˰br2A/%Ʊo)̵b2\bSgz=1 B5h_t4GJ%Y3lvJV!wp .Šó}}PU¶Πc?ZĞ_HZ;҇gq)Νs_ݻI~HI{4*a߽Oddd< Q"eRbLTsP3.rQǔj:7#7"M pw^ڑl߁%q&GWq]r;%GP/d28w.5ED[{J~~>wP.?&ޓ ?`6cYDIC?Z%4ի]HB+Faj7=H3nnpD`ɵ_w;CX | &wdz=r+M$5m EG&=EJT_>3iÚaѻh&>ά%7%~<9ЃKSwu5:mm?Y,գ)w?E[#Rb!<~gQq] }y,B9V4t2~%7^'N`# N2$&&`.[h(r'lX|!MDt&e#pEݧ~J}|S.עu3Ğڟ";&f(µMnAsc]Vhgs}G6QDTp Z*ՆV E{pk]Ėp<ݫ77ws}Bp`SViτRZMpn]h*MGP_Q~t Un =Ekr9 -O}F=CPZRbEk/h~V N0 uV\6N]0o<0^a>`4`=y %c. =N&hs>BP Zү<&v\sboIOM- n_o.p(S=}4̕tdFQ` >RQ+}f@D 1ecuwnݺr{ZkȪoKnLXI~~gZ}Җ[ۅ.$kAGm015%\pZ: ³)x6;c^#rd$lfZ+`A71"x':l}]6m2iҤjRR{j֬Ɂ~q:@zɽ{~|8eZΖ4a? 
'xFb ү_?5n&kNfLٴ,[׭1-:Vb'+~ +RUy- 6;nrW'e|4(j#Պ`ԧ1pp1tg.8jkGG0'Ự_`͚5Nٲi|‹ owr x1qM%OAׯbqk$?!D Ro@b4'na=bCQ`IY[lP{3>IM+c>^ #g%Q+G!RSyz!mwpkg<ߛFݨ]6ufc>A܉4YH}RP&O7#r i(Zy喐 ;kNSbC5XIOMh_nDj$o:@;(:DŽb{R`U?ǃ>&{I2>}7Ηӯ6JvvxQ-Zb]vZ/((QFK߾}ٰab:t5juyEJ37nUըCQpy;LUn>@yKjqRR*Q/Q'cH/zxUnERgPdRwi$f/=R;i7xnoo.Cg@ZUL>:RSbU= 5hEQ7n-_7oTY]||DU 0HP1\h1C FE*&J` - -]H W#\.&ZFHDk${MoRB)v(bth[6uwUf|x?]z)WMiI˔hޢj;*G: ]pļӟ ..fQŽFh͂gltϲY6X:%ΐ ׎^%UxEWįIvr+!W$B: ry+Or ԈU=%"(FEz3iݦ1UQ5M'|讓RTYsa C%rzU Et[$A'1_.d-E&-ˉQEacT_3U'& oiB.hX2T>/hT1QڢRbx0^JYǷJ)))(L4wM%Ce|+ώ1H[\p{i^@ep'(pD.KiU81awusR& +擩pM|O =M q'i>Zz ^ ߜY*2b”iY6'6sڗE3;aٲe߿ӿ?\wٶu|t9¢P*e^ grכ\L 6Vh0ǂ{zW[Ád1j)x΁:Sn9-D0xQvr{f}gri(Fw/Ek)LLǖWHp`"7NFlv-EU9/ּCS3w>E "?g]l޼hU2FC=8uO<_qҳi&r)E_Tm/"WW:h<\q;S4qL`*V-5#7є _ h?X{p]Vk+:1h8%Q]$) LhǞR*F*rjvXho6E}A{J|k 62xn)BbHKlYmbgAc$Ƈ?CL.^(֥2u&Ш$i>%X03Hz 110k$ĝs"v޼ Îfkʡ} п!= B45`߱qwp}g }ItU${p"Mk rZRt ?|v+6|̐VOؾ1fZ襑~T5~d0ZaÆ "уjժVɟ>,ERi8+_?ȹ=ʫحvlV+Fd a 9hMZP Z?MAc*OX*l6;!c{1w?Zpcwd]b$!O{j>/ǭ\1{cɲ^TN_ȕ+WeI^YV z*#G$""9SpfrS.C|qezQ!vmb'q6D@b%j=Hp1곘_tCa"n&g~@O[b5sl\=yv-ӏ`ɵ^/o_b06EJz( >P ( /JKKqC*FOx}z$5sd&}xsre|99f/_=SO  b"zTΪd&aI|q&=5R@t b5O?Ǿ[aJ-ػ1*b2uTƏ/|l6zv9 $~>7V&(֑,м. 3:6'85|zw#eC9~4T r'F=8 sR9iZQT8;BRhg7o !q[:6@3Amӫj{SYQ݁FVh#5. 
E4_ g݈lhz)‘"HFx6# ۔KFRD6Ш~D Sq(C7-c 4c{_Q"[wa蛼{,[_EzjvlѠla)\KztCpw3Mv O٧&AMKqc\(V!;9OOdfNNZ!?LUbx1pd?Hel<τ ݛ|v=؅k@2~xz<+ឬ)|̻ϰИt~):W#[q/V[㖑~&9e@1XlQIqlvR_ u.{s{<==HIIyrgm)hկ5#~~~9s'Owa۩Sji0:!'Y 儔6E n~zi'T]~1F'dΜ9:UPCKz!b2ȇR_//o=)"ETETFJksEilCo$޵UlSd\;AoKڥQD&"RbJwAU5&T" მ$JDHbb⋖O0OFdNM =00Pt;vEQd޽IIIb2dk{N^># N^Z2t(5$=h<\v.@԰&e`n\J^5SWqXeNkJ~?^%lHs~3+8m1$̑SrymwWtf-翻Fx ꏯŠ^mwiA\μı70gQT+ڿ)W$ [ܛsHM0aDQ8Nfec{1!]qij`1{1q#A5IZw[^Wލ]_CVέ_@aタ7C}Amlk/ (+8*.v೭`amT۸wf{|?4) ,T~@m[kn!.81vOBvkJR̗][Ԉά2qWl]GNix#Sb6l/!C0uT;5>lȩw!i_r9b.[i}mLA@T6EUx"YeBm(=m^V)Hɢ0-bM(-4.nvU3?BԹ)OR裏^d@?Kٲeto_|2eqҲfD2wAk6` !iY Yϣ3^ùε5#x@U!T.4_>|;6g?hKN!s_HVT+G?z ;םAGPL^rsៀnǭ;5'6 c !TA{sY\_wDhr41O@Ux$h>ٰm܀V5,%ϘgΜOСCzXVE]AU4J,!{5ARWIGȘ"uՋƨEKQ@ZjINRlՠIHKYΕEkŷe%ZISĿMeyEcҋܻwEKRx@#ogE(( ""һw_믋L&YodҵkW)^ 1 !C >?Q(>"&?>QSťEѪ2KECUJQ:v&(jԋ)2Hbw|Hd~LMt&y-D%Vc%h5Y&H*Li#em*"[O_d/Ed"vvmBw9K͛7:u*&Jhd]{̞w;obQge_bF7!q:=_RsT=lYyOhzh=̤KI0/IG9k ǔ{snHcvf(Ѿ|`3ݿv;G}Ǖ oJgqy{ ? __)]_RbyU3k1u1օs2pqS5gv&Lj&]u)KOqs [1_N2 E{.%CPZ jr3J1.ß ?*<߯_(F#!=9oٿ{l606 7:WoR=VՊۄ7PD0 IDATqLuqIKsqٞsI"$#"0.;W5;ܕKSi6(I4L*5Z{!h 5kCb"TkBty=HıUuYjՊW2tJ,ɩxt&Bsfj5<(ZGڲ>G5hybc[ar)-q7u7a#a%zDv,[?o&I4mΏ܄W<mGI7~܋̉'/)_'/WBhLz<ScyEէ0e87`1'BlvCgPB+i'o35*|'\漮45Q˥?_)Sb*"7p+D5 DDPE%ZwoeҤIm1%'\<)*(q |Dž Vhsbz5kb(x9 ht ZߢO˜۟FPӖa <*ʴi8(Q3fp 郋 fW>Bg??jC\=1تuN]b\p'If?)6yqiw(CVt,pNiX*܈V͛6SPPC48q8yT Qob__Tٕжy #&R{lRȉO$A*uwd]{ĥ7_=*N1zq{)Bz0z%f2bŽ:J710~8||0h &0yD&OXxBnP >QfJ㴲u'!YO4fcĒOx|\9U~8ִ,,{cth +3C)̆}@U_{R 9fު٣^k_V+;u"7+yk)|=w:C(iI]Bh6boRkZ6窏Fj9ZΨbg~m;Ƌz>*j(!'Kq6=ݨR߄*\=[t:NIDbT^R=au!vlS~utTLC ݡxR EBR*H {~9}zy䔓gY5$\N}'~&'QRHhsKm䃙5Z8ˏ"\zLG@&PpQQm@s// F"j"cM e{QgI2Bdh,Y^'++ld7%* .ZCΕgh qjGʪxoc>GSB8r$ Dф|:-/\ýW( >O8ۅu]g/$jY|0,~6N_uÚk)XXZRi2f͚vZv W`c|N)* A&hE9⌕s;{'Y!gBUv)-tkfmݩ^do?d0^ 4ϿyHgkJp&%)}o޼Ǐu@z7!H޾؉X^ˇўNN~#(d=0 !Tem46k^v17ގEc2- bVG3qD4MfΜIhŖ%S#f,| 0lOr+Ȯ!ȁȎ^xQo<ė:?~U,Ք,O6\NM0cD-4#Z[Mȥ\tmf@{4#2rW.c+&>h;O$aeJ4b(kmn~{TW11۟ E댃g֬O mA 3V*bLl4J2|9濽̤v7+K+NO\濫/F̽*[ *=6- *؋$H/&S>}¡7D"t1%;obkTwu֬X3e_af.#=M8܌۲{eϕRU͖U(N `o˸p uFMcJ)9wD'8.qT@=T|WBHz&r=a{4͎lܼ  )n:bv 
}iHtrLXRig`:UĦM0&ls9Z"pcV~$aڑ(5rbﳦ\;Y3sISOJ}:,,x{0'h3G'{~3T<-%A6(ruOLPz8&Rz#SЗVqrZrE! + &,$B?oGPʉ_~ Qo$^:uF4E246I=zj8\:5)/Ŧ;>^V۩k90 &.a84whY_,Meݱrܞ(&Kw+&p4];syA7 qi. @&)+ nڮM?EEf)~q[PY`mBXXXZL]ql Zӵ˙GZRU&2i'1I(Id:ϯgt:jffTV?.x@YcRmEP:8k$G33ؽRn^7Y$#5mPb 7t:d,>N]Pf|?,ďPRm 8{[_FŽOg=Ж8& ḽ/86_Lw$~4- g{Rk҄c\Dx|bLMaaa5a"gCکxHF}q% 5 -#"\FYlǯEQc$ әT |ƷAfBsm~ Ph Di\6?wNW y&&3aeQYYApX[Xq"tnW<w??G_"O({MyRO(ģEq2DpgImSP9Y"!ӍG_{(zd/F%-7cP4X@eee-gy뭷pppdV8^>$\EF]X;Н~S1ʎo; r'P[M=W¾I]d~rJ#TV"o  W¶¶Vo?_m_:ub(C7{fbt4ڶo݂ @0WpHQ%7|'Rp:IK3ד.;s򓇬 ?MZlAWG<+ oSSsXQj`; W1mڴyL~/h\l P9: Qwu)_~$JHFΡy3 ͶO&`(q6Iۯ (d|@Uf1΀9=JONTsm>dJ9[@1.L]SLktX!dffr]rP"0s0ToI:Kɳ\|7ި5JK]JUf11s5OW ` d(A v cMlOٴ/]EK:gqxK)Fڞυa娢zysd!:#q@zF=T \hCh҆/ MHIIX̛P`eI-TurEeޙ2s M T6d[OʛJ@!tT~8,NΓ F+`z%d^zN/"D"MYj*&=ЖuU3h+V%vE&q` GQo_+fMfKge7ne˖ťQ z45a1Uc̟+L%kڜbj=B,qYIX~d /gbhOt?: #ry>2目*.`$i :wJn2y Zjܵ iEY(-U걑$4BWš[(MG_тt96Y%<\pFި]mxy~\ߜ,Þ q[ԟf[&Qt%Q:Oy>Ώ©{( }1I F^l΋MWt Em_[.6V'盓'׃\!Â4<(E4ܙz h$39QIg `pO]-"=ODɧoC@EZ9\LtE\++̞=#Rxt%0c'7 _Fn< XW؅⾽ƤKGQeYdSLtJغi6guF ws܃- ĉ_{zpszc(J=._U:QPQHޛK*w)(ly*.?`DȾ]:~՝+Rc=/?ɝɋzN cSXZ_i5b:Lj݁xΑҏrNH( z~S[q6)84Ķ Dt%.(,(N `B6L&Sb&lPIY6>>@ طG҃r-+|93b.4u ߆V8zi;ԙԶ|((SX)yy- FHP–v!~9 Jy2oi8j:Vcn' YS6sg/:vk}ACUM;w0huW{w An+ĮKX :CDEr  h`IٳL{afٖ[$d豬H+hXvRP(,U,.%wapؘ׀9ϞúRA $Qu&/,C`67ցJF \Nhh< ^>DK"V#$$~sOȏŧ?^ti{oq ]Kl$Qϯb1y ZwE`[Ǣ4gi1-Vn 7;JπP4~2!+GmœxV }H?t[{.o}܌cЇ:s-B\w+ P?*۷m7F˭y<*"e2tFQX:(Q/}F~JON$skR1R+*k3wK{T#(l?ƪPAoj-v?|TPD5!\>`c } փb8uo?Ξ|VC`0 ""K3V}ڐ~r\{p <PL\dz8ѻؑ Y.jsZ~x>vS#؊76d蒺8zSV\zwѣQ[o!y)}q%[/" H#@OTN8iEіU# [22FWXS?o/D4݈tW*u mx5M6MLFLIS(HgPr@aaF݉8};muעԡv{'~: 톡\'Ӄ8v /s5x& G(,zVُkOswz'#DVCh˕]sh}t築)Sмysu@|sui dǁvqߒ9e*x~8mi52-vBcSNNŠSy/byͱjA_s7E oZ"d Ԥ$?];fh w4v#dyNC ď'#V D|=)O+!%2ޞDr@.N4邵 uqsveN|G<ɲ71SX0cƌ1si`8S ~b_MB8o="9J,ux[>IC:Ey!ȸ:ܻl 0JtZ݋Ex5ȩOrta~DŚR&*ab;3 &n>$l= ٖnc:!3SRyo/k3E&C\8kkKGc.0E qKH-@i[}[|{`f#2#$"z7%ڧz4-=|+@ 8`app>B٦鷰;/cs~ظ}Ø @HnY51|s.8?{t+bi%jz]Bp9zzc;Aqb^+ T XzjK9NNI`4i0VCE˗>SXXe iT%c,*, {jԁ<:{\<ƲJT|4;Sd @$n QJg?FLCMz;O>fض.|;r'1T?be)kTԺ={X'?ckBw~i4N-#7EV)?B_{V?| 
DjKهԙӗ&Rr>!Vl-.m)4y-x>w1HZGڷqߛLfQy]S\TFvRC9j#OzDQ@ivu2J0s"\<{E)FX^ӧpv>Ƅ$PjJ}SM` E7̦`͔Ѩ;a?xr9 KdMGB=~j\ E_D'[(pʔ|zZc?"0,Iss73\?r_Vr|e"kzA9Sa-C1KoR)AS '~gTħRXV3ogEAScq9fya.ڴ<\XXV=0s%`rG<7#vdf H<L@_5s7Q IDATB+d@P/_/o̬)^wf:2u2 /OO=,Ix)z#Mcruc83cFPXSz9 h$=ԭ+='hzy-YoE Y:؇^Kj?#HxoSNe֬Y-AQTԈFc-Yˑٔx~5+0q'kO+:}9&"Si iȭ-@h}r.`HHYr` d 8;"팁Uc^&'Ә%@ōHԫWsW F0oB /Js$t=H)D]KO3Py|<]mޅOtĨBDļm1Ǎ8՜<}5dʨT*Ο9 e FRZNuKe(l-b"M}CI%z,C9|"Ly5jkZ8 A)'t~7,}8[$Qƒ\ ,@DH`7U7FObbbd*@Lj$ITUUao{$Hsh8><] scg8>V.l4 S`!JF-8g4r*Q#Ǿq20md*%?~HɉEPTNHz#QDag-9R0Vj)ăk0VI\7bSϕ7anCϋ#OWh2b Z#ћ=4L4y会WtfeRDQd䷉ $nDߍTUt~GNoޜX7VF}1Ǯ7nY@Ȯi:UPKڷlj5`sd5~>Q^'vc6k, ¦3Q[8Si/ gi=z/>{A.#X,A@ng`C0JHEM DFD 8~5#!)AmFES6])6mT %*X:uꀙu> w~!KsvFծ%rGݏ7-S|E@hyAבn`NW$&ʲE۟p%FikZo&ݟ2uL3֖؏cL$NZM=2)ހX^mƘ{c? :gu&YPUMw[!$zA/~@K,fS#r A1snrLLLLwԪ-[2h};N-f[ݴ ZKcgzpWe)EGGv̽hY?f<[{ߜ`("˃dm?OcKxAG?!(jfG >Ő &9WٹgaQ(l~#߇!]aYȔrh`ReP"r-\F[(՘1sjog#oߙY%(<]ոE$s"ЕG`Л+<]={pJ%~ .^?w eF9UoVu-燱֙>f2vcz"(x~h:yYtf"ğM$i&;DT]J>8gw*PѨziXӦMQq+ ?œmQx,$f1zE5vNE8l]č_ $E`$3*[ m|w7܆o;w2crY`7nĻuV*=N0fhRM_ft0V&j GwC*27HN\ZnIex~!/?ƫK s%O>AyZ ;()D< @֓d^݆8I0ﬡ^NOʬ3k-/&*,X3[ 1d أiTPfG? 7_Ղk/q) Ee(49 ({r9YgQ~,2K,wv0d*./m@s,߁0ǏcǏ1`j Z]TXwCI%ImF$$ V 1<^;p9r.sVr<\YS1U2~v+wKݘ/dpD|-soϧͩƦwгgO1c/20>OAGA"W ayh_f*$P:XQoJ;BwWPH<O\j9<͒b-xHT*K(M{rs򈌌FLIz9>m\]#@V*~5 s%?fk1j4ZܓbiKxN‹~Bj@i\ɟD{k'^GI_ (8]V=3gdҤI)nEb릦 sҙӳ.cfΝ_-$J^ܞvv&^*Q$ivRy%'d.$aG0p3_۷m.\`" e_nvZCPYd] @ECZ-xJd 9*k5sHQ$l˛8q"b o ?nes˝qh|,1{_bfьğ9a Vx#IBƢEe U *޺1+ꓗrlPxn@TXTn!I""(!Ss WNt'V&N<ڟ]O)e @Bv8J׍#\tZП@Y0)4yٹs'˖-#22-Z0rH"""Z-]tҥK|?~zիWy]JJ 7ne˖ܹ .[oj*?k!11T:yɣ/i:.;3MNPU݊}OÏcQ/19 ȕoɖU?nՠyk0>AoXծc!уݺcvkzE/Didyr [0J4۞ۮ=q6n!K,{M A{qJn'b{hDġemOCG2yHY|Ưb>CYr 22-fZtRt8{kN?H;} >'i\;u@.`EuV *¹+3#w8hvm -3`CgMe>+aYV8G3:&Nݷt'PVVлDe:MW0eSqIEɰl-jGK|G\Qoۈ:#R-]hfcZ#:x,*OQli m+i6=`q?WQ|ab;-B?uϟ'""kwЁ͛鶫[nܼyVZбcG5juh~yo۶mbƌܸq6mڼOǶ-`t,y}"lHte_K!dlc:&8s# @<@S,o)>u߯. 
mvR;OAfgQS0bi9ΝPLKL("F7^Do;Y)4D2H4ۊ/S]E&.85vg=)xMj38vRQ HBÎQ(wrSx+:.r9ʷG"^Pgv/,뺒 gG֭k~WH#y7ޝt[) +<kWp"sM$4=pdNǧ["6|SͶҢ څijt`fB4J\ُgX0 έ|xy mJBלW?NDИ#ng7Q0K*Gj݄챋-˨Ac(mCe37&~L<Q]PA-l T7Yw  .-FS8ZzN읰E Z{ ȫ&&\]{6lد0aDGGR|@ͽcƌdee55j_$I:tSQ#D'IM5鲸qbxV>6H @c6r 3D:8 j?) 6/(E`R~2I' qTX~HܽkjkժV6m&?x5 ҩӧIxNU~%!.Xzݥ] FWEUDξkH:ma֝+xUi<}Az;{w'7XZ/nnnKJ ;wA)=!/ (8u(߉UZe4K؎L?| .N? Ӝ𵃱cSğLf -\Y'K:lIA"4'jjiB*tj2v`>q(Rq/z̡=J&R8J Q9Qgt zFKiRQ$'& #Wʸr=JnVX _[F5zcfɓ?1"t_#887 ކ {Ο\~U6(Piu2AfD(- #3;Z~яq{  K"  !{!$?TYMH,:6Dz[+ѶnCu`#Fm|̜ٲeK-gaaa⅋xȸW_$DU]r`yZKH}bҊɼ!mx<r@sʞfa޴.N(m\1#Ays@Z<̃O(b?ٺ3d !8>ש}65G Fˈn>Qؚ7>h?'}Jfa8ɩQH;`gǭHY5* ;0k,S\aa!qd,ڄB.P*(t#ZTZ.7 ssF֢wn}NT!Qĭc ^v}!mɽ J+X2<Ũ3"o#AL$IFra އ_2x`4h OE.FBCjÓ}J=FK7KN̾$J4AXl[4nh+[|,I_u 3(ݰuPZ7"gJ2',C,ˮ-:)q[=zi'p:U@k1C&3vXtZr~)H(PY0ꌸt1xJɾ}D,E6ʮ,7Aj6b8uOr»5w[_ &UK: \@ElCx6S%ct EČlXJy;[a($[8Gڠ+hj2' _9PGCuw:!Μ;~Q3֤I̞=Fh4b޺!._Qlw`Z([dBXN2,ЫP;)F`MtehsJx/2 UzdJ9->k0(d Vj*94*E[WQURIϞ=k'9f_n HmHNNo߾Ϛ5 [[_=7rHFB[`4ˋ.rmZNmDyiqm 85"~[_3(V@E˫y2q-j>(5ا'.ELDbϱv%.QCm1 F3a[<A.LJ l4t|)?(:oʵ'W"S*@u 6$gѦ Up|o8?r>[]ƣ{u`и%w݃K<YǎHr(z}([.2B,RNe`g2EzR']@SCi#gw54GR] ҸqjΎ;^JZS\\\M/^hy5u@HEaaq칩kB3g^X=K.~1xvIݾAٕ!ס w @ң>=B~G = 'ofD /w*/Ř*%qw<*ГGM<挢dRҞהF#H A'0GPmC'`4-G-%s^.v$HА֌[Ne/<;@ר?#ܚߞт "ux>D tx(&@*pk<q^1[[[! y731$>Bxo? HTrT8ך":N"* =2KKv]KEV)*үf2PО>AVB17fDL%II / Ͽ=uM45ba1{ r4h@bb,9̧ IDATN ޽{/sSSS@Ξ=KUővn_7v='A ]ߒ{i@igr~ib#c|q RK'n2 ^(8v'Q0ޖP> =]y0([a ^-KTϺg7Vfʕԭ[bcDEEQRD/nEn>=/ sI$qMـvС2z'N#?cb?DȞgT%? 1FmY>#[w Qɑ9Xc-*C@d><0Je,h]#293,\k%ٻ/D2p $駓(hd FaBL%c1~2NKǎJ4k _%eɒ%4!gP*=tH=Af CO~A,-*o? *5tƱ.-; NDST[W6Mz3B P#S.wcBl3, C5G-#XUkDEEQ^^Ξ={Pi&D\\܋:^֭[iѢ/҈@Ebbb3ݣ~HdS7$IE8("Rpi)[O}936J:k"cN2~؅\FGJ.& v-2+`B !ul?HO+1x{`)QJ>s>Yk׎xG)d'УUpjLݟR]jsa! gޡ.mw~4<5J FIfmOQz!m’D@hE-` ?΂;̱88;qP^E 2W SJY^s% (x^ޥJJ)X9QY΅G[XٚHdR.M:ē=hCwr/$s<@"D@N|~gTWqs M;!ދfriJ%#Ab YQc0SA5b Ei֧#s' wFWZ㍗Ue{ֶ \vq Pl<"V!w I:Ɋw8 W|ly&z9Jo53WN ܹ([N?cQq/1b(ĴuF_#n߾]wZ!Y Y ڶ+ڿ.4F/g/*BDR&~M)n&vY',*^ buRD\D&U^@rNj=E k1+\fXQQuh$0PD&JEE'E@صPt[?KDE&D@t}D." 
F%Lg bp@kۈ6JQe+ku!-D\(~!.!JqҥW:~o  ^}[obF@9s&]vܹs:F#Fi]v|ٓ;F6m^@JJ ZAѪU?׿ɩSHyBԚ;R\UQh6.raX[b\ɩAbl (e5! 3]Q G ߸?_Ð&!<-T* xnNmc) rDdΊ[=VGhRr$`f?2%WynbA!䧈9y2 RΌT?2nơ.nixDɱk n2AdExZd5Yy43>x"uC[XAH-Xx9׻auö#K䵨,}NS B tJUWuţF cJ Uj42j: 9sb()* s~M^997WҏSZ£HR2oV ; KMkJue~ dk+4oQx}U-BK.eҥ7qƗ]]]ٴi#""^ZcbfL!çj<Zyרڃ:k7&V%O I?jJT*p\ʯ?A#gBGbm.⾣M/[3V,OS>[LCbm!+dРA׫}1xiAg<^p)$Mٌ[o25Ǔ|9/qiz.H§ Q3O=Wǰ pbFn`ӡ1Qq3t&Xրá:p<`42Lo.gq{$ ) wuP\Aj 2k|CyZѝWs |?ꀥ+O~97d{LĦP9Yqnvֿ KV!>$/1zbUۓ̋Rc2eT_Jxp z|eHgx=%ƏAq33VHd.K*`UA&%4NœghŞ~)e~t \Sf}IWKKmltHZF ^z _QfXQ^^^zr PkW HA7 IyuwT2 3:N&2y>2k%ځs`ldBD$m]j6),Wo#߭ uQP>xRfram [({?A'q\t&{R4lmiG"~AlZ?p6hJ 3郎1!{U:=Ǐ7o 0a}/gFA{UE$:("?r9½Hr3R7/©?O vzh˴tߒO5An"SIЩ CqF9F+I8}^=Ay =L2Ν;D"at{sss6l...XYYѪU+bbbpLYY| ZRIZo1{J%m#H}I9N3G'.R/ϾT.!v9yQ;s/TfUm*Sx#fQ|Bk/tF}á '%%%h4rUoMs;Z#edR\h0" pߚ翞Vl,H}秒oV wB$ O R(XLC@#kngCCїFRkKdTMlv廰U flR.(WВs0⫏ )OCYU/"ZOIa.M!}$>]sƄݑ}+d&!61RbƃLfR U߂ހ.ԏ]ԦKOC,$V6S.d"HL}SmU1%R/-~gA9ڷo_-xQ^yG[NGTT`hСgϞeٲe:t777t_իi֬7o̙39%KУG3bf/ĬziZvVs{nE]: [ӄ^GY,CvmKƕ ˁTQRMmZ r<2w\ k%Dhrt<#d>P1ɜߕ}3ps"G=j0XВsq::xQ~ #Kñ;IAxmRA魧\IF4 9>P,F`Esss3;mcܢZ|_IIy[`0puKAb ZAxO_{[GchZu8>^w#U ^2=ww&`Ŷqs4T<,Hg|W?A"ymۿ?ukRAGVV|Mhh(a1GeԨQHq1Ej;ubu%tT$J _ғ[>Q6y1 i͎s6FގYk  :#nXvg.08>nCj)j߂E?T1{]fΜNV"J(<}%΀^4: EJN~lIʪҔ{Q|Vd#[/#q󁸭 Fy=9> ЖF.4 :m#T378V]g.O:ăi8\ ( 7;gocnHM赫7FUmh(,$^9Il}fA =1Tҭ{W󆄯 D?9_q}4hR}HH\G@Pʰy9:cK(Z5F\'ɈoA4g9킱lïq?_3N~KլYm[bTknF½dn2B0 oA|>ܟ]WiLk Gã{Q,0V%PxǙʬ2w #&[?%}PYд#߮@<U`Em@: u{(#]B[<ٿ#QPz2֎EWPQQAfo95ʞSO f'ܽ{ 6c=g2mdس DDeTDF fHl9zmg?po)XFuUFnwޙ=gπ_c#C,%CUWh4a'F@U8::Tx{Xp!{~mfob\zF}JJ5h?nmql &FKZ@n@lǥpú_/ATW!`ZߎŻ" #H&d #O {%*"(dx~wdN./hHd2ܪ+0# 9 aPQ`GPUzH);sbv%0zJj p;\ym 1D$2 ˂ݽ_Zꈃ5ݫM.{w=^xKkN1q1VV6ޝ;'763ϴDa)eq*Ae#cW}s:m`d*9vhs_LMXPL^М4zbϚ^p7](?Nj.7 |YXX("jlXÈ#+ ,:"djQW*n>m R%Rˎ4;oEx?7FRA-c5{ /`NNIJv g l^VrǥA5uf1 7oOŖH8=;jbkGERNqz4YmBB|uX Y:'d#W S{^.{FRQ6m!X[Azi1z`/AWDAxY=G`0b(.ǮK 5\Q:X6+wk65ۀNm f뜤LTvUi(+PT Qy0 Jl9d2'"''' _[<Ү]; gΜ1'g1º=Ӥ=W95m}ڀO<+AdO\\ĢbpGҿEͦ*iKzs0-nxl έLq]zE 뱚"SnF'az LO~xl[jBY1ev' %#kuYwt6è5xx P`@W?DdgTNiLo\ovvAJދ.=[uW̤IҘAC aPY 
݇uiS=+MP ?`9/%rOgF]wWU`J[v~K9ceI.n̈ G[i@6̎OO `^CĮHv)k+ZTTIIIŽlݺ-Z>--DQ$&&5jy ,҉+#Hg~|w%O+:L&V6N87y}tX3 u`I :[ucox68}FAB"a|8マ\ >'Of JA*G3a:H۴Ab9Bp=d5;.-A"29CyR52+% \I;x둪\AY߇[ɤx~#Elݑ+% _C xlHDx2؇fY;K j-o"s%0ZLS/͖=({wd'E_F"7K=h9#pz)?#ٸYuw)]jn >Cw y0Vl^a * rRUT$a)AJg9 x_O$P2EEpS~bB[ D1/;2ڰwqCqv^Vl{?g `hݟ/ $<.~ܖ˃Qt:R[KEnCG̃V@@_v6J&`mrظ[-4i;}Y ///ILӭD#YOqZWoP*)6 ڼw=w6okN# nJC]8[Ce.51yL7i)'NdѢELl#RaBѣGj'wf@JJ >>>FF :)S_PYYIF8vmڴyq\BB)))Vs֬Y)R׮$%`dیFtHs> @fR{'o( A ?h˻,ى~\2B(y ,^8;'aNiɠL&Y=A1eeeVЬ/iF}tcᏮ@XoOV lĩueB<~fƌ?;5$%%E`0g7ʦMϧK.ѾףEDDk1w>n֭%#wfRYYA0p5] ]kONnݪ78foTrr2dȸǠU7~LGq M(bJ_P5F \^vMLJ{C7tj}Eӎ\)A| g%2 jeRZbDpt+G]DWC"zj :/nGT6$xD6hD~sV[pZBHen=h7GNa#=!)z/~Ze" )4`2w{hImzO%FnďXITds#['5Tkk\%eܞ Y@\ܑMeq tˆ3Z>9˖/<KDbU|h?Uf݉')6jD2zY=T*a$omM$  7t.X #H}:![Ep)Z7ms<`,_fo8gP.Yh2x ,?@R 瓰tDD$ =."[r$*J4hcT&дGWdP^lꄨKx8Y 0Kt{WT˜= Y;S_I^poJ섣l_AGUˋpM4N=ZP :i%ig94ѻ6>2lRL/n4@js|VZU=Ĭ]ڴjr(ɆMtݛo3q6ciĮk3^R qT-:B `r |&o@x?4w@@0 NnoD g֔\S@lC bȝl_qI*$l6EZ$2?fcӾ UEU/ou* EcDڊ٫fTv^H'^.*p">9kWVaiOk'uR:rXUQ߷CX=(.&{i- z퇑I00RS`W=]+"Aa) e!X@(5:ds\Jrǚ(H$uѥbYߏzȹL%/UCN*<-Jc[۱cSErbH΄&mZά4 ?m^CS<ϑq%bk+p!,Rgl-Bw/@X?oZt%jm8qr^jꂤ!xa6x@jkBR$̴YnɈ50ꌴ\rt\%O8!R+?]G aEŅ&,,V3bf0p@T*6%d'3Y2hۛ#1茈z#[z}SI9u"6}ъfPv %y+s?}49'Цi.sHkؓP ѩR9_(deeaLj?ޕPYG Qqz(ڝ#(HNI!#R;+tV͇TIQqD0&v!h D3 T\:VNAv8 ytV.HC? ~x52K9,$r:vv9*KaPغ0W#jj{r k?d1백Ƙ{:E$ HkcPx:Q%ga2@YDRFiZ a^"5:0Q$VNJڎˤ .y @ar!k~ڲ\{ mc՜ "ymU3^;VBw]+LEWgP;]ȣϼ['dʢ{uP(9}-G4H=cE|UD(/ˡP8*PU !z( 'Ab7ƂD$X ᓯ f7ْ0PwQR\==0 N'@ !BNpttc]Qy{Yg{WoV-TzjWF -'>+W+vhʎ߹w G6bF3% m@n' FC͚29D^Tnu}y{#+ ?=5.u`{\yGxoZZv~7L$N>8Mkpkyܹ͕rq | pgOxBn:MyEew$r_C_{řKDn]U\yx0w *ʐU Γz=|wU51:g'\B+7W%ς6&]U 'ZmG% {pwu{3vG# 3lrN^(rŕi0pG |=:vrŒ0n+&#ѵ`gÿyv폰vl};pX9=lFte_?GP*H]u8\mY i4&f_F5HJO7Zƴ%\Aq]@<-R^UX1&B;*yX >wЩ`d?N!}ݷ|  ܡ[ĜEbl? &39o׿(Jb i @г+׮͉Lxx8EE00LpY- xzvEu%/¾M=>Gs(K/iji%.:La|r:.?DAz9* =A\q خܰXO%PH 9::ҡc{.oCU绚-/g!X!Zj_TnMeޜ| @ XK]? EW?)sgQ7 ov'N6lOG}#f^\(!) 
sZSaFwqԋZԣG^KFsIv#SxKka&r UBU{v<#ˡȊ# i[*_lϿF<5|Aq.#Cq*ӞàW23j8 U';~ 3CpqEdo;7Ɣlf ޽adسfѠȅʮ~ݍqd=zK M΍6Ln>A~5- @tV}N =W<A2mhƳ3(sS$?'><(UrT$_Đ!ClI$5)u E?X-`-7p`v?ŸmpCc||*M7QhTԞٙIWԁ 2_m^Y-0qʋoHQ\=H jU|A73<+ғAeȴcG`Ewc(/2ŘJ-c/\Lޅ+ٵ, ,.P DBT<܋徭iX {ۥz*q/9S'E7Gb_]33y0Ħ"U.kЩ,O 9{!x 2 GR,Iާڵk#WbAw+O s(uSD$bŜ[HX2ošWߢuӪ2: E!Au`zj(H7Pu5.lJy Yn_KJ@$EVVJ\ \ X}5 ˌ9ɋ=ԡ Eo8|abuGQ6[o28j7X9\10 &kpH޳&`~֭wYLe0bTo×rrd2%z\Zy<~㺒WJ#FiGڣX,P ,@ЁR=g$vU.գ2-]4Xfprq/#S(rwŚ9vփĮKwsFXTJNB{;#G.I8`1Ù-pbXJq7Io^~~4e͵ȍH(6;3NAIyx;gOg7{-}SJ*Uk0**d"H8 eZꫯ_)3hJEG$.=n,t4źd!֯#_G-Zd#G Kps`\qXo?At؉s5 @_?;e)TTp:;W Fa,3/ˌxT&66y| ZFq@Q\)i9sf1UiH Bmܸuu^VXmN2BjM7i1YX"g>ǹL5 kƧ2wl:*kjʆa2 N~ h- O.kQhLᑼG}5f` !;^9^5H‹({N;͵ IDAT`h28n}_^?3Qke|].7 [/WRr ߉{́A `bCj..WP,ag v`]̾K܋O4Iob4EgǺUӍBPj!6xzǀ @P3 V/ O)#7GW휽<SRJAaRxz ^\sl9z80} &˕^Co 빫Ȝ ? * KAy 4mMٔR3m)E#S*[t')@.3ŧX-=*Z%Mzy7!Nݝw{V<r;v,.Hz7䐒B>8D6s%/aцa[p7&w\{yw2dd U|1h?˳(JFN:þ'~K=_m$ydj0BP ;lj7^[; P9pjVڹvNJ*9G%&6*>Щ>Գbr4 .Aﮠ@/ ߜY' 4;gU(1n;lF\<أЩ)O\XLTZ:V,+_xP' AhpjNĆO~c@ (5bz 5s! ݈hvh.NG&v.1rƲz3 r*vKMͩjOp?Ժ/|uT›/^&(8kb`I*H*J$r-=bU0Ws( +*A{Uګ9e)J3( "Y>; PX k~K?mFb<ړ\Ϧӭm;k][CVl1K_hdZ l)^x@xM:lƈ\MȵLY k=NNP5jٳ8 0npxQ*ݜq?ee?$R"0:}zs}M$ře4UR y txc(5Q{Db2tūMUߝO~sH{J_DU2. 'C [S70pdp*?{hT퍋'Ϝᑼg۶m#7/<BpnFGs =~\!=rxLbPؓ$ny_DlVIET" ;Pn!VtD \":S!52j~RY`v4Ts^1y7=7}`kex 2%hka3 7&`NHlsܺ`y %yb9M] Oj89٣yFcdvݛ{^uxBbd _m\ѣmPAs@VQRJR7jLqFjVnb~ )Xq %>MyDݟRgz_GkX3Q,%*̝#OC`5AaJ |]A5ZNgXHlAQz P,jp?&><270!'S#zuPд:o@߯h(Õey|RY σr#0`{WfG W–=į7^<P8s<` h? Ј ЖBXH>!0.BmΉ\Dv[J gz!D?Ov;F8Tn\b4TPO;aQxGj@W[b)H.%Sg˺&G9>rUfh|ج*lHjR"PYYYxVq=E&0LsJ<55/\7εLZي1@½L\*;{MV+ǶJ*XI$5骒T(/// K鹤9#"pg_ Rl8juY|bjb>$l^dj.C:SJczx+] 5! 
?cVۋyXm\eŅ>=:ԦՕc#U"l!4c[h5Myoݺ5CBBnړU~nJwy' @MӸgϞDԓj֭Sl"LVV{[8z&'K 4G->w0Y*]h غ8 4vC%^@_gOZ ܠ E߁L&0璠2e3`0]ijժ7BCCoX.8pl6c8{,`sJ ;-+ G  ^h~-Vh`<}y6wj Իؓ TF/ߛXlW \tc4aZq)3TAD pssCxx8zPGtA}[ž?֭[#%%'ND iӦ!!!>}`8p@5zjf͚0 ظq#&Mh,XOff&PzuYbf(B:t@{n_~AZZʖ-ŋ?N<_o\[FuX\sgϢLH\'6nw_,9U7td[Q@FJ\2;@lǏjeb^T(t:*V络nǙ3g`pylذ& eʔARR233ݻw{ .\ֶ5<=|8FTTZsE.]ry ̝;GFVV"""0|p 4~Ν;}:$.Q!!+K!B0!B!F!B!H"B!pI@B!# !B8,hB.)B!B8!B8,Mi' B!]AƖA!B'% yH"BqW$B!pz9SQlW% B!#{mW% B!ӓ6 yH"BD8I@B!C!B'% yH"BI$O!Bi0.UDD!I p!B8=)CW B!(=D!e\DpI"B p!B8=P8I@B! RZ!$ B!N/ E^Cڀ' B!#{mW% B!ӳhئ($B!pzvMqI@DD!I/XyH"B p!B8=P8I@B!^6\% $ B!NOJ@!B;uaȐ!jETT͛W]x15jxzz"..wwSNa(WL&|}}@%!B֩S'`ĉ@BBa,X;[nxWpY37͛ѸqcL&=j*\ta" B!p{$,]ڵ+iӦ8|0yt:]m^|E{x3gN  <<F;駟˗uVxxxo߾}a'TB!pz,z| йs<{Ǐcƍwih۶mAAA·~ RСCXp!'O!B6 ;wDժU+刌ڵ+T2L׽g2֬Y B||<`ZѼyslذ($B!pz-9}4eӧ]/ Xn]ضm4M]رcaÆ{|HOOG-cǎ[}(D!eTxt: $8{,9{ [b+TD4m?-Z^ɓ' ǑFB!Nwdڔ>>>r#FBZZF^xjTޫW/L6 yѪU]^=lݺ AHJ@B!^at `пnF_sK)R-z74MÛo.]sxꩧcl6㥗^*x#H"B ȍc5jRSSQjU̝;]t]nn)0 ;w.F,DDD`4hPnrC=o*`6ѬY3$$$|pI@B!ޕ*SEl6[x뭷ň30cƌ<󢣣iӦA\\ ($B!pzvMHjDi' B!ӻw]! $ B!N/ "ڮuÛ!C 88VQQQ7o^ѳgOf!::˗/wt5 X,D&Mw 箑X܇P,$E.]$n!_K@:uٳgc̘1Q^=yEl+V;#hӦ V^gٴ44k 3ftR̛7-Z@fffQJ,tK[8V6TiEaOEѮDv VRR.]DtдiS>|?Î$SNqBKtKCQF={r^}Wqss+i*U`ڴiZ6,, o)Gjjj7㓻ֿoƍs0Lh޼9|ߺuk|(WVB!D KG3:t\\\dnnnTRl[NNw޷N5ݞȎ;իWؾ}uz$oӖ/w^HBQDGG~}}}ѷobٷUjޱcG᫯3̙ӻՍ߳g6mڔ;/++ 94h@@PPjժ={68PKPN;w&L}Xt)7n~rr27o ^i2e ֬Y+WA]!BVJ@`x0j(m7oܹs$jHnT2LXl7ogyqqq8q-Z' bٲeΝ;#..k֬I+1ٳ'`˗t . "":SL) [ZZ `XVDEEzϟȑ# OOOfTT #G,Q$9K.EÆ a^z땤.y+:W ĝQF!""hҤ [#ȑ#QbEfxxx &&V*p }Q 2M6't:f͚uZA.5! 
Ekժsʕׯ5M9snnff&Wΰ0Ι3K.eh4jժs188qʔ)mqD;v젯/ p͚5?~<]\\SbsrJ vؑK.eBBCBBɋ/Yl7SqGk95|;q?ueHHN5kѣ} w||< ǎ5kphM2qX~~~#G}#Ey>oőq->vI`fػw=;333i28t<]pzÆ +Hnϝ޳ؼy ڵn}MTڀ8Ν;Qjލڵ7;v,233+ #9:[t)Zj>悺 Ν;^yߑG}\÷{-[ ##+VD ٌz!))B-l6c9s&-Z ;v O?4<==ѯ_ ]}w}N}M$ H;}4e^~#^+55o۶ 'OG}T"2td}vL4 :u.ŝVи{eoGĝ>⨸kq;v 0qD۷_|xK,8nIлwoo O?%K ""Nømwz.>l;Q5!$ aʕt(eeeW^֭bbb }%9;tڷoebڴiz,EIӴ>(-糨nl6.\l ,@hh(ƍWGXt~i|ᇘ8q"֯_$DEE!&&7o.+r_tpڑЋC*U | Ijjj7㓻뭷p!|W8s ܹsTwgΜ Ga>͛vYc]w[AWu>oq5_8{޸qcX,L&7o/9"|xw1`bcc!C`ݺuw{vAq6g%&Dq6w޷N5ݞaǎpb|K]pYTTeG#Gb۶mֻ-Ff͠iVX2eގ;q]whӦuG#‘VwQ_#O~]H:4qouYNӡVZ,NqnWI Q oѢE43u Ӆ_~>jƍ7λ|2UƆ ۳gWZg;w.5MO?UV1--pGNfrXlYty'N\gdddYi?B8"ݨQ#… -1ڵki~<_|]v!ESpwzϾVAMk*v_8I'|˗p޽{`0?ȝw' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. 
The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/PySPH.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/PySPH.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/PySPH" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/PySPH" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 
text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." 
pysph-master/docs/make.bat000066400000000000000000000117571356347341600160740ustar00rootroot00000000000000
@ECHO OFF

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set BUILDDIR=build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source
set I18NSPHINXOPTS=%SPHINXOPTS% source
if NOT "%PAPER%" == "" (
	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
	set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)

if "%1" == "" goto help

if "%1" == "help" (
	:help
	echo.Please use `make ^<target^>` where ^<target^> is one of
	echo.  html       to make standalone HTML files
	echo.  dirhtml    to make HTML files named index.html in directories
	echo.  singlehtml to make a single large HTML file
	echo.  pickle     to make pickle files
	echo.  json       to make JSON files
	echo.  htmlhelp   to make HTML files and a HTML help project
	echo.  qthelp     to make HTML files and a qthelp project
	echo.  devhelp    to make HTML files and a Devhelp project
	echo.  epub       to make an epub
	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
	echo.  text       to make text files
	echo.  man        to make manual pages
	echo.  texinfo    to make Texinfo files
	echo.  gettext    to make PO message catalogs
	echo.  changes    to make an overview over all changed/added/deprecated items
	echo.  linkcheck  to check all external links for integrity
	echo.  doctest    to run all doctests embedded in the documentation if enabled
	goto end
)

if "%1" == "clean" (
	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
	del /q /s %BUILDDIR%\*
	goto end
)

if "%1" == "html" (
	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
	goto end
)

if "%1" == "dirhtml" (
	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
	goto end
)

if "%1" == "singlehtml" (
	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
	goto end
)

if "%1" == "pickle" (
	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can process the pickle files.
	goto end
)

if "%1" == "json" (
	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can process the JSON files.
	goto end
)

if "%1" == "htmlhelp" (
	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
	goto end
)

if "%1" == "qthelp" (
	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\PySPH.qhcp
	echo.To view the help file:
	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\PySPH.qhc
	goto end
)

if "%1" == "devhelp" (
	%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished.
	goto end
)

if "%1" == "epub" (
	%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The epub file is in %BUILDDIR%/epub.
	goto end
)

if "%1" == "latex" (
	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
	goto end
)

if "%1" == "text" (
	%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The text files are in %BUILDDIR%/text.
	goto end
)

if "%1" == "man" (
	%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The manual pages are in %BUILDDIR%/man.
	goto end
)

if "%1" == "texinfo" (
	%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
	goto end
)

if "%1" == "gettext" (
	%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
	goto end
)

if "%1" == "changes" (
	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
	if errorlevel 1 exit /b 1
	echo.
	echo.The overview file is in %BUILDDIR%/changes.
	goto end
)

if "%1" == "linkcheck" (
	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
	if errorlevel 1 exit /b 1
	echo.
	echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
	goto end
)

if "%1" == "doctest" (
	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
	if errorlevel 1 exit /b 1
	echo.
	echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
	goto end
)

:end
pysph-master/docs/source/000077500000000000000000000000001356347341600157565ustar00rootroot00000000000000pysph-master/docs/source/conf.py000066400000000000000000000201071356347341600172550ustar00rootroot00000000000000
# -*- coding: utf-8 -*-
#
# PySPH documentation build configuration file, created by
# sphinx-quickstart on Mon Mar 31 01:01:41 2014.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os
from os.path import join

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here.
If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.mathjax', 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx.ext.intersphinx'] autodoc_default_flags = ['show-inheritance'] autoclass_content = "both" napoleon_google_docstring = True napoleon_numpy_docstring = True napoleon_include_private_with_doc = False napoleon_include_special_with_doc = True intersphinx_mapping = { 'cyarray': ('https://cyarray.readthedocs.io/en/latest', None) } # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'PySPH' copyright = u'2013-2018, PySPH developers' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # _d = {} fname = join(os.pardir, os.pardir, 'pysph', '__init__.py') exec(compile(open(fname).read(), fname, 'exec'), _d) version = release = _d['__version__'] # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. 
#today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'sphinx_rtd_theme' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # "<project> v<release> documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. 
They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a <link> tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'PySPHdoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. 
List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'PySPH.tex', u'PySPH Documentation', u'PySPH developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'pysph', u'PySPH Documentation', [u'PySPH developers'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'PySPH', u'PySPH Documentation', u'PySPH developers', 'PySPH', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' pysph-master/docs/source/contribution/000077500000000000000000000000001356347341600204755ustar00rootroot00000000000000pysph-master/docs/source/contribution/how_to_write_docs.rst000066400000000000000000000037151356347341600247560ustar00rootroot00000000000000.. 
_how_to_write_docs:

Contribute to docs
==================

How to build the docs locally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To build the docs, clone the repository::

    $ git clone https://github.com/pypr/pysph

Make sure to work in a ``pysph`` environment. The instructions below assume
that the repository is cloned in the home directory. Change to the ``docs``
directory and run ``make html``::

    $ cd ~/pysph/docs/
    $ make html

A possible error you might get is::

    $ sphinx-build: Command not found

which means you don't have ``sphinx-build`` on your system. To install it
system-wide, run::

    $ sudo apt-get install python3-sphinx

or to install it locally in an environment, run::

    $ pip install sphinx

Run ``make html`` again. The documentation is built locally in the
``~/pysph/docs/build/html`` directory. Open the ``index.html`` file by
running::

    $ cd ~/pysph/docs/build/html
    $ xdg-open index.html

How to add the documentation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As a starting point, one can add documentation for one of the examples in the
``~/pysph/pysph/examples`` folder. There is a dedicated
``~/pysph/docs/source/examples`` directory for documentation of the examples.
Choose an example to write documentation for::

    $ cd ~/pysph/docs/source/examples
    $ touch your_example.rst

We will write all the documentation in the ``rst`` file format. The
``index.rst`` file in the examples directory should know about the newly
created file, so add a reference next to the last listed example::

    * :ref:`Some_example`:
    * :ref:`Other_example`:
    * :ref:`taylor_green`: the Taylor-Green Vortex problem in 2D.
    * :ref:`sphere_in_vessel`: A sphere floating in a hydrostatic tank example.
    * :ref:`your_example_file`: Description of the example.

At the top of the example file, add the corresponding reference; for example,
in ``your_example_file.rst`` you should add::

    .. _your_example_file:

That's it. Add the documentation and send a pull request.
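If you add several example documents, it is easy to forget the ``index.rst``
reference. The small script below is a convenience sketch (hypothetical, not
part of PySPH) that lists example ``.rst`` files not yet mentioned in
``index.rst``; here it builds a throwaway directory to demonstrate, but you
could point ``docs`` at ``~/pysph/docs/source/examples`` instead.

```python
import os
import tempfile

def unreferenced_examples(docs_dir):
    """Return example .rst files whose names do not appear in index.rst."""
    index = open(os.path.join(docs_dir, 'index.rst')).read()
    missing = []
    for fname in sorted(os.listdir(docs_dir)):
        name, ext = os.path.splitext(fname)
        if ext == '.rst' and fname != 'index.rst' and name not in index:
            missing.append(fname)
    return missing

# Build a tiny throwaway docs directory to demonstrate the check.
docs = tempfile.mkdtemp()
with open(os.path.join(docs, 'index.rst'), 'w') as f:
    f.write("* :ref:`taylor_green`: the Taylor-Green Vortex problem in 2D.\n")
for fname in ('taylor_green.rst', 'your_example.rst'):
    open(os.path.join(docs, fname), 'w').close()

print(unreferenced_examples(docs))  # -> ['your_example.rst']
```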
pysph-master/docs/source/design/000077500000000000000000000000001356347341600172275ustar00rootroot00000000000000pysph-master/docs/source/design/equations.rst000066400000000000000000000706561356347341600220050ustar00rootroot00000000000000
.. _writing_equations:

==================
Writing equations
==================

This document puts together all the essential information on how to write
equations. We assume that you have already read the section
:ref:`design_overview`. Some information is repeated from there as well.

The PySPH equations are written in a very restricted way. The reason for this
is that if you do follow the suggestions and the conventions below you will
benefit from:

- a high-performance serial implementation.
- support for using your equations with OpenMP.
- support for running on a GPU.

These are the main motivations for the severe restrictions we impose when you
write your equations.

Overview
--------

PySPH takes the equations you write and converts them on the fly to a
high-performance implementation suitable for the particular backend you
request.

.. py:currentmodule:: pysph.sph.equation

It is important to understand the overall structure of how the equations are
used when the high-performance code is generated. Let us look at the
different methods of a typical :py:class:`Equation` subclass::

    class YourEquation(Equation):
        def __init__(self, dest, sources):
            # Overload this only if you need to pass additional constants.
            # Otherwise, no need to override __init__.

        def py_initialize(self, dst, t, dt):
            # Called once per destination array before initialize.
            # This is a pure Python function and is not translated.

        def initialize(self, d_idx, ...):
            # Called once per destination particle before loop.

        def initialize_pair(self, d_idx, d_*, s_*):
            # Called once per destination particle for each source.
            # Can access all source arrays. Does not have
            # access to neighbor information.
        def loop_all(self, d_idx, ..., NBRS, N_NBRS, ...):
            # Called once before the loop and can be used
            # for non-pairwise interactions as one can pass the neighbors
            # for particle d_idx.

        def loop(self, d_idx, s_idx, ...):
            # Loop over neighbors for all sources,
            # called once for each pair of particles!

        def post_loop(self, d_idx, ...):
            # Called after all looping is done.

        def reduce(self, dst, t, dt):
            # Called once for the destination array.
            # Any Python code can go here.

        def converged(self):
            # Return > 0 for convergence, < 0 for lack of convergence.

It is easier to understand this if we take a specific example. Let us say we
have a case where we have two particle arrays, ``'fluid'`` and ``'solid'``,
and the equation is used as ``YourEquation(dest='fluid', sources=['fluid',
'solid'])``. Given this context, let us see what happens when this equation
is used. What happens is as follows:

- for each destination particle array (``'fluid'`` in this case), the
  ``py_initialize`` method is called and is passed the destination particle
  array, ``t`` and ``dt`` (similar to ``reduce``). This function is a pure
  Python function so you can do what you want here, including importing any
  Python code and running anything you want. The code is NOT transpiled into
  C/OpenCL/CUDA.
- for each fluid particle, the ``initialize`` method is called with the
  required arrays.
- for each fluid particle, the ``initialize_pair`` method is called while
  having access to all the *fluid* arrays.
- the *fluid* neighbors for each fluid particle are found and can be passed
  en-masse to the ``loop_all`` method. One can pass ``NBRS``, which is an
  array of unsigned ints with indices to the neighbors in the source
  particles; ``N_NBRS`` is the number of neighbors (an integer). This method
  is ideal for any non-pairwise or more complex computations.
- the *fluid* neighbors for each fluid particle are found and for each pair,
  the ``loop`` method is called with the required properties/values.
- for each fluid particle, the ``initialize_pair`` method is called while
  having access to all the *solid* arrays.
- the *solid* neighbors for each fluid particle are found and for each pair,
  the ``loop`` method is called with the required properties/values.
- for each fluid particle, the ``post_loop`` method is called with the
  required properties.
- if a ``reduce`` method exists, it is called for the destination (only once,
  not once per particle). It is passed the destination particle array and the
  time and timestep. It is transpiled when you are using Cython but is a pure
  Python function when you run this via OpenCL or CUDA.

The ``initialize, initialize_pair, loop_all, loop, post_loop`` methods may
all be called in separate threads (both on CPU/GPU) depending on the
implementation of the backend.

It is possible to set a scalar value in the equation as an instance
attribute, i.e. by setting ``self.something = value``, but remember that this
is just one value for the equation. This value must also be initialized in
the ``__init__`` method. Also make sure that the attributes are public and
not private (i.e. they do not start with an underscore).

There is only one equation instance used in the code, not one equation per
thread or particle. So if you wish to calculate a temporary quantity for each
particle, you should create a separate property for it and use that instead
of assuming that the initialize and loop functions run in serial. They do not
run in serial when you use OpenMP or OpenCL, so do not create temporary
arrays inside the equation for these sorts of things. In general, if you need
a constant per destination array, add it as a constant to the particle array.
Also note that you can add properties that have strides (see
:ref:`simple_tutorial` and look for "stride").

Now, if the group containing the equation has ``iterate`` set to ``True``,
the group will be iterated until convergence is attained for all the
equations (or sub-groups) contained by it.
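Conceptually, an iterated group keeps re-evaluating its equations until every
equation reports convergence through ``converged()`` (positive for converged,
negative otherwise). The toy driver below is only a sketch of that contract;
the names (``ToyEquation``, ``evaluate``) are made up here and PySPH drives
the real iteration internally.

```python
class ToyEquation:
    def __init__(self):
        self.error = 1.0

    def evaluate(self):
        # Pretend one sweep over the particles reduces the error.
        self.error *= 0.1

    def converged(self):
        # PySPH convention: return > 0 for convergence, < 0 otherwise.
        return 1.0 if self.error < 1e-3 else -1.0


eq = ToyEquation()
sweeps = 0
# Keep sweeping until the equation reports convergence (with a safety cap).
while eq.converged() < 0 and sweeps < 100:
    eq.evaluate()
    sweeps += 1

assert eq.converged() > 0
```

In PySPH the convergence value itself (e.g. a maximum error) would be
computed in the ``reduce`` method rather than per particle.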
The ``converged`` method is called once and not once per particle. If you
wish to compute something like a convergence condition, like the maximum
error or the average error, you should do it in the ``reduce`` method.

The ``reduce`` function is called only once every time the accelerations are
evaluated. As such, you may write any Python code there. The only caveat is
that when using the CPU, one will have to declare any variables used a
little carefully; ideally declare any variables used there as
``declare('object')``. On the GPU, this function is not called via OpenCL
and is a pure Python function.

Understanding Groups a bit more
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Equations can be grouped together and it is important to understand how
exactly this works. Let us take a simple example of a :py:class:`Group` with
two equations. We illustrate two simple equations with pseudo-code::

    class Eq1(Equation):
        def initialize(self, ...):
            # ...
        def loop(...):
            # ...
        def post_loop(...):
            # ...

Let us say that ``Eq2`` has a similar structure with respect to its methods.
Let us say we have a group defined as::

    Group(
        equations=[
            Eq1(dest='fluid', sources=['fluid', 'solid']),
            Eq2(dest='fluid', sources=['fluid', 'solid']),
        ]
    )

When this is expanded out and used inside PySPH, this is what happens in
terms of pseudo-code::

    # Instances of Eq1 and Eq2.
    eq1 = Eq1(...)
    eq2 = Eq2(...)

    for d_idx in range(n_destinations):
        eq1.initialize(...)
        eq2.initialize(...)

    # Sources from 'fluid'
    for d_idx in range(n_destinations):
        for s_idx in NEIGHBORS('fluid', d_idx):
            eq1.loop(...)
            eq2.loop(...)

    # Sources from 'solid'
    for d_idx in range(n_destinations):
        for s_idx in NEIGHBORS('solid', d_idx):
            eq1.loop(...)
            eq2.loop(...)

    for d_idx in range(n_destinations):
        eq1.post_loop(...)
        eq2.post_loop(...)

That is, all the initialization is done for each equation in sequence,
followed by the loops for each set of sources, fluid and solid in this case.
In the end, the ``post_loop`` is called for the destinations.
The equations are therefore merged inside a group and entirely completed
before the next group is taken up. Note that the order of the equations will
be exactly as specified in the group.

``real=True`` by default, which means that only destination particles whose
``tag`` property is 0 (i.e. local particles) are operated on. When
``real=False`` is used, the non-local *destination* particles, i.e. remote
and ghost particles, are also operated on. It is important to note that this
does not affect the source particles: **ALL** source particles influence the
destinations whether the sources are local, remote or ghost particles. The
``real`` keyword argument only affects the destination particles and not the
sources.

Note that if you have different destinations in the same group, they are
internally split up into different sets of loops for each destination and
these are done separately. That is, one destination is fully processed and
then the next is considered. So if we had, for example, both ``fluid`` and
``solid`` destinations, they would be processed separately. For example,
let us say you had this::

    Group(
        equations=[
            Eq1(dest='fluid', sources=['fluid', 'solid']),
            Eq1(dest='solid', sources=['fluid', 'solid']),
            Eq2(dest='fluid', sources=['fluid', 'solid']),
            Eq2(dest='solid', sources=['fluid', 'solid']),
        ]
    )

This would internally be equivalent to the following::

    [
        Group(
            equations=[
                Eq1(dest='fluid', sources=['fluid', 'solid']),
                Eq2(dest='fluid', sources=['fluid', 'solid']),
            ]
        ),
        Group(
            equations=[
                Eq1(dest='solid', sources=['fluid', 'solid']),
                Eq2(dest='solid', sources=['fluid', 'solid']),
            ]
        )
    ]

That is, the fluid particles are done first and then the solid particles.
Obviously the first form is a lot more compact. While it may appear that the
PySPH equations and groups are fairly complex, they actually do a lot of
work for you and allow you to express the interactions in a rather compact
form.
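To tie the last two sections together, the following runnable pure-Python toy mimics the order in which the various callbacks fire for a single destination with two sources. The ``ToyEquation`` class and the ``evaluate`` driver are made up for illustration; PySPH generates far more efficient code that follows the same order.

```python
# Toy emulation of the callback order described above. This is NOT
# PySPH's actual machinery; ToyEquation and evaluate are hypothetical.

class ToyEquation:
    def __init__(self):
        self.calls = []

    def py_initialize(self, dst, t, dt):
        self.calls.append('py_initialize')

    def initialize(self, d_idx):
        self.calls.append('initialize:%d' % d_idx)

    def loop(self, d_idx, s_idx, source):
        self.calls.append('loop:%s' % source)

    def post_loop(self, d_idx):
        self.calls.append('post_loop:%d' % d_idx)

    def reduce(self, dst, t, dt):
        self.calls.append('reduce')


def evaluate(eq, n_dest, sources, neighbors, t=0.0, dt=0.1):
    # Mirrors the order: py_initialize once, initialize per particle,
    # per-source pairwise loops, post_loop per particle, and finally a
    # single reduce for the destination.
    eq.py_initialize('fluid', t, dt)
    for d_idx in range(n_dest):
        eq.initialize(d_idx)
    for source in sources:
        for d_idx in range(n_dest):
            for s_idx in neighbors[source][d_idx]:
                eq.loop(d_idx, s_idx, source)
    for d_idx in range(n_dest):
        eq.post_loop(d_idx)
    eq.reduce('fluid', t, dt)
    return eq.calls


calls = evaluate(ToyEquation(), n_dest=1,
                 sources=['fluid', 'solid'],
                 neighbors={'fluid': [[0]], 'solid': [[0]]})
```

Running this with one destination particle records the calls in exactly the order described above, with the fluid sources looped over before the solid ones.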
When debugging, it sometimes helps to look at the generated log file, which
will also print out the exact equations and groups that are being used.

Conventions followed
--------------------

There are a few important conventions that are to be followed when writing
the equations. When passing arguments to the ``initialize, loop, post_loop``
methods:

- ``d_*`` indicates a destination array.
- ``s_*`` indicates a source array.
- ``d_idx`` and ``s_idx`` represent the destination and source index
  respectively.
- Each function can take any number of arguments as required; these are
  automatically supplied internally when the application runs.
- All the standard math symbols from ``math.h`` are also available.

The following precomputed quantities are available and may be passed into
any equation:

- ``HIJ = 0.5*(d_h[d_idx] + s_h[s_idx])``.
- ``XIJ[0] = d_x[d_idx] - s_x[s_idx]``,
  ``XIJ[1] = d_y[d_idx] - s_y[s_idx]``,
  ``XIJ[2] = d_z[d_idx] - s_z[s_idx]``
- ``R2IJ = XIJ[0]*XIJ[0] + XIJ[1]*XIJ[1] + XIJ[2]*XIJ[2]``
- ``RIJ = sqrt(R2IJ)``
- ``WIJ = KERNEL(XIJ, RIJ, HIJ)``
- ``WJ = KERNEL(XIJ, RIJ, s_h[s_idx])``
- ``RHOIJ = 0.5*(d_rho[d_idx] + s_rho[s_idx])``
- ``WI = KERNEL(XIJ, RIJ, d_h[d_idx])``
- ``RHOIJ1 = 1.0/RHOIJ``
- ``DWIJ``: ``GRADIENT(XIJ, RIJ, HIJ, DWIJ)``
- ``DWJ``: ``GRADIENT(XIJ, RIJ, s_h[s_idx], DWJ)``
- ``DWI``: ``GRADIENT(XIJ, RIJ, d_h[d_idx], DWI)``
- ``VIJ[0] = d_u[d_idx] - s_u[s_idx]``,
  ``VIJ[1] = d_v[d_idx] - s_v[s_idx]``,
  ``VIJ[2] = d_w[d_idx] - s_w[s_idx]``
- ``EPS = 0.01 * HIJ * HIJ``
- ``SPH_KERNEL``: the kernel being used; one can call the kernel as
  ``SPH_KERNEL.kernel(xij, rij, h)``, the gradient as
  ``SPH_KERNEL.gradient(...)``, ``SPH_KERNEL.gradient_h(...)`` etc. The
  kernel is any one of the instances of the kernel classes defined in
  :py:mod:`pysph.base.kernels`.

In addition, if one requires the current time or the timestep in an
equation, the following may be passed into any of the methods of an
equation:

- ``t``: the current time.
- ``dt``: the current time step.

For the ``loop_all`` method and the ``loop`` method, one may also pass the
following:

- ``NBRS``: an array of unsigned ints with neighbor indices.
- ``N_NBRS``: an integer denoting the number of neighbors for the current
  destination particle with index ``d_idx``.

.. note::

    Note that all standard functions and constants in ``math.h`` are
    available for use in the equations. The value of :math:`\pi` is
    available as ``M_PI``. Please avoid using functions from ``numpy`` as
    these are Python functions and are slow. They also will not allow PySPH
    to be run with OpenMP. Similarly, do not use functions or constants from
    ``sympy`` and other libraries inside the equation methods as these will
    significantly slow down your code.

In addition, these constants from the math library are available:

- ``M_E``: value of e
- ``M_LOG2E``: value of log2(e)
- ``M_LOG10E``: value of log10(e)
- ``M_LN2``: value of ln(2)
- ``M_LN10``: value of ln(10)
- ``M_PI``: value of pi
- ``M_PI_2``: value of pi / 2
- ``M_PI_4``: value of pi / 4
- ``M_1_PI``: value of 1 / pi
- ``M_2_PI``: value of 2 / pi
- ``M_2_SQRTPI``: value of 2 / (square root of pi)
- ``M_SQRT2``: value of square root of 2
- ``M_SQRT1_2``: value of square root of 1/2

In an equation, any undeclared variables are automatically declared to be
doubles in the high-performance Cython code that is generated. In addition,
one may declare a temporary variable to be a ``matrix`` or a ``cPoint`` by
writing:

.. code-block:: python

    vec, vec1 = declare("matrix(3)", 2)
    mat = declare("matrix((3,3))")
    i, j = declare('int')

When the Cython code is generated, this gets translated to:

.. code-block:: cython

    cdef double vec[3], vec1[3]
    cdef double mat[3][3]
    cdef int i, j

One can also declare any valid c-type using the same approach; for example,
if one desires a ``long`` data type, one may use ``i = declare("long")``.
Note that the additional (optional) argument in the declare specifies the
number of variables.
While this is ignored during transpilation, it is useful when writing
functions in pure Python: the :py:func:`compyle.api.declare` function
provides a pure Python implementation of ``declare`` so that the code works
both when compiled as well as when run in pure Python. For example:

.. code-block:: python

    i, j = declare("int", 2)

In this case, the declare function call returns two integers so that the
code runs correctly in pure Python also. The second argument is optional and
defaults to 1. If we declared a matrix, then this returns two NumPy arrays
of the appropriate shape:

.. code-block:: python

    >>> declare("matrix(2)", 2)
    (array([ 0.,  0.]), array([ 0.,  0.]))

Thus the code one writes can be used in pure Python and can also be safely
transpiled into other languages.

Writing the reduce method
-------------------------

One may also perform any reductions on properties. Consider a trivial
example of calculating the total mass and the maximum ``u`` velocity in the
following equation:

.. code-block:: python

    class FindMaxU(Equation):
        def reduce(self, dst, t, dt):
            m = serial_reduce_array(dst.m, 'sum')
            max_u = serial_reduce_array(dst.u, 'max')
            dst.total_mass[0] = parallel_reduce_array(m, 'sum')
            dst.max_u[0] = parallel_reduce_array(max_u, 'max')

where:

- ``dst``: refers to a destination ``ParticleArray``.

- ``t, dt``: are the current time and timestep respectively.

- ``serial_reduce_array``: is a special function provided that performs
  reductions correctly in serial. It currently supports ``sum``, ``prod``,
  ``max`` and ``min`` operations. See
  :py:func:`pysph.base.reduce_array.serial_reduce_array`. There is also a
  :py:func:`pysph.base.reduce_array.parallel_reduce_array` which is to be
  used to reduce an array across processors. Using ``parallel_reduce_array``
  is expensive as it is an all-to-all communication. One can reduce this
  cost by reducing to a single array first and using that to reduce the
  communication.
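To illustrate the reduction semantics with something runnable, here is a pure-Python stand-in. The ``toy_serial_reduce`` helper and the data are made up; PySPH's ``serial_reduce_array``/``parallel_reduce_array`` operate on particle properties and, in parallel runs, across MPI ranks.

```python
# Pure-Python stand-in for the reduction helpers, to show the
# semantics only; it is not PySPH's implementation.

def toy_serial_reduce(values, op):
    """Reduce a sequence with one of 'sum', 'prod', 'max' or 'min'."""
    if op == 'sum':
        return sum(values)
    elif op == 'prod':
        result = 1.0
        for v in values:
            result *= v
        return result
    elif op == 'max':
        return max(values)
    elif op == 'min':
        return min(values)
    raise ValueError('Unknown operation: %s' % op)


# Made-up particle masses and velocities on one "processor".
m_local = [1.0, 2.0, 3.0]
u_local = [0.5, 2.5, 1.5]

# Each rank first reduces its own data serially ...
total_mass_local = toy_serial_reduce(m_local, 'sum')
max_u_local = toy_serial_reduce(u_local, 'max')

# ... and the per-rank results would then be reduced across ranks.
# With a single rank, the "parallel" step is effectively a no-op:
total_mass = toy_serial_reduce([total_mass_local], 'sum')
max_u = toy_serial_reduce([max_u_local], 'max')
```

This also shows why reducing to a single scalar per rank before the parallel step keeps the all-to-all communication cheap.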
We recommend that for any kind of reduction one always use the
``serial_reduce_array`` function and the ``parallel_reduce_array`` inside a
``reduce`` method. One should not worry about parallel/serial modes in this
case as this is automatically taken care of by the code generator. In
serial, the parallel reduction does nothing.

With this machinery, we are able to write complex equations to solve almost
any SPH problem. A user can easily define a new equation and instantiate it
in the list of equations to be passed to the application. It is often
easiest to look at the many existing equations in PySPH and learn the
general patterns.

Adaptive timesteps
--------------------

There are a couple of ways to use adaptive timesteps. The first is to
compute a required timestep directly per-particle in a particle array
property called ``dt_adapt``. The minimum value of this array across all
particle arrays is used to set the timestep directly. This is the easiest
way to set an adaptive timestep.

If the ``dt_adapt`` parameter is not set, one may also use standard
velocity, force, and viscosity based parameters. The integrator uses
information from the arrays ``dt_cfl``, ``dt_force``, and ``dt_visc`` in
each of the particle arrays to determine the most suitable time step. This
is done using the following approach. The minimum smoothing parameter ``h``
is found as ``hmin``. Let the CFL number be given as ``cfl``. For the
velocity criterion, the maximum value of ``dt_cfl`` is found and then a
suitable timestep is found as::

    dt_min_vel = hmin/max(dt_cfl)

For the force based criterion we use the following::

    dt_min_force = sqrt(hmin/sqrt(max(dt_force)))

For the viscosity we have::

    dt_min_visc = hmin/max(dt_visc)

Then the correct timestep is found as::

    dt = cfl*min(dt_min_vel, dt_min_force, dt_min_visc)

The ``cfl`` is set to 0.3 by default. One may pass ``--cfl`` to the
application to change the CFL.
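The timestep selection above can be transcribed directly into plain Python. The arrays below are made-up values, and this mirrors the stated formulas rather than the integrator's actual code; the double square root in the force criterion suggests ``dt_force`` holds a squared force magnitude, which is the assumption used here.

```python
from math import sqrt

# Made-up per-particle arrays for illustration.
h = [0.1, 0.12, 0.2]
dt_cfl = [10.0, 20.0, 5.0]    # e.g. signal speed + |v| per particle
dt_force = [4.0, 16.0, 1.0]   # assumed: squared force magnitude
dt_visc = [50.0, 25.0, 10.0]

cfl = 0.3
hmin = min(h)

# The three criteria from the text, applied verbatim.
dt_min_vel = hmin/max(dt_cfl)
dt_min_force = sqrt(hmin/sqrt(max(dt_force)))
dt_min_visc = hmin/max(dt_visc)

# The final timestep is the CFL-scaled minimum of the three.
dt = cfl*min(dt_min_vel, dt_min_force, dt_min_visc)
```

With these numbers the viscous criterion is the most restrictive, so ``dt`` ends up as ``0.3 * (0.1/50.0)``.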
Note that when the ``dt_adapt`` property is used, the CFL has no effect as
we assume that the user will compute a suitable value based on their
requirements. The :py:class:`pysph.sph.integrator.Integrator` class code may
be instructive to look at if you are wondering about any particular details.

Illustration of the ``loop_all`` method
----------------------------------------

The ``loop_all`` is a powerful method. To illustrate it, we show how we can
use it to perform ourselves what the ``loop`` method usually does:

.. code-block:: python

    class LoopAllEquation(Equation):
        def initialize(self, d_idx, d_rho):
            d_rho[d_idx] = 0.0

        def loop_all(self, d_idx, d_x, d_y, d_z, d_rho, d_h,
                     s_m, s_x, s_y, s_z, s_h,
                     SPH_KERNEL, NBRS, N_NBRS):
            i = declare('int')
            s_idx = declare('long')
            xij = declare('matrix(3)')
            rij = 0.0
            sum = 0.0
            for i in range(N_NBRS):
                s_idx = NBRS[i]
                xij[0] = d_x[d_idx] - s_x[s_idx]
                xij[1] = d_y[d_idx] - s_y[s_idx]
                xij[2] = d_z[d_idx] - s_z[s_idx]
                rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] + xij[2]*xij[2])
                sum += s_m[s_idx]*SPH_KERNEL.kernel(
                    xij, rij, 0.5*(s_h[s_idx] + d_h[d_idx])
                )
            d_rho[d_idx] += sum

This seems a bit complex, but let us look at what is being done.
``initialize`` is called once per particle and each particle's density is
set to zero. Then, when ``loop_all`` is called, it is called once per
destination particle (unlike ``loop``, which is called pairwise for each
destination and source particle). The ``loop_all`` is passed arrays as is
typical of most equations but is also passed the ``SPH_KERNEL`` itself, the
list of neighbors, and the number of neighbors.

The code first declares the variables ``i`` and ``s_idx`` as an integer and
a long, and then ``xij`` as a 3-element array. These declarations are
important for performance in the generated code. The code then loops over
all neighbors and computes the summation density. Notice how the kernel is
computed using ``SPH_KERNEL.kernel(...)``. Notice also how the source index
``s_idx`` is found from the neighbors.
The above ``loop_all`` code does exactly what the following single line of
``loop`` code does:

.. code-block:: python

    def loop(self, d_idx, d_rho, s_m, s_idx, WIJ):
        d_rho[d_idx] += s_m[s_idx]*WIJ

However, ``loop`` is only called pairwise and there are times when we want
to do more with the neighbors. For example, if we wish to set up a matrix
and solve it per particle, we could do so efficiently in ``loop_all``. This
is also very useful for non-pairwise interactions, which are common in other
particle methods like molecular dynamics.

Calling user-defined functions from equations
----------------------------------------------

Sometimes we may want to call a user-defined function from the equations.
Any pure Python function defined using the same conventions as listed above
(with suitable type hints) can be called from the equations. Here is a
simple example from one of the tests in PySPH:

.. code-block:: python

    def helper(x=1.0):
        return x*1.5

    class SillyEquation(Equation):
        def initialize(self, d_idx, d_au, d_m):
            d_au[d_idx] += helper(d_m[d_idx])

        def _get_helpers_(self):
            return [helper]

Notice that ``initialize`` is calling the ``helper`` function defined above.
The helper function has a default argument to indicate to our code
generation that ``x`` is a floating point number. We could have also set the
default argument to a list, in which case the function would be passed an
array of values. The ``_get_helpers_`` method returns a list of functions,
and these functions are automatically transpiled into high-performance C or
OpenCL/CUDA code and can be called from your equations.

Here is a more complex helper function:

.. code-block:: python

    def trace(x=[1.0, 1.0], nx=1):
        i = declare('int')
        result = 0.0
        for i in range(nx):
            result += x[i]
        return result

    class SillyEquation(Equation):
        def loop(self, d_idx, d_au, d_m, XIJ):
            d_au[d_idx] += trace(XIJ, 3)

        def _get_helpers_(self):
            return [trace]

The trace function is effectively converted into a function with the
signature ``double trace(double* x, int nx)`` and thus can be called with
any one-dimensional array.

Calling arbitrary Python functions from a Group
------------------------------------------------

Sometimes you may need to implement something that is hard to write (at
least initially) within the constraints that PySPH places. For example, you
may need to implement an algorithm that requires more complex data
structures and want to do it easily in Python. There are already ways to
call arbitrary Python code from the application, but sometimes you need to
do this during every acceleration evaluation. To support this, the
:py:class:`Group` class supports two additional keyword arguments called
``pre`` and ``post``. These can be any Python callables that take no
arguments. Any callable passed as ``pre`` will be called *before* any
equation-related code is executed and ``post`` will be called after the
entire group is finished. If the group is iterated, these callables are
invoked repeatedly.

These are pure Python functions, so you may choose to do anything in them.
They are not called within an OpenMP context, and if you are using the
OpenCL or CUDA backends this again is simply a Python function call that has
nothing to do with the particular backend. However, since it is arbitrary
Python, you can choose to implement the code using any approach you like.
This should be flexible enough to customize PySPH greatly.

Writing integrators
--------------------

.. py:currentmodule:: pysph.sph.integrator_step

Similar rules apply when writing an :py:class:`IntegratorStep`.
One can create a multi-stage integrator as follows:

.. code-block:: python

    class MyStepper(IntegratorStep):
        def initialize(self, d_idx, d_x):
            # ...

        def py_stage1(self, dst, t, dt):
            # ...

        def stage1(self, d_idx, d_x, d_ax):
            # ...

        def py_stage2(self, dst, t, dt):
            # ...

        def stage2(self, d_idx, d_x, d_ax):
            # ...

In this case, the ``initialize, stage1, stage2`` methods are transpiled and
called, but ``py_stage1, py_stage2`` are pure Python functions called before
the respective ``stage`` functions are called. Defining the ``py_stage1`` or
``py_stage2`` methods is optional. If you have defined them, they will be
called automatically. They are passed the destination particle array, the
current time, and the current timestep.

Different equations for different stages
-----------------------------------------

By default, when one creates equations, the implicit assumption is that the
same right-hand-side is evaluated at each stage of the integrator. However,
some schemes require that one solve different equations for different
integrator stages. PySPH does support this: to do so, when one creates
equations in the application, one should return an instance of
:py:class:`pysph.sph.equation.MultiStageEquations`. For example:

.. code-block:: python

    def create_equations(self):
        # ...
        eqs = [
            [Eq1(dest='fluid', sources=['fluid'])],
            [Eq2(dest='fluid', sources=['fluid'])],
        ]
        from pysph.sph.equation import MultiStageEquations
        return MultiStageEquations(eqs)

In the above, note that each element of ``eqs`` is a list; it could also
have been a group. Each item of the given equations is treated as a separate
collection of equations to be used. The use of
:py:class:`pysph.sph.equation.MultiStageEquations` tells PySPH that multiple
equation sets are being used.

Now that we have this, how do we call the right accelerations at the right
times? We do this by sub-classing
:py:class:`pysph.sph.integrator.Integrator`.
We show a simple example from our test suite to illustrate this:

.. code-block:: python

    from pysph.sph.integrator import Integrator

    class MyIntegrator(Integrator):
        def one_timestep(self, t, dt):
            # Equivalent to self.compute_accelerations()
            self.compute_accelerations(0)
            self.stage1()
            self.do_post_stage(dt, 1)

            self.compute_accelerations(1, update_nnps=False)
            self.stage2()
            self.update_domain()
            self.do_post_stage(dt, 2)

Note that the ``compute_accelerations`` method takes two arguments, the
``index`` (which defaults to zero) and ``update_nnps`` (which defaults to
``True``). A simple integrator with a single RHS would simply call
``self.compute_accelerations()``. However, in the above, the first set of
equations is evaluated first, and then for the second stage the second set
of equations is evaluated, but without updating the NNPS (handy if the
particles do not move in ``stage1``). Note the call to
``self.update_domain()`` after the second stage: this sets up any ghost
particles for periodicity when particles have been moved, and it also
updates the neighbor finder to use an appropriate neighbor length based on
the current smoothing length. If you do not need this for your particular
integrator, you may choose not to add it. In the above case, the domain is
not updated after the first stage as the particles have not moved.

The above illustrates how one can create more complex integrators that
employ different accelerations in each stage.

Examples to study
------------------

The following equations provide good examples for how one could use/write
the ``reduce`` method:

- :py:class:`pysph.sph.gas_dynamics.basic.SummationDensityADKE`: relatively
  simple.
- :py:class:`pysph.sph.rigid_body.RigidBodyMoments`: this is pretty complex.
- :py:class:`pysph.sph.iisph.PressureSolve`: relatively straight-forward.

The equations that demonstrate the ``converged`` method are:

- :py:class:`pysph.sph.gas_dynamics.basic.SummationDensity`: relatively
  simple.
- :py:class:`pysph.sph.iisph.PressureSolve`.

Some equations that demonstrate using matrices and solving systems of
equations are:

- :py:class:`pysph.sph.wc.density_correction.MLSFirstOrder2D`.
- :py:class:`pysph.sph.wc.density_correction.MLSFirstOrder3D`.
- :py:class:`pysph.sph.wc.kernel_correction.GradientCorrection`.
c$ĿoP cPqqԩSy<ސ!CN>M *WsX8pA++3gΐ*7a@ x˗/'?ZZZhDBINNvqqx3gΤ7!4|p8k5%Ɋ+lllx<ÇXO^u骔Zl/jݻ֖RT5y<BJsBO>%BGOOOݪ*۷9BP%1.//Geffݟ‚i"RͲ2V , BeeedWVmzsJ 'NVD}u0?RIϢs>t^4MڄY"222Ǝ[YYIMKNN޳gÇAb{{{Afy033C`r9 ĉEPaÆihY͢"СC5G*a(ۻ.U jVTTҠ:!>O*/6͵+--xwKKKkkkBgk ka~V!W[[[[[[WW'J{jC=մEBT"TjZT3L+ڿt!u4cwl>"p8,@(xJ2ܼ_~;v@ٽ+۶mknnڷoVUEBYYYTuxxuXdzg:xaaaW&c999mZtpp&jZp8̕ZJlٲSnovd<^c$15mȐ!X`iXXo-,,|||&MD{*xCv`{{kג¯А䯛|;vظqHjh}CLoWbԩ'OjGoD>Y1XҊ/ }#}kJ4ju,X;wSSb$/0ƚLc,я{7@b$D8.:bGmC %̓,#}6%ţr+۷+.үu###Gő#Gux{[WW Q\.3;;]Nv5 B յF7c=1Xל'N/ݾ}{ .>|{X뫦222 *Hx޽\//;vhbx-omGU èT*]zQ&NJHH  UUUb866{Ĉ1112 !?f''7WTTxzznذ!zԥK|}}=<<:DwCI!ٳgbV+2А*//2dHSS)dݻHSll,BzѢEÆ ۺu*JUj1V%cǎo6*HN4 >Y[˗/_nBor%K}O?DF-LriqMML&766ZYYZ꣏>2R(EEED>PT1?s*̚nݺꫯcvEJNNW(J&rÇ1 _Uחb̚5+))R{yy8qc\\\r1MlZYYD"saGyիW}||HSoZcwAڳJ (=꯯^ʋQ ]t:ߞ<<<$lllk|_ 2<+9r}Ǐq<==Ž?.2* XYYߺuK''۷o ơCRm6sLfӤQAAA$na&T}-JɠA UU555?0[n544$qxٲehjjruuD̺3۟W\9r$)'N=XAЅz^`W;t oǏ!]111~!]?rӧ]b ccl///5G!ӯ^/..\.WBG?cWWW\.YߺueܸqMMM_uDD}͌ئĽ{<==ɶ\.iAbJʤR!CSN1WRu}33UBUUUN &:u:88[a!K,((@lk2[>7zNR$ e2H$3f %`eeR__jiiy!B(;;[(Z[[ō;㙘9*//O__?(((>>TH$ l͚5tmaaa׮]KOO'gΜA 4(++4i#iz}ZUIzzB`u^I4UҦ)S[O>Yf i &~w۷ogg5B߽{W&ZؘBy>~~8yg[x@K%s`ŠmѣG333;}tO ԫ-3@W/Ȁ.>@›.AGtnnY` t^4B׶=AN .p݌ Úm_Zinn~Wmmmy< }v@/E9FR(۷oW(:@@N]]W(D"0`үQ 药..:Mu/BdWcIxc``駟H$_~944+ QrrrN8!Z'##2m IDATKOO`D_B_vMg#1ƪzG潦b>>::Z.\tח|СCLo۶mcǎ---eU" JEVP]]={l777Ǐu:3XB:?Dq;wԸ.[*駟233BoFuuu~~{۟;w>77:|"55ѣLϝ;ׯ_Jᰡ #,>jժ>lϚ5+$$c'21eʔӧOyf\1.++d2qccUyy9QD` H$r|vҦ3ӦM;<888844T*BjBPXXXݺ:UCBBb1x[laU>mڴD:#ڬfdU!Ru@Gy;6==!J?nZZUSR^QQp ,--ʯ\r1%%%G'NЍ2y葏ʕ+;V)T_xOZL@Pk'̙CSSSBrKQQц BbX5Í7 Ξ=KvBN:ry@! OP%O? ƍWUUEҦ3:ׯQup8 Zr9sBCCTMMMBdSH$JII!Re ׈eQBW^=t{QL&FsQ>>>dcҢP(N:w\\P(r9utt$B j(gg;waaUye@^&R[[[zyCC׬Y&ڿ%III~~~d@ի#G$[ajUԔRDҦ3ϟ?R/P֔e˖٩jϟQaaaǏg*zhmEVn{X >J:8sW<䄤S\\ɦ&+WQd)77%%%!H .o/J|љҥ&cm=RII # cƌ!t֡$tU>P(R)222Ǝj`\s 믿~9~tҜn@g (!|}}{ ݏ۠D"!aOmmmgžvwߕvEta&=%K-C:s BM0!$x<ˉ|}}}ll%Ϗd7sLdܸqÇx/ <o!!!?&dŊ666<חZɬF)˻71= Ua ƒK_!daa!HB$11H.^b8..7oL%D;.W_bsss eeeI$CQ*R̫z% >ṫW&/H0*ss;wr_NTB̟?quuUoQ7nL8Q&051n7GX] J]]ժ cxO @ 0! 
0r\ս!4qDi&%Æ \rr={>|(A7tHMxyyH!geez&qKiiiDDgΜٯ_WN4 cL+#iMХOqqqPPɨQn߾M J\LFvooormmmO~=z[hQSSLLLtss355eee E_}BaʕdP("?x̘1| }mHLBݓB֭A={U+Je2y~wA) [zuYYB(333''GÇCV`.ӥ1h ԣG2>}~cǎ566>zhɤy{6mD ϝ;W__[WWl2E!/^ܽ{W"9sܜ~;wԸ.[LՄ` t2gQuF)k @wA,Gם:Nn62-G-SZZrtt$;;O?4..}Νe$(2M!tԩ{rBȜA>iښld2L֥=ČeKXpEoGKIGN&\NKK A/,,$ԆW^ T.,,$MIIITTT\\\III]]]rr2b57p@Pfkt̗:}v?~gmm[&ݻe˖-[hEiCKKK]cpho @{O m@@Mǔy˗[ZZl[l!Gڵk!T*|aQQۉEVVVfzwLϟwCe5u\G=zƍܹS[[+***RRR/_E+{ݺu֭[K͛v)/ :9Wz^Jo \p!Seƍ</))))) 6lԨQ$IСCwmjjE$VXK/ o!tѡCΜ9)qӦM+**B\2//?ӧOB&&&CU%@• VUU"+ZZZ̙S__OƘ,Cl %ɒ%K4ϟ)S455O>>1jAeׯEI())ׯ_&` 96өt)?szzzm؝QOWꛡB߬u#]>@[+H5Ց366F! jjjHǏ6ѹ c,JI #aͺU͵/T]\\A|`Oo۷S۶mؿH ^0zu`J[ӝnOUMOzƑ 2!@X!533 BFFFD&i4hP,w*5v2_~B@/KmQi6RGjw#/#G!sBPpX\[[KJ ?UUA۷O=|"LMM PXX?"~ϟ*I2z9.҅)XܼyH(L[n7hؘ=z˖-efqqq...FFF&&&ϟ~矧Lbaaѿ''~ؔ888xĈgSe^{ZZZ^Z=BJJʨQ|}}---OwlS-kkRqU pёEDD8;;XXXI^P Хp:Q7i_Hw TG(~ǏSB!>oTWWWTTM6a)@j ݻj:fQi_~! ׯXbǎj ]}>a͚5SbLZv&swV344411J",++6lSG]]Ƹ ҡ߿y!Cʈ@@@Sয়~R߿?˥ڵjS? jHV}Qhk⤄ IJR%8ιs65Ц'nTeѣXLN rpp zAO*n+Fw &ܹ]jC"PYLLL=$d…RRrرÇ qYY̙3,,,V\I]tdŊ&&&nnn☘ >pBrea7p@cccjJR!3;}ȑ#!ϟ?4i{!ٳGU2[mVMaob;{4VT<<Z[[%<cLO%z$1KK˔?1Ȭ HXlV^-5gUH)A4DjLn+Fџ={fkkښooo믿"c6eΝPRRn:J ::Z,766b###)8*222 ?>""bƌuuub800#\1b-dpZT=~xbW_3gNCCCYYٸq#F?- I-*` tw?Zii:@|`OmiiEّGa2gϔv TV}}}D#Q!Ƒ= }ѣGgΜ!Bmf0K@4GR_lM*1ۉ׮]kmm%8R]{I &:oz JNN޺u{ァỵGH'lmm)OPnn#Mwᐹ .X__?11,Y$0 n cF^('ODp޽{!2,I۪VCWHo۶m[nU!$H~WBAٴ 5<+A.ٳi߾}dj„  C)kTVV[D&%Kܾ}… `fW\immiii+Wu/666ӧO'B*i}555}${'a&o$0`@cc?%UuOn`ѢEG-Z]TjGGDz2lO](5hiu?Zii:@|`կ9= KҥK .;yd[c$fU믿;ՕʣU=0UUU֭1bDݓ__~zzzDyBBY&'jLŋ?v۷o+aKijybEP_3{T5?]M7푪רRC R)߹sL&+,,?~TI6ϟPZZf6lҤIQQQY15kV```qq1G ?>hRRRr%ǏSSS5I ,$FO0a޼yTOv?ѷYVRj'<@ZݏkwЄ>t|SDާzjKK X.ݻ4KDDDO]IUarrСC\K/taf1^b_v:=hIIIxx˩u>b7ߴxB䵫 Y(IDATGbsrr|}}MMM^ZaTXX8qD.+ ?c\J1D3f̰rNNNNee*nޤw']Otk}MA7➅EeeeOғXZZJ$;;:Ѐn߾Ӯ<=u.&hjTM cǎmmmUWpĉ5k֐ 'l;wARСC-[v=t_#@w ={̚5=2#oܼnݺmF @ oyT:@޼^v+ j]LHO?4//ɓ'T /@>}={z#-1 5k.\p]???~g[[[z1ms 0>>>Ԯtؽ ///===j] | FZޅ~~~CCCMMѣGy<^O{ezԌ9~ƌquuE<<< uV"B>lj G2 !?f''7ҏ UUUճgvss?~رcB.]pvv>tݽy͘1cСaaaYYYd{D@,z{{1"&&F&WTTxzznذ!nnn>{lWWW???Xb){RtF8q"k}9r J޲z St8D= 
]TY w ״ O}񜜜Bfff111"???<<\.ggg{yy%eeeΤR5U~#`֗㕕I!CPj*HDeC]-N<Er t >BLLLbbbO{dddp8LI1%ݻwy<\.VTݍXG[}QǎUb$zcƌA 4(++%4iH$"7mDm g}fSvvP(;vұ|>ݻ2uժU...yyyAAA*xC=r++zPKKÇB%%%<\)|xxx8%A޽{]\\gΜy1*Q2A P{嗅B?P(\n{ѝ ΞQQQ" j*-WKmMd+N˾iO>$::J7W__dKKK@h"2^TTdeeȄEGGRHFKΞ=+ MLL|||Ϟ=;zh."fjj*"##&L<﫯"ڎ=-ZD)//5kϷ|뭷zR9ǎS\4>cL&#gΜquu5663fڢ@ww>}}}5)=x@\AZii:@_o@ovسeTM իTIDDČ3bq``[oE<}#FW3==!]]]H.^_&ɏ?k᭭tmsmhh())qww_nX```dd$%J٥U)gVG… kkkRitt̫:gΜqAAAjWóm@;^#] u>@~TW<"H:PqEE#.\ׯ_233X$O$)ƍJ̣\r@ѣGdĉtPnn*Y5`; HTwO=J]M*8NXX!HOSS &k/_O{CCÙ3gΟ?޼y23_:tݻMMM#""Hӊ+^z%@@%m rss6lبQvءͶnnnƍ#zXQ\CM|%cǎZZZN8qΜ9+i6W,J߹^SU ߿!Çkh!jkk7Ђ ^ur_AP=8$j䆐( ȕKQ3C$M]JvRJSZSUf:.!ͨC+2P4T">?ί7٬H$~œ9ϳϳOy))=z^odmr5$JJJLXpO?߯YOVz)˾ˮ3(59h 0ػwyyy>4z^u#h.\0nܸYfeeeeee>|Xianvjgg';2,p,QSEnh]RRR#o>9ߤԀo^PP0gΜiӦm߾1mhF",Rw+LU3cǎmoh 999r[n]644իWX0FmMmEF`y3fꫯ^z΋}oo>}ɟ\2""wݻwpppFF̙3S 55uȑzZb|Ill?|pnQQQXX؀F/UVҦNW]]/^֩Sdɂٙ6bĈK.\PQZZ׷oHXʕ+à M6xʕB'O:c֭B!Cj7,''g>>>˗/Xx[o]tҥ1޹s'..nӦMB++: ڵkׇ~hV'ooݺumݺuҥ/f7j=w ?&&& .ݕ矊L0aҤIO|Xʒr(BPPИ1cJKKE [jU] [`~+__au%DBCC?~l|џy ?y=XVūRp3ײ's㬏._-;w]vyyyBm !t:]\\ku:V}f_uΝ2] !ӓ:N.^ZZ:|!s= RYYsٯܹs_|񅿿lSɒIII|[n]v HNN>w\nn˅D8q}eFٳgtti&M$"?~CB???xCHKK6mٳo6rk5%%ԩS8$ ;w`hklll._#鲳}}}sssz}~7ojzB\zk׮;v4_رc;w9==]fԳr%SSSeB/55522nܸ"W_T%۶m3?{yyiZ''>)vLZ]v5*k{̟?ƍh YO%UR'$2 NJJ!n߾ ٳgFF;vX7779e,b deeyyy999駆Cl77v]tӧ ,MKKq(00pժU|IUUUVV,ٹs第,NW]]hѢAYYY9::8qѣGB+W: =W|5kkJEQRSS-~UVٳ,++gf־}3gs{ƮYÇ h b+o1 yyy۷/::zРA zSzzzzzzz{{WVV)#PjjdF]`/^7o^=LD;kxܚ{K,iӦMfff=dw}=((7ߔ1bȑ#d>:G%3{zzlLqqxC9h0a/ih/QE۫q \rIbYTUUzÚu;;;yREh4sBK!vy$bԩ'N 5kX)O2eѢE/))ILL; zA&"I \\\jψ("-4_s]3A.'NvZIIIIIÇ+**RH>/=I/5w}cǞ>>ׯ_gya-zټ*-Z ZmFBDFF:ƺI=<<-[f醴td$j=MZéC}4 h~<*2ZVG?GxH,OebG`D" Td$P@EF Tk2 L#~$͍@KF?H"#*2H"#*2l,/Fc&4@E?gPMh>#*2H"#*2H"#*2H"#*2H"#*2H"#*2H"#*2H"#*2H"#*2H"#*2HtFt"я*(n#ځcIENDB`pysph-master/docs/source/design/images/html_client.png000066400000000000000000001674301356347341600235170ustar00rootroot00000000000000PNG  IHDRTwsRGB pHYs  tIME (^ IDATxw|OWı #B*llJ)ZPlٴ +@dޱc;mq8a}?wN/w' )) AAI_ m! 
W ddք@j ֫4yiMUl[ieM_ hZSw 5̦ru6Y)&46~)6A@CTVB>>W"gB3EBy/l A\ī$ZLv<D;mS&/*{eJw\2znBQ{nɺ BEvh@Y_&A{69&;BI[Vz}MR^nHAY:!$:0TaB?eB)YD} Q+ӄ@ڦ&BY]ۗ|k@ڹR شP۹{'8Y/o>Arx yk [a8.qq '2B @#RD*R)Kq'p A ||QZQ(FiA |t(ApE1 M@ ("(Ap%BO7 Ӷj57feiJt<4aca@ _ "(A I6I; ?f7Tcr:[79AA ȇ@v3 'ƆZb#*KS^>U ߥc`-N$K(B9=#{A윛JݧT/B &uA(*12P鯥Ҫ~CD"FBghkZ9h-yV[͵w'TU3XB  "+%ǥz_ u3U tw >{-E&R%yj]sg|8^Q5egNZ z@8wQi__`a뢫g4'TFEz&fVUd^j^S Jerդ9H%ʒF^ M Mߤ +Lnq^Ż4EQ0)77ШT#N߰`51txuUl [r/+% +MH4ʂka f\G"(<( A00 U[a>[ؐk48ڈ"ac-a̙ 򢆺*IK3F8FFf\E!@QPSSQba 2!s?4X͍Zq̄ G^()КYPklfU[UfðPi%yZB8Ff\r ڊJqKZںVqU*ǥ\lQO߸YJm^ɲqXYa_Oҍ-l yV~v{v9a;JΙ*-:u.F'  5)V7?+ ~SmE3NVUQIr@ 0CQEy@dXqX#|_W̭JٽB~~! E?O_p!D}U%A [K[b3r(9>{Q눢(/0rtnP+bgyyL/<*,l]tg7yf#A{5r*- /J^]c-9.6RB^~UM0ںfG'$/}IlT=NK&a!CD; J҆-ۀ Dϒ_?wi`NI!ӹ ~,}Ȗ[Y?wROk^Cޡy/╺+'s^J?qY4G$BLv$K ?|ظۏPB-R@K3s17Fڪ҆L)={jRfj8 7|=)UD*ƆzA"\k}/7 ^ yը\"/!J/.8X|kHwvdhi͙8d>: (~޷ICsd t5Y'3e56ԪagmQSbCvaI@(܂܂@L\ Cx5W9qg{@(Oxkf kH’ܴİAPRqU>Q@MiifER|ܝzUEkASavꢙc\(L".9NAWRd墘?yO;pVLi@jԳ D/,*& qXBqye5P,mG1Y֮!ETsSnƥPZP]q ӐWTVbi5yR[P(HlD*OS"^Bfyӱ#j)ni*P&Ry5 /%SYP)] AE'IWB\  z*o,ӺsB&~a ٬zoi!!ȇHkd׎*CQ@^aiU0ޓA⒙z\ ~=ng]mh>[_vwDS7=;ߎk`#7;[W&<;=73!?B1c{VoWP\njls7'.,U,ߴa[>]8">?bmk(fE3DŽy$>`$B53{ ƥ?69&U}Â~Hcdq1biCy宙kśE1ʑ*mPUhH9_ijj HiE=6%S##sgW!jk:v9}/6qɲR u rߏMCP*jO\ia-q|j onA>'i;_WYMnhZfv,R)ttNzv]M m 8zwŗW "g{"dz Jt9fvvOn]|&6Vf%v\Rm xn50m  rCdRYc^]U@ej10*MR[ggZYJX"R(JoI!5djk1bq3FWAԝ݂MIQ_RA" f:į/Ô?G^QF%yE[N7*tQS*S=r2jyly^mK}ɲy4jçXػ뫕Vݪ|UE(F*99JMe5Pu"mϳKNթuS[we,N6衆\ dHxVR>T^l-5I 8@>䑿(Tm6(:zwbd-PA/-*,r4'Ar^MFSCOy#ٶ? 1BI:L63=; ~a| S ((*#F'oHߗ_ˡ':qĀ!yũUjǥeeYOW3u7j8ƖuP;q;rX2v# ~WҬ@Xj1YEYIJwr5PP5A&ALm-m-T*Ct 2[̜h\݆\PP .3X6%r\U_va0Y\'N(b{9qxDAUy3F}YKR9#mglMy$QyMU5FO@zv.t 0uht܂2IMICPJ}ken^.{ Jit-54. N56RInAkejiYpM-b0 QVVv~19O3Q0 3gaTřl6 EM :ؐS,JAo:y"QVfƩW-()'7|?IrP)+iifq|xKsj~|JC;Σ#b "id"&\R^Ye9ۺ忐2/z%ێk>b`m(F ӳ]l-(^"Z t7#r;UkP(Z.$ڜ|8a V&e[ ,QDlafӔ+5"H'W0Q"41 D Kʹy s;w1 L- .痱M06t kt#;b^]H)R]vy ȇ? 
ؾݼI$2YAuA>݃Is-vQ}Ϊtm|# EBQYe|*+#Сҵ+c/795#G6z:|'xƥr]z!A1JiE5ʫj?ZhV{*C+@RL-z %w ;o.~bxUTSU,C < C[7-3[0G[+{krU/]mO9|@O v+V.LQ*O_Q7 >fWk' G(ZJG]3miƯ8ƖjXScO5~kdaugfO5ao#EˣF~\bZۜ}PP\5@PV}9L) BUi4[M `VyERfI7Ө1ώžY&[D|'A3#JBL-aM>vBWHyUFh+-߇󓿃?Kf=*d0 p䕄Ԍ>-[UZLym g+%bhgg3r i &SP"H_,k1 Y8:z6ҭzOWw'Wn3t ev/!%CNoSOj Hm(YQ|e_al9s켡Þ.sFiE>PC1tT]("ؙk3P12<|ELf(jO}ܬ-Ls X/,%]"b-b 96Q^]U.NA:|cbc`n{zC%f33u3*uxmNl‹z~aRI(te_WZ^%J8_| 4=5Y8rlԝ:P|Aܺ퇴YTj뀙Rqy^Qhjl(JPL|}|- p)A A ]ܼ9oۿ\0sA)*~_aa#g}>\Vܞ4irlliQQ{X,n16h P E1ퟑVS[YSW]SWU[WUSWUSW{+5X\---AAAngϞ]tQ~euN utttbcc1 l/Md?o@@=]\36ڽ{ |w̘9Gu{ RG||̸UJ,vkkᗋy<'A777+엚_I/d_DDđ#|}E;FSS#""" utSR޿a(mb(a (a`HH4>@DM֭UjNXXɓ'/^طo_ Hq7nܸחN?"Vv O(yxx<^zMX&y <[{cS+wOD dGX]]3rDJr%* u4HcS;wbӧb% sԙf6.7OBKݭmLQ^^A^۷~?a[@`hhmrtpprt wAqU2:.72c猙s6z'%</|%~Ԙƿѫ7K555WyϞ}JioUMCy>dvvWF4i} \c'wu_H<=ud׉cc='ߡa?ٻ^^aFih^ѐ|}}:\+7A[cFџ@ 4irgΜL8Ç-BTa|p+++{{{CqrY=vիWw]!Ç,X2rȭ[>x𠡡9RGGG;v̙3'""uiiioDU)SiÆ AUTTܽ{ẃ{ӧOՔF$ɪUTig@0c d2#""JKKɫ֮]a``*dV4h޺u4+VXXXXYvZUs˖-ۻ\{/QjzyytٳgD" `CCȑ#ϝ;';=5l2j$We55?t7 s޸qѣݟ={`0G67 4gw<:|Ǐ Iuuuo%__uvҢ`AA?yJN:>l'#OɎ<ջWOCCCM ,,,*++<~<.Bʪ_}^xgJ-o➞8K|iwLsY+޽{ejjj6nܨ*eu/\PMnŋܹs̙̈Դ϶jB{Sݫ9+e0lΝ{ڵk5,}̙999?&=6m$Dh Z'm߿||ĉ-Z?nذM?r@md<clmm]|qƦVc^|eӆ\]]֭]-HpO<%~+QGP;GGcΚţGd ۉU}Zg655qrt7v4_ܸqΝۓSR=k6.! s~}##O'OJtS-\{ S˖AlA,Y":,eE >~GŋΞ5kF]<='O˪ս[W s˖\jdn_&VS_*/iT?2|,O^]k]_G1cbDDD\x'9) `mPSScm㠰 U}⽽}.^Jcjb Uù&}Qi~Ux3Qlh`҆A$Θ%[zJ$lWa3gtrrZn3 FE. 0`@Ϟ=< %HҨ(2~-|>ݺu ,ǎ9bmmݣ$555RyMLLiF zm޼ԔN/[ҥKFFF\.qƍ<*W<2[{0`U^[]]}ĉ;w=z4>>֎hĉ &P(UU9fҌ"HMnJG{`kke˖988\xQ![&k;rj`ʔ){޽eK6S]]};wz{{{zznݺU>ОʯJNԈЉ`ذaӧOwqqYzuzzW_}իW/??ٳg߽{M~}oFƨJ)KBl/.fWVVA~~^ Dސ[[._hi]HHتꪪ&'&%ŏccCB:󨎂C+馝Cϝɓgg8.37307"w3 b6Fb}v3O4_~˶]7o:.Դͳ..\v/}<pk['cS VUU.WX{׾M!:[F/iTe6dR)lNS/ СC ssOLgg+lBUD7w!C:t0 M0 B&G R_WMxdk 3cauM:&irJr@k~?q 4_o陖Fޮ^J|1ruuUVOss#IÇOoVTv!A;v~q``ñO}\}}}N5Μ>ûl_uiU^^=),b{{-"V^1|ĘUk*|ƭ;GTZ.9/&Lٳpb=0qRU;6=Aߢ0|I*Jeۚ'$899vڷrT/^|9w{Y|E N>|xEE4MUxOgVTT >NBaoG0+G~2/o8|8,,L~$CN! 
TVVi4O RT*_ejjɖ*W<2[Q(I4gggEe3EEEWFtڵNxԛ\TizYYYK,144D$Eg\\\jk&FhU5/y[OoVr%,9%e㦟34'R>5WVV?p`7"o҈<.YeeU]}}G %ouH;++K3Ss/s<=pT I{U^^q% \ |Ĥ yyzȦ7fը#:'~h\*^+[Ϟa{w6w[҂F* jB5{?eeܝ4n\D<;;>Fc&&'wZ씔4rx~ p^CMG#> ?+jڴ~'}꿛[6,xE'5*&&\ yJJJ~駧ON6 9`@aaann.9%MF FiӦDU DFF*y|u߱sO}lwa$+6E֫#/Yŋ.믿>޽{O8A.{*))y9|էO: 0a <<ӧ͗.]*++իKȵtdEȑm:I2F$'Ow͛7׷ŋd?4Je ?899 8ԩSǎ#xDŋwe˖-k֬Y~rWկ:~ӧΙ3b߃{N/̝;wѢEiwq*S]{q8C-ZN:uɒ%ag}j1{*_)JmYUb9H~nsUE1|z8jÇAk@H8v/O}|b+Wb 99H%4PPbvݽt "oه<З/(nu˴Pq)K }q\SΟ?_UA :t֬Y @[u-9nljxeyѻ=w][֞:}V( 1G㽝/RpPPbb/_mY䛟7o2x  'Dkc驹@ htE֙_ѣСCoC4&&ە/ TKKk 1GHZx~xLX,׬w!W'p֭ۉ=֭;Ab?*݆hn(ߖ POS8 @ *!}<_~ E_M@{%%~0!ܺV}!M@@ k!8B(~ufh>"L@B A &=U =z,bmlm @`@ p||@ [! @>68 Omm&4Bk >詪(~M/-5Z@>l|^=4Q\! 7!=8NP.eC[@>zC>na@4 q@ >|)$CQBx{[[@A zq?6,q!mR1!nƤBX BBtg_PR!jjjSDQ45@CA`1FO;C %sGme, IDAT0'w\khIk(W1E}R豯l+CCƎO>xO?A_R1~D57w9zBc.BhIr -!A 3rA /O&{G635ZY}٫f͜8mڌQ(Ғ?<|yVwp *RH` mF!)@EE?O>y_WW1t`PTvyycnģo/]Kx֣[>z޼}Ų$R\,(Hqx{Mr64P+?%_8YfwqCU J}ֆYfB(Eƞ)7晚ތwęg~hU"HS OR ܝ<]dS3rrlMkO>n-]F]f~EEU㸦K$Uv&Fct2k$Xuֆ577pWy{SSqsBF y33'OK$ צZ酃 n*qϙ3&8(PUɣnjG453-... 
YG\PKh芕 \+@LLØGH'O6mQ ΙNtxR\TniaAE_.$$#%幸p33{ {0v`o, x:D,wRu/Kg(@a kQnmkk͵ܾe=~8eQ߽#K|Z8@_q筿>74П;{MϟJ҃sp]8o\I}>+K_60 `gkWnFPTTutD"f]`0̛- ۣاs7Mf]藕bǍٞ}R|=i ~ݍu7FHں으_H"20Wˈ<}kGjT"5ot4q޸~/)/RR_sa))/֭+ѡW}>AAXO cb1ϟ7g_pG$%5w*7l~:0a[no(1KIz/555&汶 4kDZO>YrQFsD^;99%6 sǍ{o=sLχlܸ`$ɦ6?~ښy~!w-,mN:|䩟~\PPbzy`>!-?W_.y㖩뾟'ӷHpPpJ󠠀g W_C홒ӣEzy8*触_si+& Kh^ j꺫r :C*8xm۟!G^XTYHJ(rluEvbc ZDR)[)Sttj̮}ފ LM uuX>+sr󲲹WbB=]kymm“ Xft7-> IB0LW{]FOMUf݋KVQN %'YMMݻW,]WWOxQֈaC,#A{̌҆54[ JK}ĠSM {~ --jC&Lfz 33Yp_e wŘ;JJJ><=ڟ:E[WWv/+2zegE"8* 69mmm7J,M ݈4Btv:OL/Ersumݑ 9{׃IƒFwjOѷb_r`_cd1lȕCwfU]fbn"J-(|fB"l9 lZ:: Ƈ ?\7`&Apl?xBeXp~CCCRrEz:lg3Q/<:#_ ADxDnSD(?V&1YT|+|>_RRbR1G# ͜1k_muxS~7n\C133,Qy}_}}}& ۲'rlm8>!!gINkk͠{`Iie5q.bTښ]{9?xu[zzׯBC'9o޼bo/X+Et0+hdhh;h gHa'~'++۾24)DgnIJJ?'LBd2D&cm"aNg#[Ɨ3l@fgQHhEmo*ioJzJ`]BZB a׿ kt3 tDUAsVn6mʄ߶RMEWo|D"L]%tItD&wXCgoHL$##ñȱ1u.umܘɎ,E9Bhei&(Y)|L)b_SR :|G 1޺}oo]t[7{ % aBQSS%J[+ Z6ihWā[zYg }u[*~?d Buq"=VhlPȟPP2gZ &SKVV@0 E":&ys}Y;:;;9;;!C>z!DW27FF^__߾ -OHHL: Cc}9UUUUU! ɿ??-9{C6CD'!A鰸ׯ_O9kgϜf0'L1K\DW' H0 CL r^\l}ߝ75SpV\ѷxχ8 Iz𠹩Gb9 ?1&o5O_ tIƦAD0D*P(}4:;+XZܾs'a>Ӭ; $c==ߞÝxngy!H/aɣJJ_<~~B :l6S0bP(EEߎyg|\*X2P=.NQ&=1s}4&}9O$+lhTc#22D2mۥ.O9 ][[+"*FI 븸d;[[a&FMMϟSUUUDL[XL}jdda/^hQd]ڵmʳgjjjz-VfIeeeQT/E}Ey9>0L"9: ݴ'”tL}eg[[[?%_,&v^Amx5e5ozOdߍ݄r4E9a !4osC靜oG^ne \1YO_ !$!!IQ'vD l‚燎P(B7+**Lt{qU9N+(|.TK>yͦ)i~>sQ]MURR~b B4?[$f!vNmmmm}sx NY@C޹KATT+끕/ks|/v֭\1gy ]fgkܾٙPCo߽4bfSSUy8Wu=+SWQQaQ{'%VTT'uV7+*^ގ0'=a\FO}6Q(ݼ݀h7^+&/]ݵ<')>?8f=γe¤$9gUoi1ukutSLS^|5qˏge;F::؎s_4U[G lRR~%%mK,^(--;pԴ3ZZII^U.3L-TԍÇt%%lOY}+vII{qoNJYKjnnnnn޼e~ѣ -++ a,6K6(+3z'\nM/g\ֱ\oߎ:|/]h5D>Gtr2k߲\ܽO$G$ ߤ.R5Ւc*)!dڊ#<~s{xZ^^x-&+8|oZXY:K_j.?-^̠/Y8/#3[JRRDEi4#'++TE+K ھV9}2J963g)**[TWW1mґcA%NVdSϧ&>xSSLN^ߍ4GVoDp8;j:ʄ))>NKdee ^tW\L%?|;;8 }EE7u4i~m<r98u-QA^n?kj5/^0%ly'}>JWvĝ)!+SkFhRGu AWP7W#0߷omn qc7a`ΎGIplnܴ{;wd۶ eMet:}Ȑ e:j#VYvQ}[ x% Fvֿ"ϊlJS8gæ 73n_o[K!LD}ݬy%&fW244C܍ѻohhWCl6KpOD3}Fhoݺ$))IBS>Xۃ |1/??#3} o܈h5_>۰y^][_TR(bGy9Y>t!}+ˋ+9eߝ)QՁo: *(/(/<87gzؘp#߈<?H~ IDAT@ ONN\l]D|ҁ  RoC:R1 !T_W6,&~t:ᣬd˓d&d2uuu}}}[[[ݻGӧ65j_MM8 mpIW$6 ae'?0UG| 
߶uGJNN.< *J^2}t''ӧOWUU2((F]VRR!$++yfSSӏhvvv˖-KHH8pr+Vhkk n6vӧO ca͝;*66Vgdd̝;gŊ!IIIwwC8!.]nlhte]f|>ḏ2U>! FHb|ӧO555I$FsuuEx<&IPBMMME^BDDy|`~7=== m3k,sζ6"}52240///mMwu;uqq166633p8w&VXXX.X?.$H$رc>>>$UVnӧO3^JNN=z4WƍxṊ~oK.HֽCp%{B122*..uvvvBd&&&#ǎ{[%##eBCt= ''w񆆆~ժUǎk5k.\@$n߾}?~hee,f###̨Tjj~!@ݻxb99b==='''&$$G[_QŹK522h4G"0JU733]hAzF&a[jbfGi蔔olhhX|&5svii)~y} W <~`e~8Ã'O.YDIID"͞=;,,˗gϞ5kcÆ )))ÇԜ7o^ddWB vĩ#~QF1?>ƆsN߿GF6`mmrx)))q_[[+ 񋫯'L퍿_fʹiB򾾾nRTTdX!}}UVoڴi޼y$iΜ9 OOϞ9,X-Ǎ" 0/vr+V^^^~ٰ1.>SIǍ3ݽ aľWLJJ|!FdMm Y|~d_$%=s:55uΝ7={C---D aFahkkd՞?uU|ǧ=͛wG =ڭElll/_f͚4І B</''gĖVVV[n)uuQF!̙zy_rtT*`Q?l"##!?L !|YMCKCS{k׸M9tV0/ՍE\Vj?;wl8))ܧO2^S3hz|а;oݺ͛7] B?z߾%%%iii+W$gooJLLd2ma%K"##VZi&13@YYY.]|rMM… ;lѢE /))tH;7`X\.ĉb-JeeeݭɢEΞ=kee啕@w,--i41^WC(+6=|p|6̽{aȞ֞db[ZZ:э"TTT3EHLLR&&&/ |uk!q87n䷵q9{>+-QYi@=}mhrs=x4n6KOEόrsr1 8޲? EO9s$%%qܠٳg "N߽{>(͟?ѣ&M ?;0[lYJJʌ3;Ojή]:344ܿÖcƌyq]]PzuuuXXXuuu] Ʈ]x<^DDN9sfZZ B˫@@uuu@@ ̝;7---###00pܹxbsZRRҾ7xaaa0@'_.h4c{G}5ֆ*˩xQLV,_n°bnll9|GIeWv! #d2*d ;>U7m4ׯO4Ho2ܭzlΝ}/_Í-[6bĈn2vX:>ΞeC"/_Ӱ013!HrTWWZ !DP\\\O 233L1`dƍ+wMQQ111_Psbkk[__|SF ZZZDo7iڻY)=f`h !!sNGbzd29C|8so_W6?s^zxDIIIzzN.1[nCZ#ҥ+Y[ Ҕq&°#G:0c]~۾e$ A@uf?fLٖ!.O[֖dl #G޸qC̅68y$~6RPP`/_ښF))))QQQl6aHe))Mp_^ %!!2q㌴E87[=͞9s:FuqoZ2~>}45XUNNO>;EYY@99[eEIb 8G# Gv(q\Ǎ7a„~ Z/x*~+mSn {g8,\2{lWq„ tryk<|/q;v㡡Е+f:Hz,@z>CYYyŊ'~X7}++@HG~ze^!#|*^AJFk>DtǠhb-<׬0 D`A'hp|m_?&N =A8::Θ1ce{N8;wjiiHH;w ƏѣGEoe˖W^G?eeeL&S^^L&3L&⒞.fs}ajjǪٳ?~,B 9yd_B+oo[nކ绻}';+Y !CTVV§<oԩrrr<㕕1ˬ[[[mddZf ɼpBwƱcǦa<]D3g7i$"" WZ70''G]]իĖvZ)))|_\g]$WCCC)a%:_aXKK˴ikkk7< hkk+,,8q"Bȑ#srrp8ҾyOb! Х?fY,޽{_~-yd7oiӦeeeQQQ"2?y$j۶mYyzz⋭6m3w5k {foԖ-[*++ڵ _ZZH*f7u>}5.Qg###GGGbUSS6^hUAABhǎͶ'q'n2w)**JII)++ϝ;˗y:e_99¬,mmm'wx)VVVP 7F׬YgccPWWCwlwq)(М9sZ[[ϟ?;wӓ(600`:::!!!<'!;"ux\.h;BHZZߋZ[[lZtttVTTTXXpe= 222{*`0X,ppp8uT||n߾4@nb\.t=:XZZh/_&EghѢgZYY_xQp_I$ w58j6 >{khhƎUXXXPP0eNJqH¿a3JQQRTD** }D0b``PQQ++&On9sMKK ;w.<|==T`466 "uWa;lЪZ<ˊ)Spc:ck? 
@'|O!LpƍSD&իVBQ(?ðO)OQY3<OEK nUUUo޼;"d2>|(xq8/^૒UEEE]VbD"E0P---w q|25>Z3gN߷ogϞȈ|ǏLJaY&MMM /))I<{a2ɓ'rrrSN=v㥤ĉ#Tbbb%brpphkkEt:߽{744F7nܨ Az{{/ZHm޼9>>QRR}vbU2;;xb-\0,,,%%… C ߿?|.t6f}U!6㏅3111113fLff&aAAA l6;99ڵk)555uISSׯ_?|*>V0+VUPTTaCN<)K.ӧ/"aؽ{BɂÆ 755?~$.//߱c&BHMMmŊK[[}v:"""lmm ϟ?EhGII .N:tbh48qz{={/䨨l޼Yl)SS(55~f =|תU?~~Wjʔ)G[s Np*^Tՙ 6o) 肯ȑ#>E`|}*^Üپ\@j0)B'@`A' z  0.?S^^oyy9$^mٲOo c̙D?ݻDZZZ^^ѣGxzhhȑ#t:Df)?U 7o^llce, !dll?QnݺtRIII<ڵkJJJW^ '+?rmllii颢"b5fEGG_"g W^QlܴlB$>ѣGfffƍq[JbgT`"J$xB|8*-((H1LPII ~u577>|;} ,𐖖'DFF0!4z@O_qMVH__֡Cݻ X2hРݻwÅ/_vvv***cǎ2eWQQ7)..sBhٲeW\QPPpuu y\'W *^Tՙ@2y7&]TA'LE/?:l^0 k|_@&۾ x%{ \pp0&HG5۹sD2k\}tt4Dy{{ߺuMV {Mnر "66l?|Nu7ٲe1͖;M#(((///77>@eMp8iiipT>X, WwСCl6J:88a͛7BDV˗/_j6''G]]իں:lPDDDagBCC)JHHa---ӦMׯ U!t־waqqqT*u޼y=S({_~qww'V>}֭[EEEw޵\f {n JKK%$$Μ9S\\u)|K%&&J /^ys@$Vf7nVZ|͛7;iJKKEWWW1?ag B;vLJHYv-L.,,?AaaaB_FFP`W]]-))r@\eߢBC 8xrrrLXYY!:+!!?lOG(Ht Edr6!#E/8>>~Ȑ!$i/^̙3V###;%%%0|^|RD񾲲!󎥥%Fjoر111QQQSLikk" ]C /=&8UF%kll?~ѣ |۷2s%--m۶ 0#X~=|/V׳} BV0V ޫ meV;wnEETTT$3/4jkkuttp!aORSSw`rKW?f0Lcct]6nED<8b.-M" ]C%%% ">nG  FBM#їJ'$$d{{yX,ii鰰0ӧOۤ%%%.]0LWWN/^ի</55uÆ YYY!y'ObVRR}v{쉌/_|q<&*))inn >%vذa򦦦ǏdAAAxcMMMڊ+,Ht ;akkkfff``QRR"4o߾GNHH@̞=!g͚Nӽzۧ NN$.4:[:<M}@UA'z'nqtt400՝1ct+͙3GSS377ÇЭlxt/44tȑt:D"xd2eeeG]PPt|_ss3a,t뗯aӦMBgΜ=p@jjk[[[º}vuuuЩS&MDtҥK0 Dt--Yf8p9 `***!www~!dee{ŋ_HHL0~С|>ðSNb[[[aa!~ȑ#p>gDPm~CWՍ˾g2dB~ŧO.**j"BZxӦMCYfڴi!yyy__[n!lbooO?daԈuuCO_n$xQx<^NN Bn****,,p8H$GI h/_^NNNp!󎥥%F{%Jl_y(++S(255COBDA1 wg~xyy󩭭%6nllcIKK[ZZ޻w_,//䉭mwӡ'o!|zsBBL&.슃d &nܸ1**bD"=-- 'U]]ѣPVV֣GB˖-;rHHHHvv?`bb2| %l_KK`(bo)p))ׯ㋗.]}O8󋋋}/\ t&ܾ}_k.MMMii#Gw7 _ a]O<111  yfKK3g6nXTTpʕR///"%==]f7nl޼j{n333|ÇwY[[ۿ} 0ȑ# .c)q\+xSUgB?zn...ZMMMAq 26lBsٲeN.\G[Q|4b =&V ?@ tAQυN5N@/? 
@zm_B/x fI$ѣGO(//׷;޽n #33o2Wmٲ#%76-++7#>8NZZ~6EEEq8 CCCOOR>>xOofcc7|555B}bnnnll u` h!d2)9L  %$$ðiӦko޼+**BgϞp-a!mm{aGR͛׳:s)P@@@[[[aaĉBG!?aXmmǏBVVVw^x;a!!!d2988ðC:::|Oo ||gсuuu'OLdssss=`Xys@$Vgdd_ QPڱcLJ>>>DڵkdraaG l=xƍ΂?bqڴibYYYTTaD&nBݼy}455ikkwVIY7oӷoN@t}ٗq\kkk"fEGGȈvvv|>?>>c gr8댽=F]]}ԨQ</''gVVV[n}bccC.***!4{얖/Ν5kUCtUVV"T`"J{Lp-FC|c Jrrr]% @w,--i4˗/UTFl_C==={{ Pbb"Ʉ/MjkkuttƏd2B0|ѥ&=>`87!xVDJcccw3m6U<}E3gO?rfϞ +>P#L&())H)***//wvvq;b G<.55HOHH uϞ&bD"va2邉7nÇ%yEJJII@M&##pI&}w~> Az{{/Z_knnN{ŵ,Ki. ,MV"XBlAkX$X0bKW"(hԠbURAydfa4yq̙3gδ̙p?NQ˗/۔!!aҤILCC7ogϞmbbZ ~O?DQTQQц ::۶mO<A?oBy<kP -->hǎFε}ѣG~ݛ7oK#)}С;vHII9yd FsС'Nܺu+!!KGG_믿޾};!e˖;ޟDdWVv9vaB322ruuEqR6(_RY=s !J,Ç@흯/[3?iU~WBŋBaJJ .vw iiiÇn޼)KY%֪[4E9::ZEEEGGO?}96i'***'LmCCäI룣<9rSΝ;޺!:"888BJ PQQ4hիWLYHד'O8w~ -!ݻ###l +Yz\tWLwKF̙3kDHFdsΕ$''3o!ihh(jѢEo>ѣ\nEEEQϟ?w?LIHHwދ/;_9L?~LLLjj/WzBO Чz]]]EmٲPEE+77N\h}\uI\2n8[[[{{2Ħ444}l۶MRiRzj###7o2s-]J"z+iժU}0`-NtV[***\.w^q]TWWO2EUUuQQQLj%ٱcǂ &Nhmm}G%''cw/ ^xAkjj>|Hu/֬YCtaٲe˗/onn(*++9sFlZggȌ9s젠s윜:TT‰'(z=<<蒡r2\}._L,IyxxlݺD%K rƍ7r8gϞ,E4y~vttиy&EQϟիWfff˥;tХK ~AX儭 v}777EEdSR[ggϞeRlmmJ qLԑI9s&=ثWuуuuuLf???___iBB} IDATԩSO7A fK.B:UVV***?^tlyyX/}, JIII 0_Ċmie ̙LCmV[OLll,P׮]oܸAa?YSR[w&>Xt)?ҩ9;;? 
~|>_~QQQBT<OtB% L@_b%xћ7oF!( BѣG)SKKKw+++YJkJ?2w6b;0,D{rI#\~}ذajjj,k񥥥̚Jl6ٮbXM~wg6L;vluuuBBB !gnA5B~颞={F2c"JwDR4 NQTۓbb zxB]]b5,޼y|nll@ uuuǏϣ\+WƏ/ LfE?Ss\6]VVƤxzzb+o>SSSΝKG~CYxq*!^666W\MTPPo;BBaaarl(UJHHtlZh7oHYV[/;;/aX3g !wqq=z￯-_~allꥦ} ;ɓ'_YtfZB_|qY33{8q~Nҵl=dbb2{/RGDDбAF%4j[hѢĈefÇB\.OMMfR|EQ#FPSS?~/++ڵk|>`zHHHpuu3fLzz:t/rttܹs'޾r%-ys9h QNNNg^x1/WKt@[SDțR!@nm_@޾r$?WF`M-zz y x%1<O'N|w^4@M&幱в2&@O Z C턇z{{+[Ip8,K(X$ȑ#;w{oSSyVzmmu:>lvebb1|{BCQԆ ttt!<Օ*/'ųgGG˗ӟKKKlvbbX~IyV:-55RXXlr dx5kTUU;v,,,jժ_rEPxkhhHKK>|8=geeuMYtL_XXۿo ẵ&===&=))JZ_AAA~~+舆aX,Vt_ee%!DSSIݻ7[rl6[[II9)yV:@ !555LJ]][SQQ4hիWLYt.7n0)toB]ǹKٳ'***##c޼y֞n޼y,BȣG^|Un%477?}> ꅢ(?uuu}p\(g|>_SS=33?t ݁ڜƍg̘A5j!CXgggϟ_]]-VԚ5ktuuY,!%Bccc6mdd$}e4}Çc/ bbblӧ)jnn USSEQׯNjffVSSCQTMMÇ !ΑsaAAAse999LQԲe˖/_LQTVV;s =)i`͚5K.Pi߾}MMM&L ٳGz#x!, !<`&RPP8qEQ/_>|Gss3S֭[322.\8mڴO8)ήŧ(*##'y9W.MIMM%Rr=)?KKKfo߾#lڴI4`  !SN>}J9rH21 fΜI666kݺu wssc2_p_hYXX0tD(fIKJJ( SVV~~-[ȲHm_P(f3]xQ&400 <?ssss>߯_:+**绺2;|ݴYچYYYCeR+LG&̚5ӏ;/v3BQt:!bZ677w رcL$V޽{3l6=meeXEWP4h ---2===:DzR?念CoM2_~٩7o߿{֦+ɤսVjׯoׄgxAddٳ; ھgllliiyhյu-fTAAAYY٨Qi[^*//qZ> &޽{oYװ׮]SVVhL*---++ѣ/_b@G![LL yx<ׯGEEBW\iff`wZi ssЯIJJ;X($$?EQEEE6lx5466&544L4l۶mqqqt'OFDDOIѻw?xǏWVV~+OO^`AJJ =pڕ:b555+))2|1y&&&ZZZIII&L i>̔9m4Yfdss/N_hmmmmm=f̘tVbz*!$55U4QoڴА(/zEQD~ ?G>MP^*@@O&5}Vӏ?=<y-r ЭIQ4]Q?sh%7l3O>ߵyb;}#Y4??yd2e/wǿV#iӦ>GS9;;aÆ]rEGGGwO6C!T¤}nn'd/I]]=66vĈX#=ϤI\2gΜh:QP3PaONv`o/^ y8p@.1lذ]@׷dɒ۷ow|7o+cw}9TAE!m޾]p@ԑcE||W```n>~P R/}oh<)&Q_k(jllz7ֶv<^޷oߡ(ܼyk .t۽GIoSLQUU0`@TT8sׯX,}}} zKK]] Om?.Wt/ٻf`ee"ͷl89;(AϞfeԗr9TR=HH~dY`iixW׼~PE5YwA==>i7oݳwgݲ(?niK'O\.j9ihhП=yƍ ,hhh ;v,//oʔ)-'nMWW7==]]+B6mvUÇ}ȃђtZ|jﶇ[[Yy.Sp4rB-[{\ӧ91W_{D~h;fM-V7/^իy>|ٳ{GٶmIիWMM"1Ǐ0g6tʔIΝ_,ŋ'MI#G;vxHOB3_Ps2gL&=fhw'!;1ŲhK'X+))]\D+).aLȸVZ9驿\A֭]j-gjL\\\A'''6U6T~`ҏ'cV\$' ߺ}W755SNtZo޼6tJfmj;POҮ" Ǘ&BH^NشmncB444NK6ǫ6OU>}lmmZN){󦩲R]]H>-޺*z=}W2S K:H(qbKEjY?x@?|"bReLQ-?G44O1c|N<.qmo޼a[YYQ%ٯ_fA|]JJJe9ߵc{YrҠ˃|, j뿧4ԫ_Tӟ^z*T+W[gʯdfd Y!!_+**飥쀯V9.7nE޺]VZsD;kq,D45GFhjKEX㚻{SSSvv^GGgĉWnhhx5k}KJJ*++qI222<<pXZM-V ;\7  hCY EB6olddbŝ8qX{b˗Y5bBg}&`剉]jsrrBCC \ҥKJ7 
Q/^NOO"fڴi5m_߈#(/_Lٻwoaa!(ϺcEVJ$|j}Dj KQDt5C ]x={]KKKSSرcrΞ=k{yy/^V[[[__/}w...jKuvv?~uu5!dƍ3f 5xȐ!-rʸqllllmm<(ų֭WWW-'344dXZZZtͅB1622 vpk֬eXӧO+ƍ,+22rҤIK.mhhhI^f,R{6DK$&?|e IDATJOH<<]z4 +VӇ[NMq!kŊGmnn TWWxkmf ;{ffJ_$nR+~AI%zۗIK۳g!DAA\.cz,_HHEQߟ?>bbbrȑ-[(((B|||IܹIգ1kjjwww'ٳp ^|9|pf>$888lݺ5##c…ӦMkYCz^}z*EQ׮]Ԝ3gEQR ٱ:fSSI&nxdBaCCC˅2vmb)儐GJ~@fBȥKo*b^xAsdddFFƜ9slvPPPDDDFFܹslvNNN{3Kok„ ;h _ZҷY?%%%b_mm!ƆN4i!DKK˗-?;f^;EQׯ8 z|>͍pŅK?H(KW\\\WW J9OOLLz^^!dӦM-?”={e˖6ԩSQ%%% 5k0)_}B~~Bd2;P+%$$DYYܰa33gJY)-gѮMBJ'et:S^z[N"%cD}1~)!ȑ#,A\b5l[)/l1=p@ڟ޺ukC_#B_.dK?~z‚p5BȔ)S|͸qZ ??ՕIqttd> ¬C2)5YWQQ?777 [qXuuugge˖X~ jڵt]xrf͚?ӃǎqAܘp|ђJ} ,v Ǐ3m2sL۷r:I2 &ِٙ?.eS.x<蠁!E2Ki;WgRY;|XYY1gϞ1q[^!l6+d%%%=!!beeE_jΦɤ[JOf4vꄄ)IiӦGnݺu| VdJ T.Q7 Yfj:nnn8/Y,VAfg wN/ZRxtCdH`vhkk^xf%ul555LJ]]YGGxb첳SSS7oLڤe˖7&&&'O_~ʙ={v@@"##gϞ#{R4uK|۬e:Ϛ5@ 8|p@@@gjoPan*Gv~) RPP e o3}90''I2dH[Eeee}6l!}}I7n0)066$=kNvȚE?ժ˗x|||~Ӎ---oݺ%zB-++5j&Ok׮$òiw2SRR-XVX[])SNի׾}Ο??ql$ ̕NYΐS;sisk[eevmoqBG«gDʦO b;׆ $Ҳm۵yH#$INNIwd^`:?~|Ȑ!jjjZZZfff_~emm-EQb/3裏D{n۶SEEEOOo…/^K666Rz޽ĤO>U{2cY[[[[[3&==dss/#FPSS?~/++Y__zj{{{{{{h۸8[[[sss??":}ӦM==@&?ͥ#-.󫫫%}ĉ3gxgm"p[])Ν#\|Y4'elgѮM"66v)))IM ,..7ÇBM&Vk|>!D[[{))) kW6޹8[[[{„ b5eiٶW~6bi}AAA)S2={{,0233|||rzarrr I&H =CtB}llf~hРAX=ɩSDϞ=[bźu6r|ѣGa6m:|p}}}DDظdɒ`n͂ٳg| !y:VK~O:*fƍ#F /]˖)~jjܹ .\5Ig'%|SK4 9h#KhkWRR奪jggޜL<~￯gao…ϟ?3gΔ)S޼yӮLrk׮>}:@'I|_~ ?G>իWZZZw8p7ottt[NjNgg$23Zx7o݋uU^*,Wrss߼ycjjT\\lff&dD!5URR=rǏ{Ksv2ko +**<==eD!5355UTT655=~RL[?UUoݺ2d=ܹsRrv2뛚Xm*#޽[(jjj۷ԩSJJJtׯnAAAbmmm333>ԭ}b2H ?tAz޼ysȑmTuBܻwoP"?B==/Bȶm^~-әIR]]-KSt(Qc}ÎTB@7 >W_)))BTUUCBBlll䭍ƎCoڴG>j//k;+ %??ŖV6:zֶvAS:;J2og B1euuhb\\ESWXѧO.n:Q vvvt3UUU!~-VnݺjhhرcK%iݻw| ˗/X,+3gl޼* -66ٙ>>>VVV[n~ʕqM¼3TUUǍ}:cqcUTT9rwg,γ.P]$׮]LOD@:UUU׭[GZZZzxx/ˣo҃N"dddPU\\lccfoNurr*//X'}|>͍{YW^Ћ<~x:D ,,,\\\i/]DILL꧟~bF_it1~~~̠ho5d \]]"6EUH{0?G1c|O*6ɓ&ZX_?a{YYqEyرc|Ç)**FGDʊn\M6~mvz~1cƸUTT档B"{9!D( '''&ӻx"=M2dHLL !W^:::)}XGGGYիѣ3gB6l@_uVVСCReVXq=Bڵk !|~~?:J_TC}IMM2dHjjsbbV;.0J_nn.EQ21yuMGvqq_e_H5mÆzsq |u[UؖzVYq&M[ZY{Ro~V,..~VV}捔š34hѢ˗=ا|f̈́gϞB455EsjjjtIG{}ч~xժ?c#}bc{-c 
,޸q[Dc/YGFFӠA/XѣfffǏ>vؤ)S0OJ_UMǏwvv.**@G?^ P?G Y.n:߽{i%%n޼5no_:3>///g573 >~H YُY,@[:ZIIIjG\)))<`9֭[Fy)))yyyo޼BjkkӃ̨:YA]]}ҥoߞ1cF~"/^8ODUUվ}YoٲE(q8O>nVӳSSSΝۿ3F'LБO꣥|B 3KB :!y9YOeK>vVY0̩&fZ6-XJ1_:uC?%upX;wM,֭[LJAAAYY٨QD :_~)**j;ʕ+kbboܸ1-]TCCCOO>hu߿/PUU|r:_(JOO.ڭV^^|u̙D~Dm_6g0~z۩+K(P把Iɗ&S]]hgmaG2Ig?WPPHs|ZTbY?x@?|m_b/#۷/&&Q__aÆ Rtxxף!+W433u?066ۻ3$}!!!ׯ_;=ٳG9έ[*ڶm[|||\\=xɈ;w޾}N~Ґ!C$[XX8p~IMRRRbb茤/E{悗 11Kj FQ6n\%.$ffd^NdI֭9KEtvW֜?WTTt7 YFF.$ݳw_f'iքz^ SPPчVw/M)X$nA\mm+Z)d|ȒD,"rǫVrqq166^^^.hB ,..Lg۸8[[[sss??"Yr8˗ !W\T%z.jjj,WzJNNew611ӧݻw !{2/]]1cƈ_Gt:Akkkkk1cƤSU__zj{{{{{{+nBBt222Ffcc?~x%%%>_VV&R EQnݸ6zzf߽₡C\C֭ sWHQQQWWg߇qN6~ӻwo:{vv>q'LTUUMW@qIȃFIY2?3]]#Gc)ޅ9F} &''wۈwut$ZIQ }Qn ]-plll]]ԩSAٸAa)nܹseˁ6eeW嶱D ;w9Ԅ=KqWnժ^TFxxlܰ3௫pqqqrrRWWꕷM00ׯ%w 芧f ީ"aiy֭[_{w^zu…rqҥqqq}}}uuuk֬HII16l1F1$$mQn:8}ǎܹ":44vځRmnn/,,4~{iɴ}v!Ċ+ΝkXʲLlA=-ٳAAAr1%%Wv^_XXxM#1uwwwww !.^XZZT*333˅ӦM 2L.]2~~~#N_bbSzBp!D\\[[[7nܨP(.\Ϝ9e`p:ݻwG<*''gppPTVVVvttܿtG<ɧgnٲ%%%%88?B̚5˕Hƍ'oߎP(B s0ͮzl`lh4'Nt4/5Rzzz>|8a„>!ĦMjq$3ɓ'=zTΤIï\`??7 v[EddlR(^nݺuK~oZZV"ވ}2|XRq1/Sʈ{x7:n8^ct:=Q;'卵PW=(ݎʺMoGQQQ)ÜN:%)ABbkk%KNg\\\EEŋۨ+۹N﹭e gmP̙3?e0Ο??1Zc!?Y/..NLL|Z-[Vtt/ o(`4*?ç37nXhF9v/&&A;yf#ev튍UկţWN#""\"IҮ]$I$I䲲2I_5 EEE{쑋 aaa^]~7/rTTo`VyO- ! Chh/_={Z?y\lhhHHH?~͛ՕX[[}ݮѓ}>{.]WWWf͚V_{ݸnG !=Z^^^WWw}yzXJJhܰaCNNh4!!!Buӧ?6ܹsEEEVuhh(;;{ڵJ^VV;;;O>]PPymZB455]t.Xbܹs-KYYYVVV^~;wd-&=d&L_vRR)? 222nݺ'xyȑ7&$$!$IKZ/O6-((d2]tl6z;uTZ=qxIzzz<5InnJ?d~zkkƍ … z3gZ[[rss6l0|`p:^vE)?o׿0O0ƌn!ĬY?yyy{ꫯy)S]oߎP(BoK?v2ݒ+ h4rqĉf9,,,22RnIzoInW{YYY+*Ⱦ7iҤ+Wv]y%O+σ YSSD/}1#_MJJپ}fgϞB}W2rժU;w߄---FQ(駟l6{^$''kC !=5cK3fعs鬭3gNrrrhh/"x,uuuu 'N<|3//?*//t O}>Oq񦦦HVm6!Dss}>,زelɓ#W7j~(_rߟຄ*Xl"**Jɷ&%%IֶrJIܯDp8mn[rԩSj:""߿fϟ t lOy" =j>OF vTee^7UYqY,GyU `4c۴jg666!***@Vt YR9zQkLںdWQQ%Nyߋ@iVo |l?@?@?   @?@?@?   @X0=& `쇿))_vTֱAcd%  ?@?@?@   ?@?@?   
gzIENDB`pysph-master/docs/source/design/images/particle-array.png000066400000000000000000001212501356347341600241220ustar00rootroot00000000000000PNG  IHDRuzMIDATx \TGAHS\w\ 55sIRK2Ӻ.W[zS+[,ӫ٦!*Ȏ &ȰY~y 9g3 |y}>H n~_b````````/100000000K %~_b````````/1000000000 >e*.k?9K6mlc6eJz/=~ Srs^<3"]iٍ'\1_y tlPݽ3S˵ L@#rkyVll,?+m2=?%@faanckΜv*cq#{A6S3K@x'䴥Dv<=/{&ٽouIéS"D1Yt8yC_ ew$;$/oRr۷"~/!nKMbKSrrhS*5mfmM.212@6̬&ge/ Kꗚ=~ )%-V=_N hwi昣G̟oޣ9sZb33nU|jGY] پlѻ79qkB)Sؗ=soPGE&ؽ[KsQr :bR)K0?6?}kRL73/'FG[ DMARѣO$~6MCtߠxc?5c'  `=11==&Oc)1q661?ȿ&fr<{V ]YY\S} ;[픜0P"-}%ֿ/^h7ߐ#Y^3gh{Q99#C'Q' [${;=s;Н? d͛?oPGkג_}U`&Kff48kعWye߾k:zx/Ƚ{u-#/h}DaNZTi8}ϥQ1RvkD洰_v휖fu+>"oԞ#pM Oĭv.;M*UCkfg/KvRY<{VoPGF } Gq{ = Xص%ʔjLMR5_R!N$ߠN06{ʕ9}9g\pgȳϒ?~w=vrqΨ~wR{a;i -uTSȽ{ɶ:VKVלO7_z/{LL^曖{kʗ6*&7m&oj؎{ XK/٤c))j^f A ǷoW􏧥ؽ.;≽Υ'Ji_tzȉc%X=9;o+ ze)S99#i@>:)eaXS?;؅ #S[\\3q~~C?衉-ff\Qo66Ϫ/\Ț(sKAܮ%;r"w$["In_t9NV/]/MmcX#sΜLѫvÆq--蜑1w8>̲o߾='YYC{}8$~=A$/uʰs%Q]j˿)U0Kz312ǤI;KbxhxZts~Kd/Kq\`:,3jRr\`R/@'{gQpK%@K%@/_/K/Kq%@  ~YG|]bfޏ@q77?4 ~vO}Kmlcڷ3e]&F#?Gٛ9!]5v̈́[?u'r9Ad÷7/пNۨ埔2>Cǘ)LVYr N*DU&48Ǹfƪ⨊ io4QJTwUP%m`l~)T̟S K_ |~GRO~NZNO$2o#Ka\f0L%1id.R3t9iLy,/K&1vP&>X~5_߾` }9FˡNk_ }9Zbr"%_(RM(W eR&EJ L&l&/A;%74ʚ۹v{#5PPc,V}m}YknngD>DO+)$j≳c/2y@d~|y, %r/:rSc~/S2yꔱ枘E9K_~Ԗq|󹏾ןd'~7w0!'~PC[yϖCsz}ЈDjZǵ\l=_ *dDDj2~_>hQ/T)RRP~$89?%+I Ώk2&4XH__Oԗ-Fc_͍ޗ }9t\,ʹ*B:eL =LK :_Ƿ; za_dr/ s{;2jL׳9s"|x*yK>EGm$;Fu&y-ڎ,@ؒ@MrS/uhQRᔪBI΄N]z3%KI[|I^ ̜>qA],VxMb@'ta?)-''&~vKL-( ՗-Fi74fEr\rPBg0>&kK%0OKS_uysr>ˡNiꔆ %| J2x239)k% 266vK4_R6T_r-onr4() I h%K-=(/j!Fk_͍ܗCP}9Zb4(Մ2YN̹IL#59e_YDtQ~}9Թ/GB}9:(B:DJX{x~ /M)IJeP}9/P|B[RE('ɝr]fh]l lcBx]7 3K"Xb:P"4z_  eSJh%?7鿖!:|}9[ߗrf((UR^ pdF2ʳ_?h nQI_/oݗy}9 e8S3>C3D%F#: s_r/q }9uJtuS6_/_j'}9ږ77v_D pN7*iRE  _v%0=rkYmQJ1h}w6NU/e'KbchQ/:}9D|1ey7kي'z9={c}e^6fɇ$ܬ_J{n&Y5M)d%Ǡ~ eg_3֗%Fi:b |,ɋcD(F>}lm:ͺ#{,sHiPNZCdq% YD }97oE_3%Cg؝6>%߳bLiCPmpщO*7%_K% ͗\Ӆ7ץ/GnPʝ9۟}\%/}#m:%ɿƫ wHJg|~ eKkr( ՗3]PTmVAUQ料oe/Tvz1KXmWC~ _v%6勺(uˡN9UCRr:pg_nZp0ɮh?_imo=2=Ͷ2>O9wx[~+  _v%r/G[rB9N|+'sBV<3 .RAlU$g=eO֖t{7*W˗_/t^\7WJPS֌9K@%9YoChQRJBILÔgn%0?~7}9i/G!t{ 
s!)K@s/Ե/QC_Cķ{wvs_/lUg ՗cQ(ķOy!K@G1JrSjGѝK@nf0ھQBmK/.]$ԯ/]ޜIُ9 ~% iKI ꎺ(jJ%DRE1}З3DsaD/q/52-i4:5n{RD/t2uBJcrh #/%%Kӫ_rf\_TW(FH-߈DY) #L/_%eU-G;H000xvȰ6]͊.}fJC9K/~ HLeSϚ#:z^lUQ_/K׶n!r @;D1_Z.bxe2V1!aK/~:)=ԁvC ҉bU'Օ%K_/S% 3QN~YNDx]J *ΌP[&_/~Iue-/ؠiWG'#q/~x~ x<w2%''94;~ D~ _/h_ =u)1$+L/_%~ @0 !^Y W3_")|ma6#PY~ @cg3ᗘGR#`6Eb _vaVP6=޾uyּRSg+ ӧffBΗ=Rj2 ~ l7­0B/06l#.H/2Mv簑ÈFeV ˉ5 <ovew4`Ke;S AD(h%~ogx/Z FRّd9 !(g]Fڌ/0-4)6yԪD>oj/55V2M_UDeVr㗚:7g-]_I/ =z{ؗl.KMg&8O2c'qK]knm?8x{Dxd[ _/K`[~  %K`o [_~ @Kun  /_/%K%Ǒ  ~ ]TCɪfΪC~Yx=XyJr4V+e! /Ś%y3ʬ̸~~e6̡H1&Ԗ$QvMNiR]Yr}9皐9/FSKWo6l7l {eƠت""L(&_ZҀSf=~q=ʬ<4幑s/DTT&Ԗń_"%/Y󿕩#UAOZmh(˯XZ}3%K"5=T[ag3(B7>B鐿__%/e"wr)L f%y^Oȍ}%L/thT%/kTR_Jǘ@Hkb&_L~1|I7%`񄐳Ařѷ /#K%=Kt!^GcyK/%qJٽ>1R*r; ~i 7a> %@ :=~ _/__" @Ҡ~%_:e]K~YN~sUU@|wPH5e) bi b 7E~ ~yc%g8bed-BDM:;?yH |53 D5](%$Օ%+%h~I28ڪ ۆt```a+RrSjPL/Qtqe4:bYyD@be2+&rY(SR=S-kJ>&+&/ses_Ln9/ܝ˟\T'L+1sCLL{m[[P2dlf71o3o1i LRn榦B 3yX~}~vg[(kff~,®V/&T?zPNʤ& ePrN)$)BjBYA(YuJP~Y(? jMBBuJrI]Ǥ.g-q2))zR؞K./]?k]-V߂D҄oQfLJH` BT %uJPNCsJ5T)B)wJPrEJP*/\/U25&e %y5I^D_nv/'"$G*'_7rí_"iB7ӼYoPkl.R %uJPrEJ5KFTWWS5ZѲ_>~jetVUTfeȌ[t/gƞȈ ,L_SP[(&R <M%K"F*FˡNiWPԱ/G@("PR eSL\:-NÜF_fC9yTu)dJđs#zyou-em~y׀~Zzöa#$a;t+om)DIt&erfԹ/Ge ՗J}9jB.᳙YLtZk=~n~)"./敕p|cqGckÑ*~y~ipG>!LTf1ߤY.G::E*[f)V%$PZ}9)(uYc_ B\ %딪B:eL Ztn __.(SYH\aon*L83i1"9}!Tjޟ j6{O5vb.ڐ\NG~Il+R/gr/e}rrFLcN4A_c+a&SŌ*Z|5f`Ax91Ҳ{w%/.+*)=p\Y.%Ɨ{U[]Gq< {@;n^N *H`%L/[ę ֗O/GKR׾ꔛuQ/9Q}9*ނB9 eLT&x oϜef l%[lb߾J,3VAWC4_NL'XQݸf,㗳چ`Z1ԅ(䐒,o)0l_s2V_NhQ/grcjB:%Jd&1g(s~~٬2ͤt"$X/s8 "ZB_6t΃ʺ߰&<pBDE.)0l_.ַ/grt_\׾M˛Sԥ/:PEJ") I h%z"SeO}rZ/ڋ-eMU9 Q'S"|/f4_"j$"er>Թ/GKP}9beSĜ82^S6Ie_Dއ:_Uk_ /iRSYTWDR_$FeꔆԹ/Gz H)s)cVޝnHeǯ__JV_8//eK ?>|53V)%PXf7rfܗ6 Lx: e˜zq3>/M/er VYW^H2&1c~~ ~Tl~eVrS/GKP}9.pB\ %e7f+$8K_*²_Z!ZOtͳEmo)_)^ZԄ_Dr,onꔫtQڗ3OC_LQ}9ʳޜPJag?JI/[dH9*euSXYy%5 YeNamƊڦw{.ϚXjbyqc}̜V{?Rx|V5yP{D/rܗ}9) ՗3U}9ʳ tr<3K/U!F/]w|ub lcVmF\8_d-"a#ʬ$;,YӋk|)x˫7h p_v$%~ysa}9,o[_=Ѿ9=@K_7+ރ>Ii)Uo~ _7>%=WLsR˶oYZ hmӃ3KFddG^痿cK/;_>FyI/GKRU(c(Yoz)B9I(Sg<eN`;҉b:/_y>HdHeK ~ ӥV%NV{y,erh%>I 
.. _design_overview:

=====================
The PySPH framework
=====================

This document is an introduction to the design of PySPH. It provides
additional high-level details on the functionality that the PySPH framework
provides.
For this boundary condition, the boundary particles are treated as *fixed* fluid particles that evolve with the continuity (:eq:`continuity`) and equation the of state (:eq:`eos`). In addition, they contribute to the fluid acceleration via the momentum equation (:eq:`momentum`). When fluid particles approach a solid wall, the density of the fluids and the solids increase via the continuity equation. With the increased density and consequently increased pressure, the boundary particles express a repulsive force on the fluid particles, thereby enforcing the no-penetration condition. Time integration ^^^^^^^^^^^^^^^^^ For the time integration, we use a second order predictor-corrector integrator. For the predictor stage, the following operations are carried out: .. math:: :label: predictor \rho^{n + \frac{1}{2}} = \rho^n + \frac{\Delta t}{2}(a_\rho)^{n-\frac{1}{2}} \\ \boldsymbol{v}^{n + \frac{1}{2}} = \boldsymbol{v}^n + \frac{\Delta t}{2}(\boldsymbol{a_v})^{n-\frac{1}{2}} \\ \boldsymbol{x}^{n + \frac{1}{2}} = \boldsymbol{x}^n + \frac{\Delta t}{2}(\boldsymbol{u} + \boldsymbol{u}^{\text{XSPH}})^{n-\frac{1}{2}} Once the variables are predicted to their half time step values, the pairwise interactions are carried out to compute the accelerations. Subsequently, the corrector is used to update the particle positions: .. math:: :label: corrector \rho^{n + 1} = \rho^n + \Delta t(a_\rho)^{n+\frac{1}{2}} \\ \boldsymbol{v}^{n + 1} = \boldsymbol{v}^n + \Delta t(\boldsymbol{a_v})^{n+\frac{1}{2}} \\ \boldsymbol{x}^{n + 1} = \boldsymbol{x}^n + \Delta t(\boldsymbol{u} + \boldsymbol{u}^{\text{XSPH}})^{n+\frac{1}{2}} .. note:: The acceleration variables are *prefixed* like :math:`a_`. The boldface symbols in the above equations indicate vector quantities. Thus :math:`a_\boldsymbol{v}` represents :math:`a_u,\, a_v,\, \text{and}\, a_w` for the vector components of acceleration. 
Required arrays and properties ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ We will be using two **ParticleArrays** (see :py:class:`pysph.base.particle_array.ParticleArray`), one for the fluid and another for the solid. Recall that for the dynamic boundary conditions, the solid is treated like a fluid with the only difference being that the velocity (:math:`a_\boldsymbol{v}`) and position accelerations (:math:`a_\boldsymbol{x} = \boldsymbol{u} + \boldsymbol{u}^{\text{XSPH}}`) are never calculated. The solid particles therefore remain fixed for the duration of the simulation. To carry out the integrations for the particles, we require the following variables: - SPH properties: `x, y, z, u, v, w, h, m, rho, p, cs` - Acceleration variables: `au, av, aw, ax, ay, az, arho` - Properties at the beginning of a time step: `x0, y0, z0, u0, v0, w0, rho0` A non-PySPH implementation -------------------------- We first consider the pseudo-code for the non-PySPH implementation. We assume we have been given two **ParticleArrays** `fluid` and `solid` corresponding to the dam-break problem. We also assume that an :py:class:`pysph.base.nnps.NNPS` object `nps` is available and can be used for neighbor queries: .. code-block:: python from pysph.base import nnps fluid = get_particle_array_fluid(...) solid = get_particle_array_solid(...) particles = [fluid, solid] nps = nnps.LinkedListNNPS(dim=2, particles=particles, radius_scale=2.0) The part of the code responsible for the interactions can be defined as .. 
code-block:: python class SPHCalc: def __init__(nnps, particles): self.nnps = nnps self.particles = particles def compute(self): self.eos() self.accelerations() def eos(self): for array in self.particles: num_particles = array.get_number_of_particles() for i in range(num_particles): array.p[i] = # TAIT EOS function for pressure array.cs[i] = # TAIT EOS function for sound speed def accelerations(self): fluid, solid = self.particles[0], self.particles[1] nps = self.nps nbrs = UIntArray() # continuity equation for the fluid dst = fluid; dst_index = 0 # source is fluid src = fluid; src_index = 0 num_particles = dst.get_number_of_particles() for i in range(num_particles): # get nearest fluid neigbors nps.get_nearest_particles(src_index, dst_index, d_idx=i, nbrs) for j in nbrs: # pairwise quantities xij = dst.x[i] - src.x[j] yij = dst.y[i] - src.y[j] ... # kernel interaction terms wij = kenrel.function(xi, ...) # kernel function dwij= kernel.gradient(xi, ...) # kernel gradient # compute the interaction and store the contribution dst.arho[i] += # interaction term # source is solid src = solid; src_index = 1 num_particles = dst.get_number_of_particles() for i in range(num_particles): # get nearest fluid neigbors nps.get_nearest_particles(src_index, dst_index, d_idx=i, nbrs) for j in nbrs: # pairwise quantities xij = dst.x[i] - src.x[j] yij = dst.y[i] - src.y[j] ... # kernel interaction terms wij = kenrel.function(xi, ...) # kernel function dwij= kernel.gradient(xi, ...) # kernel gradient # compute the interaction and store the contribution dst.arho[i] += # interaction term # Destination is solid dst = solid; dst_index = 1 # source is fluid src = fluid; src_index = 0 num_particles = dst.get_number_of_particles() for i in range(num_particles): # get nearest fluid neigbors nps.get_nearest_particles(src_index, dst_index, d_idx=i, nbrs) for j in nbrs: # pairwise quantities xij = dst.x[i] - src.x[j] yij = dst.y[i] - src.y[j] ... 
# kernel interaction terms wij = kenrel.function(xi, ...) # kernel function dwij= kernel.gradient(xi, ...) # kernel gradient # compute the interaction and store the contribution dst.arho[i] += # interaction term We see that the use of multiple particle arrays has forced us to write a fairly long piece of code for the accelerations. In fact, we have only shown the part of the main loop that computes :math:`a_\rho` for the continuity equation. Recall that our problem states that the continuity equation should evaluated for all particles, taking influences from all other particles into account. For two particle arrays (*fluid*, *solid*), we have four such pairings (fluid-fluid, fluid-solid, solid-fluid, solid-solid). The last one can be eliminated when we consider the that the boundary has zero velocity and hence the contribution will always be trivially zero. The apparent complexity of the `SPHCalc.accelerations` method notwithstanding, we notice that similar pieces of the code are being repeated. In general, we can break down the computation for a general source-destination pair like so: .. code-block:: python # consider first destination particle array for all dst particles: get_neighbors_from_source() for all neighbors: compute_pairwise_terms() compute_inteactions_for_dst_particle() # consider next source for this destination particle array ... # consider the next destination particle array .. note:: The `SPHCalc.compute` method first calls the EOS before calling the main loop to compute the accelerations. This is because the EOS (which updates the pressure) must logically be completed for all particles before the accelerations (which uses the pressure) are computed. The predictor-corrector integrator for this problem can be defined as .. code-block:: python class Integrator: def __init__(self, particles, nps, calc): self.particles = particles self.nps = nps self.calc = calc def initialize(self): for array in self.particles: array.rho0[:] = array.rho[:] ... 
array.w0[:] = array.w[:] def stage1(self, dt): dtb2 = 0.5 * dt for array in self.particles: array.rho = array.rho0[:] + dtb2*array.arho[:] array.u = array.u0[:] + dtb2*array.au[:] array.v = array.v0[:] + dtb2*array.av[:] ... array.z = array.z0[:] + dtb2*array.az[:] def stage2(self, dt): for array in self.particles: array.rho = array.rho0[:] + dt*array.arho[:] array.u = array.u0[:] + dt*array.au[:] array.v = array.v0[:] + dt*array.av[:] ... array.z = array.z0[:] + dt*array.az[:] def integrate(self, dt): self.initialize() self.stage1(dt) # predictor step self.nps.update() # update NNPS structure self.calc.compute() # compute the accelerations self.stage2(dt) # corrector step The `Integrator.integrate` method is responsible for updating the solution the next time level. Before the predictor stage, the `Integrator.initialize` method is called to store the values `x0, y0...` at the beginning of a time-step. Given the positions of the particles at the half time-step, the **NNPS** data structure is updated before calling the `SPHCalc.compute` method. Finally, the corrector step is called once we have the updated accelerations. This hypothetical implementation can be integrated to the final time by calling the `Integrator.integrate` method repeatedly. In the next section, we will see how PySPH does this automatically. PySPH implementation --------------------- Now that we have a hypothetical implementation outlined, we can proceed to describe the abstractions that PySPH introduces, enabling a highly user friendly and flexible way to define pairwise particle interactions. To see a working example, see `dam_break_2d.py `_. We assume that we have the same **ParticleArrays** (*fluid* and *solid*) and **NNPS** objects as before. 
Specifying the equations ^^^^^^^^^^^^^^^^^^^^^^^^^ Given the particle arrays, we ask for a given set of operations to be performed on the particles by passing a *list* of **Equation** objects (see :doc:`../reference/equations`) to the **Solver** (see :py:class:`pysph.solver.solver.Solver`) .. code-block:: python equations = [ # Equation of state Group(equations=[ TaitEOS(dest='fluid', sources=None, rho0=ro, c0=co, gamma=gamma), TaitEOS(dest='boundary', sources=None, rho0=ro, c0=co, gamma=gamma), ], real=False), Group(equations=[ # Continuity equation ContinuityEquation(dest='fluid', sources=['fluid', 'boundary']), ContinuityEquation(dest='boundary', sources=['fluid']), # Momentum equation MomentumEquation(dest='fluid', sources=['fluid', 'boundary'], alpha=alpha, beta=beta, gy=-9.81, c0=co), # Position step with XSPH XSPHCorrection(dest='fluid', sources=['fluid']) ]), ] We see that we have used two **Group** objects (see :py:class:`pysph.sph.equation.Group`), segregating two parts of the evaluation that are logically dependent. The second group, where the accelerations are computed *must* be evaluated after the first group where the pressure is updated. Recall we had to do a similar seggregation for the `SPHCalc.compute` method in our hypothetical implementation: .. code-block:: python class SPHCalc: def __init__(nnps, particles): ... def compute(self): self.eos() self.accelerations() .. note:: PySPH will respect the order of the **Equation** and equation **Groups** as provided by the user. This flexibility also means it is quite easy to make subtle errors. Note that in the first group, we have an additional parameter called ``real=False``. This is only relevant for parallel simulations and for simulations with periodic boundaries. What it says is that the equations in that group should be applied to all particles (remote and local), non-local particles are not "real". By default a ``Group`` has ``real=True``, thus only local particles are operated on. 
However, we wish to apply the Equation of state on all particles. Similar is the case for periodic problems where it is sometimes necessary to set ``real=True`` in order to set the properties of the additional particles used for periodicity. Writing the equations ^^^^^^^^^^^^^^^^^^^^^^ It is important for users to be able to easily write out new SPH equations of motion. PySPH provides a very convenient way to write these equations. The PySPH framework allows the user to write these equations in pure Python. These pure Python equations are then used to generate high-performance code and then called appropriately to perform the simulations. There are two types of particle computations in SPH simulations: 1. The most common type of interaction is to change the property of one particle (the destination) using the properties of a source particle. 2. A less common type of interaction is to calculate say a sum (or product or maximum or minimum) of values of a particular property. This is commonly called a "reduce" operation in the context of Map-reduce_ programming models. Computations of the first kind are inherently parallel and easy to perform correctly both in serial and parallel. Computations of the second kind (reductions) can be tricky in parallel. As a result, in PySPH we distinguish between the two. This will be elaborated in more detail in the following. .. _Map-reduce: http://en.wikipedia.org/wiki/MapReduce In general an SPH algorithm proceeds as the following pseudo-code illustrates: .. code-block:: python for destination in particles: for equation in equations: equation.initialize(destination) # This is where bulk of the computation happens. for destination in particles: for source in destination.neighbors: for equation in equations: equation.loop(source, destination) for destination in particles: for equation in equations: equation.post_loop(destination) # Reduce any properties if needed. 
total_mass = reduce_array(particles.m, 'sum') max_u = reduce_array(particles.u, 'max') The neighbors of a given particle are identified using a nearest neighbor algorithm. PySPH does this automatically for the user and internally uses a link-list based algorithm to identify neighbors. In PySPH we follow some simple conventions when writing equations. Let us look at a few equations first. In keeping the analogy with our hypothetical implementation and the `SPHCalc.accelerations` method above, we consider the implementations for the PySPH :py:class:`pysph.sph.wc.basic.TaitEOS` and :py:class:`pysph.sph.basic_equations.ContinuityEquation` objects. The former looks like: .. code-block:: python class TaitEOS(Equation): def __init__(self, dest, sources=None, rho0=1000.0, c0=1.0, gamma=7.0): self.rho0 = rho0 self.rho01 = 1.0/rho0 self.c0 = c0 self.gamma = gamma self.gamma1 = 0.5*(gamma - 1.0) self.B = rho0*c0*c0/gamma super(TaitEOS, self).__init__(dest, sources) def loop(self, d_idx, d_rho, d_p, d_cs): ratio = d_rho[d_idx] * self.rho01 tmp = pow(ratio, self.gamma) d_p[d_idx] = self.B * (tmp - 1.0) d_cs[d_idx] = self.c0 * pow( ratio, self.gamma1 ) Notice that it has only one ``loop`` method and this loop is applied for all particles. Since there are no sources, there is no need for us to find the neighbors. There are a few important conventions that are to be followed when writing the equations. - ``d_*`` indicates a destination array. - ``s_*`` indicates a source array. - ``d_idx`` and ``s_idx`` represent the destination and source index respectively. - Each function can take any number of arguments as required, these are automatically supplied internally when the application runs. - All the standard math symbols from ``math.h`` are also available. .. py:currentmodule:: pysph.sph.basic_equations Let us look at the :py:class:`ContinuityEquation` as another simple example. It is instantiated as: .. 
.. code-block:: python

    class ContinuityEquation(Equation):
        def initialize(self, d_idx, d_arho):
            d_arho[d_idx] = 0.0

        def loop(self, d_idx, d_arho, s_idx, s_m, DWIJ, VIJ):
            vijdotdwij = DWIJ[0]*VIJ[0] + DWIJ[1]*VIJ[1] + DWIJ[2]*VIJ[2]
            d_arho[d_idx] += s_m[s_idx]*vijdotdwij

Notice that the ``initialize`` method merely sets the value to zero. The
``loop`` method also accepts a few new quantities like ``DWIJ``, ``VIJ``,
etc. These are precomputed quantities that are automatically provided
depending on the equations needed for a particular source/destination pair.
The following precomputed quantities are available and may be passed into any
equation:

- ``HIJ = 0.5*(d_h[d_idx] + s_h[s_idx])``
- ``XIJ[0] = d_x[d_idx] - s_x[s_idx]``,
  ``XIJ[1] = d_y[d_idx] - s_y[s_idx]``,
  ``XIJ[2] = d_z[d_idx] - s_z[s_idx]``
- ``R2IJ = XIJ[0]*XIJ[0] + XIJ[1]*XIJ[1] + XIJ[2]*XIJ[2]``
- ``RIJ = sqrt(R2IJ)``
- ``WIJ = KERNEL(XIJ, RIJ, HIJ)``
- ``WJ = KERNEL(XIJ, RIJ, s_h[s_idx])``
- ``WI = KERNEL(XIJ, RIJ, d_h[d_idx])``
- ``RHOIJ = 0.5*(d_rho[d_idx] + s_rho[s_idx])``
- ``RHOIJ1 = 1.0/RHOIJ``
- ``DWIJ``: ``GRADIENT(XIJ, RIJ, HIJ, DWIJ)``
- ``DWJ``: ``GRADIENT(XIJ, RIJ, s_h[s_idx], DWJ)``
- ``DWI``: ``GRADIENT(XIJ, RIJ, d_h[d_idx], DWI)``
- ``VIJ[0] = d_u[d_idx] - s_u[s_idx]``,
  ``VIJ[1] = d_v[d_idx] - s_v[s_idx]``,
  ``VIJ[2] = d_w[d_idx] - s_w[s_idx]``
- ``EPS = 0.01 * HIJ * HIJ``

In addition, if one requires the current time or the timestep in an equation,
the following may be passed into any of the methods of an equation:

- ``t``: the current time.
- ``dt``: the current time step.

.. note::

   All standard functions and constants in ``math.h`` are available for use
   in the equations. The value of :math:`\pi` is available as ``M_PI``.
   Please avoid using functions from ``numpy`` as these are Python functions
   and are slow. They also will not allow PySPH to be run with OpenMP.
   Similarly, do not use functions or constants from ``sympy`` and other
   libraries inside the equation methods, as these will significantly slow
   down your code.

In addition, these constants from the math library are available:

- ``M_E``: value of e
- ``M_LOG2E``: value of log2(e)
- ``M_LOG10E``: value of log10(e)
- ``M_LN2``: value of ln(2)
- ``M_LN10``: value of ln(10)
- ``M_PI``: value of pi
- ``M_PI_2``: value of pi / 2
- ``M_PI_4``: value of pi / 4
- ``M_1_PI``: value of 1 / pi
- ``M_2_PI``: value of 2 / pi
- ``M_2_SQRTPI``: value of 2 / (square root of pi)
- ``M_SQRT2``: value of square root of 2
- ``M_SQRT1_2``: value of square root of 1/2

In an equation, any undeclared variables are automatically declared to be
doubles in the high-performance Cython code that is generated. In addition,
one may declare a temporary variable to be a ``matrix`` or a ``cPoint`` by
writing:

.. code-block:: python

    mat = declare("matrix((3,3))")
    point = declare("cPoint")

When the Cython code is generated, this gets translated to:

.. code-block:: cython

    cdef double[3][3] mat
    cdef cPoint point

One can also declare any valid c-type using the same approach; for example,
if one desires a ``long`` data type, one may use ``ii = declare("long")``.

One may also perform reductions on properties. Consider a trivial example of
calculating the total mass and the maximum ``u`` velocity in the following
equation:

.. code-block:: python

    class FindMaxU(Equation):
        def reduce(self, dst, t, dt):
            m = serial_reduce_array(dst.m, 'sum')
            max_u = serial_reduce_array(dst.u, 'max')
            dst.total_mass[0] = parallel_reduce_array(m, 'sum')
            dst.max_u[0] = parallel_reduce_array(max_u, 'max')

where:

- ``dst``: refers to a destination ``ParticleArray``.
- ``t, dt``: are the current time and timestep respectively.
- ``serial_reduce_array``: is a special function provided that performs
  reductions correctly in serial. It currently supports ``sum, prod, max``
  and ``min`` operations.
See :py:func:`pysph.base.reduce_array.serial_reduce_array`. There is also a
:py:func:`pysph.base.reduce_array.parallel_reduce_array` which is to be
used to reduce an array across processors. Using ``parallel_reduce_array``
is expensive as it involves an all-to-all communication; one can reduce
this cost by packing several values into a single array and reducing that
array once. We recommend that for any kind of reduction one always use the
``serial_reduce_array`` function and the ``parallel_reduce_array`` inside a
``reduce`` method. One need not worry about parallel/serial modes in this
case as this is automatically taken care of by the code generator: in
serial, the parallel reduction does nothing.

With this machinery, we are able to write complex equations to solve almost
any SPH problem. A user can easily define a new equation and instantiate
the equation in the list of equations to be passed to the application. It
is often easiest to look at the many existing equations in PySPH and learn
the general patterns.

If you wish to use adaptive time stepping, see the code
:py:class:`pysph.sph.integrator.Integrator`. The integrator uses
information from the arrays ``dt_cfl``, ``dt_force``, and ``dt_visc`` in
each of the particle arrays to determine the most suitable time step.

For a more focused discussion on how you should write equations, please see
:ref:`writing_equations`.

Writing the Integrator
^^^^^^^^^^^^^^^^^^^^^^

The integrator stepper code is similar to the equations in that it is all
written in pure Python and Cython code is automatically generated from it.
The simplest integrator is the Euler integrator, which looks like this::

    class EulerIntegrator(Integrator):
        def one_timestep(self, t, dt):
            self.initialize()
            self.compute_accelerations()
            self.stage1()
            self.do_post_stage(dt, 1)

Note that in this case the integrator only needs to implement one timestep
using the ``one_timestep`` method above.
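Conceptually, what the generated Euler code does amounts to the following pure-Python sketch. The particle array is a plain dict of lists here and the accelerations come from a user-supplied ``accel`` callable; both are illustrative assumptions, this is not the code PySPH actually generates:

```python
def euler_timestep(pa, accel, dt):
    """One explicit Euler step over all particles in ``pa``.

    ``accel`` plays the role of compute_accelerations(); the loop
    plays the role of stage1().
    """
    accel(pa)                       # fill in pa['au'] for every particle
    for i in range(len(pa['x'])):   # step each particle's properties
        pa['u'][i] += dt*pa['au'][i]
        pa['x'][i] += dt*pa['u'][i]
```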
The ``initialize`` and ``stage`` methods need to be implemented in stepper
classes which perform the actual stepping of the values. Here is the
stepper for the Euler integrator::

    class EulerStep(IntegratorStep):
        def initialize(self):
            pass

        def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y,
                   d_z, d_rho, d_arho, dt=0.0):
            d_u[d_idx] += dt*d_au[d_idx]
            d_v[d_idx] += dt*d_av[d_idx]
            d_w[d_idx] += dt*d_aw[d_idx]

            d_x[d_idx] += dt*d_u[d_idx]
            d_y[d_idx] += dt*d_v[d_idx]
            d_z[d_idx] += dt*d_w[d_idx]

            d_rho[d_idx] += dt*d_arho[d_idx]

As can be seen, the general structure is very similar to how equations are
written: the methods take an arbitrary number of arguments and the
corresponding properties are stepped. The value of ``dt`` is also provided
automatically when the methods are called. It is important to note that if
there are additional variables to be stepped beyond these standard ones,
you must write your own stepper. Currently, only certain steppers are
supported by the framework. Take a look at the
:doc:`../reference/integrator` for more examples.

.. _simulating_periodicity:

Simulating periodicity
^^^^^^^^^^^^^^^^^^^^^^

PySPH provides a simplistic implementation for problems with periodicity.
The :py:class:`pysph.base.nnps_base.DomainManager` is used to specify this.
To use this in an application, simply define a method as follows:

.. code-block:: python

    # ...
    from pysph.base.nnps import DomainManager

    class TaylorGreen(Application):
        def create_domain(self):
            return DomainManager(
                xmin=0.0, xmax=1.0, ymin=0.0, ymax=1.0,
                periodic_in_x=True, periodic_in_y=True
            )
    # ...

This is a 2D example but something similar can be done in 3D. PySPH will
automatically copy the appropriate layer of particles from each side of the
domain and create "ghost" particles (these are not "real" particles). The
properties of the particles will also be copied, and this is done before
any accelerations are computed.
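The ghost-layer idea can be sketched in one dimension with numpy. This is a toy illustration on a unit-width domain with a hypothetical layer width, not the DomainManager implementation:

```python
import numpy as np


def ghost_layer_1d(x, width):
    """Return ghost copies for a periodic unit domain [0, 1).

    Particles within ``width`` of a boundary are copied to just
    outside the opposite boundary, which is where neighbor searches
    near the edges expect to find them.
    """
    left = x[x < width] + 1.0         # copies appear past the right edge
    right = x[x > 1.0 - width] - 1.0  # copies appear before the left edge
    return np.concatenate([left, right])
```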
Note that this implies that the real particles should be created carefully
so as to avoid two particles being placed at the same location. For
example, in the above case the domain is the unit square with one corner at
the origin and the other at (1, 1). If we place any particles exactly at
:math:`x=0.0` they will be copied over to 1.0, and if we place any
particles at :math:`x=1.0` they will be copied to :math:`x=0`. This would
leave one real particle at 0 and a copy from 1.0 at the same location. It
is therefore important to initialize the particles starting at ``dx/2`` and
all the way up to ``1.0 - dx/2`` so as to get a uniform distribution of
particles without any repetitions.

It is important to remember that the periodic particles will be "ghost"
particles, so any equations that set properties like pressure should be in
a group with ``real=False``.

Solver Interfaces
=================

Interfaces are a way to control, gather data and execute commands on a
running solver instance. This can be useful, for example, to pause/continue
the solver, get the iteration count, get/set the dt or final time, or
simply to monitor the running of the solver.

.. py:currentmodule:: pysph.solver.controller

CommandManager
--------------

The :py:class:`CommandManager` class provides functionality to control the
solver in a restricted way so that adding multiple interfaces to the solver
is possible in a simple way. The figure :ref:`image_controller` shows an
overview of the classes and objects involved in adding an interface to the
solver.

.. _image_controller:

.. figure:: images/controller.png
    :align: center
    :width: 900

    Overview of the Solver Interfaces

The basic design of the controller is as follows:
#. A :py:class:`~pysph.solver.solver.Solver` has a method
   :py:meth:`~pysph.solver.solver.Solver.set_command_handler` which takes a
   callable and a ``command_interval``, and calls the callable with the
   solver as an argument every ``command_interval`` iterations.

#. The method :meth:`CommandManager.execute_commands` of the
   `CommandManager` object is set as the command handler for the solver.
   The `CommandManager` can then perform any operation on the solver.

#. Interfaces are added to the `CommandManager` by the
   :meth:`CommandManager.add_interface` method, which takes a callable (the
   interface) as an argument and calls the callable in a separate thread
   with a new :class:`Controller` instance as an argument.

#. A `Controller` instance is a proxy for the `CommandManager` which
   redirects its methods to call :meth:`CommandManager.dispatch` on the
   `CommandManager`. This method is synchronized in the `CommandManager`
   class so that only one thread (interface) can call it at a time. The
   `CommandManager` queues the commands, sends them to all processors in a
   parallel run, and executes them when the solver calls its
   :meth:`execute_commands` method.

#. Writing a new interface is simply a matter of writing a function/method
   which calls appropriate methods on the :class:`Controller` instance
   passed to it.

Controller
----------

The :py:class:`Controller` class is a convenience class whose various
methods redirect to the :py:meth:`Controller.dispatch` method, which does
the actual work of queuing the commands. This method is synchronized so
that multiple controllers can operate in a thread-safe manner. It also
restricts the operations which are possible on the solver through the
various interfaces. This makes adding multiple interfaces to the solver
convenient and safe. Each interface gets a separate `Controller` instance
so that the various interfaces are isolated.

Blocking and Non-Blocking mode
------------------------------

The :py:class:`Controller` object has a notion of blocking and non-blocking
modes.
* In **Blocking** mode, operations wait until the command is actually
  executed on the solver and then return the result. This means execution
  stops until the ``execute_commands`` method of the
  :py:class:`CommandManager` is executed by the solver, which happens every
  :py:attr:`~pysph.solver.solver.Solver.command_interval` iterations. This
  mode is the default.

* In **Non-Blocking** mode, the controller queues the command for execution
  and returns a ``task_id`` for the command. The result of the command can
  then be obtained any time later via the ``get_result`` method of the
  controller, passing the ``task_id`` as argument. The ``get_result`` call
  blocks until the result can be obtained.

**Switching between modes**

The blocking/non-blocking mode can be queried and set using the
:py:meth:`Controller.get_blocking` and :py:meth:`Controller.set_blocking`
methods.

**NOTE:** The blocking/non-blocking mode does not apply to getting/setting
solver properties. These methods always return immediately, even though a
setter is actually executed only when the
:py:meth:`CommandManager.execute_commands` function is called by the
solver.

.. py:currentmodule:: pysph.solver.solver_interfaces

Interfaces
----------

Interfaces are functions which are called in a separate thread and receive
a :py:class:`Controller` instance so that they can query the solver,
get/set various properties and execute commands on the solver in a safe
manner.

Here is an example of a simple interface which prints out the iteration
count every second to monitor the solver:

.. _simple_interface:

::

    import time

    def simple_interface(controller):
        while True:
            print(controller.get_count())
            time.sleep(1)

You can use ``dir(controller)`` to find out what methods are available on
the controller instance.
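The queue-and-dispatch behaviour of the blocking and non-blocking modes can be sketched with a toy stand-in. This is a hypothetical single-threaded class for illustration, not the PySPH `Controller` or `CommandManager`:

```python
import itertools


class ToyController:
    """Toy model of blocking vs non-blocking command dispatch."""

    def __init__(self):
        self._results = {}
        self._queue = []
        self._ids = itertools.count()
        self.blocking = True

    def dispatch(self, func, *args):
        task_id = next(self._ids)
        self._queue.append((task_id, func, args))
        if self.blocking:
            # blocking: wait for execution and return the result itself
            self.execute_commands()
            return self._results[task_id]
        # non-blocking: return a task id immediately
        return task_id

    def execute_commands(self):
        # in PySPH this is called by the solver every command_interval
        # iterations; here we call it by hand
        for task_id, func, args in self._queue:
            self._results[task_id] = func(*args)
        self._queue = []

    def get_result(self, task_id):
        return self._results[task_id]
```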
A few simple interfaces are implemented in the
:py:mod:`~pysph.solver.solver_interfaces` module, namely
:py:class:`CommandlineInterface`, :py:class:`XMLRPCInterface` and
:py:class:`MultiprocessingInterface`, and also in
`examples/controller_elliptical_drop_client.py`. You can check the code to
see how to implement various kinds of interfaces.

Adding Interface to Solver
--------------------------

To add interfaces to a plain solver (one not created using
:py:class:`~pysph.solver.application.Application`), the following steps
need to be taken:

- Set a :py:class:`~pysph.solver.controller.CommandManager` for the solver
  (it is not set up by default).
- Add the interface to the `CommandManager`.

The following code demonstrates how the :ref:`simple interface
<simple_interface>` created above can be added to a solver::

    # add CommandManager to solver
    command_manager = CommandManager(solver)
    solver.set_command_handler(command_manager.execute_commands)

    # add the interface
    command_manager.add_interface(simple_interface)

For code which uses :py:class:`~pysph.solver.application.Application`, you
simply need to add the interface to the application's
``command_manager``::

    app = Application()
    app.set_solver(s)
    ...
    app.command_manager.add_interface(simple_interface)

Commandline Interface
---------------------

The :py:class:`CommandlineInterface` enables you to control the solver from
the command line even as it is running. Here is a sample session of the
command-line interface from the ``controller_elliptical_drop.py``
example::

    $ python controller_elliptical_drop.py
    pysph[0]>>>
    Invalid command
    Valid commands are:
        p | pause
        c | cont
        g | get
        s | set
        q | quit -- quit commandline interface (solver keeps running)
    pysph[9]>>> g dt
    1e-05
    pysph[64]>>> g tf
    0.1
    pysph[114]>>> s tf 0.01
    None
    pysph[141]>>> g tf
    0.01
    pysph[159]>>> get_particle_array_names
    ['fluid']

The number inside the square brackets indicates the iteration count.
Note that not all operations can be performed using the command-line
interface, notably those which use complex Python objects.

XML-RPC Interface
-----------------

The :py:class:`XMLRPCInterface` exports the controller object's methods
over an XML-RPC interface. An example html file
`controller_elliptical_drop_client.html` uses this XML-RPC interface to
control the solver from a web page.

The following code snippet shows the use of the XML-RPC interface, which is
not much different from any other interface, as they all export the
interface of the `Controller` object::

    import xmlrpclib

    # address is a tuple of hostname, port, e.g. ('localhost', 8900)
    client = xmlrpclib.ServerProxy(address, allow_none=True)

    # client has all the methods of the controller
    print(client.system.listMethods())
    print(client.get_t())
    print(client.get('count'))

The XML-RPC interface also implements a simple http server which serves
html, javascript and image files from the directory it is started from.
This enables direct use of the file
`controller_elliptical_drop_client.html` to get an html interface without
the need for a dedicated http server. The figure :ref:`fig_html_client`
shows a screenshot of the html client in action.

.. _fig_html_client:

.. figure:: images/html_client.png
    :align: center

    PySPH html client using XML-RPC interface

One limitation of the XML-RPC interface is that arbitrary Python objects
cannot be sent across; the XML-RPC standard predefines a limited set of
types which can be transferred.

Multiprocessing Interface
-------------------------

The :py:class:`MultiprocessingInterface` also exports the controller object
similar to the XML-RPC interface, but it is more featured: it can use
authentication keys and can send arbitrary picklable objects.

Usage of the multiprocessing client is also similar to the XML-RPC
client::

    from pysph.solver.solver_interfaces import MultiprocessingClient

    # address is a tuple of hostname, port, e.g.
    # ('localhost', 8900); authkey is the authentication key set on the
    # server, defaults to 'pysph'
    client = MultiprocessingClient(address, authkey)

    # controller proxy
    controller = client.controller

    pa_names = controller.get_particle_array_names()

    # arbitrary python objects can be transferred (here a ParticleArray)
    pa = controller.get_named_particle_array(pa_names[0])

Example
-------

Here is an example (straight from
`controller_elliptical_drop_client.py`) put together to show how the
controller can be used to create useful interfaces for the solver. The
code below plots the particle positions as a scatter map with color-mapped
velocities, and updates the plot every second while maintaining user
interactivity::

    from pysph.solver.solver_interfaces import MultiprocessingClient

    client = MultiprocessingClient(address, authkey)
    controller = client.controller

    pa_name = controller.get_particle_array_names()[0]
    pa = controller.get_named_particle_array(pa_name)

    #plt.ion()
    fig = plt.figure()
    ax = fig.add_subplot(111)
    line = ax.scatter(pa.x, pa.y, c=numpy.hypot(pa.u, pa.v))

    global t
    t = time.time()

    def update():
        global t
        t2 = time.time()
        dt = t2 - t
        t = t2
        print('count:', controller.get_count(), '\ttimer time:', dt)
        pa = controller.get_named_particle_array(pa_name)

        line.set_offsets(zip(pa.x, pa.y))
        line.set_array(numpy.hypot(pa.u, pa.v))
        fig.canvas.draw()
        print('\tresult & draw time:', time.time() - t)
        return True

    update()
    gobject.timeout_add_seconds(1, update)
    plt.show()

.. _working_with_particles:

==============
ParticleArray
==============

Particles are ubiquitous in SPH and in PySPH. The domain is discretized
with a finite number of points, to which are assigned physical properties
corresponding to the fluid being modelled. This leads us to the concept of
a set of arrays that represent a fluid.
In PySPH, a *homogeneous* collection of particles is represented by a
**ParticleArray** as shown in the figure:

.. _figure_particle_array:

.. figure:: images/particle-array.png
    :align: center
    :width: 500

    The figure shows only a subset of the attributes of a **ParticleArray**
    pertinent to this discussion. Refer to the reference documentation
    (:doc:`../reference/particle_array`) for a more complete listing of
    class attributes and methods.

-------------------------
Creating particle arrays
-------------------------

From the user's perspective, a :class:`ParticleArray` may be created like
so:

.. sourcecode:: python

    import numpy

    # Import the base module
    import pysph.base.api as base

    # create the numpy arrays representing the properties
    x = numpy.linspace(...)
    y = numpy.linspace(...)
    ...
    f = numpy.sin(x)

    fluid = base.get_particle_array(name="fluid", x=x, y=y, ..., f=f)

This creates an instance of a :class:`ParticleArray`, *fluid*, with the
requested properties. From within Python, the properties may be accessed
via the standard attribute access method for Python objects::

    In [10]: fluid.x
    Out[10]: array([....])

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Important ParticleArray attributes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**name**: PySPH permits the use of multiple arrays and warrants the use of
a unique name identifier to distinguish between different particle arrays.

**constants**: Properties that are constant in space and time for all
particles of a given type are stored in the *constants* attribute.

**is_dirty**: In PySPH, the indexing scheme for the particles may be
rendered invalid after updating the particle properties. Moreover, other
particle arrays like stationary boundaries remain fixed and their initial
indexing stays valid. The *is_dirty* flag helps PySPH distinguish these two
cases, thus saving time that would otherwise have been spent re-indexing
these particles.
Thus, setting the *is_dirty* flag for a :class:`ParticleArray` forces PySPH
to re-compute neighbors for that array.

**num_real_particles**: Every :class:`ParticleArray` object is given a set
of default properties, one of which is the *tag* property. The *tag* of a
particle is an integer which is used by PySPH to determine if a particle
belongs to a remote processor (0 means local, anything else remote). The
*num_real_particles* attribute counts the number of particles that have the
tag value 0.

---------------------------
Data buffers and the carray
---------------------------

The numpy arrays that are used to create the :class:`ParticleArray` object
are used to construct a raw data buffer which is accessible through Cython
at C speed. Internally, each property for the particle array is stored as a
:class:`cyarray.carray.BaseArray`.

.. note::

   This discussion may be omitted by the casual end user. If you are
   extending PySPH and speed is a concern, read on.

Each :class:`carray` has an associated data type corresponding to the
particle property. The available types are:

* IntArray
* LongArray
* FloatArray
* DoubleArray

The type of a :class:`carray` may be determined via its :func:`get_c_type`
method. The :class:`carray` object provides faster access to the data when
compared with the corresponding numpy arrays, even in Python. Particle
properties may be accessed using the following methods:

.. function:: get(i)
   :noindex:

   Get the element at the specified index.

.. function:: set(i, val)
   :noindex:

   Set the element at the specified index to the given value. The value
   must be of the same c-type as the array.

^^^^^^^^^^^^^^^^^^^^^^^^^^
Faster buffer access
^^^^^^^^^^^^^^^^^^^^^^^^^^

As mentioned, the data represented by a :class:`carray` may be accessed at
C speed using Cython. This is done using the *data* attribute, which is
only accessible through Cython::

    arr = pa.get_carray(prop)
    val = arr.data[index]

Peep into the functions (:mod:`sph.funcs`) to learn how to use this
feature.
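For a rough feel of what a typed, contiguous C buffer is like from Python, the standard library ``array`` module is a useful analogy. This is only an analogy, the carray API itself differs:

```python
from array import array

# a contiguous buffer of C doubles, loosely analogous to a DoubleArray
a = array('d', [1.0, 2.0, 3.0])
a[1] = 4.0          # cf. arr.set(1, 4.0)
element = a[1]      # cf. arr.get(1)
ctype = a.typecode  # 'd' for double, cf. arr.get_c_type()
```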
---------
Particles
---------

Since PySPH supports an arbitrary number of :class:`ParticleArray` objects,
it is convenient to group them all together into a single container. This
way, common functions like updating the indexing scheme (for particle
arrays that are *dirty*) may be called consistently on each array. This is
accomplished by the :class:`Particles` object:

.. class:: Particles(arrays[, locator_type])

   .. attribute:: arrays : A list of ParticleArray objects

You must provide an instance of :class:`Particles` to PySPH to carry out a
simulation.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Specifying an indexing scheme
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Upon creation of a :class:`Particles` instance, we can pass arguments to
indicate the kind of spatial indexing scheme to use. The default is a box
sort algorithm (see :doc:`../reference/nnps`). Currently, this is the only
indexing scheme implemented. See the reference documentation
:doc:`../reference/particle_array` for a further description.

------------
Summary
------------

In PySPH, a :class:`ParticleArray` object may be instantiated from numpy
arrays. We may use an arbitrary collection of these objects with the only
restriction that their *names* are unique. The :class:`ParticleArray`
objects are grouped together to form a :class:`Particles` object which is
used by PySPH. This container may be heterogeneous in that different
particle arrays correspond to different *types*.

.. _example_gallery:

Gallery of PySPH examples
===========================

In the following, several PySPH examples are documented. These serve to
illustrate various features of PySPH and show how one may use PySPH to
solve a variety of problems.
.. toctree::
   :hidden:

   taylor_green.rst
   sphere_in_vessel.rst

* :ref:`taylor_green`: the Taylor-Green Vortex problem in 2D.
* :ref:`sphere_in_vessel`: a sphere floating in a hydrostatic tank.

.. _sphere_in_vessel:

A rigid sphere floating in a hydrostatic tank
---------------------------------------------

This example demonstrates the API for running a rigid-fluid coupling
problem in PySPH. To run it one may do::

    $ cd ~/pysph/pysph/examples/rigid_body/
    $ python sphere_in_vessel_akinci.py

There are many command line options that this example provides; check them
out with::

    $ python sphere_in_vessel.py -h

The example source can be seen at `sphere_in_vessel.py `_.

This example demonstrates:

* setting up a simulation involving rigid bodies and a fluid
* rigid-fluid coupling

It is divided into three parts:

* create particles
* create equations
* run the application

Create particles
~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this example, we have a tank with resting fluid and a sphere falling
into the tank. Create three particle arrays: ``tank``, ``fluid`` and
``cube``. The ``tank`` and ``fluid`` have to obey the ``wcsph`` scheme,
whereas the ``cube`` has to obey the rigid body equations.

.. code:: python

    def create_particles(self):
        # elided
        fluid = get_particle_array_wcsph(x=xf, y=yf, h=h, m=m, rho=rho,
                                         name="fluid")

        # elided
        tank = get_particle_array_wcsph(x=xt, y=yt, h=h, m=m, rho=rho,
                                        rad_s=rad_s, V=V, name="tank")
        for name in ['fx', 'fy', 'fz']:
            tank.add_property(name)

        cube = get_particle_array_rigid_body(x=xc, y=yc, h=h, m=m, rho=rho,
                                             rad_s=rad_s, V=V, cs=cs,
                                             name="cube")
        return [fluid, tank, cube]

We will discuss the reason for adding the properties ``fx``, ``fy``, ``fz``
to the ``tank`` particle array below. The next step is to set up the
equations.

Create equations
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python

    def create_equations(self):
        equations = [
            Group(equations=[
                BodyForce(dest='cube', sources=None, gy=-9.81),
            ], real=False),
            Group(equations=[
                SummationDensity(dest='fluid', sources=['fluid']),
                SummationDensityBoundary(dest='fluid',
                                         sources=['tank', 'cube'],
                                         fluid_rho=1000.0)
            ]),

            # Tait equation of state
            Group(equations=[
                TaitEOSHGCorrection(dest='fluid', sources=None,
                                    rho0=self.ro, c0=self.co, gamma=7.0),
            ], real=False),
            Group(equations=[
                MomentumEquation(dest='fluid', sources=['fluid'],
                                 alpha=self.alpha, beta=0.0, c0=self.co,
                                 gy=-9.81),
                AkinciRigidFluidCoupling(dest='fluid',
                                         sources=['cube', 'tank']),
                XSPHCorrection(dest='fluid', sources=['fluid', 'tank']),
            ]),
            Group(equations=[
                RigidBodyCollision(dest='cube', sources=['tank'], kn=1e5)
            ]),
            Group(equations=[RigidBodyMoments(dest='cube', sources=None)]),
            Group(equations=[RigidBodyMotion(dest='cube', sources=None)]),
        ]
        return equations

A few points to note while dealing with the *Akinci* formulation:

1. While computing the density of the ``fluid`` due to the solid, make sure
   to use ``SummationDensityBoundary``, because the usual
   ``SummationDensity`` computes the density from the mass of the
   particles, whereas ``SummationDensityBoundary`` computes it from their
   volume. This makes a lot of difference in flows with heavy density
   variation.

2. Apply ``TaitEOSHGCorrection`` so that there is no negative pressure.

3. The force from the boundary (here the tank) on the fluid is computed
   using the ``AkinciRigidFluidCoupling`` equation, whereas in the usual
   case we do it using the momentum equation. There are a few advantages to
   doing this. If we compute the boundary force using the momentum
   equation, then one has to compute the density of the boundary, then its
   pressure, and only then the force.
   But using ``AkinciRigidFluidCoupling`` we don't need to compute the
   pressure of the boundary, because the force depends only on the fluid
   particle's pressure:

   .. code:: python

       def loop(self, d_idx, d_m, d_rho, d_au, d_av, d_aw, d_p,
                s_idx, s_V, s_fx, s_fy, s_fz, DWIJ, s_m, s_p, s_rho):
           # elide
           d_au[d_idx] += -psi * _t1 * DWIJ[0]
           d_av[d_idx] += -psi * _t1 * DWIJ[1]
           d_aw[d_idx] += -psi * _t1 * DWIJ[2]

           s_fx[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[0]
           s_fy[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[1]
           s_fz[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[2]

   Since ``AkinciRigidFluidCoupling`` (more in the next point) computes
   both the force on the fluid by the solid particles and the force on the
   solid by the fluid particles, the source arrays have to hold the
   properties ``fx``, ``fy`` and ``fz``.

4. The first few equations deal with the simulation of the fluid in a
   hydrostatic tank. The equation dealing with rigid-fluid coupling is
   ``AkinciRigidFluidCoupling``. The *coupling* equation deals with the
   forces exerted by the fluid on the solid body and by the solid on the
   fluid; we find both in a single equation. Usually in an SPH equation we
   only change the properties of the destination particle array, but in
   this case both destination and source properties are manipulated.

5. The final equations deal with the dynamics of rigid bodies, which are
   discussed in other example files.

Run the application
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finally, run the application:

.. code:: python

    if __name__ == '__main__':
        app = RigidFluidCoupling()
        app.run()

.. _taylor_green:

The Taylor-Green Vortex
------------------------

This example solves the classic Taylor-Green Vortex problem in
two-dimensions.
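For reference, the exact decaying Taylor-Green velocity field on a unit periodic box can be written down directly. This is the standard textbook solution, not code from the example itself, and the parameter values below are arbitrary illustrations:

```python
import numpy as np


def tg_velocity(x, y, t, U=1.0, L=1.0, nu=0.01):
    """Exact Taylor-Green velocity; the field decays as exp(b*t)."""
    b = -8.0*np.pi**2*nu/L**2      # viscous decay rate
    decay = np.exp(b*t)
    u = -U*np.cos(2*np.pi*x/L)*np.sin(2*np.pi*y/L)*decay
    v = U*np.sin(2*np.pi*x/L)*np.cos(2*np.pi*y/L)*decay
    return u, v
```

A quick sanity check is that the field is divergence-free, which can be verified numerically with central differences.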
To run it one may do::

    $ pysph run taylor_green

There are many command line options that this example provides; check them
out with::

    $ pysph run taylor_green -h

The example source can be seen at `taylor_green.py `_.

This example demonstrates several useful features:

* user defined command line arguments and how they can be used.
* running the problem with multiple schemes.
* periodicity in both dimensions.
* post-processing of generated data.
* using the :py:class:`pysph.tools.sph_evaluator.SPHEvaluator` class for
  post-processing.

We discuss each of these below.

User command line arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~

User defined command line arguments are easy to add. The following code
snippet demonstrates how one adds them:

.. code-block:: python

    class TaylorGreen(Application):
        def add_user_options(self, group):
            group.add_argument(
                "--init", action="store", type=str, default=None,
                help="Initialize particle positions from given file."
            )
            group.add_argument(
                "--perturb", action="store", type=float, dest="perturb",
                default=0,
                help="Random perturbation of initial particles as a "
                     "fraction of dx (setting it to zero disables it, "
                     "the default)."
            )
            # ...

This is straightforward Python code to add options using the `argparse API
`_. It is important to note that the options are then available in the
application's ``options`` attribute and can be accessed as ``self.options``
from the application's methods. The ``consume_user_options`` method
highlights this:

.. code-block:: python

    def consume_user_options(self):
        nx = self.options.nx
        re = self.options.re
        self.nu = nu = U*L/re
        # ...

This method is called after the command line arguments are parsed. To
refresh your memory on the order of invocation of the various methods of
the application, see the documentation of the
:py:class:`pysph.solver.application.Application` class.
This shows that once the application is run using the ``run`` method, the
command line arguments are parsed and the following methods are called (at
this point, the application has a valid ``self.options``):

- ``consume_user_options()``
- ``configure_scheme()``

The ``configure_scheme`` is important as this example allows the user to
change the Reynolds number, which changes the viscosity, as well as the
resolution via the ``--nx`` and ``--hdx`` options. The code for the
configuration looks like:

.. code-block:: python

    def configure_scheme(self):
        scheme = self.scheme
        h0 = self.hdx * self.dx
        if self.options.scheme == 'tvf':
            scheme.configure(pb=self.options.pb_factor*p0, nu=self.nu,
                             h0=h0)
        elif self.options.scheme == 'wcsph':
            scheme.configure(hdx=self.hdx, nu=self.nu, h0=h0)
        elif self.options.scheme == 'edac':
            scheme.configure(h=h0, nu=self.nu,
                             pb=self.options.pb_factor*p0)
        kernel = QuinticSpline(dim=2)
        scheme.configure_solver(kernel=kernel, tf=self.tf, dt=self.dt)

Note the use of ``self.options.scheme`` and the use of the scheme's
``configure`` method. Furthermore, the method also calls the scheme's
``configure_solver`` method.

Using multiple schemes
~~~~~~~~~~~~~~~~~~~~~~

This is relatively easy and is achieved by using the
:py:class:`pysph.sph.scheme.SchemeChooser` scheme as follows:

.. code-block:: python

    def create_scheme(self):
        wcsph = WCSPHScheme(
            ['fluid'], [], dim=2, rho0=rho0, c0=c0, h0=h0,
            hdx=hdx, nu=None, gamma=7.0, alpha=0.0, beta=0.0
        )
        tvf = TVFScheme(
            ['fluid'], [], dim=2, rho0=rho0, c0=c0, nu=None,
            p0=p0, pb=None, h0=h0
        )
        edac = EDACScheme(
            ['fluid'], [], dim=2, rho0=rho0, c0=c0, nu=None,
            pb=p0, h=h0
        )
        s = SchemeChooser(default='tvf', wcsph=wcsph, tvf=tvf, edac=edac)
        return s

When using multiple schemes it is important to recall that each scheme
needs different particle properties. The schemes set these extra
properties for you. In this example, the ``create_particles`` method has
the following code:
.. code-block:: python

    def create_particles(self):
        # ...
        fluid = get_particle_array(name='fluid', x=x, y=y, h=h)
        self.scheme.setup_properties([fluid])

The line that calls ``setup_properties`` passes a list of the particle
arrays to the scheme so the scheme can configure/setup any additional
properties.

Periodicity
~~~~~~~~~~~

This is rather easily done with the code in the ``create_domain`` method:

.. code-block:: python

    def create_domain(self):
        return DomainManager(
            xmin=0, xmax=L, ymin=0, ymax=L,
            periodic_in_x=True, periodic_in_y=True
        )

See also :ref:`simulating_periodicity`.

Post-processing
~~~~~~~~~~~~~~~

The code has a significant chunk of code for post-processing the results,
found in the ``post_process`` method. This demonstrates how to iterate
over the output files and read the file data to calculate various
quantities. In particular, it also demonstrates the use of the
:py:class:`pysph.tools.sph_evaluator.SPHEvaluator` class. For example,
consider the method:

.. code-block:: python

    def _get_sph_evaluator(self, array):
        if not hasattr(self, '_sph_eval'):
            from pysph.tools.sph_evaluator import SPHEvaluator
            equations = [
                ComputeAveragePressure(dest='fluid', sources=['fluid'])
            ]
            dm = self.create_domain()
            sph_eval = SPHEvaluator(
                arrays=[array], equations=equations, dim=2,
                kernel=QuinticSpline(dim=2), domain_manager=dm
            )
            self._sph_eval = sph_eval
        return self._sph_eval

This code creates the evaluator; note that it just takes the particle
arrays of interest, a set of equations (this can be as complex as the
normal SPH equations, with groups and everything), the kernel, and a
domain manager. The evaluator has two important methods:

- ``update_particle_arrays(...)``: allows a user to update the arrays to a
  new set of values efficiently.
- ``evaluate``: actually performs the evaluation of the equations.

The example has this code which demonstrates these:

.. code-block:: python

    def _get_post_process_props(self, array):
        # ...
        sph_eval = self._get_sph_evaluator(array)
        sph_eval.update_particle_arrays([array])
        sph_eval.evaluate()
        # ...

Note the use of the above methods.

.. PySPH documentation master file, created by
   sphinx-quickstart on Mon Mar 31 01:01:41 2014.
   You can adapt this file completely to your liking, but it should at
   least contain the root `toctree` directive.

Welcome to the PySPH documentation!
====================================

PySPH is an open source framework for Smoothed Particle Hydrodynamics
(SPH) simulations. Users can implement an SPH formulation in pure Python_
and still obtain excellent performance. PySPH can make use of multiple
cores via OpenMP or be run seamlessly in parallel using MPI.

Here are some videos of simulations made with PySPH.

.. raw:: html