neo-0.10.0/LICENSE.txt
-----------------------

Copyright (c) 2010-2021, Neo authors and contributors
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice,
  this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.
* Neither the names of the copyright holders nor the names of the contributors
  may be used to endorse or promote products derived from this software without
  specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
neo-0.10.0/MANIFEST.in
-----------------------

include README.rst
include LICENSE.txt
include CITATION.rst
prune drafts
include examples/*.py
recursive-include doc *
prune doc/build
exclude doc/source/images/*.svg
exclude doc/source/images/*.dia

neo-0.10.0/PKG-INFO
-------------------

Metadata-Version: 2.1
Name: neo
Version: 0.10.0
Summary: Neo is a package for representing electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats
Home-page: https://neuralensemble.org/neo
Author: Neo authors and contributors
Author-email: samuel.garcia@cnrs.fr
License: BSD-3-Clause
Description:

===
Neo
===

Neo is a Python package for working with electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats, including Spike2, NeuroExplorer, AlphaOmega, Axon, Blackrock, Plexon and Tdt, and support for writing to a subset of these formats plus non-proprietary formats including HDF5.

The goal of Neo is to improve interoperability between Python tools for analyzing, visualizing and generating electrophysiology data by providing a common, shared object model. In order to be as lightweight a dependency as possible, Neo is deliberately limited to representation of data, with no functions for data analysis or visualization.

Neo is used by a number of other software tools, including SpykeViewer_ (data analysis and visualization), Elephant_ (data analysis), the G-node_ suite (databasing), PyNN_ (simulations), tridesclous_ (spike sorting) and ephyviewer_ (data visualization). OpenElectrophy_ (data analysis and visualization) uses an older version of Neo.

Neo implements a hierarchical data model well adapted to intracellular and extracellular electrophysiology and EEG data with support for multi-electrodes (for example tetrodes).
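The hierarchical container idea can be pictured with a toy in plain Python. This is an illustrative sketch only: the class names mirror Neo's ``Block`` and ``Segment``, but none of the real behaviour (units, metadata, lazy loading) is implemented.

```python
# Illustrative sketch only; not Neo's actual implementation.
# It mimics the Block -> Segment -> data-object containment described
# above, using plain lists instead of Neo's array-based objects.

class Segment:
    """One recording period: holds signals and spike trains side by side."""
    def __init__(self, name):
        self.name = name
        self.analogsignals = []   # continuous traces
        self.spiketrains = []     # lists of spike times

class Block:
    """Top-level container grouping several segments (e.g. trials)."""
    def __init__(self, name):
        self.name = name
        self.segments = []

block = Block("session-001")
for trial in range(3):
    seg = Segment(f"trial-{trial}")
    seg.spiketrains.append([0.01 * i for i in range(5)])  # toy spike times (s)
    block.segments.append(seg)

print(len(block.segments))     # 3
print(block.segments[0].name)  # trial-0
```

In real Neo code the data objects are array-like with physical units attached; see the Neo documentation for the actual API.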
Neo's data objects build on the quantities package, which in turn builds on NumPy by adding support for physical dimensions. Thus Neo objects behave just like normal NumPy arrays, but with additional metadata, checks for dimensional consistency and automatic unit conversion.

A project with similar aims but for neuroimaging file formats is `NiBabel`_.

Code status
-----------

.. image:: https://travis-ci.org/NeuralEnsemble/python-neo.png?branch=master
   :target: https://travis-ci.org/NeuralEnsemble/python-neo
   :alt: Core Unit Test Status (TravisCI)

.. image:: https://github.com/NeuralEnsemble/python-neo/actions/workflows/full-test.yml/badge.svg?event=push&branch=master
   :target: https://github.com/NeuralEnsemble/python-neo/actions?query=event%3Apush+branch%3Amaster
   :alt: IO Unit Test Status (GitHub Actions)

.. image:: https://coveralls.io/repos/NeuralEnsemble/python-neo/badge.png
   :target: https://coveralls.io/r/NeuralEnsemble/python-neo
   :alt: Unit Test Coverage

More information
----------------

- Home page: http://neuralensemble.org/neo
- Mailing list: https://groups.google.com/forum/?fromgroups#!forum/neuralensemble
- Documentation: http://neo.readthedocs.io/
- Bug reports: https://github.com/NeuralEnsemble/python-neo/issues

For installation instructions, see doc/source/install.rst

To cite Neo in publications, see CITATION.txt

:copyright: Copyright 2010-2021 by the Neo team, see doc/source/authors.rst.
:license: 3-Clause Revised BSD License, see LICENSE.txt for details.

Funding
-------

Development of Neo has been partially funded by the European Union Sixth Framework Program (FP6) under grant agreement FETPI-015879 (FACETS), by the European Union Seventh Framework Program (FP7/2007-2013) under grant agreements no. 269921 (BrainScaleS) and no. 604102 (HBP), and by the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 720270 (Human Brain Project SGA1), No. 785907 (Human Brain Project SGA2) and No.
945539 (Human Brain Project SGA3).

.. _OpenElectrophy: https://github.com/OpenElectrophy/OpenElectrophy
.. _Elephant: http://neuralensemble.org/elephant
.. _G-node: http://www.g-node.org/
.. _Neuroshare: http://neuroshare.org/
.. _SpykeViewer: https://spyke-viewer.readthedocs.org/en/latest/
.. _NiBabel: http://nipy.sourceforge.net/nibabel/
.. _PyNN: http://neuralensemble.org/PyNN
.. _quantities: http://pypi.python.org/pypi/quantities
.. _`NeuralEnsemble mailing list`: http://groups.google.com/group/neuralensemble
.. _`issue tracker`: https://github.c
.. _tridesclous: https://github.com/tridesclous/tridesclous
.. _ephyviewer: https://github.com/NeuralEnsemble/ephyviewer

Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Natural Language :: English
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: Scientific/Engineering
Requires-Python: >=3.7
Provides-Extra: tiffio
Provides-Extra: nixio
Provides-Extra: kwikio
Provides-Extra: stimfitio
Provides-Extra: neomatlabio
Provides-Extra: igorproio

neo-0.10.0/README.rst
---------------------

===
Neo
===

Neo is a Python package for working with electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats, including Spike2, NeuroExplorer, AlphaOmega, Axon, Blackrock, Plexon and Tdt, and support for writing to a subset of these formats plus non-proprietary formats including HDF5.
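A multi-format reading layer like this is typically organised around dispatch from the file extension to a reader class. The snippet below is a hypothetical plain-Python sketch of that idea, not Neo's real ``neo.io`` API; the reader names are stand-ins.

```python
# Hypothetical sketch of extension-based reader dispatch.
# Neo's real IO classes live in neo.io; the names below are stand-ins.
from pathlib import Path

READERS = {
    ".smr": "Spike2-style reader",
    ".abf": "Axon-style reader",
    ".plx": "Plexon-style reader",
}

def pick_reader(filename: str) -> str:
    """Choose a reader description from the file extension."""
    ext = Path(filename).suffix.lower()
    try:
        return READERS[ext]
    except KeyError:
        raise ValueError(f"no reader registered for {ext!r}")

print(pick_reader("session1.ABF"))  # Axon-style reader
```

The real library additionally handles directory-based formats and formats sharing an extension, so its dispatch is richer than this sketch.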
The goal of Neo is to improve interoperability between Python tools for analyzing, visualizing and generating electrophysiology data by providing a common, shared object model. In order to be as lightweight a dependency as possible, Neo is deliberately limited to representation of data, with no functions for data analysis or visualization.

Neo is used by a number of other software tools, including SpykeViewer_ (data analysis and visualization), Elephant_ (data analysis), the G-node_ suite (databasing), PyNN_ (simulations), tridesclous_ (spike sorting) and ephyviewer_ (data visualization). OpenElectrophy_ (data analysis and visualization) uses an older version of Neo.

Neo implements a hierarchical data model well adapted to intracellular and extracellular electrophysiology and EEG data with support for multi-electrodes (for example tetrodes). Neo's data objects build on the quantities package, which in turn builds on NumPy by adding support for physical dimensions. Thus Neo objects behave just like normal NumPy arrays, but with additional metadata, checks for dimensional consistency and automatic unit conversion.

A project with similar aims but for neuroimaging file formats is `NiBabel`_.

Code status
-----------

.. image:: https://travis-ci.org/NeuralEnsemble/python-neo.png?branch=master
   :target: https://travis-ci.org/NeuralEnsemble/python-neo
   :alt: Core Unit Test Status (TravisCI)

.. image:: https://github.com/NeuralEnsemble/python-neo/actions/workflows/full-test.yml/badge.svg?event=push&branch=master
   :target: https://github.com/NeuralEnsemble/python-neo/actions?query=event%3Apush+branch%3Amaster
   :alt: IO Unit Test Status (GitHub Actions)

.. image:: https://coveralls.io/repos/NeuralEnsemble/python-neo/badge.png
   :target: https://coveralls.io/r/NeuralEnsemble/python-neo
   :alt: Unit Test Coverage

More information
----------------

- Home page: http://neuralensemble.org/neo
- Mailing list: https://groups.google.com/forum/?fromgroups#!forum/neuralensemble
- Documentation: http://neo.readthedocs.io/
- Bug reports: https://github.com/NeuralEnsemble/python-neo/issues

For installation instructions, see doc/source/install.rst

To cite Neo in publications, see CITATION.txt

:copyright: Copyright 2010-2021 by the Neo team, see doc/source/authors.rst.
:license: 3-Clause Revised BSD License, see LICENSE.txt for details.

Funding
-------

Development of Neo has been partially funded by the European Union Sixth Framework Program (FP6) under grant agreement FETPI-015879 (FACETS), by the European Union Seventh Framework Program (FP7/2007-2013) under grant agreements no. 269921 (BrainScaleS) and no. 604102 (HBP), and by the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreements No. 720270 (Human Brain Project SGA1), No. 785907 (Human Brain Project SGA2) and No. 945539 (Human Brain Project SGA3).

.. _OpenElectrophy: https://github.com/OpenElectrophy/OpenElectrophy
.. _Elephant: http://neuralensemble.org/elephant
.. _G-node: http://www.g-node.org/
.. _Neuroshare: http://neuroshare.org/
.. _SpykeViewer: https://spyke-viewer.readthedocs.org/en/latest/
.. _NiBabel: http://nipy.sourceforge.net/nibabel/
.. _PyNN: http://neuralensemble.org/PyNN
.. _quantities: http://pypi.python.org/pypi/quantities
.. _`NeuralEnsemble mailing list`: http://groups.google.com/group/neuralensemble
.. _`issue tracker`: https://github.c
.. _tridesclous: https://github.com/tridesclous/tridesclous
..
_ephyviewer: https://github.com/NeuralEnsemble/ephyviewer

neo-0.10.0/doc/Makefile
-----------------------

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = build

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source

.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html      to make standalone HTML files"
	@echo "  dirhtml   to make HTML files named index.html in directories"
	@echo "  pickle    to make pickle files"
	@echo "  json      to make JSON files"
	@echo "  htmlhelp  to make HTML files and a HTML help project"
	@echo "  qthelp    to make HTML files and a qthelp project"
	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  changes   to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck to check all external links for integrity"
	@echo "  doctest   to run all doctests embedded in the documentation (if enabled)"

clean:
	-rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/neo.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/neo.qhc"

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
	      "run these through (pdf)latex."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

neo-0.10.0/doc/make.bat
-----------------------

@ECHO OFF

REM Command file for Sphinx documentation

set SPHINXBUILD=sphinx-build
set BUILDDIR=build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source
if NOT "%PAPER%" == "" (
	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
)

if "%1" == "" goto help

if "%1" == "help" (
	:help
	echo.Please use `make ^<target^>` where ^<target^> is one of
	echo.  html      to make standalone HTML files
	echo.  dirhtml   to make HTML files named index.html in directories
	echo.  pickle    to make pickle files
	echo.  json      to make JSON files
	echo.  htmlhelp  to make HTML files and a HTML help project
	echo.  qthelp    to make HTML files and a qthelp project
	echo.  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter
	echo.  changes   to make an overview over all changed/added/deprecated items
	echo.  linkcheck to check all external links for integrity
	echo.  doctest   to run all doctests embedded in the documentation if enabled
	goto end
)

if "%1" == "clean" (
	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
	del /q /s %BUILDDIR%\*
	goto end
)

if "%1" == "html" (
	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
	goto end
)

if "%1" == "dirhtml" (
	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
	goto end
)

if "%1" == "pickle" (
	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
	echo.
	echo.Build finished; now you can process the pickle files.
	goto end
)

if "%1" == "json" (
	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
	echo.
	echo.Build finished; now you can process the JSON files.
	goto end
)

if "%1" == "htmlhelp" (
	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
	echo.
	echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
	goto end
)

if "%1" == "qthelp" (
	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
	echo.
	echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\neo.qhcp
	echo.To view the help file:
	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\neo.qhc
	goto end
)

if "%1" == "latex" (
	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
	echo.
	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
	goto end
)

if "%1" == "changes" (
	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
	echo.
	echo.The overview file is in %BUILDDIR%/changes.
	goto end
)

if "%1" == "linkcheck" (
	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
	echo.
	echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
	goto end
)

if "%1" == "doctest" (
	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
	echo.
	echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
	goto end
)

:end

neo-0.10.0/doc/old_stuffs/gif2011workshop.rst
----------------------------------------------

***************************
Gif 2011 workshop decisions
***************************

This was written before the Neo 2 implementation, just after the workshop. Not everything is up to date.

After a workshop in Gif we are happy to present the following improvements:
===========================================================================

1. We made a few renames of objects:

   - "Neuron" into "Unit"
   - "RecordingPoint" into "RecordingChannel"

   to remove electrophysiological (or other) dependencies and keep generality.

2. For every object we specified mandatory attributes and recommended attributes. For every attribute we define a Python-based data type. The changes are reflected in the diagram #FIXME with red (mandatory) and blue (recommended) attributes indicated.

3. New objects are required for operational performance (memory allocation) and logical consistency (neo, eeg, etc.):

   - AnalogSignalArray
   - IrregularlySampledAnalogSignal
   - EventArray
   - EpochArray
   - RecordingChannelGroup

   Attributes and parent objects are available on the diagram #FIXME

4. Due to some logical considerations we removed the link between "RecordingChannel" and "SpikeTrain". "SpikeTrain" now depends on "Unit", which in turn connects to "RecordingChannel".
For consistency reasons we also removed the link between "SpikeTrain" and "Spike" ("SpikeTrain" is an object containing a NumPy array of spike times, not a container of "Spike" objects, which would be performance-inefficient). The same idea is applied to AnalogSignal / AnalogSignalArray, Event / EventArray, etc. All changes are reflected in #FIXME

5. In order to implement flexibility and embed user-defined metadata into Neo objects, we decided to attach an "annotations" dictionary to every Neo object. This attribute is optional; users may add key-value pairs to it according to their scientific needs.

6. The decision was made to use the "quantities" package for objects representing data arrays with units. "Quantities" is a stable (at least for Python 2.6) package, available on PyPI and easily embeddable into the Neo object model. Points of implementation are presented in the diagram #FIXME

7. We postpone the solution of object ID management inside Neo.

8. In AnalogSignal, t_stop becomes a property (for consistency reasons).

9. In order to provide support for "advanced" object loading, we decided to include the parameters

   - lazy (True/False)
   - cascade (True/False)

   in the BaseIO class. These parameters are valid for every method provided by the IO (.read_segment() etc.). If "lazy" is True, the IO does not load data arrays; otherwise it does. The "cascade" parameter regulates the loading of object relations.

10. We postpone the question of data analysis storage till the next Neo congress. Analysis objects are free for the moment.

11. We stay with Python 2.6 / 2.7 support. Python 3 will be considered in later discussions.

New object diagram discussed
============================

..
image:: images/neo_UML_French_workshop.png
   :height: 500 px
   :align: center

Actions to be performed:
========================

promotion:
    at g-node: philipp, andrey
    in neuralensemble: andrew
    within incf network: andrew, thomas
    at posters: all
    logo: samuel
    paper: next year
    in the web: pypi

object structure:
    common: samuel
    draft: yann, andrey
    tree diagram: philipp, florent

io:
    ExampleIO: samuel
    HDF5 IO: andrey

doc:
    first page: andrew, thomas
    object description: samuel draft + andrew
    io user / io dev: samuel
    example/cookbook: andrey script, samuel NeuroConvert, doctest
    unittest: andrew

packaging: samuel, account for more
licence: BSD-3-Clause
copyright: CNRS, G-Node, University of Provence
hosting test data: Philipp

Other questions discussed:
==========================

- consistency in names of object attributes and get/set functions

neo-0.10.0/doc/old_stuffs/specific_annotations.rst
---------------------------------------------------

.. _specific_annotations:

********************
Specific annotations
********************

Introduction
------------

Neo imposes and recommends some attributes for all objects, and also provides the *annotations* dict for all objects to deal with any kind of extension. This flexible feature allows Neo objects to be customized for many use cases. While any names can be used for annotations, interoperability will be improved if there is some consistency in naming. Here we suggest some conventions for annotation names.

Patch clamp
-----------

.. todo: TODO

Network simulation
------------------

Spike sorting
-------------

**SpikeTrain.annotations['waveform_features']**: when spike sorting, the waveform is reduced to a lower-dimensional space with PCA or wavelets. This attribute is the matrix of projected features, of shape N x M (N: number of spikes, M: number of features). KlustakwikIO supports this feature.
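As an illustration of this convention, here is a toy stand-in class rather than Neo's real SpikeTrain, so the snippet stays self-contained; only the ``annotations`` dict and an ``annotate`` helper are sketched.

```python
# Toy stand-in for a Neo data object carrying an "annotations" dict.
# Conceptually, Neo objects work the same way: arbitrary key-value
# metadata rides along with the data.

class AnnotatedObject:
    def __init__(self, **annotations):
        self.annotations = dict(annotations)

    def annotate(self, **kwargs):
        """Add key-value metadata to the object."""
        self.annotations.update(kwargs)

# 3 spikes, 2 features per spike: an N x M list-of-lists feature matrix
st = AnnotatedObject()
st.annotate(waveform_features=[[0.1, -0.2], [0.3, 0.05], [-0.1, 0.4]])

print(len(st.annotations["waveform_features"]))     # 3 (N spikes)
print(len(st.annotations["waveform_features"][0]))  # 2 (M features)
```

Using an agreed key name such as ``waveform_features`` is what lets independent tools (e.g. a spike sorter and a viewer) exchange this metadata without coordination.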
neo-0.10.0/doc/source/api_reference.rst
----------------------------------------

API Reference
=============

.. automodule:: neo.core

.. testsetup:: *

   from neo import SpikeTrain
   import quantities as pq

neo-0.10.0/doc/source/authors.rst
----------------------------------

========================
Authors and contributors
========================

The following people have contributed code and/or ideas to the current version of Neo. The institutional affiliations are those at the time of the contribution, and may not be the current affiliation of a contributor.

* Samuel Garcia [1]
* Andrew Davison [2, 21]
* Chris Rodgers [3]
* Pierre Yger [2]
* Yann Mahnoun [4]
* Luc Estabanez [2]
* Andrey Sobolev [5]
* Thierry Brizzi [2]
* Florent Jaillet [6]
* Philipp Rautenberg [5]
* Thomas Wachtler [5]
* Cyril Dejean [7]
* Robert Pröpper [8]
* Domenico Guarino [2]
* Achilleas Koutsou [5]
* Erik Li [9]
* Georg Raiser [10]
* Joffrey Gonin [2]
* Kyler Brown
* Mikkel Elle Lepperød [11]
* C Daniel Meliza [12]
* Julia Sprenger [13, 6]
* Maximilian Schmidt [13]
* Johanna Senk [13]
* Carlos Canova [13]
* Hélissande Fragnaud [2]
* Mark Hollenbeck [14]
* Mieszko Grodzicki
* Rick Gerkin [15]
* Matthieu Sénoville [2]
* Chadwick Boulay [16]
* Björn Müller [13]
* William Hart [17]
* erikli(github)
* Jeffrey Gill [18]
* Lucas (lkoelman@github)
* Mark Histed
* Mike Sintsov [19]
* Scott W Harden [20]
* Chek Yin Choi (hkchekc@github)
* Corentin Fragnaud [21]
* Alexander Kleinjohann
* Christian Kothe
* rishidhingra@github
* Hugo van Kemenade
* Aitor Morales-Gregorio [13]
* Peter N Steinmetz [22]
* Shashwat Sridhar
* Alessio Buccino [23]
* Regimantas Jurkus [13]
* Steffen Buergers [24]
* Etienne Combrisson [6]
* Ben Dichter [24]
* Elodie Legouée [21]

1.
Centre de Recherche en Neuroscience de Lyon, CNRS UMR5292 - INSERM U1028 - Universite Claude Bernard Lyon 1
2. Unité de Neuroscience, Information et Complexité, CNRS UPR 3293, Gif-sur-Yvette, France
3. University of California, Berkeley
4. Laboratoire de Neurosciences Intégratives et Adaptatives, CNRS UMR 6149 - Université de Provence, Marseille, France
5. G-Node, Ludwig-Maximilians-Universität, Munich, Germany
6. Institut de Neurosciences de la Timone, CNRS UMR 7289 - Université d'Aix-Marseille, Marseille, France
7. Centre de Neurosciences Integratives et Cognitives, UMR 5228 - CNRS - Université Bordeaux I - Université Bordeaux II
8. Neural Information Processing Group, TU Berlin, Germany
9. Department of Neurobiology & Anatomy, Drexel University College of Medicine, Philadelphia, PA, USA
10. University of Konstanz, Konstanz, Germany
11. Centre for Integrative Neuroplasticity (CINPLA), University of Oslo, Norway
12. University of Virginia
13. INM-6, Forschungszentrum Jülich, Germany
14. University of Texas at Austin
15. Arizona State University
16. Ottawa Hospital Research Institute, Canada
17. Swinburne University of Technology, Australia
18. Case Western Reserve University (CWRU) · Department of Biology
19. IAL Developmental Neurobiology, Kazan Federal University, Kazan, Russia
20. Harden Technologies, LLC
21. Institut des Neurosciences Paris-Saclay, CNRS UMR 9197 - Université Paris-Sud, Gif-sur-Yvette, France
22. Neurtex Brain Research Institute, Dallas, TX, USA
23. Bio Engineering Laboratory, DBSSE, ETH, Basel, Switzerland
24. CatalystNeuro

If we've somehow missed you off the list we're very sorry - please let us know.

Acknowledgements
----------------

..
image:: https://www.braincouncil.eu/wp-content/uploads/2018/11/wsi-imageoptim-EU-Logo.jpg
   :alt: "EU Logo"
   :height: 104px
   :width: 156px
   :align: right

Neo was developed in part in the Human Brain Project, funded from the European Union's Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreements No. 720270 and No. 785907 (Human Brain Project SGA1 and SGA2).

neo-0.10.0/doc/source/conf.py
------------------------------

#
# neo documentation build configuration file, created by
# sphinx-quickstart on Fri Feb 25 14:18:12 2011.
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import os
import sys
from distutils.version import LooseVersion

with open("../../neo/version.py") as fp:
    d = {}
    exec(fp.read(), d)
neo_release = d['version']
neo_version = '.'.join(str(e) for e in LooseVersion(neo_release).version[:2])

AUTHORS = 'Neo authors and contributors '

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.append(os.path.abspath('.'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.todo']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
# source_encoding = 'utf-8'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = 'Neo'
copyright = '2010-2021, ' + AUTHORS

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = neo_version
# The full version, including alpha/beta/rc tags.
release = neo_release

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'

# List of documents that shouldn't be included in the build.
# unused_docs = []

# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = []

# The reST default role (used for this markup: `text`)
# to use for all documents.
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []

# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme = 'default'
html_theme = 'sphinxdoc'
# html_theme = 'haiku'
# html_theme = 'scrolls'
# html_theme = 'agogo'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = 'images/neologo_light.png'

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['images']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_use_modindex = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''

# Output file base name for HTML help builder.
htmlhelp_basename = 'neodoc'

# -- Options for LaTeX output -------------------------------------------------

# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author,
# documentclass [howto/manual]).
latex_documents = [('index', 'neo.tex', 'Neo Documentation', AUTHORS, 'manual')]

# The name of an image file (relative to this directory) to place at the
# top of the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False

# Additional stuff for the LaTeX preamble.
# latex_preamble = ''

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_use_modindex = True

todo_include_todos = True  # set to False before releasing documentation

rst_epilog = """
.. |neo_github_url| replace:: https://github.com/NeuralEnsemble/python-neo/archive/neo-{}.zip
""".format(neo_release)

neo-0.10.0/doc/source/core.rst

********
Neo core
********

.. currentmodule:: neo.core

This figure shows the main data types in Neo, with the exception of the newly added ImageSequence and RegionOfInterest classes:

.. image:: images/base_schematic.png
    :height: 500 px
    :alt: Illustration of the main Neo data types
    :align: center

Neo objects fall into three categories: data objects, container objects and grouping objects.

Data objects
------------

These objects directly represent data as arrays of numerical values with associated metadata (units, sampling frequency, etc.).

* :py:class:`AnalogSignal`: A regular sampling of a single- or multi-channel continuous analog signal.
* :py:class:`IrregularlySampledSignal`: A non-regular sampling of a single- or multi-channel continuous analog signal.
* :py:class:`SpikeTrain`: A set of action potentials (spikes) emitted by the same unit in a period of time (with optional waveforms).
* :py:class:`Event`: An array of time points representing one or more events in the data.
* :py:class:`Epoch`: An array of time intervals representing one or more periods of time in the data.
* :py:class:`ImageSequence`: A three-dimensional array representing a sequence of images.

Container objects
-----------------

There is a simple hierarchy of containers:

* :py:class:`Segment`: A container for heterogeneous discrete or continuous data sharing a common clock (time basis) but not necessarily the same sampling rate, start time or end time. A :py:class:`Segment` can be considered as equivalent to a "trial", "episode", "run", "recording", etc., depending on the experimental context. May contain any of the data objects.
* :py:class:`Block`: The top-level container gathering all of the data, discrete and continuous, for a given recording session. Contains :class:`Segment` and :class:`Group` objects.

Grouping/linking objects
------------------------

These objects express the relationships between data items, such as which signals were recorded on which electrodes, which spike trains were obtained from which membrane potential signals, etc. They contain references to data objects that cut across the simple container hierarchy.
* :py:class:`ChannelView`: A set of indices into :py:class:`AnalogSignal` objects, representing logical and/or physical recording channels. For spike sorting of extracellular signals, where spikes may be recorded on more than one recording channel, the :py:class:`ChannelView` can be used to reference the group of recording channels from which the spikes were obtained.
* :py:class:`Group`: Can contain any of the data objects, views, or other groups, outside the hierarchy of the segment and block containers. A common use is to link the :class:`SpikeTrain` objects within a :class:`Block`, possibly across multiple Segments, that were emitted by the same neuron.
* :py:class:`CircularRegionOfInterest`, :py:class:`RectangularRegionOfInterest` and :py:class:`PolygonRegionOfInterest` are three subclasses that link :class:`ImageSequence` objects to signals (:class:`AnalogSignal` objects) extracted from them.

For more details, see :doc:`grouping`.

NumPy compatibility
===================

Neo data objects inherit from :py:class:`Quantity`, which in turn inherits from NumPy :py:class:`ndarray`. This means that a Neo :py:class:`AnalogSignal` is also a :py:class:`Quantity` and an array, giving you access to all of the methods available for those objects. For example, you can pass a :py:class:`SpikeTrain` directly to the :py:func:`numpy.histogram` function, or an :py:class:`AnalogSignal` directly to the :py:func:`numpy.std` function.

To get a plain :py:class:`numpy.ndarray`, use the ``rescale`` method and ``magnitude`` attribute provided by quantities::

    >>> np_sig = neo_analogsignal.rescale('mV').magnitude
    >>> np_times = neo_analogsignal.times.rescale('s').magnitude

Relationships between objects
=============================

Container objects like :py:class:`Block` or :py:class:`Segment` are gateways to access other objects.
For example, a :class:`Block` can access a :class:`Segment` with::

    >>> bl = Block()
    >>> bl.segments  # gives a list of segments

A :class:`Segment` can access the :class:`AnalogSignal` objects that it contains with::

    >>> seg = Segment()
    >>> seg.analogsignals  # gives a list of AnalogSignals

In the :ref:`neo_diagram` below, these *one to many* relationships are represented by cyan arrows. In general, an object can access its children with an attribute *childname+s* in lower case, e.g.

* :attr:`Block.segments`
* :attr:`Segment.analogsignals`
* :attr:`Segment.spiketrains`
* :attr:`Block.groups`

These relationships are bi-directional, i.e. a child object can access its parent:

* :attr:`Segment.block`
* :attr:`AnalogSignal.segment`
* :attr:`SpikeTrain.segment`
* :attr:`Group.block`

Here is an example showing these relationships in use::

    from neo.io import AxonIO
    import urllib.request

    url = "https://web.gin.g-node.org/NeuralEnsemble/ephy_testing_data/raw/master/axon/File_axon_3.abf"
    filename = './test.abf'
    urllib.request.urlretrieve(url, filename)

    r = AxonIO(filename=filename)
    blocks = r.read()  # read the entire file -> a list of Blocks
    bl = blocks[0]
    print(bl)
    print(bl.segments)  # child access
    for seg in bl.segments:
        print(seg)
        print(seg.block)  # parent access

In some cases, a one-to-many relationship is sufficient. Here is a simple example with tetrodes, in which each tetrode has its own group::

    from neo import Block, Group

    bl = Block()

    # the four tetrodes
    for i in range(4):
        group = Group(name='Tetrode %d' % i)
        bl.groups.append(group)

    # now we load the data and associate it with the created channels
    # ...

Now consider a more complex example: a 1x4 silicon probe, with a neuron on channels 0,1,2 and another neuron on channels 1,2,3.
We create a group for each neuron, holding the spiketrains of that neuron together with a view of the channels on which it spiked::

    from neo import Block, ChannelView, Group

    bl = Block(name='probe data')

    # one group for each neuron
    # ('recorded_signals' is assumed to be the AnalogSignal recorded from the probe)
    view0 = ChannelView(recorded_signals, index=[0, 1, 2])
    unit0 = Group(view0, name='Group 0')
    bl.groups.append(unit0)

    view1 = ChannelView(recorded_signals, index=[1, 2, 3])
    unit1 = Group(view1, name='Group 1')
    bl.groups.append(unit1)

    # now we add the spiketrains from Unit 0 to unit0
    # and add the spiketrains from Unit 1 to unit1
    # ...

Now each putative neuron is represented by a :class:`Group` containing the spiketrains of that neuron and a view of the signal selecting only those channels from which the spikes were obtained.

See :doc:`usecases` for more examples of how the different objects may be used.

.. _neo_diagram:

Neo diagram
===========

Object:

* With a star = inherits from :class:`Quantity`

Attributes:

* In red = required
* In white = recommended

Relationship:

* In cyan = one to many
* In yellow = properties (deduced from other relationships)

.. image:: images/simple_generated_diagram.png
    :width: 750 px

:download:`Click here for a better quality SVG diagram <./images/simple_generated_diagram.svg>`

.. note:: This figure does not include :class:`ChannelView` and :class:`RegionOfInterest`.

For more details, see the :doc:`api_reference`.

Initialization
==============

Neo objects are initialized with "required", "recommended", and "additional" arguments.

- Required arguments MUST be provided at the time of initialization. They are used in the construction of the object.
- Recommended arguments may be provided at the time of initialization. They are accessible as Python attributes. They can also be set or modified after initialization.
- Additional arguments are defined by the user and are not part of the Neo object model. A primary goal of the Neo project is extensibility.
These additional arguments are entries in an attribute of the object: a Python dict called :py:attr:`annotations`.

Note: Neo annotations are not the same as the *__annotations__* attribute introduced in Python 3.6.

Example: SpikeTrain
-------------------

:py:class:`SpikeTrain` is a :py:class:`Quantity`, which is a NumPy array containing values with physical dimensions. The spike times are a required attribute, because the dimensionality of the spike times determines the way in which the :py:class:`Quantity` is constructed.

Here is how you initialize a :py:class:`SpikeTrain` with required arguments::

    >>> import neo
    >>> st = neo.SpikeTrain([3, 4, 5], units='sec', t_stop=10.0)
    >>> print(st)
    [ 3.  4.  5.] s

You will see the spike times printed in a nice format including the units. Because `st` "is a" :py:class:`Quantity` array with units of seconds, it absolutely must have this information at the time of initialization. You can specify the spike times with a keyword argument too::

    >>> st = neo.SpikeTrain(times=[3, 4, 5], units='sec', t_stop=10.0)

The spike times could also be in a NumPy array.

If it is not specified, :attr:`t_start` is assumed to be zero, but another value can easily be specified::

    >>> st = neo.SpikeTrain(times=[3, 4, 5], units='sec', t_start=1.0, t_stop=10.0)
    >>> st.t_start
    array(1.0) * s

Recommended attributes must be specified as keyword arguments, not positional arguments.

Finally, let's consider "additional arguments". These are the ones you define for your experiment::

    >>> st = neo.SpikeTrain(times=[3, 4, 5], units='sec', t_stop=10.0, rat_name='Fred')
    >>> print(st.annotations)
    {'rat_name': 'Fred'}

Because ``rat_name`` is not part of the Neo object model, it is placed in the dict :py:attr:`annotations`. This dict can be modified as necessary by your code.
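The same required/recommended/additional pattern applies to the other data objects. As a sketch (the sample values here are invented for illustration), an :py:class:`AnalogSignal` requires the signal values, their units and a sampling rate, while recommended attributes and annotations are passed as keywords::

    import neo
    import quantities as pq

    # required: the values, their units, and the sampling rate
    sig = neo.AnalogSignal([1.1, 1.2, 1.3], units='mV',
                           sampling_rate=1 * pq.kHz,
                           name='membrane potential',  # recommended, keyword-only
                           rat_name='Fred')            # additional -> annotations
    print(sig.annotations)   # {'rat_name': 'Fred'}

As with :py:class:`SpikeTrain`, any argument that is not part of the Neo object model ends up in :py:attr:`annotations`.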
Annotations
-----------

As well as adding annotations as "additional" arguments when an object is constructed, objects may be annotated using the :meth:`annotate` method possessed by all Neo core objects, e.g.::

    >>> seg = Segment()
    >>> seg.annotate(stimulus="step pulse", amplitude=10*nA)
    >>> print(seg.annotations)
    {'amplitude': array(10.0) * nA, 'stimulus': 'step pulse'}

Since annotations may be written to a file or database, there are some limitations on the data types of annotations: they must be "simple" types or containers (lists, dicts, tuples, NumPy arrays) of simple types, where the simple types are ``integer``, ``float``, ``complex``, ``Quantity``, ``string``, ``date``, ``time`` and ``datetime``.

Array Annotations
-----------------

In addition to "regular" annotations, arrays of values can also be annotated, giving one annotation value per data point. This feature, called Array Annotations, ensures that those annotations stay consistent with the actual data. Apart from adding them on object construction, Array Annotations can also be added using the :meth:`array_annotate` method provided by all Neo data objects, e.g.::

    >>> sptr = SpikeTrain(times=[1, 2, 3]*pq.s, t_stop=3*pq.s)
    >>> sptr.array_annotate(index=[0, 1, 2], relevant=[True, False, True])
    >>> print(sptr.array_annotations)
    {'index': array([0, 1, 2]), 'relevant': array([ True, False, True])}

Since Array Annotations may be written to a file or database, there are some limitations on the data types of arrays: they must be 1-dimensional (i.e. not nested) and contain the same types as annotations: ``integer``, ``float``, ``complex``, ``Quantity``, ``string``, ``date``, ``time`` and ``datetime``.

neo-0.10.0/doc/source/developers_guide.rst

=================
Developers' guide
=================

These instructions are for developing on a Unix-like platform, e.g. Linux or macOS, with the bash shell.
If you develop on Windows, please get in touch.

Mailing lists
-------------

General discussion of Neo development takes place in the `NeuralEnsemble Google group`_.

Discussion of issues specific to a particular ticket in the issue tracker should take place on the tracker.

Using the issue tracker
-----------------------

If you find a bug in Neo, please create a new ticket on the `issue tracker`_, setting the type to "defect". Choose a name that is as specific as possible to the problem you've found, and in the description give as much information as you think is necessary to recreate the problem. The best way to do this is to create the shortest possible Python script that demonstrates the problem, and attach the file to the ticket.

If you have an idea for an improvement to Neo, create a ticket with type "enhancement". If you already have an implementation of the idea, create a patch (see below) and attach it to the ticket.

To keep track of changes to the code and to tickets, you can register for a GitHub account and then watch the repository at `GitHub Repository`_ (see https://help.github.com/en/articles/watching-and-unwatching-repositories).

Requirements
------------

* Python_ 3.5 or later
* numpy_ >= 1.11.0
* quantities_ >= 0.12.1
* nose_ >= 1.1.2 (for running tests)
* Sphinx_ (for building documentation)
* (optional) coverage_ >= 2.85 (for measuring test coverage)
* (optional) scipy >= 0.12 (for MatlabIO)
* (optional) h5py >= 2.5 (for KwikIO)
* (optional) nixio (for NixIO)
* (optional) pillow (for TiffIO)

We strongly recommend you develop within a virtual environment (from virtualenv, venv or conda).

Getting the source code
-----------------------

We use the Git version control system. The best way to contribute is through GitHub_. You will first need a GitHub account, and you should then fork the repository at `GitHub Repository`_ (see http://help.github.com/en/articles/fork-a-repo).
To get a local copy of the repository::

    $ cd /some/directory
    $ git clone git@github.com:<username>/python-neo.git

Now you need to make sure that the ``neo`` package is on your PYTHONPATH. You can do this either by installing Neo::

    $ cd python-neo
    $ python3 setup.py install

(if you do this, you will have to re-run ``setup.py install`` any time you make changes to the code) *or* by creating symbolic links from somewhere on your PYTHONPATH, for example::

    $ ln -s python-neo/neo
    $ export PYTHONPATH=/some/directory:${PYTHONPATH}

An alternative solution is to install Neo with the *develop* option; this avoids reinstalling when there are changes in the code::

    $ sudo python setup.py develop

or using the "-e" option to pip::

    $ pip install -e python-neo

To update to the latest version from the repository::

    $ git pull

Running the test suite
----------------------

Before you make any changes, run the test suite to make sure all the tests pass on your system::

    $ cd neo/test
    $ python3 -m unittest discover

If you have nose installed::

    $ nosetests

At the end, if you see "OK", then all the tests passed (or were skipped because certain dependencies are not installed), otherwise it will report on tests that failed or produced errors.

To run tests from an individual file::

    $ python3 test_analogsignal.py

Writing tests
-------------

You should try to write automated tests for any new code that you add. If you have found a bug and want to fix it, first write a test that isolates the bug (and that therefore fails with the existing codebase). Then apply your fix and check that the test now passes.

To see how well the tests cover the code base, run::

    $ nosetests --with-coverage --cover-package=neo --cover-erase

Working on the documentation
----------------------------

All modules, classes, functions, and methods (including private and subclassed builtin methods) should have docstrings.
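For instance, a docstring for a small helper function might look like this (the function itself is hypothetical, shown only to illustrate the conventions)::

    def threshold_crossings(signal, threshold):
        """Return the indices at which ``signal`` rises above ``threshold``.

        Example
        -------
        >>> threshold_crossings([0.0, 2.0, 1.0, 3.0], 1.5)
        [1, 3]

        Arguments
        ---------
        signal : sequence of float
            The sampled values to scan.
        threshold : float
            A sample counts as a crossing if it exceeds this value
            while the previous sample did not.
        """
        return [i for i, v in enumerate(signal)
                if v > threshold and (i == 0 or signal[i - 1] <= threshold)]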
Module docstrings should explain briefly what functions or classes are present. Detailed descriptions can be left for the docstrings of the respective functions or classes. Private functions do not need to be explained here. Class docstrings should include an explanation of the purpose of the class and, when applicable, how it relates to standard neuroscientific data. They should also include at least one example, which should be written so it can be run as-is from a clean newly-started Python interactive session (that means all imports should be included). Finally, they should include a list of all arguments, attributes, and properties, with explanations. Properties that return data calculated from other data should explain what calculation is done. A list of methods is not needed, since documentation will be generated from the method docstrings. Method and function docstrings should include an explanation for what the method or function does. If this may not be clear, one or more examples may be included. Examples that are only a few lines do not need to include imports or setup, but more complicated examples should have them. Examples can be tested easily using the iPython `%doctest_mode` magic. This will strip >>> and ... from the beginning of each line of the example, so the example can be copied and pasted as-is. The documentation is written in `reStructuredText`_, using the `Sphinx`_ documentation system. Any mention of another Neo module, class, attribute, method, or function should be properly marked up so automatic links can be generated. The same goes for quantities or numpy. To build the documentation:: $ cd python-neo/doc $ make html Then open `some/directory/python-neo/doc/build/html/index.html` in your browser. Committing your changes ----------------------- Once you are happy with your changes, **run the test suite again to check that you have not introduced any new bugs**. 
It is also recommended to check your code with a code checking program, such as `pyflakes`_ or `flake8`_.

Then you can commit your changes to your local repository::

    $ git commit -m 'informative commit message'

If this is your first commit to the project, please add your name and affiliation/employer to :file:`doc/source/authors.rst`.

You can then push your changes to your online repository on GitHub::

    $ git push

Once you think your changes are ready to be included in the main Neo repository, open a pull request on GitHub (see https://help.github.com/en/articles/about-pull-requests).

Python version
--------------

Neo should work with Python 3.5 or newer. If you need support for Python 2.7, use Neo v0.8.0 or earlier.

Coding standards and style
--------------------------

All code should conform as much as possible to `PEP 8`_, and should run with Python 3.5 or newer. You can use the `pep8`_ program to check the code for PEP 8 conformity. You can also use `flake8`_, which combines pep8 and pyflakes.

However, the pep8 and flake8 programs do not check for all PEP 8 issues. In particular, they do not check that the import statements are in the correct order.

Also, please do not use ``from xyz import *``. This is slow, can lead to conflicts, and makes it difficult for code analysis software.

Making a release
----------------

.. TODO: discuss branching/tagging policy.

Add a section in :file:`/doc/source/whatisnew.rst` for the release.

First check that the version string (in :file:`neo/version.py`) is correct.

To build a source package::

    $ python setup.py sdist

Tag the release in the Git repository and push it::

    $ git tag <version>
    $ git push --tags origin
    $ git push --tags upstream

To upload the package to `PyPI`_ (currently Samuel Garcia, Andrew Davison, Michael Denker and Julia Sprenger have the necessary permissions to do this)::

    $ twine upload dist/neo-0.X.Y.tar.gz

.. talk about readthedocs
make a release branch If you want to develop your own IO module ----------------------------------------- See :ref:`io_dev_guide` for implementation of a new IO. .. _Python: https://www.python.org .. _nose: https://nose.readthedocs.io/ .. _Setuptools: https://pypi.python.org/pypi/setuptools/ .. _tox: http://codespeak.net/tox/ .. _coverage: https://coverage.readthedocs.io/ .. _`PEP 8`: https://www.python.org/dev/peps/pep-0008/ .. _`issue tracker`: https://github.com/NeuralEnsemble/python-neo/issues .. _`Porting to Python 3`: http://python3porting.com/ .. _`NeuralEnsemble Google group`: https://groups.google.com/forum/#!forum/neuralensemble .. _reStructuredText: http://docutils.sourceforge.net/rst.html .. _Sphinx: http://www.sphinx-doc.org/ .. _numpy: https://numpy.org/ .. _quantities: https://pypi.org/project/quantities/ .. _PEP257: https://www.python.org/dev/peps/pep-0257/ .. _PEP394: https://www.python.org/dev/peps/pep-0394/ .. _PyPI: https://pypi.org .. _GitHub: https://github.com .. _`GitHub Repository`: https://github.com/NeuralEnsemble/python-neo/ .. _pep8: https://pypi.org/project/pep8/ .. _flake8: https://pypi.org/project/flake8/ .. _pyflakes: https://pypi.org/project/pyflakes/ neo-0.10.0/doc/source/developers_guide.rst.orig0000644000076700000240000002510413544336534022055 0ustar andrewstaff00000000000000================= Developers' guide ================= These instructions are for developing on a Unix-like platform, e.g. Linux or macOS, with the bash shell. If you develop on Windows, please get in touch. Mailing lists ------------- General discussion of Neo development takes place in the `NeuralEnsemble Google group`_. Discussion of issues specific to a particular ticket in the issue tracker should take place on the tracker. Using the issue tracker ----------------------- If you find a bug in Neo, please create a new ticket on the `issue tracker`_, setting the type to "defect". 
Choose a name that is as specific as possible to the problem you've found, and in the description give as much information as you think is necessary to recreate the problem. The best way to do this is to create the shortest possible Python script that demonstrates the problem, and attach the file to the ticket. If you have an idea for an improvement to Neo, create a ticket with type "enhancement". If you already have an implementation of the idea, create a patch (see below) and attach it to the ticket. To keep track of changes to the code and to tickets, you can register for a GitHub account and then set to watch the repository at `GitHub Repository`_ (see https://help.github.com/articles/watching-repositories/). Requirements ------------ <<<<<<< HEAD * Python_ 2.7, 3.4 or later * numpy_ >= 1.10.0 * quantities_ >= 0.12.1 * nose_ >= 1.1.2 (for running tests) * Sphinx_ (for building documentation) ======= * Python_ 2.7, 3.5 or later * numpy_ >= 1.7.1 * quantities_ >= 0.9.0 * nose_ >= 0.11.1 (for running tests) * Sphinx_ >= 0.6.4 (for building documentation) * (optional) tox_ >= 0.9 (makes it easier to test with multiple Python versions) >>>>>>> 81b676c0b72ad4ae7d05321e07d9d1e73480b074 * (optional) coverage_ >= 2.85 (for measuring test coverage) * (optional) scipy >= 0.12 (for MatlabIO) * (optional) h5py >= 2.5 (for KwikIO, NeoHdf5IO) * (optional) nixio (for NixIO) * (optional) pillow (for TiffIO) We strongly recommend you develop within a virtual environment (from virtualenv, venv or conda). It is best to have at least one virtual environment with Python 2.7 and one with Python 3.x. Getting the source code ----------------------- We use the Git version control system. The best way to contribute is through GitHub_. You will first need a GitHub account, and you should then fork the repository at `GitHub Repository`_ (see http://help.github.com/fork-a-repo/). 
To get a local copy of the repository:: $ cd /some/directory $ git clone git@github.com:/python-neo.git Now you need to make sure that the ``neo`` package is on your PYTHONPATH. You can do this either by installing Neo:: $ cd python-neo $ python setup.py install $ python3 setup.py install (if you do this, you will have to re-run ``setup.py install`` any time you make changes to the code) *or* by creating symbolic links from somewhere on your PYTHONPATH, for example:: $ ln -s python-neo/neo $ export PYTHONPATH=/some/directory:${PYTHONPATH} An alternate solution is to install Neo with the *develop* option, this avoids reinstalling when there are changes in the code:: $ sudo python setup.py develop or using the "-e" option to pip:: $ pip install -e python-neo To update to the latest version from the repository:: $ git pull Running the test suite ---------------------- Before you make any changes, run the test suite to make sure all the tests pass on your system:: $ cd neo/test With Python 2.7 or 3.x:: $ python -m unittest discover $ python3 -m unittest discover If you have nose installed:: $ nosetests At the end, if you see "OK", then all the tests passed (or were skipped because certain dependencies are not installed), otherwise it will report on tests that failed or produced errors. To run tests from an individual file:: $ python test_analogsignal.py $ python3 test_analogsignal.py Writing tests ------------- You should try to write automated tests for any new code that you add. If you have found a bug and want to fix it, first write a test that isolates the bug (and that therefore fails with the existing codebase). Then apply your fix and check that the test now passes. To see how well the tests cover the code base, run:: $ nosetests --with-coverage --cover-package=neo --cover-erase Working on the documentation ---------------------------- All modules, classes, functions, and methods (including private and subclassed builtin methods) should have docstrings. 
Please see `PEP257`_ for a description of docstring conventions. Module docstrings should explain briefly what functions or classes are present. Detailed descriptions can be left for the docstrings of the respective functions or classes. Private functions do not need to be explained here. Class docstrings should include an explanation of the purpose of the class and, when applicable, how it relates to standard neuroscientific data. They should also include at least one example, which should be written so it can be run as-is from a clean newly-started Python interactive session (that means all imports should be included). Finally, they should include a list of all arguments, attributes, and properties, with explanations. Properties that return data calculated from other data should explain what calculation is done. A list of methods is not needed, since documentation will be generated from the method docstrings. Method and function docstrings should include an explanation for what the method or function does. If this may not be clear, one or more examples may be included. Examples that are only a few lines do not need to include imports or setup, but more complicated examples should have them. Examples can be tested easily using the iPython `%doctest_mode` magic. This will strip >>> and ... from the beginning of each line of the example, so the example can be copied and pasted as-is. The documentation is written in `reStructuredText`_, using the `Sphinx`_ documentation system. Any mention of another Neo module, class, attribute, method, or function should be properly marked up so automatic links can be generated. The same goes for quantities or numpy. To build the documentation:: $ cd python-neo/doc $ make html Then open `some/directory/python-neo/doc/build/html/index.html` in your browser. Committing your changes ----------------------- Once you are happy with your changes, **run the test suite again to check that you have not introduced any new bugs**. 
It is also recommended to check your code with a code checking program, such as `pyflakes`_ or `flake8`_. Then you can commit them to your local repository:: $ git commit -m 'informative commit message' If this is your first commit to the project, please add your name and affiliation/employer to :file:`doc/source/authors.rst` You can then push your changes to your online repository on GitHub:: $ git push Once you think your changes are ready to be included in the main Neo repository, open a pull request on GitHub (see https://help.github.com/articles/using-pull-requests). Python version -------------- Neo core should work with both Python 2.7 and Python 3 (version 3.5 or newer). Neo IO modules should ideally work with both Python 2 and 3, but certain modules may only work with one or the other (see :doc:`install`). So far, we have managed to write code that works with both Python 2 and 3. Mainly this involves avoiding the ``print`` statement (use ``logging.info`` instead), and putting ``from __future__ import division`` at the beginning of any file that uses division. If in doubt, `Porting to Python 3`_ by Lennart Regebro is an excellent resource. The most important thing to remember is to run tests with at least one version of Python 2 and at least one version of Python 3. There is generally no problem in having multiple versions of Python installed on your computer at once: e.g., on Ubuntu Python 2 is available as `python` and Python 3 as `python3`, while on Arch Linux Python 2 is `python2` and Python 3 `python`. See `PEP394`_ for more on this. Using virtual environments makes this very straightforward. Coding standards and style -------------------------- All code should conform as much as possible to `PEP 8`_, and should run with Python 2.7, and 3.5 or newer. You can use the `pep8`_ program to check the code for PEP 8 conformity. You can also use `flake8`_, which combines pep8 and pyflakes. 
However, the pep8 and flake8 programs do not check for all PEP 8 issues. In particular, they do not check that the import statements are in the correct order.

Also, please do not use ``from xyz import *``. This is slow, can lead to name conflicts, and makes life difficult for code analysis software.

Making a release
----------------

.. TODO: discuss branching/tagging policy.

Add a section in :file:`/doc/source/whatisnew.rst` for the release.

First check that the version string (in :file:`neo/version.py`) is correct.

To build a source package::

    $ python setup.py sdist

Tag the release in the Git repository and push it::

    $ git tag
    $ git push --tags origin
    $ git push --tags upstream

To upload the package to `PyPI`_ (currently Samuel Garcia, Andrew Davison, Michael Denker and Julia Sprenger have the necessary permissions to do this)::

    $ twine upload dist/neo-0.X.Y.tar.gz

.. talk about readthedocs
.. make a release branch

If you want to develop your own IO module
-----------------------------------------

See :ref:`io_dev_guide` for implementation of a new IO.


.. _Python: http://www.python.org
.. _nose: http://somethingaboutorange.com/mrl/projects/nose/
.. _Setuptools: https://pypi.python.org/pypi/setuptools/
.. _tox: http://codespeak.net/tox/
.. _coverage: http://nedbatchelder.com/code/coverage/
.. _`PEP 8`: http://www.python.org/dev/peps/pep-0008/
.. _`issue tracker`: https://github.com/NeuralEnsemble/python-neo/issues
.. _`Porting to Python 3`: http://python3porting.com/
.. _`NeuralEnsemble Google group`: http://groups.google.com/group/neuralensemble
.. _reStructuredText: http://docutils.sourceforge.net/rst.html
.. _Sphinx: http://sphinx.pocoo.org/
.. _numpy: http://numpy.scipy.org/
.. _quantities: http://pypi.python.org/pypi/quantities
.. _PEP257: http://www.python.org/dev/peps/pep-0257/
.. _PEP394: http://www.python.org/dev/peps/pep-0394/
.. _PyPI: http://pypi.python.org
.. _GitHub: http://github.com
.. _`GitHub Repository`: https://github.com/NeuralEnsemble/python-neo/
.. _pep8: https://pypi.python.org/pypi/pep8
.. _flake8: https://pypi.python.org/pypi/flake8/
.. _pyflakes: https://pypi.python.org/pypi/pyflakes/

neo-0.10.0/doc/source/examples.rst

****************
Examples
****************

.. currentmodule:: neo

Introduction
=============

A set of examples in :file:`neo/examples/` illustrates the use of Neo classes.

.. literalinclude:: ../../examples/read_files_neo_io.py

.. literalinclude:: ../../examples/read_files_neo_rawio.py

.. literalinclude:: ../../examples/simple_plot_with_matplotlib.py

neo-0.10.0/doc/source/grouping.rst

*************************
Grouping and linking data
*************************

Migrating from ChannelIndex/Unit to ChannelView/Group
=====================================================

While the basic hierarchical :class:`Block` - :class:`Segment` structure of Neo has remained unchanged since the inception of Neo, the structures used to cross-link objects (for example, to link a signal to the spike trains derived from it) have undergone changes, in an effort to find an easily understandable and usable approach.

Below we give some examples of how to migrate from :class:`ChannelIndex` and :class:`Unit`, as used in Neo 0.8, to the new classes :class:`Group` and :class:`ChannelView` introduced in Neo 0.9.

Note that Neo 0.9 supports the new and old API in parallel, to facilitate migration. IO classes in Neo 0.9 can read :class:`ChannelIndex` and :class:`Unit` objects, but do not write them. :class:`ChannelIndex` and :class:`Unit` will be removed in Neo 0.10.0.

Examples
--------

A simple example with two tetrodes. Here the :class:`ChannelIndex` was not being used for grouping, simply to associate a name with each channel.
Using :class:`ChannelIndex`::

    import numpy as np
    from quantities import kHz, mV
    from neo import Block, Segment, ChannelIndex, AnalogSignal

    block = Block()
    segment = Segment()
    segment.block = block
    block.segments.append(segment)

    for i in (0, 1):
        signal = AnalogSignal(np.random.rand(1000, 4) * mV,
                              sampling_rate=1 * kHz)
        segment.analogsignals.append(signal)
        chx = ChannelIndex(name=f"Tetrode #{i + 1}",
                           index=[0, 1, 2, 3],
                           channel_names=["A", "B", "C", "D"])
        chx.analogsignals.append(signal)
        block.channel_indexes.append(chx)

Using array annotations, we annotate the channels of the :class:`AnalogSignal` directly::

    import numpy as np
    from quantities import kHz, mV
    from neo import Block, Segment, AnalogSignal

    block = Block()
    segment = Segment()
    segment.block = block
    block.segments.append(segment)

    for i in (0, 1):
        signal = AnalogSignal(np.random.rand(1000, 4) * mV,
                              sampling_rate=1 * kHz,
                              channel_names=["A", "B", "C", "D"])
        segment.analogsignals.append(signal)

Now a more complex example: a 1x4 silicon probe, with a neuron on channels 0, 1, 2 and another neuron on channels 1, 2, 3. We create a :class:`ChannelIndex` for each neuron to hold the :class:`Unit` object associated with this spike sorting group. Each :class:`ChannelIndex` also contains the list of channels on which that neuron spiked.
::

    import numpy as np
    from quantities import ms, mV, kHz
    from neo import Block, Segment, ChannelIndex, Unit, SpikeTrain, AnalogSignal

    block = Block(name="probe data")
    segment = Segment()
    segment.block = block
    block.segments.append(segment)

    # create 4-channel AnalogSignal with dummy data
    signal = AnalogSignal(np.random.rand(1000, 4) * mV,
                          sampling_rate=10 * kHz)

    # create spike trains with dummy data
    # we will pretend the spikes have been extracted from the dummy signal
    spiketrains = [
        SpikeTrain(np.arange(5, 100) * ms, t_stop=100 * ms),
        SpikeTrain(np.arange(7, 100) * ms, t_stop=100 * ms)
    ]
    segment.analogsignals.append(signal)
    segment.spiketrains.extend(spiketrains)

    # assign each spiketrain to a neuron (Unit)
    units = []
    for i, spiketrain in enumerate(spiketrains):
        unit = Unit(name=f"Neuron #{i + 1}")
        unit.spiketrains.append(spiketrain)
        units.append(unit)

    # create a ChannelIndex for each unit, to show which channels the spikes come from
    chx0 = ChannelIndex(name="Channel Group 1", index=[0, 1, 2])
    chx0.units.append(units[0])
    chx0.analogsignals.append(signal)
    units[0].channel_index = chx0

    chx1 = ChannelIndex(name="Channel Group 2", index=[1, 2, 3])
    chx1.units.append(units[1])
    chx1.analogsignals.append(signal)
    units[1].channel_index = chx1

    block.channel_indexes.extend((chx0, chx1))

Using :class:`ChannelView` and :class:`Group`::

    import numpy as np
    from quantities import ms, mV, kHz
    from neo import Block, Segment, ChannelView, Group, SpikeTrain, AnalogSignal

    block = Block(name="probe data")
    segment = Segment()
    segment.block = block
    block.segments.append(segment)

    # create 4-channel AnalogSignal with dummy data
    signal = AnalogSignal(np.random.rand(1000, 4) * mV,
                          sampling_rate=10 * kHz)

    # create spike trains with dummy data
    # we will pretend the spikes have been extracted from the dummy signal
    spiketrains = [
        SpikeTrain(np.arange(5, 100) * ms, t_stop=100 * ms),
        SpikeTrain(np.arange(7, 100) * ms, t_stop=100 * ms)
    ]
    segment.analogsignals.append(signal)
    segment.spiketrains.extend(spiketrains)

    # assign each spiketrain to a neuron (now using Group)
    units = []
    for i, spiketrain in enumerate(spiketrains):
        unit = Group([spiketrain], name=f"Neuron #{i + 1}")
        units.append(unit)

    # create a ChannelView of the signal for each unit, to show which channels
    # the spikes come from, and add it to the relevant Group
    view0 = ChannelView(signal, index=[0, 1, 2], name="Channel Group 1")
    units[0].add(view0)
    view1 = ChannelView(signal, index=[1, 2, 3], name="Channel Group 2")
    units[1].add(view1)

    block.groups.extend(units)

Now each putative neuron is represented by a :class:`Group` containing the spike trains of that neuron and a view of the signal selecting only those channels from which the spikes were obtained.
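The channel selection that a :class:`ChannelView` represents can be pictured with plain NumPy indexing. This is a conceptual sketch only, using a bare array in place of the :class:`AnalogSignal`; ``view_index`` stands in for the view's ``index`` argument:

```python
import numpy as np

# dummy 4-channel signal: 1000 samples x 4 channels
signal = np.random.rand(1000, 4)

# the channels covered by "Channel Group 1" in the example above
view_index = [0, 1, 2]

# a ChannelView conceptually selects these columns of the signal,
# without copying or modifying the underlying signal object
selected = signal[:, view_index]
assert selected.shape == (1000, 3)
```

Because the view stores only the parent signal and the index list, the same signal can participate in many overlapping groups (as with channels 1 and 2 above) without duplication.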
v4H2d] *HBLRŲ(JᰄRm~Dv\:lXpM@NE/B@깡ECvF:$PR%IfQT)XAKUbIm5Juh=a`Wd`j撴;xϚ5򳣪H62Qås yG"N G[K9]ozNQH"|87RG6+bě㮢,?vxeŒ,(j/[Iqtgrtkkso`ՠs&X]IК֧c,x{$o*ﺘ4*.g1xZ/ 1#ՠ  ?.W(JR|W= 5R}EiXDo8so~0}Ճr]&!NYƓ f-x$YMdz#" (BY%B\Ϧ !%\GൿO;G!9x<cw,T"779w @pjq^wP@ƂyL >ۗvBcs0,EQJ- =yMCb& TM#C aI*3h{>7㑐xk7 归;EI*@< !o~{-NbJRIY%dobJZ-YOpe946x$I EI*-/V|S3ld4Y%DzhސƼ44yY$IY%ITm20udzh7UwF pq!FH=yFp?q>/jǨfo`=8sR dQFC}_$*V`3$ #w_²|ݷRsg ;(J#iK,ȴǗ$i0$I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$ $I (J$m K3$I*Ʊ)$IR ai$I$I$I$I$I$I$I$I$I$I$I$I$I$I$IJ&G IENDB`neo-0.10.0/doc/source/images/generate_diagram.py0000644000076700000240000001726714066374330022143 0ustar andrewstaff00000000000000""" This generate diagram in .png and .svg from neo.core Author: sgarcia """ from datetime import datetime import numpy as np import quantities as pq from matplotlib import pyplot from matplotlib.patches import Rectangle, ArrowStyle, FancyArrowPatch from matplotlib.font_manager import FontProperties from neo.test.generate_datasets import fake_neo line_heigth = .22 fontsize = 10.5 left_text_shift = .1 dpi = 100 def get_rect_height(name, obj): ''' calculate rectangle height ''' nlines = 1.5 nlines += len(getattr(obj, '_all_attrs', [])) nlines += len(getattr(obj, '_single_child_objects', [])) nlines += len(getattr(obj, '_multi_child_objects', [])) return nlines * line_heigth def annotate(ax, coord1, coord2, connectionstyle, color, alpha): arrowprops = dict(arrowstyle='fancy', # ~ patchB=p, shrinkA=.3, shrinkB=.3, fc=color, ec=color, connectionstyle=connectionstyle, alpha=alpha) bbox = dict(boxstyle="square", fc="w") a = ax.annotate('', coord1, coord2, # xycoords="figure fraction", # textcoords="figure fraction", ha="right", va="center", size=fontsize, arrowprops=arrowprops, bbox=bbox) a.set_zorder(-4) def calc_coordinates(pos, height): x = pos[0] y = pos[1] + height - line_heigth * .5 return pos[0], y def generate_diagram(rect_pos, rect_width, figsize): rw = rect_width 
    fig = pyplot.figure(figsize=figsize)
    ax = fig.add_axes([0, 0, 1, 1])

    all_h = {}
    objs = {}
    for name in rect_pos:
        objs[name] = fake_neo(name, cascade=False)
        all_h[name] = get_rect_height(name, objs[name])

    # draw connections
    color = ['c', 'm', 'y']
    alpha = [1., 1., 0.3]
    for name, pos in rect_pos.items():
        obj = objs[name]
        relationships = [getattr(obj, '_single_child_objects', []),
                         getattr(obj, '_multi_child_objects', []),
                         getattr(obj, '_child_properties', [])]

        for r in range(3):
            for ch_name in relationships[r]:
                if ch_name not in rect_pos:
                    continue
                x1, y1 = calc_coordinates(rect_pos[ch_name], all_h[ch_name])
                x2, y2 = calc_coordinates(pos, all_h[name])
                if r in [0, 2]:
                    x2 += rect_width
                    connectionstyle = "arc3,rad=-0.2"
                elif y2 >= y1:
                    connectionstyle = "arc3,rad=0.7"
                else:
                    connectionstyle = "arc3,rad=-0.7"

                annotate(ax=ax, coord1=(x1, y1), coord2=(x2, y2),
                         connectionstyle=connectionstyle,
                         color=color[r], alpha=alpha[r])

    # draw boxes
    for name, pos in rect_pos.items():
        htotal = all_h[name]
        obj = objs[name]
        allrelationship = list(getattr(obj, '_child_containers', []))

        rect = Rectangle(pos, rect_width, htotal,
                         facecolor='w', edgecolor='k', linewidth=2.)
        ax.add_patch(rect)

        # title green
        pos2 = pos[0], pos[1] + htotal - line_heigth * 1.5
        rect = Rectangle(pos2, rect_width, line_heigth * 1.5,
                         facecolor='g', edgecolor='k', alpha=.5, linewidth=2.)
ax.add_patch(rect) # single relationship relationship = getattr(obj, '_single_child_objects', []) pos2 = pos[1] + htotal - line_heigth * (1.5 + len(relationship)) rect_height = len(relationship) * line_heigth rect = Rectangle((pos[0], pos2), rect_width, rect_height, facecolor='c', edgecolor='k', alpha=.5) ax.add_patch(rect) # multi relationship relationship = list(getattr(obj, '_multi_child_objects', [])) pos2 = (pos[1] + htotal - line_heigth * (1.5 + len(relationship)) - rect_height) rect_height = len(relationship) * line_heigth rect = Rectangle((pos[0], pos2), rect_width, rect_height, facecolor='m', edgecolor='k', alpha=.5) ax.add_patch(rect) # necessary attr pos2 = (pos[1] + htotal - line_heigth * (1.5 + len(allrelationship) + len(obj._necessary_attrs))) rect = Rectangle((pos[0], pos2), rect_width, line_heigth * len(obj._necessary_attrs), facecolor='r', edgecolor='k', alpha=.5) ax.add_patch(rect) # name if hasattr(obj, '_quantity_attr'): post = '* ' else: post = '' ax.text(pos[0] + rect_width / 2., pos[1] + htotal - line_heigth * 1.5 / 2., name + post, horizontalalignment='center', verticalalignment='center', fontsize=fontsize + 2, fontproperties=FontProperties(weight='bold'), ) # relationship for i, relat in enumerate(allrelationship): ax.text(pos[0] + left_text_shift, pos[1] + htotal - line_heigth * (i + 2), relat + ': list', horizontalalignment='left', verticalalignment='center', fontsize=fontsize, ) # attributes for i, attr in enumerate(obj._all_attrs): attrname, attrtype = attr[0], attr[1] t1 = attrname if (hasattr(obj, '_quantity_attr') and obj._quantity_attr == attrname): t1 = attrname + '(object itself)' else: t1 = attrname if attrtype == pq.Quantity: if attr[2] == 0: t2 = 'Quantity scalar' else: t2 = 'Quantity %dD' % attr[2] elif attrtype == np.ndarray: t2 = "np.ndarray %dD dt='%s'" % (attr[2], attr[3].kind) elif attrtype == datetime: t2 = 'datetime' else: t2 = attrtype.__name__ t = t1 + ' : ' + t2 ax.text(pos[0] + left_text_shift, pos[1] + htotal - 
                    line_heigth * (i + len(allrelationship) + 2),
                    t,
                    horizontalalignment='left',
                    verticalalignment='center',
                    fontsize=fontsize,
                    )

    xlim, ylim = figsize
    ax.set_xlim(0, xlim)
    ax.set_ylim(0, ylim)
    ax.set_xticks([])
    ax.set_yticks([])
    return fig


def generate_diagram_simple():
    figsize = (18, 12)
    rw = rect_width = 3.
    bf = blank_fact = 1.2
    rect_pos = {
        # col 0
        'Block': (.5 + rw * bf * 0, 4),
        # col 1
        'Segment': (.5 + rw * bf * 1, .5),
        'Group': (.5 + rw * bf * 1, 6.5),
        # col 2: not done for now, too complicated with our object generator
        # 'ChannelView': (.5 + rw * bf * 2, 5),
        # col 2.5
        'ImageSequence': (.5 + rw * bf * 2.5, 3.0),
        'SpikeTrain': (.5 + rw * bf * 2.5, 0.5),
        # col 3
        'IrregularlySampledSignal': (.5 + rw * bf * 3, 9),
        'AnalogSignal': (.5 + rw * bf * 3, 7.),
        # col 4
        'Event': (.5 + rw * bf * 4, 3.0),
        'Epoch': (.5 + rw * bf * 4, 1.0),
    }
    # todo: add ImageSequence, RegionOfInterest
    fig = generate_diagram(rect_pos, rect_width, figsize)
    fig.savefig('simple_generated_diagram.png', dpi=dpi)
    fig.savefig('simple_generated_diagram.svg', dpi=dpi)


if __name__ == '__main__':
    generate_diagram_simple()
    pyplot.show()
neo-0.10.0/doc/source/images/multi_segment_diagram.png
[binary PNG image data omitted]
и'}M\,Y9Qm"T1 1s5AzGJ0S`Feb,9hC>~#koY1:TȨ+TKHvWAP PbQIBm2GjD&uNجi p%:5JxS!´_DWa#>|Hdm{`V?pOdA͇9EaϞv=āݞAKE"Rh?wj]SUF"`LSzWb]cbRVVgCpz2gU?p2aծ#y:UᆄnuuĨwe=>KXd+ p4j!A0*\|~)T/xN a5pNJ,T8K7[,sG-ww-Z[XPw yaܽv=-5D :^ (K6}_.kU>/{6Ϸ8{=SĵG,XUR۬Jͩ^%8f[Ll#;Wp Q6oYrԅg05#j; N;ͪ*|BL&Ē,V%7ƝnJU6TH9"?8IۊUPSZ`:_='D)1Rzio{{ۃY֫ O2 ~A8 17թ{{ 'b)F*J !:5#Y $nӁf՛};q]7v1]c zI.]tˠ,N+le 0"BJ٣_lWZOLjj-; 65%#|B6Zwge{@Smp5"uth7ýR벗!(v&)%=ΥrbMj_ymR`89Dkw,Cnʬ/T_RySSCw _(=^p ֽlzVݸ=+@VOJo^}S~8 NpMׄ.;;ǫV` 5y`Tu6S"r iU]*|vl^jGwƒZ%&tť jgR}%F±zUU`7bw '9#VzdI`˹@i_;;z ;w3]\aNj$;pHbB,ze-{1CiKUz)ؘfz€RVJ½;CfgB-i3KnѶcJꂤ*vb3gu% Ur:iU73UfK#";.F5U..MbhԁG~R2AGwsBu mN^O/ ;{հ%= bWY"3pP{Y%=bj-!T5^H 8Ybݬ]0ƥt03S}ϣ'#4JW< <ڗqXLKpFc1 4ۉ{i])1jvt+G1{06 !ldjCS} T=%TtU 7|S۾AVa\U L^N6N{!FmI>Oc{`Pa'8 8 !n *cLs3k:zzz:Hћ NYMTj'ոIpP)Vn@gtNPdxl;)+v1jvyf43̑#Gp893aBl|ky.FS{ԌP8wc[A p ^`9:% Hu]{ (| Wb#Gf)U.(x0TryŮ~HW@lP}psƆ7r#GU;$78ذF]9z6)'G"bvÐs\\Z\ {J#GoSs=o +yȩ#u/g9rBR܎']}9~1G5#G9rgX9r }a]b#Gְ3}9n4WMi6ȷY;=rȑcXv)9r B.fŸ;rHu,sč";>99G;Es]26 d>Àân#N "G^ˑbp;W*pf{0GH sA^^vhOuF*pwWnawm'.FN!D> Q 0nSNѥ'/?p3p"+"2S9/^pLWn=Q Ϸ1Ev#s!]}!Ǟjua@@ߗ_M < )^/K}\ ja,Q8Od70\6,vbǁG. WHP^|QQ9Ls 1=] @.>w/oIVU^T#4aNcHdxn{,)@ԋp#(M#a>ގ(-ߥ3j0.O]55Y7aY]}"!|` *(>Xu'PR+6P`#u@ARt<9d݋;9zU0PN D4G|`1xmsS$ X[<녈)DLC}=B9vʨaQtՏ{XȨ>CJI͇]O) P?csM>9rX.(Adf{"2G亹"턣Mwj[?cN9Ǘcb4qm1=uw<$x>wz's;x:xgV';r]9##؁=%#+f[H䞹"hI-.y} ^?si(pnڸ U_^+v51XkDJz_u:_TD?<,$}!>a8P/Hl{ctQNŨ 97/gFLn1waN, |0IXȒu{ILLĪ  lzDJw'c ^q3BյN@EFv AuȔHHՆl< h'cIUG̑ o GBúx m@ҭ9 95LWcwQ DXu tWDz('6')1":7y1,蘣7dOʽA+BյrCz|`ۥ.<4`sJpJzy"{9-ډ- 쾃w:͖iOLv^Ѿވ(o_Ol[iN| 'Ix>/A@3Q[Ճt8JӠ74|vv1 z/Bǀ1on.0"1} ]5x?z 矣@^ y aX ~o< B5]G> V``粬_Yi0 UT1p U]J5%q/JeX?CzZB z ;K)<1yE;uo:T zJSrH:W;= 4& dpIQ 'SwEwvqQ #_v֯A˭ίA@a:,Iۈ t9F+v ^z Qѽu6,o @2> MQ 並__,9 vo;؜/P6b*vڦc1CUqOkb]_1ÐȰ"gԤ^y\O[_3Kݡ16j_ڱOQb* +c 4P}nkD2cV6N |)T]OD!F0C@ +*,A6G}{IB6 Q`Tf<_oݕ,=TҳaUQO@Ox8KzER7n>g{b̍iQqG@%2W y`{0c D kxaQ^z3pB7}(7P'`XV ''GW1Fffcs֋ٞTL ly/0ЀCF>WE"DInK;Iv`#^LԷXE1J`7F|EY$0XsEzW|JSU+J "H+KnpZȻqq;â.I ,F,P+;'$;gUany]O)y~t'On' 3@ׯ[˓ânl. 
@p: =螪iju+a{]_W|mo @EҰ_vEރw$^ "{ANXE/V]MX[2^-Gc*NGmcF NC?[=nèo!Xxؑ_'%d<]UnUl7@$fz0& Һ5B`F9a'3+ ȅ`1_S"-(6sFZP0s.Jwo`GSyEv'Kl(]8 vZ`;;9 beج ׊'R9߂M@`7G,T&qes2"ujDaUӋw.PmlRpC؊* DvGfr}_aâ@>)A]{{,6%xwY`%1K뇰{r6RPr^_\%0y7aTҗ'(xn1wÒmӰZކQl[¢~8,"L) 'Y1pb;kry8L9w%j́=PKݛx _UVx7sZ:7;S8S;iLwjVQ[9* o|lc_lQݖ{ܕrlXHka9u=%2A8kH:~Q#T1cl犼WԂɒNKTx&/-c*p=pk4/Y!0_i3 v&qIqFgd+ARJU"p ˲ y`W?^ O[ݾIu9^2id } $,/¢),jȣG-v/#4`!ð }OVEѓ/FL*?Kl>O3* fKyDOSb@$ *þK}`wnHPf(3g`3[L_o i3mkų &IsUE %p3<m`[cᕏU% ,*|:hNaR;uH vQ2 )M=S߭1 `ԛLrLa &0_H%Kkb;$ع@s/X(}b{Kb'F.$.?X.{PSo(o}Ǯ?(ގ{13*aF(pwHoaAMWm2- &a> X`{_ߍ97 w%c1=aQu{Z8)Ƭ ϯ 4~fƽ/Kmwb?=cԍ<+G-*f9%:?̀mԦwj,K6hK\%eac P􊎞ѕWRۮU9 uf$zA[ȑQ wp6M,kNw+_L`5\KԧU/9,L_,bA{y~ v 90=' x'⨅ KJM~&r2p-`HdY:1:;uqZxpU:wv 0-sF8[IbgZ[@qμ;¦^S߇+.T} {2@Vd@(mG-xu)Z+vWt` ܡ>(`o/nKzӳmz+S>XWEݎj3xXw8[xpϦH;ގovS,5HJL ǮGO;[4X>ⰨJ j*EYc@i@XxF caQ$+C 'Qs$%xRkD-]9]\ ]y"lâ>57&sok}\JSe5B ¢ sx ^x?`A C6oŚ}?߅آ8;,aQEu)$tӀE}; oL*9@LdoGX/Yafp(Yקmlv>v}8SbV/ ̞#j):nU:>MNDƊu)(S_XD[7PGt5v@ܟsQKj=n*_c '\ne*zӱA` nV1-\@GI[^YcU?';Z0Lb:R[DwcYoQrR⩯Y\`F^"{-_i YY_:=!~,E6>>0|bd@Ni?ِ:g#ȪXc",ꚰ 'c uv ZS>u>jMx9; Zn6bI ؚyG5Pn!NTQ`e8JZ2UF q(yE=;,i2 |E}49bQ {cu,ǰ@2ΩA{S &W3c\gb[$qp@X'310=w;&3*60cx̗i@uVuަ "S~ ,N((V49Ί_wm%Ke/pVna]j U/U UqXFuZ~> E >SpVrDD{=xV*Nǂ|`8' ivOw*U3.jQ\R/OWI*b߫H*?]/2E,q]C-.h}3M^fmlț@.-cT0sH)Kv~V[*GWy1X/9Z@H#ġ膰pյpJ/b6p8Z\~{8Ӗ@~ ?p#>6itf7'%?c)y}kbjغu(ua3VX[+_opuւ}DUݭ &C}՚4m IDATaQWSNF20),j2#[ 0'/\. d (>d, :(7pQK",aQâ>_8>q"nZ!9 @eaUAG[̯"'`N 3aQ4*og,JZ¾,c#ICtz_eU07tں.gA[P]A Qdt$2sIηs:Yp4`]n8K,8-q?P#X!p>CCP4:o( Kl|hN'pؽST8zZ,>)27& & ,rQ08}ʿG;XhwpY N> &lbE]'|է]uM8 p``f* ^QR 1۱deXNx-x=xd3KȼLS$]o~Q!  
lfxAm`Zk_R#g@,P861 ?f,PX͐(3\9KR:wkx Z\_EM‹w1Z)X\xW]9jU cփct?&A'5XoX <bSk{l;fVӵ7Jkh{m)}P l3<v 9, 'dR0=LXlۯZ*p#{jUX%uXH;W \@<3o0;F;XŰfH6{c' d J_CvA!J }O$,(}5X)y:'C5hBLb& ?~VAW)F[w^JMhIR}^Qf8j'Ᶎ^`FՏq+-Iܚ=}u-nQzUb+i9U_hJNnY9GwjGRgpfM۹X7 U7:1_oֲA$`C;`1J B2iIa3%}+~pRHӈF+v/%*h{v*V5TbFâVR;=ドC4mDJ@G1 )mό{K/ÒLor5^`XL]a>f,lXAӷGԎAv]x# pߵ1 ,=HaQ TbZ|?Xo46i_y:h,)>G)M`Bf`FڰX c׽%^r`/)R\09`jwKvFXԍăGa4}'U+݌9ə}> Q*H\{w p'^v}sƜ;m!ArU.hAYW'Udq4oVep,%j8Uukzv' %8V}yU6Y"BDݽm&RBL1<Zd5S>)P>:7X /W;#'.#2'oZlG2z"{}Ӷ̭hVAGjpsIv{{`7 KemnAuv|=?6R>n:q͟˴D Er,,ɑč'ߍ \X#hUvt;x8*-3iwc@è>?̾<%|?jv0;08iso} kyލH?^-#J{FQOĮxfLOwցb4`w4"11XN'3 UAlʳTIx0#c[==Y0)/cZ:f~x{cOE7=͐(k0Jarǜ>OJ9ߣ4XuM`T{=7m<^ ud3% s:f/R])𿹯5DLS08_m0YWYU7,q0R[ν]pAAߖB_t\TwP-e%$MK叉# Nvݤ9ڋ̕G7:H6Q+jt߿OJ/@`'D29*z XX1s粎{Cȣ06F2=Ķq.)LT[.ʱJWC>:7U;ߘ 뉀  z?:@lIxƶz0b'zjh *Q8ypR>KfL[%V~T9${/&32v( Q ~T،%j6n7sրrCQ*T)tA-WQO)BoU8_uQzhXNd \PܛU?=KPKSr B[Ĕ { uɮ7 -[9+z\ZXrU}_=>Ӭt|ŎseY(|U?KV]!m%ks,8Cgc TLv<,3uĴh0oS$.ٌ_B[CpSQ @)5f(h8Hf K&0`mtµ=kγX8Z;(̯&obׅ0¢~6,[R=MXn+_[,&PTc<1^sQ ojN8aQ5C*IXd?VbV,ˏ`g]KBQ G\ +5L_±-!nWO%P`ǜ:ԇQ W^6IgA\gd$ Tx]E-E$mZ ª-f(v.x\p7Rb X \93Ys V:4Yd@K矰x`5 p ,X~U"}ͰBMJLY09TUGP%Q-oq꺌{{>)N!윷g,5;ۼYYXԟEw%(3=Bl8؎YwZ r{]ztOUc. }V-,YikYƽJx hHXԏE} =:vKvob}[}hOľJT|Oހ0_N%\wWw0Ov&-jVo#t+O Ɔ\< \5̀SV:3,_ |, d<=g8:YX8i~`\,?ө 4}cb|Y1ct$ $kel¯k>KE',j=Y;d"~L`BH2PaQ?zo&Invk=1 &P_V`N=aQo zaXKâޅM_#v&j ]I]ex棪ڐ8u/baQVKn.SXQ~" _E lr1D+LgcD@zD݌~t^`;?s3NCI GH[JDl!(.:]hR/%49, sb߳Ie.HƩApNsQo >uIxGXaQ g[[``g띚9HVAs4\ _T8:XãUnW)E*B'Q, ԉ,4Mpͪ @᳎Fyd/1*\Əp^zvUgハ~ߤ+qX&bRIʊpAAꅛ"/лouϸJ}b=+qc`U~ؚ4mXԵXs C6, lR&q^YL' /&"1K~4OEV;=H+؍_$]3E}oy)x"qO(XXA,{/tvo_=dе(ci1,갨YT}(}}9,aQ5,İpϧ 7x}0P- 6weXԻkfNgw`;`@̙: 2@s55p;P&s֪&xk_'£q`vNַ/`k^ R*Ul<2AsUz36F%λ SNL"T]9{u]݉UFm]E߉%S ܚZ(v?<f՞MQ FjZ/ſm4xٟͬ_ H҂8c;05qX_aQ_y[xcοWZF-(DžE}2 X,HȚg;`MuAx8 X}d⩡WbE 5Ցx:Iu(0V U+%rPNTX+kb<֧xxwEm*Vc'vԦj_m ϠQB;8;O@h @#CY%R" ODQTQK@hRB -B$$8+Mr|2ͼSۄv PUKd;Ȃ:Md+ $s٭;9<2oYq±'맳 [ySvPh`WLkmZEÓx,R A;iXjXYA?j=Hb C[&Ke+wbAӖ53dTx}Ů3Vt@D F)B61BW]w~{jQJ+kQ|3hnn+LQA?/H2Y|T]gbU(_ 4}W?^E{`Xk^U1340{>ʔr,u}P{+`LW#X. 
W_RItL5nis9TuF%AEZU( ۩?b>`&c,B[%Q,WWUIk]1 ^님`9H5LhLL6l"Oﮘn7=c܌#+S.= P_(u L{=7h c汛p*lS39v"s[56 Jǃ:"0_Iȶ.px7;]ui'Zd 0?\F}@p^O-9liB( Ӫڥ,?V]osΑfN,ς^ݝwI 0|\] GϬ 9EYm KȒyf fbglmZzbHc<[aZM;;CZML[3βTgXfV\UUy" _H4ÜHޙǼv]tI9E4Iʽ!Ea&#1-0B*5}셰 sofc?V|rDkik1s Xz9T(N[ 8v `am-2 9ю x8ESZ~;aYc[Ryj,"vKJcXTkޘl3]} =>&E=3q.`W"4x\UiTӕ`,^[BɷHt(%厡[;s LLRkPڮsZqLn|/om7[|Bfk;S ゝSb&&}+U8?,r]g=IZT:]KzfUSZ24{KQ0ڮY`϶n4~aQ4 Iѹ/v]"$K׾ #_#X:ȗT_nyGH`b A[)#$RZHNxVkuTփdS.2f̎qM\Pc'030m_f=P*Ḥ{B߉(͘+шE35׉gȏ0wuC 9*̢5cEEz+vTs#FNS)n&> Zc!C|.0@|km]\鋙8sǩeqn.)ۮjmtھ( mW߹0t 1Әº(=.eSi.ۻ{ ], #TTcdr L= oWV,pQN q a8ky]VL{nE~Ņch\@U$ fTڮŵ`Nk{ Bvo[lCG88N6=^km++ty47y}q.O{.;gq%%)ywڮ&~խ;8NYԚ_YB)3;gq% v=^88NG$X|.rT~!ܿ,ND2qgdHLx8)b<᯳Դyoqi.9Y`O%)6;g1#pq888}$> qqq2`L֊ֈ888]1!`888Nq˔u2qqq\+܊888MbLqqi*.3);qqq v)gytqqi@RrT6m@qqqp<@qqq vy W v884 ʠOFqqq' *9f"ZCF888ʸL6b 888i\єOT{#888 Pi3eֈ888qrF)PDmHqqq`W9wUP1qqi8.U=2uqqq]e`888N,flWP U5 qq'Ymd!"#ÀO=|80x xHU˨;piX덈< |}S IDATRe&Zx]Uyp:[n v81"Q@U?jx҈^XUVǓDDVkSΣ!:a3"r*p oU} RiQ88pp49LZT\% Ȩ򆈜Wv]sB]q U =,Ddyl|w<&pU6pqAD&e]kOl<}آcu೔ϱ0DSsD~zglx̚-Nl]Ǫiۇm`PZHԈL_DVRSO'a /+O m[%M !`(`"2GUS`W:[DahrUgED/20o:11ak:"88; ޹^UǤlM[&`/ŪeB݀mDLjMB96">Ҵ,M m <ɡ݂Ffb fo` %̌p'PfeL 즪+ȷ0EkaF 28DU$ DDdUU-*\#)A4ݰC ADbBȖaj``ǰmۄ!h ®U/ foIFDyDŽ Ddpv pQ4F#كSt=uMђ1 LSؗ+؉H?,q-"kbVj*DsGY{wta!mb5u5W{0nV\60SvoBJTu uK0%ϟ1np0n&˿h{Ac^6PMׁ_DMߒ>ت7¾/IE5!6@w *Un $G:UUՎQ5VQUz5zwoi"A#0`mEd˂Oz"}&=)""6gx@-"@.|Ed8x"yuP." 
4yAD~gZ]+|.SPw }>3E| LD3<܃ "7>|I"rKyI8waV9}x%wݕZ.9NA|@pd "mIͦ.PGUTU$$:QcIPlX*W4iK: 󊠩@D#" "WDfsDmYD6i;DD~$"ti<%"?ekA@4 B"hEz]Dve4*"Dd0OD&=k<;ȅM&g99H8c+@wLPvl~ED㙘glos.14a)aA4~[> xH,8Sňș zj0!@੬gc#Ưxr& q:v-#X{aY"~9¬ ~ Ӊ{oK-+pv mO mVf[AuLk,LCt3^{1֥ WN-øf%p6bSlڞ=DXXs~Js!9KpegRmF 3#v/x]9 \ U= *E݃91`Gwt;%ip(0cN uþ 6eӎ&fy(9ߊ|g::lpCyx8KI aϊSB˩v%5vpjpg;+a>e3yz."XwgbDZgqMƳ{&(paßLk*NlX-KWp>M\7-:%wRe&vO`ٌ+bunFP(=GNK<ʮNÞ$TM=p4Qs{hRFáldc M'ss݀D<,@(6#Ș`󉩡]KFl:*mڜYn=o=ykݘ+hsts 5j^&m[Sm&c2O`$(9=n*'+fEs_Rf텙w3 {Q+)0 w`177dOo*|?76Umޙط&,iyD`Dژ`3SN~_p]Gg]^$!5A& l:( Din`z({S}&y&lI &1"np;Fl=tBFoa=gW3ڹ`׍U vX:Ŕ&{aLr Wx}()%~/$wDoFUpJJ o'dlrd v3&Q^ ׆3&.(`wF7Q*ܜqrc/L TDvk:ӆiL)} `9L$E_P0ssBˆ}ޔ_ W1hǜ .FAɧ 551Ӗ6L ހ嚘~NjńOm>bYl5sz>{ څ&U (Y:,.LArt쯪O%OVPQa*oja"HX^JlMzI[fDU}4n.{uL@ _&++a"M\ݫS70!ocaQ`>fTcU 0.gM-ӄߧkT<، 5"Q0WDK4aX ~1ܶm絚 D-yUk13ӱU 8`ܯh uM<ՁTtU}NUg} ^_U}SaR'?TD6 ; 9WU~Ʋ& 4aMT/SՓX7Yks6L(|KD 䀙\LVaSR\ ?^"\Wׅǡں&DdWlE6/}w;@: uOuc+ybB;V](䘪UAre(r,Zp$\ߨ8 "{eT{;gQ5 K"5^[bZX)𙹢uEqzA"ذUD6&`fKP]7Kt)v`$N*&NT/ǼcL1X 'bׇ9XQDd*6YeXL=V$kRH{"r)fܟR"J[xýUvc Zn\N2yNJpw.)pP8X7^D `!aEb+cIvəeX4dU"+\?2^A?V{ grT\YAXUKaHPZ=_an!$],wuE{]DA" |kE"^^D)"Gԁ(9)em*.䰚 ,s]Ս ߁Y<)3 ;3}ۀzGWp(MW҅"տm+7~`|>P)G;ʱe a6vx$wn8u^%d;B)^ÒG;]|>cW&{ng4Y "[`fY7soUlwQT0&Dh"@&or>Y nTH /"%2WH&'' }JIX`w#fU.rWE^,[YY< 9&" }um0[X 1An2yw7ĂMfb.¢Q3m?_`0`B S+?Ĭ|G0ꟺ䳔vhcs}|l'I^G~5i]J]! 
RkM7Y_UKz?K "2ҋ&2$~妫7!"|}Jf^ήІ|6sL)CBi󔴿խ)iDG"2/WLM|(h_1qUU5m|1f p)"Obx[S-ˊJ6I""ȟE ~ 3>\UgdJlaYMDX,B~VW˱o%M-"7`DyCDw騰y)VL9J_6&c¾c:I7 3'& VLo "&y.jaNXaQ72WVKԊ= ~C 쮪ߧ@)f8`, ؽ9>$>K8#|33_UO>9nĄ}0z > )ob(E-_܎5ǁػvJ׺õ IJ(i&%Zn\EuȤo=y ]A?T n)hPojaX$e46Q{4`*ffkGlX0YCN3ńB .6]ö|ݰoa8/0S18nf]>6.&0ۡ|T\CuViN^a/µʺGw=.?^c V{ܫb#6L ~.[RaSaoþ {p13MlsH/\9i l]0^;IXLQ,d`yH^ sXopNWI;e7>kh]#|P>-l'l3s\{-;ayo8rǀ^]C0h @n>_7M l5{p^,[كͷިPLޫ桂@QlŰhj6Kn31- ](w!?]?JHc oPvژ5;xq{KZG` xM5 9*9/[9ծ&.oV,%L{w-k3 uߣ v=x̞̭Ա߁nnr^8)LX`2tΎ㹭3fF*p\tr?lL1 =`<{kIUvqJ"5AǠE\G)aqt.5+ʱtUZX-0f00FUg峹 i~wcQ3SDU?&"`wUMXd )U|{v4kc|sh|=jffgȏBiH*/0KUfc+E{`NUmclGM141ZF9ED|xJi֎^ {XM # 88NphTDƂĹ³XFADƳ<2fpdPl9BŬcRVV^Ĕ7o4<kv|sWW_,qqq"N.gRaZ5)x9ZDVP7Uq%U]("bQ~7"F+sh,a<{fq@wyJp׸XRqq":워}Σv5&Ё9~?RUkxgBUQ{z;q7u6V v䢪a+ zfOz9! Hqgg|IDAT'g,`8NEhdM%q88 a=Jѧgi v8ΒȾ"Rq@x" ;S vA"2كp~`tƤfqゝ4,<Û=qi4jMtIENDB`neo-0.10.0/doc/source/images/multi_segment_diagram_spiketrain.png0000644000076700000240000017432514060430470025601 0ustar andrewstaff00000000000000PNG  IHDR O=- pHYs&?tEXtSoftwarewww.inkscape.org< IDATxwumr %B/Bt# Ҕ""`ҾBQDE MTARN(i]r3ۛmw[.㱏۝ٙ~"?Z,Yv ``_9ϭ݁a@?^ l]9=<~&pE^;^>q]E5zM_s(|꿥nxa0X{uW{DDDDDDVj5NT:hpk\.Frl?Ci 5~'qb-ZFN9u41Q<>3հuw\WtXOb:5(g9*%hcemmt|,ADDDDDD -5f?RJ 띓,p6@  0"Q^-#7n#l<OWSF>#cm <4샮֘Yik LKXWp[yo7Xqׂ҃HB Je:<\À}cbur ;֛o ^!] m#-=ָZ e|+RW+[-2-ǎشAXn1b ~n;RhܝgMX>C=ݑƆqFa *e~weH[-`?ҾW3r+&ϗ)yܗ{]^X!RtuxW-¦Oz'',a/XOu+DDDDDz1 ^/ v]iwG ֓^ u?~H:|1<54V M1j'(f=e ts{adžߊH@چ?ǦJ'v1 ,,{h7`KN=5%]ys0 >}`fQ˰!`?? 1p=^e \f`$`PfR Ǧ+9OWzck`XvC1qqG..X΄Acc3|5u O;6 C*[Ėߦr˽ZdFljˤmz6o;6hIJ[ kP  Yi*1<^ ^jp16d`&fXOX#< ?sF (6` 2,[pKl(tu! ɸ1 Y%?#a_>wc3)5}5x$g_ߣ ,ZlHE7{CDDDDDS!DI$R@DDDDDDDH.6VS*^7@DDz 4K9@{6Kagm`0ԋyP` ]"j ;ƾE*0;bwQ""{}?`'Z+/L__ :Ɩ%c| o법R-Z~'3k}Ju\KR^#̠g+JW=n_Ju{d _;=noJUlD} ߘۛӅ/;J0nܸ*Ri&L^{$"0a:dE?zENv5L#&gbisu }XX:ZuWa`=Mc ŽgX:zoH/tpѰ~u,k(v\  mS͔^.CC/`yobYbt`u`?Tb{+`?v]U׼K.\g?~ -1.DDq%Q`p2p[7N nCil@#nYX`ava} p+Vm-.' 
, X` `Br  lEHIp1Qy_f9v}k#(ɱ?&,s?pDO8qhf,H`ZkS-^SDVj_ ~  z~´XRh0M ~_>Ⱥt'GR8`֋9  ڈjCv(`Ug{[2lHX୪n%c)g?`V 'A}8>RnƊfC {RԪlzo:{~PEX~m,3j8V@k>,8ȰWV,ǵ;R`Yۘd#mvq fRkE_bൻZ~pn#? J94o ,Ra0݉RO+KMbur?6ņ  64;XQ',/xi^ߒ2_sH zVP>v T{Ʋ㾄jڔ#~ nhv`7>;j18nĎۤ'$eWIc۵Qm= `Vr7`:(^6|CbglCN9,#~yjX@f"Eذ_&swlB_[(hPpj jNǂ;k'<S1ųV:"Xcs\DYqS/]lYk-†Xc\L|[,8CnRiX?TXeR]obq}'v uj3k4pέ}ru݄uf͚/E"ؓ'O; }XRcǎ`,Y~?q-4ilZ<{zذaz%8y?xO5_"'O ͮkuƹˍ5:sͳƾ/ۆEn3^(la|2 ߁Z^[~*X*bke?n7U> \G"lsvM9$75u^_8Xvw`3ce/wMh ND }`<\YFNsۍΆ0+*fc رaXFKaXиPmm[9«bv,cF,xQصW ,VH},./`ݱ6qs7X#$w2;бQŋ|_\`xwmpߵD(S$h7sn,*t+"d2ι+FuvRc}ɒ%`EA<kL&_|9={'N<|ܸq/ۮ/i޼y5kι9N22V5kT &3̊\Zٳg/pM5jԴ}qulI6hnȓ^3a„#}߿8ͮn۲eZs5Ţ/]~;Ο? M4eŢιscst \ nC~Z䜻aرc5~Kȫ`'wAXB Kٟ5ÆIBt,1A{s6..ž wU? ~kN5x ׉z)>Y!-vE/Yzn 24w;X.XVXdx%XiY7b cj1/>}";  Bbv|֫Uj:j!{b4pQWǮM.'b 7Æo]?-K0&2臍Nz&xovH~-`˱ خ ~_0pA,^]\C!F`D k}Ʉ!}XXoV& {s,s?2-ǹ؆J|Gbi>6l=ιbSO>}`3;[Mc\/|~c2vr/"#^xI^zͻ.>0tCιr538 )f*% "&L3gxfcl4c̭_~ySZVJ1(\UDןz/aB)Cz`v1y Q5x0qSظ;6!;SQK?<MXPt_)ިZ.6ƾo4,[`UX4:] {jO*';܅>#M9tv~h蒏'*Q0 هy|Xac ֝V{k7U,, p0t+v}C=(x+6f,a;k,#*t-$ &M%a!6c_Z,]PG`yg644l K{9ӦMK9#_u`Q˺˖-;5 .NtаsnK.ιuGk\'t:s*=v ,6jԨc,þjhh5xιusC<;(उ'vfRTI֘"y֝oOpsz maO\26 {,Xv`,Nc&Z}bAXa*uQa* @4cۏi,-P~9R츽2:g1 a{aRJD=Q8wv Ul]Fu'X%TV*I33zX[XEXpرC l$ʖk~nc姐m*X݄5?&бADem\tB3@Fav-v|&2ө"ɬ,Rk:%&͞ "2M}1sb'U-[i\4M8l6FLD I6d2>|eرsw:FlL/1cƼ3ylvŵgι7 y u-n0af_%v03iӦ-hiYq-k Btν;89wymXǿ, a=Y;KZh^Dιx/ A=sM%"z{ Rd]teXXtbKh)z˨l1N_7*OArxx1҆5@XV>Ū?SʻTt%h_^x˧XEX0*7Cg0b~' gK4ug,HF!mDYIZI8$v6/(aglӏ[P˰4{ vL5Iߛ{e y7bbxٝڳLJJ gCx!bWlz4 IDAT{ k5C(K<}? 
ǮR,0([d1FBN9"JGx}ܸqO7nKιgL2|ѹ s}0~ F)Vwuͧvʩ|\7a-1ѽz-K=Ň2Ğ},ﵱݗ`kat?wTk|^ -ѐ!|ǭTG[w#6{56:润c9^c4QCTA<˦9feς[9V{AkuVeܠ CӀT0D[A:ڪTιbD>n!',p Ɩ[}6` 5^`=pz~=EJ3c_ͷ.b:/%|Î-Ƃp:N؛ŎSKNӳkT 1AB<ϭݭQoթkY xX]+y]'a1 E͙3gwQG&S.6_Ѕhh0[C#y??~|)Uh^K>) /\7sC<@"ҳ,afUw9.ŦyH9E&cYv& ,Ž۸{ pr, Np`  ,[BT!(ggm/v(z:8e_Ͼn46Gx[H|~B4`WG48 OJ詣{~iAjRñ@}Dce};~_6>L#v} ߳OCup}#8{4 ~BE}oO<)67%KS]J[Jح" Uh?yމ߫ҋ+Z*&M&"5jO# \Z:M(-B!I{ðrò'~N^# g+ {&y1 .ze%6v1xQr1kW?:Z^Nz$pF Mc r#pYs X`?XfCOu{*v<Ӏ9Pq+ݗ†往}/ 'cV?<03= Hi5ڶɸyD=?6~Ifo}TJHYhiQ`=* vmUҬu)_F5XNצO^թ?1?9g„ gi̘1B|7mڴx\?=NZ`ĉa/H3s\{ƌ6,&L~?0Z_;f8ͷԩSrPΎÎ{–t"[$,V "&7"TWj:Hf1MJqtz-xN-ν~2Ƿ>]' 3 @;9 k/xٗ V[]8;&k$׳z0^WIMWc=Mg5tގ7Г ]5Hn^q{,wP"YLk.f,v#v M%j#ĝG4z;x:K7;&F4l#,!~ :]?FFaD:f`b3;a߉`*g jO}u(KNg:n tñH[;ƭ<✛ ٯ_Lfl6{'ONguw`ӛrׄ }.ʿgccy\`Ary?}ߟyfJ < 圻 }#N_0f̘JgaQA_C~Z|f':ٟq9Te.l husfn}?T*;Mp0{)>E<p`p;}+EF{X_:(gcߕGvp<)x*ẎM!cajrS)  aq>k9(['rJS,ƎaIO'a:@o3p DL nq;a5Yߋc6/>I ]k ^-P|X`v Ɔ핳[g,VS e݂u`blO}j(0}4Hj/^uɲR<[Z'p<\d~ieG;@eJ.}/X?ikkk ^>#Mi_ﱚY`,DZ$Ep-p{7J]yI˗E0s/\|ߟ](f@sǏKEDV8 8xFtlx,.`s-vy:(ñ05^NXW8".^es9tleBhm"{{: a9/`|l?Ctg_!DXƁXA0x*5H29' 11ebP,Yr236Mc<`0LZgkleZƿcac5SX'ұP,(Of#3 v hʵ NǀsX, ϶V(fvk|;+-?e#lM4]+c} NJK0⯿6n25Jb]u%5Ȟƾ**[c֬Yb)4s2 %Ʉ_sι_:p\ KƏȴHm,R%.~Rn$󱞅GlR 9؅UX$}o$=;oǂcE|k󵰆GJ%9 kCbL% n$]D.V,JQgshGc:^.:z4{= gbyR܏]eO딞<]=-Y mXv@xܖ3]c@ ? 
q"<13[x.wH8 \{=!aGHDץYv64@ݒh,#|רZV^#:n.ܻXXm8>ŽybmGJmetp; 57s\, zN ΓaEoCx}lW3o;f{bYD7,[XgXgcK~F_mϢ{(z(ک3;M4CU컨_PfvAJ=osn`T*5yL歠~@fΜnkkk-49jhh,ɬNg9NSK566ֶ9[l3glxW6}p6]2pw5iӦ24J5gϜsг,k4a 5mDG5:V6=*DC&2EDDDz03o "=D-"R@72)H}M7eaӾ]Č։HaӯOr4V'`)vf"R?a#<>K}WDDDz],Ŋ '6؏ίfHAD2-q{V*~^Pgw3c^PlJIEDDDABe+u"4(tkGxР`:mTQCEDDDzyX9X s;6$a96Xj"R bl;6$!͔6I "Ro`3 bXa>V`AA"LcǾ ݽ.f7vkwecƌ2Jd6LZ&JʺK?ϹH``kZo j`%Khjj"Nd;>IJjmmMy7zTjC`?yX.Bss3T w=[w+ɩ6 ?"""""M  ^J&);w.MMM477Դd2+Ruxp崴l24 JoMzp+pNm7EDDDDD"B,)SV Z[[Y`~) .dٲedNfh"̙üyhii}2L ;r] Qm>H% լ Ƿ["""""~%iP\vرg766АaWł444+`}`߄̯H1e,?0x 9f1  *cwh``LuVK{\ƫ]XHo0+{%*v~xx+ ІͶM~q<_sY/`h7+RM;k-'s-جX8~;6:@]ikxXm3Q߯~]+HA _r /9tĿH+p6}|?:NO {a>z˱\`\`O`S0I.`cneO7;h{ X.8 ӥj,s{k[sVf`~XH:S>vy m @92_Wy;v3| a-zpZ^@)*e4iKEu`aEu]%8jH47a~88x=^-Judc-a;V.lڀc3'`6w24Q181XՁ\Hop0Xk,|  <[~o:c$Jx<<*h IDATˀgސ<|lxȵOtn4'=PAl=m7Vdp=o{?!U+a{yKY>5`66:oK%s:' ϯ!x=ZXaJ=qt<~ufmDOSРSMMM^_,hp\FIZo)t_cvvc ~.xx oQsh a jVj' %KጡE@v*}fOT*uK&Y jnn㭳f<D[¾s] >$,= a|%XIt,Xiac&t4<+L 7M㒜DI8Xo߷c- Nc퐞.~,XA+_ n!Yذr Ə|ۼ xciEιecɛH+cY~GjJ}{6dhM]vVyeV(q*nCX)_:cUjt3HoX ],c5Ra#Wu |Os)jlKc{bSf^Ni30[ tC6]QChlc뮼:cvLレoe˱Fc7aLk2 V ~~\ z:]HochmZ9C8^\dzTbcNOƦ1a` VDžy^JAn2eʿ67wm`fM<I>BA=ϻ_~嗈 fM4 n[|qL7D (<Y%Nxl6lvs Sء]L }@rzm\2^3_=J Vx6ݫigIk}ݔJDP UQ!6Zo9b {y?Qv ;Hl(Dϛ\xi}76XczW18bv>|}*{X%l6$a 66lX 9YF߂y) /ayXjHc5N.Ǝ] \)~<犜o ~Dռ^: {{ܸq}f/.3.d]U6P{}]|li\`2V:<.m4M"ݵp 6>h*oo?v$QsKg`¸}aU&acE¦X< K!S .`A9X=ݰɷ1p*p9%X@n,ֻYM[cq7ܠ?ѹXFU /Vl ɉ\;;XcMƛ?X=DzXp`_=v/m[>I4T*uC&\R gF@BKOA&^K޺͈|]HdJXiX_weg`!vziަsA K\n\5Nv2&>ҟnWb3 Ky./ -ؖۀOP:kC¾;/.zo5kDw4m. ^:uG =`3- SX_H0 *9p̘1zwtSS^Ų 444FE⒂)|㱄rVX S;V`Dz%ؘXFk)QZeHoЎ} }PqT ,ܻMV*X!Tjhj*! 
x,DhK7~7Wv RР:~['L 8aY(hP%, 1`g?&B'""X FE (tD" ^EJB@B RInaΝݻev=;sޝݝgyNth\pAv"I>I(S@DDDDDNA*9Sݻ7/֬ R[!""""" 6V駟lMZ#,)h0$VHӥ hf:ufѢE'ἋWײ=ҰK([=V1K j: Cin Z6MDDDD: hܸqotIO9F;wy R7Q.X8й-8B3Q8nHx8!ฌ[%""""UxⳂ 9wq>u{!% z3nK2ǖ BV~#+6b8`lj:v08 x 8\ZHƍTb ֣ ~۱cuh* ~8 $_y86Mx$TJ}?Xܿ Nf0;^uCR0{.j,ve%Bqֺ >>K(0q 1-]3`7bGc}-pr,cG j Ogƹ\`Y` #""""R4 Dj4pllO.`67:_I -8O< D_~cl(8.5hk_>%|+wr\ 78U|݃R8~_X|*TF_ {;U}?c|;[eԪr\Z7C sՎc}ś}~~A|U_^+p[Kn"y`XFhHܜRp >s'3`X8pfJm{X8[q"`Rѵ9Cw .ZdeYhV,`_w*]VĞ_wAO_~C-ұ.TCG [l mL`bYjaU컠z@S6ֺ{lZ7$&.QWu7g'"˗~RES@1$M"=%Dc& a8+GAvX|ovė藻8 ;|$6W~9{<_N&r/sJtB yUH|YݺAB\?E [o|JoVdL^HP0l}z"bu{Y=Yu 4ʺQ~$w\+eĉʿ&z>vî\{EH= 38G`%~_ >IjYSp\l(Y D{ e]n4> Dˌ<3m4 DKKr DDDDDd 4e̴"""""4i.$->κ!"""""4i.;'Mei 4}ͺ!"""""4i]ˀ^ ݑq[DDDDDItuh`pMm^[ki_Or`Lcei TnWĺ~W ԳQ%yHJp ;ЖD_WqvZ7DR ê(in 4j)Y/` 6 z8vEDDDDc)6h  ,[k,`\?`K`06iڞtN:=1X=hIhKwjJ mIk6:Jg>0zCפ;!@ҵ1#w>,og{ `qumUD6ߏXЯ5R#v|V|l uN~eo9tp<~sԗ}A6k.1+-n IWuCDR0{?uCZ7B$ñVlc2 I`.4s`.IlYmX|~Wmf} |GRͱL+{D# «cAmDb`Eค-bIH&W }zIzdzҶ, a[:j]{X}MĮhǝ%U ўq!v;;i "o].~Xu{p{?6(}"!+zJ?^4,SJP;{yHROi&i 8@>ӠT VV/ Tp=ɝ{Ԡ>qz7: _Xr$? 
9h C)Gz(v~6Twx<"(h DAơH}*)hPzǺ'\½ *y A_WVNF_ބJ k=ᗕ a1^c}ޖ ,))g pKDDDDDd՘rq_]g_VDrMr]䮲Zƾ b'yG9v=2glRlG?_X*v yY-1USݰRhtp'mElz_D^/b2ӛyrrX0HoI,6D+8Xs ,(WYKy(pM  {lրZζO-<\bXz>BG{픙Řl[~;4oJ$7]`ku.#"""""RPA Vfݡ4-D{gs|=I}M8YUPʛazH;)0*zb/{X1܄X(~,;jȱЋ\Bzo،IK`2niO+ѽ$7!j l4/7$7?V^glB<u/ӘW®-o/wĮ γw%;Knxl:BcRoWvIv0wEDDDDDd ˾X@`'`S,[haW%p?Blɾ-| 9XxK =8+e r'l]ٽH۠Gρc]GaX;+ƛ<`9:w#'`C8ɨ-ʻbWL\]'`] o]c&Y?5RcmL)Ўl iGoط,iw#}^coi|,xͥ14f"C= DS{Hum@J_Z'sej$'JKAЭR:ye i ԟ)}Vi5DR<-BDDDDDWk<_f i \3,l:g """""eP@PnHs4BDDDDD"ͣ0"""""""AAu>uCDDDDD9(h >Hj~9=pz(sۨ+I$[ 8\`$8:҂hڒ&gi24 f`]D>^ȳX)츙4́5R#6z8R_>_9~XJoJd4'1{J aS`^X-[E1ClXB,?e <>wuvfa6))=* %n8mڔ`C[ ^wr ~\]z kCbߵ^.<\NHݹ a{b3_a1-~ݣSoYkI~ZD\| [ĶϱM,bc7E&Ê> ҶI'/bݭ6krC$ ,?r4Ҟ݋#}]2׺!U}_ v E/~<~ۃZ9$]ñρnH `eTvN~{(X|6,}<-4;ߒ8..vP /U_!=+aۮؕr"C|=c=a+,J>YL7۠"[hKV= NJʱ vG!X0lKl7;ٕ}$O]8;xʗ 9euzl/bAcv1x+Ю@o`.y؉Owb]d[Kyy}.Ի m"p,g;ycr~ձvU_'_WocGa- cdž~/ŷ:,rK]m=}9= u Wn"Q{b(?z[ݯ-R/KzXb'#mI__:zg{= 6~!W$۷G<-RV).~?D.eYDۈ[C= ]z{S8<ӳƜYcgS^ñw8SOƤ:=]g r|[x 2X(>3>f0X\BWʟ>Anx"_jr,Z :T:% h ~="ZO(R+bs/1 †9J_Ndc#"U54 †)  VcC`7þ_o x1v ŒWbEB_l'uBAtӣ F.$:``JKPt-=GVceS(94$=L>EYoՀCcGؕʴ= ex S "f16H`,X.+"R]c`Nņ] lQ[vz.:sB,_PB{31IQ TFvN$W-.w)MsxZZ܏%fa}!H}6OT'hTa"GN\Ŏg D7?FJt,` [ ;!/>3 c_: )$#' /gaW$:QЁuE]u<+:1Z[/K([ܖuCt R|./_EEwڿPYD;?žaɪ4-a >FgIRt?jlᕴ4_#An̯s7"3Kݞgri; ;a,LXv/;0R͗p߄nDq6J#{ =26l[4_.cca};ǒO]cy;}$lDqmrMFTiER/&u7{`7±UlXNgžy{ֿ7fAp !-n &HkXw_aW; ; Ug,ꭋ%;ߎN40r:%\#Xr1Ͱ\Zb?)Ҏ{cJ Sðx ꌽWUXX >QK%?o-H$}4F!X. o ȟ04ݯ_Im6}uGrO uEogUVV,g_ Ub'lB*z\ Єm/nw{6RñIo#=Qp%v0{v"cc~W.wlTЫE= s s`}@ Lcѿ>CNڻaWKMN*6˕Xf1XDa`'? 
74!4w|<",s.ac`W1ٮT"ec?"Kjq Q](bSk%=FNJHaX数lKco,YNB=Ow`ϸy{0Hi}xPb~V~r<s4vlUf>b`|)`iԓ)X'ر8-{>v3=8v۷'[7X P:VXx4lծ m]5,¾DWwmy6#c""}vB_p؏M@ۯa}f`'a]˵oC4x/Zf]݊qRmK|藵lԽb{4iYRz4}nTL= [,P&ʅlJi `Qam_?$^k`'P ~ @SH4L$c )h.z]J'AAƶuC$uP'63ʺ&ͪP*G]fTnwz!z7O" M )"v^SHm)h 8`ݑuCD<`P!"ٸ rZU\qlNn6x6[T(h 8F'͸"""""A(h 8F'=͈!""""": DC7,M#Y7DDDDDD: D@r M "a Y7=:#6/}x!)"VV&q#"""""!)h" '/e &o|QiC""""R 4A2p0 |c 6VNvμ rπֺ-""""ұ)h Rzk&7$q4p?֕K\º#c2mdp[kԸ9""""" 4U<q/fݐ<.soziA`%5i[W^^a"""""e "o[Y7$Zw  x'cuNuI[4)q xN"V>ԗ`iG:K.6T;X I}x>/J9%ת `.Y\fb88fI!=xu:uJk㠛}w;Ժm?3b_ٷtnrsp^~?]bNsn5++;.I]ecU<֏]RSmYђO좝7úGoմE9pf+u5p&FW|`/kq_>hxθXjs:TkT_V5ªck8x?}rpE*\PC ͯǴ +@i{?M[XC΂wF'[tk1 cS2g-[oG_vV}P+0{.?NYr*sXv|۷cHYkٮbe::NJG)b%f7+zԿ+odފd[SN5a]O~h"/:z]RjS LHRNt[,Nu.>\ f0QlRD&g:H  D_RWO2oE{pɱ^ AMрA(‚S<ĂcUطHf4%K  /+l(o|:a! l*Ȫ ,\,h2 HS@u&9ӧY7$pU<pVl{` _~T`#/IFA3`( [DDDD D[,p7=6Apt%Ay@On ؼǫ_VDDDD^)h Rz):Vw7p"vclFjX:8irU`)`YFp _fɁj ^N&lBĬ :_{}ÞWSj֢lI8N{9p/UVgTw:pizO٥I8>6b"ec' :li > D[AV'=mǰ򬳽_>IK( Q(P^ Ӊ̮Јb/J@\\vq# l"^q$XaU>v OnfVc,bP>?sjԜe:6,^-3i^qu; x.O_T˭*F980a="-S-5DDDDD!Է|AyEDDDDDRH}[HEDDDDD: DU e}2nt 4 Ԅ_"""""2 Db2n49 Dυdin 4)ʷ6˸-"""""4iLg.""""""KA4+|W`nj""""""MJAuF.Y6DDDDDD")քu#2n4! 
Dۯy 2n4 Dۻy Y7GDDDDD"tcC8V di"op4g""""""MDApc """""GDDDDDDD$"""""""HAI$R@DDDDDDD)h """""""4D H" DDDDDDD$QZ7@DDVw`[`$0xYom`P +auZ  f:YH#l l, >Ŏ~%?`m%n ;WV/wEj6@L^ǾuzJwQFv}/%9l^~xZF}""R%]Q!I6p%l.p_|&r IDAT%ݦن;WJ i%'1uCxGX/߭w|Ufi` 'ֺ!)=RO䣖n!{sہX0c"p0O2P<-~]b}AEQOk`oؕ`r};a ܇]l]oX7Ud;N;Y; \]5밫S@ۀ䮸4́`c؉N^ձZhG'RH0,pS-` p4 vt !:X Y]}oi{,N`kvboc/|^cXox?8 W`9Q=SI=)A@;{ˋlc\gĦ;Ɉuk$ɽq ̩Ai&`W}+'x|_G)Êz*d%= z$wruѩA~'6Iz`A| .~:cAb{,މKid,®`?5k4]{ =]݌]YZIE0 "`],t=lN5rrMGM$9oB g'#n8 eP-H>K9vc!IrR'4i|w*RĒ0u!wQ y9^6I/7i+;Uq_g'r,e$uJA|, e-&wE}4s_y<<4}1A Fu_^&ؘnbi,?C "'MXkڢ֖ڴ͗քQ@Dr[3n4_b{®7]@?œs<>+^zb'Sp%ti$NPz'bKh fV \FqSωdU,<`l4W-1g->i=,d2=LR mr?L KmzSV|1yI=zn~4IS"lL®$F "1n%BLrϧDo=%Blk{S{/Ti"$:_zP߷c|.J0\JJ("ksĽTN /_֯4hp؂8F~0;/Gx`7C:ypsJ>V61ekZ6Hx 'p//4qZ+h@%wr#IP@1 NtnźTӻ~XT+k~$fdlb~|>)QN+p19'r* 8ϭu_lWa}RG4iL<"Mg-] Sv<#=  /ق%w%x8 8q#Ҭc X?6EJ%v<ǻk/nȶi"%9sNm<cC!~܌盩(jeXܗ9CSj+֕4Z6JZ)9 뾏 Yt+ uȄ*ip:rzcYm:7A\ c=n#z끽Vӱ$cXm'9ΕeN~'ʟwaC,ݐZRNpc]2_T U=Xp=߱U/%F2 7Ju(HۃֲQRJ  cS.`m A?, Æd6{K%Rދ&~7ţ $* OFР' ۬G)S_x[6K49ߥw΢W}! ۻ m/?,V~v[f{:J ti3$} ܙuC)Mz%1Z\6XS$lse&GXま X/ߞh4XWzXMr)vL- )| $mmrKEѕ4`'ػ`07BؚFXN=h+F}{=aϱaQ6gjTl)l xZ6JuN4\lc'C ƤzZ7Dʦ 칟^Hj4HہWe܎<\7pঃ[n]n'.vw  >,vu dkz""%{ xmz\Z놈Hm/je/p~Rw| j6:JOC^vjili1ii`HPO&`= ͺ!mc[i~Msv/l射 KNv)#6GswyxC^M{  K :{Gպ!""Ҿp*mZ6 +l [ܿݓM \%5ow1HPODԶ e-Xq]sOAN` p5cu @/BJlUkbgC<e+y%{w#P8l% ܘ ng[`9-V¦>jb.zZ,@&i AL 5!Ҍ ߼F-,z%aΘ i6r݁EGU@_Ag _N|\7z}\Ku'Yw=p]N̏*}OV4iZ6ޑF!ҝH#I _:bE'rT)Kܸ{5sm'7mUcܿrzHԨAAi7ߪT]kn[ܽ]TYun_meNg;ݤL(?]ߩnNPO&aBYnBdJu.<˂5_as_uX}qU߂ ?pW!=}n3R[DDDD$"X0.ŌYI+'Br>p_;xpOaYoLm;cbNS`%n""""4{Hc؀Y7$fi,:# 8-0f?fx$0{)+Nwr:rw` ti"ä́;Y7$s>/A<M/ %|/G$6Yy^:18SWDDDD] VL({ ;ρUe<X2c}Z`)~k[F9p}{MJ~"~ ])""""R<4i +$MI(ZE{e3"'̢PUsWwp::J_7Hcנw 1XĦq(4ސ S崙!%eyp aC?h] 淳^-<YsO3X@ ,4j݀$9ʯf{p:brn<ێ3$yB\O9p\zF`SOb*9^7>u#DR0;^uCR0{.jO . 
Oi I?ӾX`:gg'\q[îC؃k  WG~O@P(B7α!m-"""""S@1$~3vN“_ŮG|]uI_ҹ\OR`$ֽϱm^|~N `"cp#]ܥXK{J<l=_6EDDDDDI-]kڢV\w?`q{|p;%VG(p3s=^p}YP$m{c}nf"1_%ԟVxI Of "CDSIQ:'ͺ!GC#+nn`VYX""<O8?x85?>$6W`&za%%P   C?]/LDDDDDDжe[ܺ^~?y&nz׭!M^^3L@qH}RO&ӒOTd2P0XvxjQ )EDDDDCE_ =G EA0?LA* Oi _cF%M(3 Ɋ"s`@l`-q׺"""""YiB"""""RU 4Oʖ˼"""""ҡhxHcH ,y+# 8 X L^=WbX+`?x 8x}vVf* \\iF?ӁW,`+q;%ns8wՇ߹"J"*رkb%jFMlQc͗b/+hDc Klvܹ3zggϞ={k[{S<93*/v @cqU<]hVt?E/`[`'t/\ i8 \Br\p`a147"3 0 䏀mԵGF31 }g(.dw灉H?Mjo8)LgXzM %p;RE>]Zg%"2…cS_w'kٟѤ6\+4_$5Ի#`[^6P{{CFא!3ڏ }7({DOG76_v+{i6O.EgTq@eF''$ճSFP 4XO%7=p/ZMĶMNA8ηq=R/vBGb+ C?FaF}v /dC::8^';9Oυ^<{Z9FoT= mĶ0A2NϠ IhЗ%{|;$U@?v.֤=t4)! O|Lqk:0:!+l4X2rseX#DZ5}phҤA˵rr"6hdhP:7yϽz)ÿ6szwmaFdm=Pr;|;؏\c'۾Bm$k$ES 9xtY >_8 Ug2ZNyhjq,LoVF39ddx̶ oy è$X8OHU\c'*+ ~Ha2t0Vx6OHX8yEUFP'H8 =*a@D@{F1a4h`Tȕ@r!p0zJ2K9>=VXG."ˁ F/{92]4 $mV@Q|:xXlM,uQ]@iC3h~?^MB1)CcB+R4lmF#.2M@n@)AO"e 1܂&`B2J␗5 8XDdjhD9=Q{h`s eaT吚H 5bT~-x2̺TPkSi VC"gϡt룰kֻyx'4!Xè&"#P`ol$ :V门a7t7d_ez8꥓5 0ЃOs:^NMA9r1=NWnצsVbqv[bCu(^'Dy'beUhȏeO(Y|_9ꖛ=!JS ;d0eO0NLC6uG1s pOǢt>Msh^+ehud (Qok_FRpko j2h`C e 9~;hf:\W9Ϝ4[9:ǏRxMg;qsqZE  &5PHD>ɨV RBBB=kRwc (f,Vzg |8*oaxxC1u9~_:l5VCH8Ǟ w;N%4_jxB` zH=|ܖ(7<$ڳaѓc{,?:eרP:$=Sh gq(B5ً"hڇ 5tLzrk>.V;< NFQ;ߍa4{gQƄ.Ȱ=x{p*NVAّ8Cv@EGQDO{"!gaaTiL:{DV/F\uZpΥfv"E6_AB+I^."\TcUUn3_g'E{UKAlg MbrQ)OI)}b6u2r$ A2+U4M!_*r= m{—+07 el1< h>qO8Odwp+;2αsl)[\Ǜs: 8׹ s8G}Ub}}n9z˜8?Wh^˅2b% g@v/H?Yw98ǐ\bs)t׏}XݹЉ0Ñ)[+~?_[CzE>y t -_jR߹Rh9[{h23Ze,D'@yXKaiN997?h_4G+r0z_QO#ᷯmvI(h9E[vy+Ts(/|ۢ~h!sH<+/?9}HWCCT~{b{z~.1atrl>yv&:G89fF-Rs\msb`GDs|rrk4?/r+k);ėW79t0Hys SN|=p|۫{n-pU}~sup#񇸡JٷsQf3f4ۮUSq˝*_J׺9&VRp3PCP!mr.q͍>W~?9WiA@1gF:ǘ25 0 01a4'JmSnR6T {CI<%Xm=\W, aaŌѼO(}xG'  F1桒uK_+fV ^%aa !Fʑ_F}Js<]k~/R8UGB1 B۹܉2w|˦8șaocR::.d,\.Bە |cwʿt82#X5s{ o 0 0 0 #'1D;O:]4K89>uyX'k8~sLls$b}H@&v1m<yk{s|+9V,y O0:`̓'Fcb q}BYH4W 58X8x@}-He< xpfyp+@H9z"Q};4GAc}n Ų0x , >V,}0 0 0 c$ǽ ng|H }xn>`s,ecqPӄf}K9Dz1Mљ0Oh0< ,a4?m@px|[uJu>UM1M)&RaaF]$(uG 0 0 <0:O2s;baaF s?ʷsjޣw3+FbߡmwRmߍ҆JXem"H iF ]xz H(ĭPtuDⲕb 0s:5Fae`/`M$~<@z$OeEπKE\3#(`J4 0`S:!:`zvhJB\(SD |,D/vtϙ("SK 
ńKc(7iD,`K_?I`7GI^`FibGte*?_h?i䜹H~-i6W@{F0!DX@yMʷEQ {ф ?!s`Gs&FCCH$DY;(Bd[Fg$nEi^@hs`og!cFy/W̛(.*ܮaT?Ǡ14 yB@o^\o"K;V#Oyhla x:Zm(`?΃h‘z8};kq(5%fGa͂y31X-K};?+/s*i0η{(AG>lN/e6^{e|s{+4h<40!D\- j }O!jߖpXbZ5%pmF37ȭ6ROtOZC pcp*py5JBփ ,*Ifw VFƌWfCFcF\܅M0w~:Ԓ>)+`*[d~ p< gGOu&g x]N5S(\!d02vLG3*5^0:l^1ۑrH /}9WBYVD,r<?W{fË(y!Hdͯo1x P2F~s8H 5uE y. M tB b\_"H `o+n?I } ls-d,x8`  9tDFoQxί(b3$2pNA㾯ow &O h4>F$.؏; /[bvԉ+%r& !~F:ѬLayx.Ł1 ~?'GèKl"9h"#`R$2(i>C|ǦŌ]GI`(Q9O;j;o'_g W:i&YҚHA 4D(d+D=lP6WLE10z,͘ ";3KU -h%-Lf}o~hcBaW _wER_'_)>G a";kitF^Dm;8VZF_ƣ4&^8؊B ;4g9è23? /(ChlFݭ/LzXYFn;(\r\oߦ*yӒhmda4:$Opd8 l̏WG0x x39u,NVV: [ W/jQS&zHaUgX;b 8+׻#F {aMhzi(4H]eS\ GqkMR }O"9uH)| p2|DG!h'\碇N1>oI 7eRk4N/L29(Gf2 g2~_dpi ;MQHӤL{cl({Acp[R?N>X+hJI's,'8?3`ᅲ\1s&=g#?.ILW'*mmkimml]nD&C_a&SQ;T>|SIF1Yn*~:̙";L6V6y2{dGW^gSY )HhpN;ΥW&ftdOɰt׮/8P.f2 DqNNW`=c:w橧 1L~ЁLO;y uTq `x:Usf2/rHBwv26OX*pGw(u!4I"vm"u_}?GDQZ(X!ݑ81C slb~V\w$9$? `1 /^4dr E#`R Ӧkih d2SƎe@&s1Jw,#4d2$IiP:RtʉN#_]ߌ5q&*'Tѝx≼R6͋͞(w7h,w!2N4=8-h63369 ^z; Og2v2 c0 0 04 NF.up; ȠƤ (g+ tp yN'=8}kߘo!/_'-4؟ NBG_>-J2d|@Ѡ7v(Μ;V,7+ UgHI&UsVs*E-qױOf6A㟌#ïg2,Z?*?z\&?Q a(A&C˴i\ \pj%;laaP`Y-?d4(*4sԘ+AO{/yB!Ų_Dz0~Òh g/4jQ^R,ELhPi=!` e^}QFŞ:F|ffn&w >Fe2Nhkc]L; #c48!kSHndX)"?$0 0 0jBF5Ѥ!8}XsX@ݫ-4(>B(U)`o4ۘbʰ.6&P)PT uK&OGlP>-` ZJ<ָq%Ta&Ϥ%X 3t9Ǒ=z1-aaaT i>xB oMÕר@Z^JY_*FQu%|OUFb „.iNCB&*bAcݦNe5JК90ԹBi8Hc2 z|cg3#1aaQ]J5̋W*䛐q00DOg^P|TA 0=:+}ЦCN< MO O~[5}T}c6Z PʳsDl4{0 0 0hO2ﻠRsR~/)Hi=_xt=t^~ ]DQXnȫ"1<J NU&@>~{R֒n>SA*0i>,V!XeeI|ʾ/ɑ* XFM'+<>W116&SD{0m<%w^9׽׹4*c״yx׽;UWN5&wL<@ sk[Zr?#:ZPepsUz [pm.z]MOCBtWq琱;X>3Hl1Rd+D,5|-qCVh5PxKVX^,L5{ ?C݄p:8;a̿,'~7aZXƄUzwuw&'LC)s Wzܶ2cnnwSrݓ2Lqץz {kqk߂GxڪF#z='ȣ(CZNƌ-!lU J;f $yDk7}<ٴkI??0(<7WH>q~N/ʝ~84MӪBhA'kDi'H=ܳ.Jtφ5Ȩv@7AA.AnAht݊:  h瞛jlA]O;иqpCvU}T&pɰiHs2~<]˻=&+ 0 0 $~NBK!%of0<`?e TOCVI@_dH8 FWqG]PƳQe"Ǔ_;Ԋf '}Mv3 P\j{׾&|@oH! 
`(En/0O(%_ZQyf!b芴%VVEi47@ҏ-T1c8?am۴igLF!; [Z[96\9;r {pN;ŦNb4 Kܮ]yyKSOV&0),tԓO檶6&3[Z뜄 B[Z8haaBղˀ~?c4 M[qiw}{-o4'X3iR IDATj$l?&h'-ʕ{F-Z<^@vE{W:fsxxoGL pUȀ6x2dRP(SxO9]>`yx'̝dɴ8x>d Gk+{D8J<8-9vORޛaaa4i+͗;"pWh}|\g?Muy)'y?|x Z}82OkdSBa [$_|ϟ /[(|桉h6i"Ҭ<B>I@^3|ݯLwg x.H+ q3Xf"|[wN`bCcrYn3Kdx ܑ#]p"YAWAM~6y<>V3*&>MBۆaaRjEcЃe0%lI55!gp ӷˀL~)dp~k6EQ軰G;b}oKw;a`$Ի#`[^r-F0b)fx̊# t|E ɝhv.E#?p)>B)ɔƫiaaBg74*,\<KVw$ Zajaaa EX~c?HHtX }"H==m( DʾSҡY}4wqAy|E5 "F:PsFTeAimߑK8)0j*(e !ұz-R{VW(>| d1&a}H\Kb1ghE;0at&>G^CX`r=;UaVC!-ZfF>@A7W=W%!݄WFm0at>VY߯i/ (`S)=?@Oj7ҟ$bRÅ(#Jh0$z Sx~ ZUPfaT!1ᝁS6y=Rߍ ڵ/p9Bp 履ݯ^$ܝEHYJB0J~p~ztB+yQXB @ ,#[%j`=55k-˾-6|ifLDw94n@+סr Hès]ϕH72$܎Vשq25^YB(Hh:IH^ y`m㝦\/4& Ⱥn SǾDY(?Huk?ht- }_Gmߧ p{+}J7}JV e38S,<Pfߢ0U~a;x>O$[}ak4zgmF30^_G~ գs%BFI{m #/w dZei8P ZvE%/(,`T`4X/cAhW*ۣ/w9l ]O)?ҥ#eWM{0x%u*p?2@Gkgk 3 V!vzSoл쳔0F﷛nhhR l<7HguzyT^3gtܓ i߅K_F۷d]S6dPH/uVk+4ZCEy !9wr e4b5 uG~H߯p hx4B#= {coJiDlf;:|[{r`Wҽpa!p݆i4 hkb)mKڮu^QQ\tFG@1?܏{P9'H8(wA8d4.abW䲜%)b [&< EF(P@!wPhѹ btODcf b$VydՠQ(B J}^ ,…jz^ː²msBcqX; Ax>b0 `)/m_>W5Eos޺6JE Y݅-j/ 8 G7( !W{Ѓu>F7߹'ޒ\AqH>V_YA6G:Z^l?#~WH85I%%\ZE{ێg`.Qsߍ\?+7ݷ3 '(}h&-|>Ӷ)H |H6bYsh!s)/[T^;,:C}}D=}_FSfTwhE%^O~,BgȝkZ}Ϸ;o0_my34|Om(Q_Q'YgB ki0R_aȕ(E?.j?p?> =GU7N e\ B*G3Gr4$R6 eX9?O; q)ہb6+F_G$^qښp&mQ67!-ۆx@|-w˓h5wwʦrZHl C+g NH'a ޫȻbd0ijh~'06_ǯDx-<^F_>@, eimXN=BAeDR lgvAaKgfK[4K* G{3q7NDz[h%C0b< DYC|y r/:{˿nQZCEKa-d"}H_˄}1G7Ӡ;zXp!*J;Mv gn);Xk=۪fi4.f4輼C;FނHQF'N/qǡ$MʼXȇ)C !P?^m|;$5W1&=?VC! ql<I Ǒ{s357 /"J0d ں9?կkMhUr|;bFl2 ,Ƒ>'2:AӠV&>m(Քz}\ѷXю\+d@^ OEo_N`q ^*-ϷFC9CP"sؓ{BٗվƘxԉw.Gsj$a@` dЪFi0j5Yzw0p@sr4&Œb]ž@V1ߍd4) >(铿JNVAh }?N<#?"aߖ4CHO$ffQ5/Y qVE^0_@9#R>ʲqM@p{AR =<)@H7qZ^G\]Np+=WJ<e jFJ? QJ jB=׻+ }|˖in yNL 3nE{o-Q*}RгٳpjIJ}oR|80 +] YIjJ pk|¢^{[^! 
[binary PNG image data omitted]
neo-0.10.0/doc/source/images/neo_UML_French_workshop.png
[binary PNG image data omitted]
%%%044Ă j=!4cƌ/  2/ˆlyy9agg<|"m(I!O\Ǵ9PԝĶϗ_.\XcMCir~M'Ku؍ڵk\.[ !˱|rG^0p@dS.]ЪU+p8x-m(I!O\oߖǴ93gBBBзo_yB!666^}H=zo߾HHH@RRRmڨPRR{BhCI7I\B!BBjʕۺu&y!#66V2DGGcr Ǝ`jpaPB!$JBJZSey@!f۶m8~8|||^Di}6̔6$QAAȑ#С!JLL@^R%B!{iN~AAAݻ… 2MJѣGJ{9By@HI$q+ mmmlܸ #kYÆ c]pk׮۷/^miiiڵ+@KK Xr%Ɲ;w.tҦB!"QVdiN޼ySy...pqq8p=^}m(I'N(Hiގ;T:$lǚ IDATӧO\( T={6a禦BCC#bccѧO}ɒ%իf͚$B!BHC޽;rwHLLDǎ !-0a'+VѩS'$'' ’%K%''a̙x1x<x<vǃ3N<)t~TTz4440w\OQQ. ___=z4жm[ڊ".~YdffbPWW-&LYf޽;ڴi===,\q;;;@JJJС0|pNj:v[[[ٳqB 0@hRC390448Ci`jjZ1|}}pB!BM&{5kc$&&OIIҳgOddd ,, @FFf̘DFF[^^dhhh>DXX޿V>tdee!''+V` 򐟟7BQQQdq}c?^]SSpm?~\(-b_pttu|۷E6 i_%0>zpuuwBHz*++!"$q`nn7*.]UV066) p󑕕cƍԄ:pq|UUU"##PSS.~/_Ƃ 5J{ 쌤Zs]sε:O |{%nݺACCxHi> 333s"SSS1`ػ& Nb(&\DTHɸ*R}!ַ24@[/Zr1vXcڴixwtn'!Q***a;vvv0`qY0 B!]ۉkkkX[[HJJb?(+$F٬Y###=z[n&O>ϝ;7{|JJ F x^b0 9CСCP(H$Ν;uzI,,,p]BE'N@AA*** HrJ}?v͛>wsЯ_?}: q_XX#GnRZ/NSAS޷6dV>"moh~QQQ@CCV\GFB!Dߪcx7?n޼7 ҥKq.-v[K6i]t⺤W{Ky2tibf򴵵Uط=x<2 ~j3+O> <հz#"KVWW0dgg#88<III 077GSSSgMi{SSLLL(!v%&&bժUQ5mۆ6y Nٳ~~~(..fccc3NT.{롸fffd$&&*=4ŧ}ڸw}cСly֋Ǫ~_w^^^nS|BH{#77YYYW>iӦ!22(,,Ğ={DkbڵpttDdd$ŭIO]㯨իWkܯOytL\@PP>KKKl+WG[o7oDmm>f'OFrr2>ʯKׯzٳNcjj ???lذ͸t>̖߿puuCUU`qvvFSS?~ٴ=T*ņ p1$''+ʕ+9rα,ւL|ܾ}sn A4#ߩgllZh?^S9AS|޷6K_o}uO*i|k::!fGo]fOSSk8v HRٳaaapwwGjj*_۷/rJ >Xz5:,vm]]]!1n8899a֬Y v-$^^^Xx1MӒPXIے\]ҩk-~},EyvtL\yZ;4)--Euuʲcb:t(|>/_qPYYZ ''Toff&/^  1o<444#::ZeRMkhXZZl=֭[???Ǐ׹ĉR(o1|pjC ܐWWW:{YV\Z"J\ٵD_76iԞq}k}^7>}aj888.BȳfMoܹs8tM###L2 扁퍍7ĉoo>\t .]²eпP=L266ƙ3ge6m֬YUVh?0sLk'OF`` }vX,1```X|dW=qa8;;-[񁃃9_XX|0 777ddds~@V1Z[B &Ypp0,?? ,x`ꫯP[[ ~'H$455 eg~ט?>Ν;llj%v؁۷oΝ;z*^}UK,Ayy9/TWW#55ʗD" p㏰W(_d O:u ݻwc۶m:׳`ڵKi{hh(Q'ݩ1,먄f7!مtm}e]G',///<==1c YHʿo̚5 7nDII ہb ܺu Gܹsu^:ضmÞƍ/~WKHbשT &`ر \ys.Q>q˖tѣҒXh\G!@7%ݛX,FUUΟ?soذqqq z?7!OXJ  {{{D"J?uTl޼ .@ 0#!!M5DKK  @ +cff#44^hQXXJo}ȿ" _p>q [B$aʔ)ɓ'aii˗cʔ)2$ԔPhBH /_:YsۺKBFu #5]. 
);:af\\ z) Rd>qkZ#CwҽtHf``'OvCwbb";K,88Xv,g^R̙3 7oެyɒ%vr[$%%!))I~Ƨ} %gqʹ]o짟~xnmǹsӦMSx$t$ԖP= !eم[}E\:#!\K϶]ijjq%AAAJܹ 33GE~~>󑘘.K/mQ(((@VVrrr"22&&&HNNۑ; .K&wWVVO>:1}t`鈋Cnn.ݖj˒M$бeڳ"<h&S?Vzx vvvV!<~vb " 1kOBH.Ԧ32vÎNY[[ aԶ@ @TT[TUUa2e [[[سgہ0 Ξ=d 0&LMpm899Cqq1~W[:wreɤtHR444/`ܶm~g$T[l%M4.Ӟ%XfBy{jˆיD27nĈ#:,BZS&6Z3!c{f} 3P]]~Ywޭxs9sPSSlٳNBnn.rss!0 DDDDťb(^^^K/e˖e$򶲲—_~`x{{Ek"o@s"qM$no"N4(.܌իWk|kK|muByi%cƌ1cFWA3C k 1iǎƄJaذa8p1uNS;=z](?Wsk">spi%#ڊ}'v9}41556l؀-[ƬY$Fbb"q-dgg#++ gΜ0{.pݽKd>seږDZv'ݻg.KBg&mkN\%d--~},EyvP'.!t#פ&!{%HuB*66 RY&KYYY`bj=evfMff&bbb`iic5!#iّ@g:---@ruwЯ_?,Z-D"0j(xzzV $=tN1c 99ՕP(ģGP]]ݦ6k,D"vWyyyXf Ο?@*C J{=xyy|xb!D63fi0{lӧOcժU^۶mOawwwL4 k֬ѩ͛7㣏>BEELMM1k,]Щz,ZǏ/> 1w\! G*yxx`̘1r  >W"66;гgO,\?pÇ˃rMz*\\\PYYvl]p>>>f~7j122B߾}'N࣏>Ν;ٸq#┎t{ݾl2̛7S\]]䄲2466bغu+kϟ6\>5'8 B:¦Mk/!ݘ4{B!]tjr Xr׮]c; uaff8;;>>>ppp@bb"b1QVVDDD`Ŋ:,R)q*ka!##}=""&M?SN! cƌqa8;;---5ذa=z4kh}.,,Sn[)u666ɓHOOW_srrD"Q6m֬YUVhϟ… 1p@\>/u!B]]=PSBg1a!t*zB'R\\ sss$&&"77Wiٓ\b8;;h]6$$/_T7p ػw/MLLPWWDOOOv>066F=ήYRR~ o& 秔u!BpM?gv-|7*gk닩a/^WrҾHb/-- *(1ydvB!۪ 0vXaHNNٳg&B~]ډ+t$%%dnccΠCB";wÇQW[v`D"D"̝;7SRR0j(wHHHPz~mhhf@MM x$Jŋ*i~‚#B!;FZZƏ7? y&6n 4K.2^JJ "## "nB!lmm4}\߿Æ kӱҙt9&fJѩ/|a~!,, CRR߿p˗/k\#T]`nnO>@N???vZZ}0hlld;)r?ʹQZZs3p{={nnn Ԥ'Ԏsi_eeײD8~v܉wyGeΝÐ!CT3>}|655Pi _xR/wyG)… akk~C'Bӫޟ|L6  Daa!ك,H$]k׮#"##̙3ѣGCg\?#]tL\@PP>KKK|;v@*⯿¦MtѣG㭷RYq)Gss3\]]PUU})cHNNFcc#GT?7cgϞ:0b6yD"Qi+={6K߿:cݑׯ:b\Ç+V^/"ѻwoxzzΝ;l+b1ƍ'''̚5KaRLFFD"BBB>566 /f#&&VVVD||<www,]=B%m_QQD"^}Uۍ[n{fMX2&L7!*SZZ0cV֯_Pvf=֭[???? ?~Ngff@ @PP9&񁙙\]]ӧԾ®]x lV\f1֩o|>111ѣ2d Ç#?? 666pwwlj'ίɌ;ӧOСC! 
_>???xyypttIJe2M?fff?s~\>ѿ#f̘>>n@{w IDAT{-666׿0Zƍ2F;;;xzzbjrB 3frm0򬩨ɓ'}ua۶mMy566";;@rr2 kkkDGGx1LtTUUoX,Vfg͛8vлwo\t ˖-àAw^$hJe/_:Y-9s&$ """4S[['6l`b444 7n@ee%VXS|@kn7 E[4'KDe888@"@"`޼yzBDN<%<<<.bLaW׾#G2Vogg͙h^wX,f0z9_FF*_~efǎ*8::d&$$yœ>}w03gd-9r$4773---L`` |rs4440D"QL0Am9ˡŊٳg3>d~WW^LBB[>uT&88illd 7ԹBaLbbbW%֬Y`RRR:dgM[n !C0b޽khwތܿ- e0dagӇ122b 8ٲeb333Ғ7n0 ߟ9y1oR[GDEE1?̜9S*h\~?~<ӫW/fܸq/wK[|EcGCUVXHRC&''P1Ǐg.\zRRRoooիǏG|篾qvvV/..y7[3bK vQs|wq=Dw ~PNQJyk'zf&.!m=m޾} HЧOdggC"࣏>qfy{{ٳJǜ={ޜVh)AiiiJ˹qi驔K߇X,FzzzgjiiA^^ϟy桡A|ؽ{7FnkO"f.%B~#Bun /26o .'O Lݥ})11QC .-- ְƁ~.))K0 )PSSZ6I:s^oeۗ&5Byšo߾077Gbbж0ipRw„ Xn=z_~Z>m >vggg>}hjjݻ ####>>W_}K?~~0uTl߾puuڵkaoop3g0 QxXYb;wFNN>!s5}t^PXJ>l0ƍqN|p]qOS"jBNBH9w\WNVmxlvvv]/&&!!!W^܀np~~~X,z~+++69lKeeR2;mhjjgSSLLL}Xuu5uu5,C3 !O46CSSdI5lllpm/?%#NzgHIITp|ѣzj444޽{8v`mmھΎTG,ow݉@ @TTЀ}!++ ǏqA|"""`?32337otrrBxx80tPݻ;$< 6 Th8==ѣB"ddIŷmۆ#G"-- B"vN9>ur@@>|p___|"j+++#%ɄKy2:u Tf% BPi܀'Nć~m۶ؾ};-Zp ggg455JsZޕ+W0rHh)c˖-(**Ç1k,C! ?RTsmgɾ7Q#Fݻw_"<<}?.-~kHuh,,,0g̙3555ƞ={p)"77 àO>H$@DD\\\:-Bv>hDòDUUUx饗!7rrr舼<_"88pqqAff&/^ XxNK-(W… 9K||>/_6J!K:qZBH7p…A,k h{opw܉y&&&Ƃ vĉj׈5vAlii *k{!ߎ; lڴ aaaJhS7;N__58i``qaݺu8z(=z?o6H|:HLLDbb"nݺldeeKܽ{Gxx8%q.YI>se?c׮]yĉ{.;vhW|URRkk W_+>B ]҉ /tE./h} m=7 4CfصkN܆ajoH$BAA999<`?k{!J߄ JK048iv>:rpEEEprrBKK .]?ihׯ-ZEA"5j<==a`@id!t]҉)8!kaӦM z 'XT?cƌQ(۰aT&#?͎:UUU#'~mv歳Ҳ >.4D"?S[6tPB%6#ܓ```'O*(H&!tӐB7ԉK!O9}< .*+>NBnIB!}N\B!B; Njٳ#ikk.CB!B!nyyy틀#uL\YzBȳB!t5kD"}񰰰@jjjcٝZ/WB=BuuRT{GJJ ^iǏYYY۷󑟟^{ 'OFxx8BCCajjsl>>ppp@bb"kh#_J066ƙ3g:e]'''Nԅ HӦMCyy9{.PpDDD]kl}Abb"qM 33?3vލݻwW^xK/*,,iB:%6# TMM RSSnd7ww^lݺ-onnFQQLMMq}\z[laoЁ֎\|uuu:uΜ9j),,đ#GPRR[n!##-rJ=zs\hhh@YYnܸJXB۷oի}Ծٳgwި w^Bcc#}]+'O6СCj˯]JaРAGZZ9PB:ߌ3sNeOСCajj X{)K*l+**H$«?""H6xann޽{w|͛7z-,Y0 h|[bHUrB 3fz ǃt<; nܸ#G 55fffHNN޽{mۆ-L:Ԅ|ٳ'B!Ν\N~Ûo CCCӳMm366F=b@ii)~W022>ٳg{~F.P\|S|ʹ|111A]]$ x<<==UNhhhP#q> WWW\v Ç~ױ~zNdooE̙3(--EBBݻ|23{uW%kB/C*@bkU7=z 6 xhhh 22}L,cݺuXj S{υe7033$''#11Qi?`= ]xQ)))}tE| 8Pm]...xwԖ744p?~.]1\]]Ėr!ˉ'PPPbʕJoUR7Jm%9@A+++uvl+??;hP!]#..UV)u"e mhi<"p̙vCUS~pe5RePVV 
<UUUTVVښS}0hlldQ9IR)_uu5}4O:!;;xHJJ?k0u677g7+))|}}~c;::Ưoظq#6nHDFF*,Juu5݋Lx1ɓM7!E)BH7/ŋǪUϲNE Tqq1Ν;Ç#''nPƍH$ao4G5J$u7$/#?zϕo% $ nܸ7n(-#HdT=Ic]MMM011덪,~k..׏9`mm kkk8pIII璒v?ܾ}bLIIQ޽{#!!>|CPH;w<-onKF~KܹswzֿL<}Dii²GsŮ]pq?yyyZ;&N?7***}vr#F`H$8u#G?ǏqY>}ZaMhnn+{o>vDvyyy~:MxPkx뭷 n޼Z펎?gϞoPD>F#{=!==믿ٳ'^yݻ/1m4%O%nN @(B(Ғ,5J_7Jhhh_|tߞ@Ms.~7m3:ʕ+J|~~~ذaq%>|-G?@q@F~WtiQu4֞5/ҙ233҂C  ((H!YԩSyf,\... ?k;wDMM 1j(b®]x ƍH$BVVn H)SԾ'O˗/ǔ)S_kk&Xn0~x$$$`cк< @lZsF ӕ)--m$mkqoر>}: P:ݑׯ:k׮aʕ>|8  ">>lRݻwC*bϞ= |' -@!OX 4Hi{aa!͛|DGG7P7l PUU^z ˖-S(lnnիpB7rrryDff&bbb`iic*7L,^! xb־;wb޼y 0gرJ@@,www8qQQQ*yk ~zz:aii *BK ӎ; lڴ aaaJ=*_]%[?< 6 4{K'5SKگז-[PTTÇk~_(--ŵktny6Jx*cii;vh<~̙9s2P;99O?isP7h mHfصkN܆ajo H$BAA999Y!:BHS7&L7*++̮.O*>˗4? ~ҵ~=QPPkvuXbҤI4yɦ IDATi6mڄcǎaaa*m&Y@B=o%c̘1 e6l@\\vgs:^?gW-t=^HڲC5yՓ !!!hnn@ ƍ1bĈt2ccc<BHYz200ɓ'U.%̄B:aH3f`ƌ]!t ԉK!D/vin<uun/;;N?!OgipZ۷!*,,t!D;%B! .TF_Ob@KK >.C CB!BH;/i+ta$to@őN\B!B!&!h\\\PXXةK0!߬Y;tjǰa45x{رcJ.\իWwh݄B!BHGN\B!f̙>}zhnnF@@RҥKBcƌ_ݮsp'ByP'.!><йB!B!&%@DD؈w}m:۱zj(,,D߾}fffݻwqyݻurrD"-[DDy9s&$ """T;{l555p݋4} q֭[J566ɓW۾:tHmkڔ)7.. B}}=prXƍĊ+tBtnN>CbSG;m/{'ܹ\]]!1n8899a֬Y*hB!O JlF!Oo˃!٦s`aanB!!!|r555!??/^DϞ=! 1w\"))I!6ccc x"|>.k744)ǏҥK066+7nȑ#(.. 99tBĉ((( rJK׮]6W\ڵkaddT ccc9s?{lؠo1`(((M5k`ժUZDžl.??;1k,\,eee000@DDVXA 0 :BXԉK!\ZZ}]yyylr3gΠ<1mK$~!]eee022B}}=T?ւa씖ի!cnn. ܹs>D=Z}ݺu 0yd\~!t8wWbb"VZؖ-.LLLPWWDC4u/22FF;b֭Sĥ;ozG"BQ'.!ts111 $$$ < àĬW8Bgcc#LLLxJl!))ITYCUUlll'Z󃯯=uO~JӧM˗/GBB 1o<444#::ӧO|;wb޼y 兪*KXlN$==Ѱŋc>B!/^s=B!2ԉK!!+,'6l@\\RS7wyNk4xM{ >s!BԉK!waM_ajX,ASAtH DbGJ2ԋStj]:Sꈴ.-N+Q,"Al?B~=ĪU c\Q$.Q+2b1i K@2bx(Q$. gϞ\-[9jn݊(K"""j>&N$.Q+I\"""""""""VDDDDDԠ3g"!!Fˤ˫8qAqqqHLL4BFDDD"fDDDDDԠ(XXX;Vzz:T*4b-B>}0g!;"""zp%.QuEl`` /^xxx-[@&!44{x7//C X,F`` Ə>@իW1h tcƌAUU 772 o6]LLݯ7#8p@LFFq a˖-޽{CP`ݺuBBY{ҥ033Cii)޽+V0b9sF3g`Ĉ\R7nEEEXxPVWx#""օDDDDDTdd$w_߇.L>魷ނ \]]f݃BH$O5 ˖-Cmm-~G1Ċ+`ccXx޽tM2fff011\.GZZZ<;;VVVڵ~ΝW*:~Zee%>>>>;v׷DDDԺL\""""6$)) ~!z.';}4<==2KKK!U>׳>CRܖ6 ._W? 
1P*pA]t&x>|(|.,,l4^sl% JKKeeemvvvxjjjᲲ2i|_QI\""""6$::ٳg#00&M8;;DߩS'աNNNQaP="uuu?r9__}"##o>>|XX- Hbu=yAQQ:uꤑceeee^HO#""օ)!bRRVVVH$w3g_ooo,_=™3gp qrr/2ݿ?n޼ ر#C 6 ˖-رcիW F||<***<>"X]z6l@ii)J% JkǥK5>p@{z X[[# +WJ…  q}GDDD 'qڨӧ L۷cdxׅxjj* H׿z 'GJ/~ŋ9r$lllɓ'cĉB|Ĉ033C޽1n8/jÇX,رcq-Ç={_GjǑ#G0zhׯDqaC" 66cƌQ?"""j]xQ5v!SN5Zo^?ǏvFӧʯ]VlڴvIHHmᅬO HqFȑ#7Zfܹ/1uTR駟j}_T6Z62 ǏǣZOuJ\"""""2ȏ?7npq9G.Νӈ\111JFȌ^4\KDDDDDqBCCR f۷Y5֣Gfiy011AVVVGMYыDDmDuu5scaa!\hDDWDD"""Z&q[l5ťiϭ-"""j%"j#V^E; F.c˖-Nq H$NŔ;"""""犓DDm… dcAbn݊(cADDDDDܘ;"""""2iӦ>xm˫23gDBBBj}0g۷XDDDDJ\""""6*"")))/_!C@,# wU:T9s >>^L`` /^xxxkעgϞر#Z@nn.d2~m\v 2 2 ׯ޲e d2BCCw^ 1vXX[[ ,Ç;w.^{5tcƌAUUF=k֬ALLLc???$''77R[nb~~~9sxyy9D3gv$.Q;3yd >qС&ՓD\v /^DΝ666صk~W;ww&i{ Bu  fꍊBɓlނ=޽~ wFRRZ/"33(,,Ė-[B``!##Ckʕ+(**j|nݺIIITr(Jܸq(**ŋ n@DDDDԎajj5H`gg'>O n%"5 "HW6vZzJpvvH$z!tK/A,cn3ƥIDURRt̛7OwB$YxͭImdgdd`ȑJdHIIAMMMxZYYwqq8cǎgSSST*zlmmm|W_}GƟgti۸|2z#7رcqQC988@$NNN"8::T-*++-SYY KKf\>!4WWWD"@,7[DDDbJ\"vv˜1cpUVVV011b-^c뺴%//Ot'00ǏW[ɩϥ;i}˗aaa2O?;:/..ƍ7DX 8PЫW/xzz ,5}|| 1l0ӦMkmt4iݺu={6Ξ=`ݺD8X T T +++H$ộ<<<зo_XBcǎѣG8qp8p@R#L6t9~?+ammW_}˗/۷o#99aaaӯ_?TVVXkK.[k|x jXr%T*.\oV#88񨨨\~]\"""j_8KDԆw߅;bҤIjgCۥ6.m:u* R>Ƞ }4v!k۷ 6oތp|t2 qqqofp* FUU._u֩~-?~999w,Y3z~~]4~x!** }ŪU&yuʼnHӧO۷//fΜ?jEܹs۷/fϞW^yE6ܰl2W^!C0qDR_|6ld2oߎC&_))){.1`aܹz|8FѤ]#RRR;Çضm6mڤ׻8v1}t#,, ]XW/2,,,///A$AT0e -[ tt]?~!N8 }ȑ#n:q"_HHH{3gh}{:/"{3OKHH@BBBhsԩS+Jo>ӧO}_vmΝ/SNՈ)J߿~vr,Ǐ v3H$lܸIuQĕDDmН;wp5x{{[]qChԦK[JKK!^tGW2e .\|>|xW~׿pM\t jtojj jGFԟ<0(--+7]??C.Mի .] ''G]q"g!Q\\siV\HR#dFDDDDDmЄ  IDATpmL8k֬T*E\\T]qC4t( ( ЩS'թM*>]Kw9,XP(n~9aaaؼy36oތhcMn[o68=YHsNꟾ?'&O^TSS{"<<ݻwٳgj*\|...:DDYYY޽F,66֠cfZ'q(KKKL:ӧ!Ho //OxMM QWWZTWW,t]ooo,_=™3g vs鎋 5.]_ﭷW_}4L>]z>|\t * HII5 6J_|ƥ@ϯK`7nn޼d2tʼnlmm5zjG><]d裏xb3h0|pc„ _wtoߎ?ظq#lmm.M7o[k^DiΝ!zvi,N#&&&01Ѿ xVVFW\ץ-z? 
|?~\<7os}.տT*śo'D"5XX0yfʍ9yh럮_c&ZV%* NZ'qUaoo_~yyy8~8.]j /6x9Qsҥ &Md4 2)9%"Vƍ JX,ƚ5kзo_c7___\~k׮!6jĈ<3$.J0vMQQvvS """"""=qHQUUl5g]D/; 2fDDDDD$.^-2vD/]va׮]NqH$\Gs%j,,,бcGcAMdeee^%"2 bɒ%\ҥKue… '|9s8y$aee.]o߾>}:ƎԩS\ǔ)S}vիW٢2;7}D^زe3CDDDDDqɓ'ك\H$gĽv.lڴ Tok<ƓZ7NQ-[&L8&&&W_ţGp:tbڼyƳ[;D"3ߚ:Mpuu}Q31vDDb2e D"D";b,\C a0}tlڴ +VPoǎ>|8ann:@&!&&on0-VVV2e ͛8v\9ɓ';vĉqΝgi?x tprrBxx8~)))ѣ,--oǏoO3""""""ҍ+q"##QRRpqqHR䠬  àA0h XZZjqY|x饗OիWGӱm6W7khh(jkk;PWW&񴯿oܹs{Ezz:كƍ1c ?0tI^Cdz!3"j>;w4v D&Md NQYXXѣӧn޼ v#G};%ñtR u"&&FѣGXp!/_7obϞ=Q x`ggR]v':t蠑/Q]] oooo04'bΜ9)v؁oI1c@N$a͘0aRRR0w\ φr35fDԼ"##:c@DDfplɒ%5j:GgbDYY[nũS+"1b222aByyygJ۷:t!!!شiJ%X X>DII d2Ay<~BQQ p8t 333a+\/$%%!77WnCi999ƌZW>RkDDD͏DDd>7oF]]._`Ũ7p1FUU933wھuަM<^$L&>?R<9RT-&|.-- N; /CګגcFD-cǎN]2JDDb3""j2sssgq׫W/OԩS;wN͛7QWWm'R8u~!!!Axo>ܻwOBihOsrr>?}דߝ訵l φޓZr̈H\KDD*&&ǤI0h سgPgϞ6:`t#t}l۷oGll9GC WWWȑ#™8z(>>>gggرcرcF7ҥKu2 ۷AYnjI\""jvϟ_~CB"6l/;w?{vGFFjLyBttF^Ο=y$_wΆMHHN!!! AaaΟ?{{{D"t?ɓ 8;;#** W^m ""DDDDDJ2`Q]]݉'ɓBee%RSSv4.?3P^^WWWdee}aȐ!رcܹN:ݻغu+ /6DDD8""""jv܉; j{=9r555w,--KݽwǍT\.ǎ;[⭷vnn.fΜ2t'fBMM C&?c8x` `Νϵ=NQ[I\"6P\\7n9/}v"##Fٳg|rdff P]][ԩSͅƻӟ`jj xwcÇĝ:u*~7t GA׮]?=z3g 1HBmm-_,Qi)))HII1n2Dٳg=^^^4ihGc͛QWW˗/Xx1***p ;v 9::6ݻzo#PT\TT"kkkQQQ{.]]""I\"6\,H Q*//v CXXhWz׮]իz˗/cڵ7ѳgOP۵ncpQcƌHIII('˱e;ԯt&""gǿÇ7| bɒ%NlݺQQQD"D"'Nh'2#"""""珓DDB}6ѥCR!((HX,Fll,>#^zyyy7nqEb ܻwEr'"""""q@JJJ@,^!!!;rss <-xƍ-kעgϞر#Z@ZZzOOOxyyaF\\𾏏r9 ޽{cڴiһ刎$ fΜj]nݺb!""sk| fh<>|8&O 777sΈGpp...Crry۶mPWW:xzz;"""&$.k׮ŋܹZʕ+(**xoҥ033Cii)޽+VT`׮]_q9޽ׯ* FUU._uÇB[n{d%ˡT*q /;]%&&ݺuCyy95>@VV i8w ddd 33yyy#66|^c*++qa\pرc}dgg ]v58ǧY[[7>xp\KDԆ%%%HOOǼyjee2YXp! 
Pc٨9r$R)d2RRR^̄ZSSS#zVXXL&L&Ì3(++C]]ڪW8}۱m+++ C.޽{Ν;Hyum]]c ЩSsS뇫+D"rrr  Y988@$NNN[?),t*++9_8s |}}5WVV;""""j%"jb0yieeDTڤ zv튎;:tJD"w^=ڠ7l؀7|/wqq9;ṻ;O?~:\"X1ʕ+n:oŴiӚu|JkL?z555PTjM֢7oqi6.]ooo" m$.Q;1}tt֭_~%%%/^YfAT o&&Npssòe777H$+Ç&L_ >SAR!11qqqT̟?T*U[X_ hDc̘1B ѣG1uT_+++Gݻ8t|||48rD 2;w;""6DDDHHءCtb$$$h}xXXv9r$>F7o͛\"`ƍMIM x|Dž񨮮6}]Ν/Rc7,,Lh'yzzj=ITbO9O"jY;v0v DM>fDDDm\.Gqq1Ν;u\111M> W&&&jOppphzH?%"V-;;)4իWXvvv-RKKDDDDDDq DDDDD$iiij̙3?<\!11%S#""WQ0tT*EЧO̙3-( @JJJ@,^!!!;rss <-R{ L~׮]L&L&2[lL&Chh(ݫ~ZZzOOOxyyaF\\PpppD"̙3Q]]чǭ[X ???DDD`Μ9B|ڵٳ':vggg$$$v֬Yg8???$''.-%"gͣǨQ χH$H$p zD\v /^DΝW\AQQڳ޽{CP`ݺuBBY2QQQP(bbbЭ[7#)) jqڵ +Ν;ݻwkLRQQ,=g!##QhiuѶV,#66o|իFjj*qix{{ qn%DFF_~ QTH/2,,,///A$AT~Cff&`ccGll,>gίƅ `nn;V\.>KR"''Ǡvaee]c>hDDD6p%.Q;VӖ`ܸq=>m`>XjZqڟ$8::7o=??_L&3Jfff&ݶ)|}}q9܎KԾDGG#;; ,|zV$5HgV]]]\]]!#҆D"JJJgO~wñ`@P ::ڢRk\񫬬%ws%""iJRJgvڂdffbŊX,F||w%#r}$.Q.\ @hh(1{fr"mWYY`]܎K~M>M~jT4dL8=zT*_|!Ć L۷cdxn755>'b1Ǝ[n_c6l؀|H$b̘1B ˖-C@@^y̞=m899a8zhƯޑ#G0zh&""a ̞=4iYlkN+,gkk> ]HTOnurrxm`mw܉D"̛7UUUk!Μ9d _!!!Zc~c`ӦM N:06o,<LsD7>S~d8~=>>^$$$4'͝;_~%Nkl}ۿ?>g΃^l\KDԆ͛7;vti m=zjT*@uujZbʕ8x 5v\""(..nґ6+WDLL Ri dFDDD/%"j'Onݺ5}}ZYY+fϞ vڢ"vBϞ=~MMM鉪*HR̟?__xf͂R|ML8Ѡa曐H$߿nnnH$ϼZ_ZЧOswwСChqYYYMqF=O%"j'vZ}m´U1܎KDի[n;;&̙ы)ϲVn%"""""zq%.Q+,muv\"""""'q DDDDDD/6@DDDD 4H<?Ot5v:? 
ooo!h"śsQ{ĕDDDDDmXDDRRR8q=z􀵵5r9~72ƢEԞu}Fل;kעgϞر#Z?TUU ۷R=#::H$9s&'NL&/~2 /,,رcamm ''',X>T700/FHH<<<\W^􄗗,Xkkk5>u!燈̙3G76~Xf bbb4\ -&L?ʘsΈGpp0VZwqq ΋ I\"v:TAk J"iiiz:fΜfʈѣGqqܾ} }F+WHو#pgΜ#666صk~W;wwꊌ TL:&&"ˡT*q /ݻ :uΝ;P(&z-ݻ駟{n$%%i䜜D\v /^DΝ* FUU._uiL74>AnP^^$dff?}UTT ++ D^^ /Ν;<((HHDDD-DDԮݾ}YYYN͋ĉ=%&&;w-bccg2JRqON5 ˖-Cmm-~GaW._~,44999BQQQHMMTWW#-- QQQdffbŊX,F||>>>;vZ]㧏lXYYk׮Za֬Yq㴖;߿]"""j^%"j'Zb;mm۶o߾ڵƶʫWbРAСƌ}VvѡCbܹxХKuum}Vol7n-))q`kk {{{\d2۸vd2d2J0mۑ`˖-d ޽{ < 2b?~E/^L磰P}]q]qCc꟮K.JKKq]XBݻ7 ֭[( ( ̚5KmۑP(0y&ԩS1tP(J|G8p}$jl??9iYRR"|.--^++DFFѣG8| ޽{ǂ PPPBh uxxx{Ejj \]]!#SPPsppH$Bqq𬨨e =N>4U}~O?}ڢRk<..KKF:s |}}5WVVFDDD$.5y;-Fttp뵓FV歷ނ \]]>^K.܈0%"jZz;-xkn-cgg'|677J]cǎgSSS?v\}55]ypB 0ٳ%Vvd}hRD޽GEyyp@t*H EQ$ !VP+f6hoIJE"j-a1zBH6bFk Ƒ6a`@眞3y5X>v1umѢEg燑#GB.cڵmvX,fϞ*;rHlٲ 3f@BBf̘a0FLL Ο?I&F[[ H ׽~444&MBDDVXarS?]ٻw/JKK!J!f늓qwᅬ//_nر;v,N8|ḋ 񖈈z&DDZp!  ܹs@>N <S'EEE H6SNEdd$!ɐn>>>8t222 1sLgggRn 0v^;117nqȐHW%"?}⣏>BVVشiRRRjժ^ PoG͙3R_T*) DvHDd=&L'.w==K. g.}ѓ~z1wwnᆭ+//Gxx8Z-$ v NbȈΊdDEE!**JG}G qjkkω*=ODOiӦaXxq_!LvRSS)C.#CDDDO'qoc} } `o?8<DDY*a&DWss3@,!z\كK.!..=???jXn,Y"Ŀ(**/O>VVVݻwcǎ}61|lڴIoڴixq5|7xwonnoxҲ,YHHH]wu,Z_5LC. ;˿'Oڵk`kT?WGDDDOCD4@L>_UVaƍ}cG!66 KDԹ9s栶cxzz+EEEχNÔ)SwW*077Gyy9ua͏r:/&& ٳ ‹/(,kkkEpp0|||7n ""Y_'@DDDDDˈ#vZ|7HOO / ** MMM&b… 899a֬Yzm/^ ggg1 %%%BLTiQ\\?::氰^╕Ejj*lll HRpȑݨGd,R|Wx뭷0h ( LҤI~]vӇE\""""gԴi  0ǤI0j(~0ij36vvvgssshZ{NN! ˱~ܿ_!Cσ Y \\%Kb,D" >\9Rlj ,@UU|}}1nܸǹ"""zx3^>rrrpA|կ~;v䱝Q^^ޣꫯ H$BRR޽ۣ~nĈD(..D"1N$t:I:t(t:ZZZ`kk 矐?}Jcϝ%"";v/ԩSW\ 6<ԈJ>n݊ٳg֭[V,YFݻjH$Bmm-=u`T*ʐ8{lm WWW?YYYZ)ҥK@vv6VXap==X%"gǡj*%%&HD4%%%;:ٙYfaXr%$ &LJ9[lB33f(c777H$B mۆH${&\C!##r3g4CpL2/ir/_[ol 2SNŪU0w^LDDDOS "zFDEEaΜ9Xxpի fr5ktRƭÒ%K  ¸q駟B"4ݻcܾ}֘?>6mDb/qUܿ_ejvB\\\kpvv/233aooo+ظqI]EE,XK.a„ 1b{n}7>XZZvk"daa+"66Ϗrh.99dSwJطo%%%!))v?_t*?\phv{뭷[ou_܉KD Zƞ={ Z ZmrPpGCCRSS1h Ƿ/^đ#Ga0Off&6l؀7oÆ 39 00:c4~ Ԙ|_a̘1hnnFZZJ娬DMM ֭[yhN\""z$hjjZ;|}}/YdGqqAhxzz)._ +++=h &w1h4ݚZ[[qi\rBhhDnn.JJJ`ccPTHLL͛==ׯ7sww ĈױKD4aڵGSϟ?77GcժUxwzDGG7Qܜlڴ hnnFttAzQx1vvvs߿k766BaذaµaÆABW&"ǧ".".pBs.Dt=VxKii) v! 
*>0D"$%%uHd4򷵵Ekkkhmmep"uuuprr a{{{# P\\3q0DLL+++HRO_䄪*466v{<ܺu 0dݻjH$Bmm-=5:Ƶk0qDɓ'V^P(ؾ};Z-\'NqCR /)++387m,=#-Z)St:u*"##Lt-**?lll兘DFF4ȑ#e( ̘1 1cFUNNNƙ3gga֬YFeeevn{Eii)R)F[[ HnCDDDDDDS "zF=799Y8g'???[b8y4 m6{b3YwJ{ ߥR)ףcǎaժU~6񰳳CjǰDDLP*ŋ b۷oG\\d2YdFDl5xq'.۷^^ߙ%&&""كK.!..UGss3r9`͚5Xt) ++ k֬Acc#0}tcǐV X0޽*Lorrr?^|KKGDDDD DDdTgEܧ]םX;w|bsԜ9sP[[X <qqqX`MZq E|||chZ\zӧOݻwqubǎ4hJ%Q^^333`ݺuؼy#?q DDDDDgĈXv-x'/~ XXXƍȑ#!hPYY\* Gy"=i,=MHH?y1i$5 Hn bbڃP]] \.\.ǒ%KXnDDDDOS """"zF?GNNsvIIIG:uTDFF2 BrBFFr9fΜihkk$ BCCQQQ\3qQQQؿ޵WB.cٲeyp`FFaٰ=|}}qǯFhh(hkk^^^P*񁇇ϟwk.u3|}}itHII1y***D___DEEaB .R)q޽nCD$YXX8fff8x qm$$$/jhiiZF^^ ""BUEz > Jo>|s_e˖JDDDD DD0jٳPPXtclڴ bhhh@jj2S_x1Ѐ/GAZZ(,,Dqq1qFxKK h4Ϡ ߸q555&]\\ƌf<ʫT*hP^^J`ݺuݞ],#DSSj5D"|}}aeeeRV㭷!ɰd|'z^ubATرcz˗/ G6:ׄ pEqFcSٔO>7|BhhDnn.RSSaccDJ#Gtk""""""zKD48~8便2ǪU0i$ I}GcG444赳>KRckk\vvvsɹ=aÆ ~,%d |N "<<@̝; ":GsBP J P[[ '''@MM  : Nlmme3:!:ìYPWWgݻJHLLDHH^<;;mmmpssD"Ahh(***==x3",,h ѸHNNMgd2裾NO 0vXXbb"6nY(߿عskkk(J|mbpgφ-;wݻwÐ!C0|p$''0}q᷿-rJJ%|||ݻ&;++ r8zA|ڴiXb^y5 !!!z㗔`ԩH$ Ĝ9s;t:L,ys… P(2iĉ' F.ju)(JfffuLJ\\ƌ||;zwp'.3*..Æ -'h4䵴DSSj5D"|}}aee%ĕJ%<==<<6<<B,,,qaȑDzg033Rıczq@tt4aaaoooxXz5b1O??~әJ"55666H$PT8rA[c~immӧo^^^ DDD;X%""""`GGG?~IIIRݰaÄNNNoMcժU4iaooܿ_2 r׋µ#^,JQ__ߍе!CjDz租C.wxәj@ppp’%K?(ǟkllN3=?q DDDDD… HHH@`` Ν >|ЮN\__'''簵{(--B@@@J%ꫯ H$BRRIgt:sMMCDBϽaСtzL?"H~1bD"!HSW PWW㞟 q'.?~_;qu&>>ɏ5:uʕ+aÆ:7QoH$ddT*z}?OxW ƚ>>8t222 1sLFAAR)}]̜9vzNHC&!==]/6A" 44c޽{QZZ TDMz)@?ƔJ.++}7nܨ[jUxn߾}ׯ] 2DK/fΜ[f:u^o7+[vnΜ91cd2믿t:ݟ'__tÆ 7iZN}׺ѣGźѣGF۳g0_ѣuD *]HHN"KNN=x@/ӽ:ݬYt8?t:ٳu[n52eJ~-oݺ1cnȐ!:kKHHMMM uvvv8ݻw=@S*} QLm'Negguٳgoi7xCϧ ߏDDW_<==MN\"g\LL O]='336l͛7QTT$1;\xGAFFj{쁫+j5j5.]* Z]x1Ѐ/GAZZ^"梴ҋF7nh83HKK3xUTBѠu=uϥKP^^())AaaaOk= |3_}0h ( hhxzzJR,GK"??/_!ɰd|'HJJ퍒q._ +++=\&L`FӣO>+W^^^ ╕EII lll* ؼys#""ӕ#<<Zvׯ7]>"""".&oڊ<`ϟGCCD"ޛG٣rysrri&C,=wvvFCC^Aj Ʊt.;;;ܹs߇E/dw-aÄBW&"._ܧGEE!**XgE܁jΝ}3E\"l…$$$ 00s >":---BYo XWlii\쥩 *>0D"$%%ݻ}u:] PSSGGnckkN۴Ҳ ?󯫫򯫫=`ĈD(..D"y3q0DLL+++HRX,+Ə/VqY1q9~O?޽ V ///zQvNNNBcccgmm_~[n?۷o#335ڵk8qɓowk^kkk( l߾ZW\' R[~n. 
l,=#-Z)S\?t8)S >>/)))x"Ə̘19G-[@P`ƌFO:pwwL&Czz\.ǡC\3g 4i"""b sqm>3̚5h uuuݚ݋RHR$&&"$$D/6A" 44ݞ^Ȍ}6 y-[?DCC^zDDDxcǎ5%&&bƍ}ѣݯ .R)\.GUU-[\\zr˖-͛7!!ˑauuu={6lmmaoo___ܹs0j(|}_ڤ{yyAT?9)))&]EE H단(,_".ugggXXX<vi"'ITBѠuѣGV1tP>|j;vZ={jjK.5yM6A, HMMŠA~~~tWqahjjnܸ...cƌAss3 ^ZiJ~DDDDDLss3: džJ梤666ٝ͛7?,--Z www 1???>}K/!44o&.]??.7u}:eR*زe ~ٓ3[[[qi\rBhhqtuDDԇҰvZ yyyK&ϟ?77NWWWkmmm)ێZ .Q__h6lFNBPP1j(aM]Y*WNaÆ ׆ Vۭ".aii Ti=1C  "<<@̝;0|pH$t:31"ŐH$=C$A-{=@ii) T*1~x|wh}:\\\sa͚5y;qii)R)ODDDw `aaaƓdp]*b߾}]_[[k4fffvds|;zja筧 ]o?6=yr U*ݻ׭DDDDDDDDDDw~z1www(' SE\"""""z:+/_~ܹGDDDOq7|S """ǀE\""""bǎ}=, spNhah_)A$u DDDY_'@DDDDDDDDDDƱKDDDDDDDDDԏKDDDDDDDDDԏKDDDDDF#99Mwرc7n\3{zԩSW\ 6AFDDD͈ȨXXXXuOǏC"(( ^x˗/}dGDDDO %"""" ĺuWWWի,rѣ}KJJ0uTH$bΜ9xw|7x1x`ݻWB.cٲey&r9r9222L^W]]fϞ [[[w5 Ad׿477c…pppT*E||<ݻ'*R xxx`~j׮]p Effuz{{#%%ŴD"/|rGDDD DDDDDϰLlذ7oDQQ \Z͛7ӦMF~;|mrrrpqYYYjٳPPXtyoڴ bhhh@jj* Å  \p~~~R FrTVV֭vJ Q\\&lܸQ/҂]GPPrrroܸoa̘1hnnFZZrss⦬qa>eee?իWC,cBq/^ ggg1 %%%%V! +++Eܗ^z [lp%HMM $ T*9]۽033Rıc/_Fmt&Lŋ5Mg \kk+N>7|Bhh7u}DDDL\""""&-- k׮𰠗'pssn___H$Zsj*G}}=~~~ذa4 {:u AAApttĨQopp0^[[𹺺xc+JQ__ollm밳Ý;wp^;{:ק>"""?X%""""`.\p@BB1w\ڊDn?tPt: 3j]]]5H$N`kkL IDAT{PZZ B(J?}?#::'Oӧ#FH$Bqq1$] : N KK^}yD"~}ڋΦ@DDDD4H$d2d2XYYA* Gꊉ'b֭q;w899 [n >>زe BCC;w E\CR#"ύ*n޽FAzz:"""➞hmmEmmu\v 'N4g3bϟ?~ccLrrpoGV^իWx#6HR۷h?񏝶Yb8y4 m65M#QXX(|WTwݔQDDDDDm.]Byy9 RZ\x }vA&AfDDD4N\"""""rCB"`׮]?~|~z1www(^I033CAAAGM$&&""" DDDDDmQQQz,cwV/._lr[;;;;;V:]ڹszS """""""""X%"""""""""X%"""""""""X%"""""̟?رc7n\m㑜XS \6lxs/6#""""~+66mǏC"(( ^x˗/cˁ+܉KDDDD4EEEaׯ_SB"@PA/>m4~˗/Jku!,, pqqիWws=!C`HNNƃW^\.DzepMrrdddcgeeA.#<<G5ȿ~ߢM/+VW^Qwk.ux퍔qc***D___DEEaB .R)q޽nCDDDDDDDDϠL>---Xv-{4Nff&6l؀7oÆ ?w}/ȑ#Bj{쁫+j5j5.]* Z]x1Ѐ/GAZZ^"梴ҋF7nԁ83HKKCnn^\TBѠu= $%%fnnْq._ +++=\&L`FӣO>+W^^^ ╕EII lll* ؼys#""DDDDDLZZ֮] a0//Ox9HÇ }F٣rysrri&C,=wvv68bȐ!AAckk\vvvs߿kg߾k &W]] ?=*=, 0 .Dxx8 !!;w.`DthiizcbgKK ,-- Dךꫯ H$BRRytD"jkkcƱEkkkmZ[[aii٫/WkϿNȿNxyڈ# P\\ DkӋg 02 2 VVVJwX WWW?^8#VVٳzcܹso~߽{Z^^^B999 Z5^~elݺ?n߾LDDDtkOOOhk׮aĉF'OoݭyP(}vhZ\r'N...JBKK \"""zvKDDDD4-ZSL1~!8pSLA||<~_SRRpE? 
1cs9[lB3:u*"##Lt!\C!##r3gGCC1i$DDD`Ŋ&<,"{{{̙3F|g5kxYY5/ݻJHLLDHH^<;;mmmpssD"Ahh(***= ^=q8wD\"۶MۭzZk'i9<6&qD\`".@0 LvpNW%{D\7<~WDkˆoyzpbqD\`".@0 L&qD\`Zz|z"{g5-aIENDB`neo-0.10.0/doc/source/images/neo_ecosystem.png0000644000076700000240000025161714035030454021664 0ustar andrewstaff00000000000000PNG  IHDRvy˷o pHYsrItEXtSoftwarewww.inkscape.org< IDATxwUꞞ@Π9gWQLk5kvMPsZqkzUtqM`PPrdC}fyշnfNn2HVg;+u"Zl~_:mK%ƘY;~hٲl%""""""vdѣ-f0hdg8ο]XtDDDDDD$ӂʞ:wS \Lj:URQG8Mo`IbEDDDDD$ŔvX%8,۹Iq0~"`GPl&""""""Iff-ŮgXc<\K"_dGz>΢T5d;֮.[c38uwRs%&e~&vQ=73ōDI[%xA/v-)HQaL{{@ Rg86be;'Y8'} `=>۩]SܢN3RVvc|'uo m]l#""""""Ef0gƚC%LI]n4O+tv>"""""";TimmrOcl1,Gme8L5ľGDDDDDDr ;}HrZsx׎v:""""""Tib%/vok6xs[l^vc8Ll#9f8&uSܠNS@g;-?vPc1l"9$ov*""""""Sa %<{EruL `󲝋TiJ,g"9ʩ9vj%z#"""""""TQ*숈(vDDDDDDDDr ;""""""">x2ۉ4HViW`0 $wǖP>] -MgFӌivSfWDDDDDD"x ^>%(x X  6\`:5[IhƎ'V{[c3bI\yŞ=i߼y]T .?:g޳wp'pr{;-'=I""""""$.NJ{v &l9@{ot ;W[<^z5JXq('X`>5i] v ?|A?πhZ_/S. ۥ\#~ |OhU.~^hjO7q kopp70Nxj[ Ž4Hyu{GJOzFVqT>ϽzFV}ҷ[C *#~4wܳ:I ^Vz fIP9ضw#( LG#63geP/^ E2AaHt(Ai9A!U 8` `3䆾v$gGΫ)Q|ѐ؏y;g+OMlİ#RX"*VxM3gYx赕,y %wK:UquoըO:4& Ad+Ojm]NPy{m2p*A:/=`a7 fZݓ0E$T ^WZŠ׀m ~m*HFUq?V7&.^\b͏cc#7+x9/BIqǪwW[F[.-ݒV80,(.[sEŵKmֳIws:5S*'Z5˟vMEDDDDDMQ[;u 팴kv- zca5ߏ1л}<~EmצUaG2F([lqn?͙MؼS5xy?J0ύKl0cuXq˟_;K8?6 FevV\Y^i݆qSEuhc~)}Y9b,Q KHٍ`u/  X >دg[:K0(}6PL/YDtZwܚ>mW [k ;QAY}Eg(;"/*|M[Zawfl}}ЖnI}}Gޢ;ebǠU\j :TkY%vCk&QA)5vvv/eis2S#>ʻRPw*o޼QņE#\hY 㫵fNs+7-KVk:DDDDDD9F,"AVTƅn%8-Z~OW\~E~[iQū¾&zͩ]팝Ž?[3qE.8}e~N yHlSbâ. ާm˜Y}EDDDDDGp5œ9/&\r_Hx`[.?~(0v TBp51E"Qρ;D|?mKyǦUr/o|| tCvrnlعW^sc[DDDDDD``0eu@pR{Yqf,Nr̴mfDpEEL[E>ppo AQDŽu2ek J]m?p0Aqkq˼:nO{ \@#kÉv}5c)uw} [o_&56׹__g[*bxEW߄?ֹi~=5]ŶO#qyi((tSk;],7V o+)v,9)6iYyk35pHy}OڅKm,""""""Z#ѭIנ 'Ns wtn&6o.[~sLfNΛ5dMoГZ NZ7%g; *4OY xYNGrroͷGDDDDDDr ; ~gXӑuMd;*4N̟m  eQۢKNIDDDDDDr ;Ȝu]Q]`a#i1e1֚*;󫲝[a*&[/v^|xRaqnx™KDDDDDDr ;->١CY~E6ş䥦(e`nf0$.w>XfNڻxQܥN q.ǸGYk,6kϯ2g;Gi:tVSr%""""""""""""""""""""""""""""""""Zc'w_cx'0x%w ka0 x=w O7xpX { 㽁߇808( ¶ ߅|Ga|08 _a{{a|$a|k!@oK *%] a"]l߇6aCl&a40%O6't``?L 3M/ag`@? L㳀~a0030>(_a|0/6{a|! 
Ԭqt1=[, ]6`Y_t [0ƣ0 㛀0h#0( a|=P "ii9;i-'l7*Sa.02a>yx?a7񊴸 MeھEi}VMZeiK˹$tH{ˁ[ø0 \@W0^DpAp] OEa?ya<0I}uNO'x?B><+!xBsO#xl)&|s0Byȩa<x&$s0 cOډa!LJ7Dža>6;Css>Ǐ ؘ>fccx8tHH0~A0ncc06Ʊc/a{qؘ>f |ccygL?%_qؘ>f!@;:̗o8= 0>61Y0#1;9O㧀4`0~cc'ca|&?fa0+ 1}̼ό 0 "&a>6|, >+!LX@0( 㫀a<(1}̼F* x8P秵I훗gؘ>f^ xii9۴bWLp8}lL3#Ԏc0On *MaھccYC۴JrnZƎG~߷qؘ>fv! 0Fp\Apccًฅ1sƳG3w1?V̩>7 8-1G%XylL3&ϻx[=>_ 1}@cOf3ש5˺/AW5}T7g'SL/3>S?OWmOfTҶWdTm2l_~j ~2j ~5'e@naҴvҞn?U/xJ^B?Rb{ n?uZVo>a{O{O۾"ʴgh_րZ޾!TmO5ɴ=-~7LcKC4e'}{63l4fVeh_a{Suk3f'XԐ~ҷ7dllX6cl~2- !3}k4vl\׾OoSP;u[DDDDDDDDr'S?-ZIDDDDDDD$G,c]dVDDDDDDDi)#""""""TQܹWŔ:ISׁ`)9:̜GW2-1ծ[UiYC|j3d}wqayF(:~7YNQ*HKRaGDDDZ\n[< %57ܤef+`ϫOX1Mw ,c /46֘j&T4\_X_qqjLdUTؑŽzπӼk=>0F68v-gS|^n:.ƟkxM!H"#X;O#IWVi ])7 yUYXIMTR#ֶ/DC:1Mdcqg9nGe;-iTV|tq Hjrsuy&Tc+FwzMo*tw <_?0oGum휤RaGZRANH[{XζyII-M5?0YU&VNc#Lγn;x3>ӽ(9IŽ*/vk7Ya-%&b:xG6vlj ħZ$YVŘd]cA,i*숈HaXcpԷZD2IYgR_bwl֘&fJSr%`KG픤QaGDDDDZ]|3܈`gc: ݯ 6Ě $<ǛҜ9J+xˍ"6oӑGi5 fǤi[iHr*v?;aߏ~LOZ Ma?;7H+Ž$-,"""fm;ӳ L突V{";i< yӚ?Ci /Jg;i]TVMp[ 1vEkHY~jp/v@-+i0p4 ;"""":8v 5ޢl":"ߚkM}q&b gf" ;""""*/ܸ8˩HH``p IDATmkAq&yˌ!i,=>۩]CiIZcGDDD}S;0ݰ$#NFrY?`|vQR)ew3g"Ž1$eu5,YD鞩E;c5q-Kx4 ;"""",n'Vd;M+|tȍuJo+X LTg} kL bLm)mEرd>DKyƵ0.C wL^&N 3_ǙgƘXsCiIZ:6r(Uݯޱm \Wf ]c{;ŧ7W,b&Y_le=D}=c[)4 ﵹm *9nC-f\sb||1;uO_h;{u[]&zg0]==;6f>NLۮGGӽr3T$ ~njS.z8WUyw6} eUvo1e_ؘn}Kow[dZgtWV}LJ̀ˁ֢ύ[ݩTkK3vDDDDDZU3o5?ʲKnO4{{(.<5zw/zc5]2uiwTǟ[fL]Aן{}毮G~y+=;xuWU:$~|HqG%vksۯa~<)=htc_GI+p[kt3穽ӯCmoy倫*O>IJ8x v < >OB`^:㐸صu:uc6u0}hĥ8<&Rt:/!;Kt?6hUm.z*ۯ F1`6z1xťv]YqUI,vŎ~79H@pV3jTi!/|:92.޹qS6o>a- W^X 'a=YW5#y5> S7'x]<+x? 
<+l 4Em6?F>Oб ʒ8U뿼Щō=>sNyOu`&V'|{7u칥tA\ZqUICol.<^>  8=+],~ԞvU~;Z D6388ROޛC7lhWݖsgztJ:KZOڂ@=th~Q4++;ws 'P>oNkJ*&5Ew J6޾b'$}g{w5gK+/ew`Ͷ~¸榆b# w3>8ڢ7ӷ}5++72۷D%L}~Hv~sAÊ3nFm bsv럥ۍ.uAc^GNω^uуշ[jwKN؏_zISf{qCw7]~\&-n !AAewfWZ N4|',5|*XcYC=+K􅿣ө#0`a8Ay\5$iw\Z%8ATi!d𿼽~xx;8{GG<-{Ks{_ZnwN /azPť6z߫FTumo~e3g#NSlߎ'O{l66iV5Sgu$9AA'g] 6*숈?`lK^ΊrQ_f,]|K4/Bɩ**t5?m Ϫ.;7G?xgdjkNw'f]uK,|O]}O6!_Ԥ h_h̘o;}bD~5qY~ac:ZܤGGM[4+M9wxCW '.W7[ӷ\lvXZB:':5mymeIuV'SQ~ͬ]Z\npyYuN˭TŴ~C&!3v5DiIZ9۳qǐڶʛ Yߥ[{|6#bokgKfⷜ'.71t0?{yNotHlgR|wwz~wlM%vu ^V8f%f.ߘf{QۙUiz3 `v ܱ\_`}}3o᫟~cB7&$}Cw3cR=:o;<6=Ӿ#o$vw%v*6\Bc3sc7GuHP|y>{EElt3-xgМMgW1LKqa|.0"m_Y:Jj/Z*숈nM򓻋dow^v,:ؗu/A*oYry7Lj׮O7kDgIO=0oaE2,.n:-}K|]6u'iNԾvEƛH[nz.GR[>>97N:~߼[c%[8a,Ljwlk):{͜EonDHo±")s,][&ݤg\6L`@{S>Mv"X['%AA&%,W\}s;\AƉ@"LsY+/v=?Nq_g;uQt/OOx3YJ^蹓1ֱM4ѪHznvBY#Zִ3Xxtp"޼LM BcMMZ  uܹEZvDDDD$}gVu)Hn1ŚXm^_[ߚi2Ik7g:^;|PaGDDDDZ *@* 4G0Üu9>kK&I.8,3͖GZv%idi6ITEzf;-~*31Ta*NO3Ǚj3PaGDDDDZxY;ݭoe;`~6k_l6~ Jk` ׮8i vDDDDUu*70?HnpF|cͬCջ1ؔo>p&͞ ɸOv>Һ#""""GqYFu@1c_sl" ;ҒƎ4'-֚|n|dno%%UJ3 ۩3V}5VIGiUbg6R.}.d'jƧϬnI`1U֚Ƌl}fu2wTYF^<6Xv^ecRmnŏ% ָcxt_cLYrTtk;a}y%󳝖^*HK"""b:w[LJ1]oj:v3.$~ti%ee# ;nHŢ7m6? ]ٻ8+3sӑE%nl$bLFh4k&v{P /lao9?fwvمE{>sϝ;ܻ3s ㏀x0x3 xoSk_`l ? ?|7Na wAYDŽga|w"E 6|?4 IDAT#90>o,0% 㧁~;S0>1f0 q0~?G A82$Iۅ0/, 3>a|?mlW, sa|,0%5 k`0Xnoք@0HB}oa|6a|#Pa|=P : Ɨk$O@^_r9;uUFLKۆC0Ga'\a CX?q"?@>piWs1ø(~?ox- Ӆa\3; a``u.x vyn/%؞!؎ %ayK@Ƌ/3xPN/ia<`RSxa!'_qm?!v#$`P? hm3O%sC$lm*0>6mcͼ1g /amm2a]_0>Ѷ1f 1CHAr.o*86Fnև0;hm3/';Hߥ22c?mcͼ-i3p=s%|86Fx?%a{!mcͬ !~.=. hm3! ]ƫ86F!}_q)v 06;a`?`93wPm\@GBͶ1f%8@7 8^Ѷ1f~Mp\x4.g 8~Ѷ1fN%8NBp|<6'We' h<9iN )w"N=sLG'vPސk=ug沞hyCϦY}5)-o)RcIt9Ǥhy}뉖7d= 9h=]OC=fGzچlzZ>mCӯT[Wz66UOyc Mf>[mNֺ1mXH̦64aog#m|q!i}2Fp$2$Lldd. 
"҈i}~Cf.I0껜HZs#oj)hi}%3)y}waH#3Y|{;GD~ %v$t`ɽ1w٦dzHtJi2_qԭIt1[]i1|C9<3bWB]ޟV36H]EbCp&F?vE@ɹwSa 7I&7H=,`HA9ݪ>TcWtװ5 x hP.uiU_.*""%vDDA<գXoku\IJ-siO/_-{[OMgƌX8vulh r';PZ\A,PbGDDe'M `p0bf- ޾жȖ=^838ϵlx;k1OlsM05w8oIu$R;ȷ mvsc-Ƥc]wҒ[3 #4>Ur " `|a?BDJᛗ`_q %u 1,s Klo}+)-_M’ѥy@?y"`(p0sDEomkr"HW=Za'>9~^`mvKc`VmJJKX[ f\ק.W _^jDDQ(#"ه;=s`%?ui2VXj*@rZgZ&- .Dz֌V2m![JېH(#"ʭq0cWnҪvuzkluo}C^b5[Aii߳@/#f4lX<%vD %d;M:4!eeAUnq.jm s]*whiBi594CxƮVwu}D4 l&J숈Rc:XW.m,Ӏ\IۛH(#"""""uѠ\mC QbGDDDDЀ\DQbGDDDDD$ԧۆ!,PbGIvRLDQbGDDDDD$GgH ~q=%vDDDDDZ E$*l,?!\g6*w"M;"""""R%dKij 08x#7\@)V]oɖ6L{߷%vDD$k~3v}EDD!%;?fh0 ֝`2Z8]/NмMXֲֶ.eU?vzm3l. ^ p" ( >( ^dn.W{v&x> XGmD`gh%vDD$33ʣsU*I:_`Ƶ'{{qAK_{G:5XD$4(hhy#H;wL"e$+.aO"eπ;7 I'\ s|ǁ ,p\/L 0)\. xX \X7}&|w zmVHd >t7=G/-~vc-. qG vZ)^GMy{,8 s_dwkB6 kl>9ne>{nˋ 3.\n aVĂ ~Ͱbko묏~cQ ]6k_4m~}Q:|ɤoz">dRۥS*;`Z߮&~ͪr{6ڸL=/'vє_ӂ NEUQLI[ K$y3 8 V:8i GpU <yGpC9,? `{08{,|}'yvޫ}{̷-\e/\*vGg9[۹Kp&)#""Y# m__' _4ow7 cd*5>dT.+~5]VUsgLڠ{]n|{W|;ee00N6} L8٫q8]Kz7zh@.٤yZܡ}ܗwY[e[Jۆq_HydѺ8ˋm%OWۛ몯fҠ\mJO|ľҰz/ Έ`(SGޞ#Wױԡ>c "!8#(Z@gsuR,VHs9R,ɩ*yK+ ?|ˉ+IV{>I`%W>XU|u%;5у8}7zg9L.=ϸ%CӋ^>,Xnm}k?lo;`Υ'=q֍3{~?~7C5`;f.u +O+|`Bo8]Slu.,^f&"R %5?L; |ܐu\y 曩 Nlϰ ;?DKbO0Ie &J8ss2K$ҶI`\wؑlҁ]Du%ʂ)|#'?)yٮqە 1׭qz-E bG t^8;/s}cDbS|ٱSZݜe홿ɷ,:˫t}qf}~u<v_:h nSdL~ݙ}ED6E}7,#H4EwzY'ѧW$R oZ̋˔K0qpm~giK0Mg`:0|::Mp?LbCJpKZL|P@,[]'%vDDIx]eEޑۥY`-Wߝ+ԇ3gO.W2b;tJctx̯RWISؐ:wh,]^ܳ3KҷOy{iSe7WX_[;V;Cғ/Ѷ]ܥȏJnKb߄wS\RGZ4 My?6wc>$HIG?`.S:`nþ͘sGH ہ} y$d6c/ >I{`rFĎ4 };Me%A:`Տ%U؁]VqɯgZUn^|OUwc \bʊ2;|%Wup#P}ڱ1?g]yvNm xcowȔ1'][=cS^G Ž?il'.(c{t6~P׍O|UƎ};zk?>L[]ns<$ʷ+l_L*N\ [l@Rnچ஍,s;?2O} $pA3,kΦz'Cl?gu"s֧<ώD5pZ@p:`y\G0Ho ĉ&#N&8Si-eiQ6H-&žwSɭc.,/\.w]_OͿ~AS[3JH?⿟z}UkJ(=yh;uj)ߝzY)/}KgcwvN'u?&wyKk|='捏.;x[gYp+?O_s]1t5MSD$Bmiyg^l6%4lΞv<%asß#q%e e UU^{{l0ch!l8wd-o?QĎdS1_S\ލmn5ڨ3)Xˠ_^ŜmݙןS^&^nߝ5SlsEw~6+N*zs!κ 1S ?FbW2`zh`/'= _W?Ij*c6M؝t9 [ym۴"茝ocIibcI6}I0;xE[x nc-u}j[E6~%Dh3C+m;۫$)_r"1㬵)߹ɥL&.e;j^V0Xϲ;TZE`ލ<~֫H"""uvRv""Mζ&DCmFH%v$t`ɭ鏩&Ѷ!҄(#""""""[@"YĎH3v1hiBA ,PbGDDDDDAGۆHĎdRLIۛH(#"""""uѠ\mC QbGDDDDЀ\IۛH(#"jQ'K~(:&늈VAGۆHĎH+e-\E+pR~"5Ӏ\DQbGI 
&ķN.3,2?ZPHi-"(#٤Ho[wk:kG6u?;H}7iBiŜpPnRn4/7l EDZ%vDDZ1/e}g :0֘c$Xߴ3`\GjxqWUp-u]DDBJd;""BY,Ovu} qL*w5־uN;s]itƎ4m"M;""g>3xys]'iB /7[T%RkV}K~Nm\ceDD"ɂX+ ""MCq+=Aٳnr^%3T2W|g 2j֔ӹ\.qo_j nsY7iQ4(hiBtƎd&É}r4NX(/Hx)˔k”ԑPLIۛH˟-{[5Cd^/\wr^,(e7I d'3b kԕr]t'.Z~wuzT^O/\]Mkyy/^}FgϽmսa3c1;?`aNO[akv>7Ϧ6O9K&}4WyGgEIW}\긥k!۴gG?YӠ\JۆH)#""p7Bg1ޤ}ɢjz:]a\S1&L aIoӵkr]6w옻_Ni:Ɋ]v3{D>v3}|!n\9c½e K1ʫ?!H[f.ZnWL)-K.Zn1xv` sY?fPʸ?,XfdžUUth7xW_J.5)WwXo{˫ 0o?t Rf3^xGnw )cH.(ϳ+j)Y#AiB:?m;*X𝁾@F-%Z驤|SMuNmolsYv&oxwR;?-8WT\0p>}O fa=f&:YwUG.XfOɿ ?pU/]mG7Dž\XּRSS{?K|>5m&@PmچD@&E3JY=gצZokqڹ~xuqObjqVoU-î7җNd^e~?Lt[Weq4q|XC3RO-1ޮC |>?ؽ=y{w]MG^Qev9KwkWmYysz/QԫJNxUɓ%?"?&Ag4!J숈Htd(il:_fֶج4{>e-~\C޳ ϔ-1^*n6:oN|ݲ|V{ېX}9_~nX?Gw6PHJg;"""ҪŨwn Y<9}fc*m^,"(ID_k ~*0uaQ{e6O;l[զrta\Wqo4ЖƠH۝'|ǚdv?=S1SW;\tgϺie GJABt?`potS5 'N^ˏ$'s;3c!N;G}ovk=CiՔ-mH$ tƎH= {OR>~OO:ėםyeGo$o$.859vm%+%fه9Nm=qe{F_~wvfX[aS -;~b٩ c@<=wt!߇hKcv$҄(< `zv oa<!g3x{`hsexf.ul3&|Aa<=+aj}@0㝀uq ,㝁~a%9@v$`q@za9]zg0 Ocw|O0 x  w0?`ex/`0~H_6;b"cb_ =gĻ@Ykx,>M.߂`m T@0~Hp u*5`} @u+@<ғ /8 H% Ƈ,^/81 =wƑϑEq4Q<&\^c'\?a^ c7y)ca!/q^y!^ |0. >!{ƅ?ø0.&B}o܆`;`{zu!a``{e9Ǩx ~Q|5~O":{Jp0^A_C ]=xqa\Jp82& 8Kcv {Fb'Qax qsd}\PyS^/ztq1&6^uuJ-i^ ~uw::pcMM 0QENI0݁#cBŚ ;A=R]\hJ /$v! 
[binary data omitted: PNG image content; includes tar member neo-0.10.0/doc/source/images/neologo.png]
sW5]zW k֢o~굯}mi?Ŋ},S^p ꩧn馯}kU8^WYzk61=/^ҐkkGImd3'>{{)~Z Th" THf~L\EH~ZsGڴc{nnt%~fRtl95&͡!e>ЗBgA[>n+Ko8&V WDhD&Dpycw_iԊ’V٘/0Tj( EG mqUU"Üc+Tsi!DĀOQ05֐5L"0[' XU.#Ra.#6-`J@(?aDD4&U˄GHo犟4ۡ?ianXan! LLgDC Ԝ$XeܦFS*F9}!Bۂ32R]xCj,.{壍)HaE\JH-lbmʽ$I)S$9'4|.='"wޢȘ9 dij?1;G>bS7@|#Lg4^?LJOLdHiq p9plO߇ր]<ЇlzD&VV% R||/Qք2O1@s e#qMxxaPնm ;"&ʁ=KM@-(_絆9gdc<)R w[͒?)B6%<䎓?X9Zkm9V4o&?g9j&Mv2}lV^ &z],QUUEQ,cCoI^/{n?__1>u]'"}[3?[o5AK/砼 #΅-<@Bҳp]_[j׭Oź.ѣ__j1YhٻONDN^J+]wu7n.wpOj8|UZ n'(1`Q,...mݶЪKU:ϞP.3oRnߙ8Gmv_JXV׾9{01eBU Aű&l!Di֥g>[Ys⤌4=kq9<Ӳ@-9edp0 T`s;8Mc%x4; ';57h6_%}j,.?x{6М0iU]/$7-9ViV~g/DTzDYMOj1V`=c `^h~ĭgpNl_Pz RٞygBY:g\VV|6%T^.%U'T6:$ӓ3+2z*]l]!bx>?3 M ?oߛ^!E_//m/'HMM(sb:}kb։nٝS8IW^A<𖷼ebq\4WUi,R*Hk{+[4,Hi)ywQXۿ<)H>}c}ᄜ|gwQ9sJabD:듈u4 ҙgRcwtM'WVVr"R hw/]kfTjcJq|c؃>wP Km&GNLrL@"hȌo(^] Tֆ2A/I|!@sDX#GQ IϢі?&]P u~ɱ]s;e`=tLl{ r-%Hx.XA%YM+ 5 ;䃝uDY&GU;.NCx :[͒FR,PެM< P8WQ'DYr(1fIxi}6-h^.M,lL S>\`X*ryq`40!m͘{^cYcP6b|깦,R~7-y c'Ozn 9 k|viRa.؇K|㫼 nA ٱC\t4nzIO(lK "H' -A{{ykz }~_ \Ls]ѣG?T (`#kƬ;xLߖ r7E̅֓.ަyBj\A{: U#S'hݥ0 C3f3[T-;(xUwRT`/E΃L@϶j*aH'dbiU֬~CuW. zB_40My6]RRHU*)den ͊ LgoMDz*4Qb`2Mw5i c)ɅL5!ﴌh­0{q|PP&|jb$8V GK rߕo8ց]`Ա}ȩw6&!H t԰nZ (B0swR%TJ/~9OYn! FX}m 05p*Qʼn63C*=w;:?FTc9945ElO\֗ ;_u?C'))ԟҗc?cս8~zo{/TRzew?0O 3J _/2Os 7 rI}ԩ|3Gg&U4ˑDObKq]֭P$Xbv?l37rAxI 8+ECsb3Stˮ"i "$$!Ƅϩ)夘::rvVVdmJUaîCQ 4ldE5 e֛w􁳣Q gAC#`oN)렦{9 ɉ rT@v9jj ~`M|I@lɒ%'ł hY$=u߬uΎv:ZMYA<ѳhIzT2Wr)(5x)N&9A?n,Y}0!tօڊIXwfd>9 pK@ : m[fUnX@JԎV>ބΏ9qO(~O # \lTQ#NHEH3.TZU>~u {ϟrYr[] b+W e:,b`rޫ'`f$/ҙMb)Lf(A*/@L}E(@unt'ͅ NH7"J6U;l=z6!ћH!k @4+G8 :> d]jn*~bVgn3ja)]0 4X%gs}C_ jlO6+gXGjԚ*]9-AqVwr">2(j1]ٓA,5HcU&' 2 L1Tb>WBd)[ԥfam#o6to&]e螇MeN[Լݯ}NzpV(vHi'P 4bELf#}ҬET 5b}n;.w}cRboQn[o{]]]M:2%o{۶ocx/3J--){ʄ\|=? 
˄6c\sW\|TSjق/l?|7eY6!)%iL_@Z\{{O>{^"BCgkشg&ٺg&'|(y`9l\ HH19+&^lr׿`?s!nmaa ox7xI2Dc-T;+AK2'fW5ۆw'Εn v$'J 9>.{I=lJWޔ e^\xĬ mqj*=}-So[ʄc(%aȲT ͤ9`8*ļ2UKKqÆE4 iq G [JHFgH]m64U"&3T-Y;Yr}#6"4\jvG枚iHgc͊ԉ p0ow!f.ZL6j\Ph uGWNNۙMKlS>[bl7d I.\&qEorgͼ{~ ,8敶Ϸ10^['i쳿˿?ɏl^I )n'\o~_n%bxJzGN<@%B3G};9Ҭ7/|24ag(z(O k!kp򧤍__M_OPʱAzmHv5G`vs,}Ng w3 ᰴ+MG1 +Jiݧ(cmf\sr-sss 'YH=رc7!:Ժ/,Kw $ظ@4Cuv @yQ4qgѦ}'Mp:h c [j,}Sy3dW h: X-m;I}oiY|( Iǹ߼UPKđWE#0xp0[_ 74N -LVϱ7 |}0L-~nL4rE3 Bq ȜE!Ẍ/4-33f`\`ӫ a^5Ѝ@)tD "hzVGΘL"dm@+3JT[nwaKoH,cOU#g14b .?&I{6쾁}4OP=`ɰ$d%ud1&3gtGO *e`qq#M /d*^2_z2B6Zz4U&p lЗ hh^D;8gvyΏY^@=i^JC]T!kIl<]hT'ZX-MaD.A+Uß\OeB ,‘dރm0@w{kk3.nB7p]wu$?QH?w}wS̅) iI8]w~u{gnu~e@ۿ[գ k3 b/4<&0>˧p$Uq ؄AlE=DT5~wOa IDATpUQ&e[no|{/}+^ #½KL$e TNrn;Р>'5 cRt7Qh%_B7G4IX Oe0$Pg1ӣCP5R2d^ '',V%3y\=`^:zSb΢5 >OQ2pooY`Hs!(`seز6}[0s5\"##6=D@% *UD3B'eg w c"2[`Qs 8 tfQOŮKsv=B+Luddb%”d$ ` ͬL#e.>iQ Úx>ToF?E`XH $ŅQ S="PD'Ko;vf#v5 NC))iȈ*rAXAi3g BZQp` +˝u*:6JFBy)BU(r!cɽkuFaOͤd=" )iVb%=V pԩ/%Va\>` w={E%xOsp̰ D97(;Ar*ֶoɻ|G[Gf ڒڳw:$$N  X֑H qpg/--2[f}쩦}ѧzkieŜu̮艕R4ʥD^k*k0'kWx6;da T)ŽmI@zy` \k-=ϻC=2 QrEvb$P&32(a"YXA%C $l`2&s5VW (p]]_}Zs񮽫p&T*}z׳^1\>d> }(Ek_AZ l^{~qi] x//|w|wާ> 9}_QbR[V/Ix._|/>>5N˦6"W_}W^zgսMx񴄧YSOoK{ϳ1Ja:ᐉPoR 6)txj;ڑa+ m)K5IM@4%ZۙVkh{??K x'l>O54aVJo__G?+vQ73 '46G[R.*&EVyF/<צ۾Rt 'mhGGY䞾;u2D7]CYBr#bt pt+q-]X>#kOuQ=QTGC)&.CۢF$)1+zQ#`<,Ʃ2K fA[#0D@@)#ŀЀxQ((N`Zت3z;^T,( hpoE=rr#|l D qW⍅aYh)pr%ܲfhzMؖQ8 #%?j}$UCe Qz̈́6)Gff) =UpFD bg ZqOVd7'9?+IrA( O-P PߥJX`F[vr1 P6X<-ư0w1VnT4\_3@] gωLMpSe1yzU:t)+ 6󻂭q.)6ڍc_|#wI>OTANK{\K59lW$ɬ;;go3L럋ILJ?'OpĀwFIҷ}۷,G}@?_-ǩqX+v_WWWu-պm6(%<ú'?K/=[y+hV`[=q}0Qt"$8+iE+q׶S2`+|D]n*sjn;OxtCQ'7~Q/ZJ)_5_kk??ͺ~?u071M!~G/C YfZ-TY~{r8=iKR*xQ\B5ŨIΨskSJ@ ;C=xѸ-G+ܸO1QU&; n~zۑ80dP 31 zJJysEbq/mpnQ{|*atepZox tÈU~ָ()b)Xz8s:jR`桌LzX%5pmC ǣB&LI,aIj#<.` M 'Jf?:P(&)_7 C0<)H+%)1+pE0ޓPt1u{@jQU7bYf0Qae !8O+0WS 6\24&=N[1 ;-! 
&|`l"va_?I fUPj^p]q\m¦wM M ueOk[FyLF1tgߔ_NQlRYOj]dwP,"٭̃Mo{:3IQ֚ 8#ס*ove rXH{Lav;~דraEYf_` vn άF{b'n-W~Q;`lfb)iCB7=3]b}}V_}o7m4>xD#w6XbcLI}9S+^/AX6wXK(:yhm9-D BFd 3 +zb&f.9 b>>)·pRJ`:b(Mc8(YԸ֑5YK螋Ϗ?ᡈgN"g~g>O}|~??U%HU'Ms_o/ 1q-!vc-^Țk6GG Flb|es C4zVҊ0243ɽwq;yYBhOCJ!Hx4_+~;{߯>Y^nXXtnZUr)q ؎d_/]hMYDi<&1/Z:}$Wdk# p@s|]Ⓒ\QD@nnYR5y.1 uq:KN(#Q(vXhXe'Dޠe1;3eʈ ^uRB[3Qtq6ňc`ftp'-'vF ]M&6؆#RwR&>Fgr!p'bIat5΋C /"̵܋BP{P.>.* bRj$H6fVdK)`68*GZ,CNjE}YD3obG/g|(ɱfȋ`!Ð43tAO.9-`F@̱>3 4h5D-?,+`V3% 8k: pKwVZuuf;Xj;2Ň. :rWڛxD}G81W #ឆU,5"{7%6*QqX!IFoed ~Ң:pwe42؞ޣ)=~'_oh({46IxxX'r< KњLjg^TzmCs̰]Y IDATt=ǒyBz1x?4;Gk0Ȕ.Bp"Gtf'> F5ĮcnW0RUkfd\p'IHtK37Tf%11_yb3t+IR4̂%MfiQ푙5Hc tžWd{Xld$t0hY^( Ш>Hj(ir lZ#DovKx`:scd=QBfɖQ9` [ukFD3թ+OEҋyan^lHFJ1*! RI>6"^qI%پ(`͜Vt",m҅,|"M3J6]a3-Jb0؉+)Rsu5#ae1jnɋi7&%|e % 8;c4,sc}hE)Iqډ&(Y$yR^ /`#஑,fߚ[sX)͉UИeԇؼ9##xBzu]|0Z k׈keҹ&j0Xy+fnVc3b} gv Z8*VsF|k˳QCXx 'q6"uq ss Wft7 phmqimaR?^U8?|6//I5R)x|n9g>O~^/?zM圛۪}?K{5ڔb?X,NFHޒk '>/| XOǻeX+^z3~ ,Y! *M?>)x!X+-Q sQNJ#OiĘ۲RNV,Lq~034@wh@֍Hns9jo-ˈ)U0uauN"$2wAά5)B*5NŢ֙ki%n Àn'CX+se-38K\̝'ƣ̿<V3B~4(9 Ӧ J%[2^yp3sK P7\l2GG v*\ ;s-(milV꼸)p#`$yEM;Kti_z̃>U`eoRJ פ>[^̶^lnEPQ!)lT= +Tla(7\f̬+|b(V8OD <=}&8̰bf4whq* N0*!LqL'aF&Ɲ8>NϟҾW\Qp7K;/N9li tJ l5@9ɼxPt"HZ$KNhLQvjaIrxo /9a SEMDrde-Q\cE1gybK\ԮA^|H/\p1XCKt#Э5lEêxݢ ^1kFnfDIWV ֬ǜ0Yh;coB H[Is㙻Xh s|aZ4x}MHxCj =qM< 5.ၥmS4:ؙLl ,GoJ~sqؙ1bnˆ!ws-V*0ʏ~s_M4_+|EzǪLw^_5e_}ooH)=?fs PQ暈Tw 9ykf\+;/>O+oƻ}z?C?K/յmQ+qJRc) AGNEz`K8ja!= M;]wȴoӸ)cy<hR(Exk-4wd]3c+SkQsl!u>CgnK1!+(2*T=*G'i$B&pwpЈl҈1,.*i+Pn\9S4f6AU +VA䦜.rWҲ粅E,8V:؈}-<Xz2W p{Ґ;pja5SԬC+i/RXI*F@G]kO܆s?#s̓י[@f 8o mQ`@cD 3{&%fhM9t\¹|CiCa<n&3C6{>@OXk`Hs+T}0|n>Njԧ>u ݲ~N?">O|Ns8r?4wz"z?n'*ǨƳ_~Cлk^m~}iSU2 iҰc۾ۣ 4e=r- b (NkMlMn,NJi ֐mRתP4'v+g.nWjw~-Io& OɁS23T~4m{ڣ|K_l`kdۜG;Mmes!3YF\Q# `,ҹe ɐxWH09e,YE<`kkegf&)n" ZE!=aw1J#Xg[7i;ž$M h=D<1=j0c2aH1Wu1q'u`m̍Y0'zq"}XQ kb)s)[)-;:R ӔH 0т(Yh`%Wi dpWCb s]e'm n if*f TI}!`/\23b> fjWTeB.\cp .Q6öcRj st).KL1L x$IiZk7f:gN]"&K^,}Z,%FPF:PzȈHCK؈5eA\Ěx,L+ P ̖bsp5Of32`&\6"99@\oݱS;j5k$stE@XE\J1Tl:e#;p6Me5:#AnxŔߨiQ|q'"IQs1mjU֢sZ Y6Kf"ͰA 
]yo,/Hw+s_!FK=jޘF\Y"txQ\7 b'Ud궝}ss}콏mFQ A [b ! KDۛ lBBKV1"Bk((HVTT*U±}s<Ͽ]I k#-ku;xǘ߿|X*gL+[80 v0F4hsjx^l0p,*@H'$7EXK.]W7w[M)`0͌*4^( XO̞zw (:#`ob/] .1&U[g Sޘߊ ~By2@o.KfpO2NCKRMr̒15IвZ+?O?y aJQ}gfG?NgNoZ1:m|3?/׷Joy{{۷/;Wge-'?ഉ4M?ԧJ)|Yqs8iv{JW+֩\@O$k64 rJa,җlU aCZ6.3xKc\uֻ1kyt޲I{iC'8ो};W,n0NuIZ|&770o~W>寏Ỵrrfx pw-jTK wZg9br[?.LNC3iflHk2[R0rk荥t0|?#ZFc!Nҳ;bd9Os43]h'roL>#Xbٲp%%K3E^U_Xn`8O'1klaÅ Wщb'm[`E\0yG\BʮMКroV|g8MEjcc,捄mJ)N䄍>f9ΦOAѸ.D`QeS!f :TBqhgk /R,m ƨKt=pp}19 ^32;p|T溊Fх]}"µN J8\^SxB\X2Ǟsy9j.O̵O:#S =C ZXMtK 387..w䡶lp.l: .Y*ىi-Pvn AH?/2nn erNR'E̥6we5֣[؀G{Za,MGifԵe'.E#ÅaN.eF] VЀE2UKFfkx„L'b 2b\f\7n]' k9[q9a+T7oRF\KӐH ( EY*胾{redr*i`%vzØL30y;8X ɘ9X. .tb]ZVLcC6VMdJCc #HOqhaR3>a;AswWK 1y'eʃ꺆K0zxx4YW]wq/RڈkvZ',pX-:Z!ʙF6WƠ֝D qnLUY|&N6hŒf OJagyMy=+XYkSEj]lzpp=k@+ ^0ڹ턀N+1`.z_RD^\#'GXYRNMȦ#/*=ԛðya&k)YݦV4Z|67D%Z0 hXj&>A~w}ק>J+9=:>--/_lfZOF)eݯW,>63ЇKh_&p9'>xoM~e[@k5Ÿg?o?|jՇF66_3&\6pgn9C0vbU$4KMLGs6Ğkry~R& D &K3OϺٸ~riY?綒W4_#asX70mڏ}Gz;S=*m--Sѡ.iukovePYUDx2g-ȷ!\ˁג "8 bR!A1EX#L'|nji @ 8V-ٝF%=r!A4^U,+ױKgW c$TucjZk qs{)d2퓗ptܛ?ʲ>, ;a 3… 9gafm/=2q:H=SR^ F;ӷpcR%Q<j 4JIGط1F8$uhF&'>YSx7[~Iغjc,X =diim#6JNs,ʺ0Cf\Q`O`;9Abs `kѹ5aKdZrSc )`\xgdܨQժs8Ҳ֞%hm43KA2ԶڝdH8dl`PkYbS{"{?דږr'l 4bl,>9'r.2dlM< ~̜zV`V5Ңzc'qM(#0V[%q g@S='e\ tKp̚3lLQ5fG舚5PhF 4 bŽbW-3wlV&&#Y(ֿ߾MAo_6ɣҲ.)W&j7MH}cUx }g1W3>Jjc]~noo[f|z_Ί/[=^-#o[j'?J^u}rȘlR.oji }f^^KaqẕI:s1W5]]A %ae>aθI5Z'NYjo},lnA%33"}-(^Iqg6/?w?7oo|: q+{m Ttkl.cZtfGcoATddL="~k:$ Lj._Y%eF½ʮu{1%(TB 3)w!Q8[aB*O[V<+}I'3{؆{p D g%em1(+LVy̬ޒlϝO+}#8kԧߘ-vgy,-|viZ{HgӏFBYtUY< f ,KQM1u"Y0@UKN']J`Fk?`l4e$yȆBfoywҁ })4470D 4OҴB,HzHK͞;"r,RA'l%=L̞62r *|N)2%\ G`CpۑAXwh)Fhupt 1q&h.3 ;=,{i-[stNP;B &Sv|+R2\-'KJvaMxg6:HZd LMp#;]2`VpArtNVҳڸ +&&xA ẅb;CN"g0/d=gK{ ذڪkkc>Csi9ha-ZxΤ77N:6[+VA19Z\b b;sg͊,ɩ'cWV19]2F FrV9Y4uöFBMi!+fYō*X9=odYwovcڊ@69|ɸ5ݤ;T A`Nz6C->"rg x]ڔ NRr9[I.-{`.%ʐ AYOv,˜ɴn֨Vi=S2'iL{*9:cp775=)55'MBi:ƅYڻ\x\ %NT`ZS˺M|/Tq{84WƲָ X$s/ j-Q^x>ϼ?qۂJ[߽~4ZR#Dgxv>=,[k;&eگدxY''G@}V/k(?9_%aj^xxV-->'>ڧ'0̝~g9mUna>=?{cG,đ܋r8찗؛mX_J^E=y 3م3u`Z4&qY[`ž2S|7)j_, 2}3a[f'_KgZ(Sޱ{m{^Ss>L;˚`ֆ 
YVsUۃ{ɡ61fØ_($%#͞984* ቅG{i ĜX'aAO7F=؇FjY|dtV{ٞf朔7sy1/L9eI۳UVфFؓ5KlO&JzmQX& l.ؽ'ֽ=XY+0<+ tɁ$=ƴ Q zЉ<!g5FJ1w[Ų1TFKTc2{cьB:J7LY`} | >s5 3)I0YP 9֌u#潲shɁ<'$ $rF̱ll~Tl !AgcS-م=9]tQ"`JHKrrx4ȳCE7pQ=.)5 :'Q+Y}XB' *[!:; pJ2Pɀ٦Ȋ cV0"8#9e x7Sֹ{MѤx 15ZvQכ?V̤ރ$X\1ө ͌1Vuɝj\ ֙$͜yyLu V۴+Ja޳:xl1{Q&Fa=qiN7 zK!#7fUpQ2"wb o 0c6VR"FͤnSdqqTI\KbmVa'n| N\\S6B Ke*ؗҿ vZM6)&t)naϬ)Ra<ۍ6)T%Q: 0Thpk`( ɩrLfh-:7oF8-L7'[es7`ȁHN5cRlᄛE/A)1RNaTԨ`"Q"ga)e4e&'Tl.Of`ң'[%A$ʃ|'LbNIDR#y;e霾~[p*NAc#/27C.Mb~v܊T]clQHwF)LaVMDuArc{8ۘ&1MH#rى+L)/*b=(uxI7l҅t }l%Jt~iDQxdT4Ӿ``@y'MxLtު4Q\VW+;_Q)7}7=4+:ss>&>J=Qṇ-iߣRhRCзxAk'7U۳>|#^ܗ_U]/DZ8۝D)5U5iϓ)N*KdNg.ݪ(ߡSž|FRJ7U6 PPCc}oo?}tag w1$_kc'ɵ֋XL 7eӳN K1)R٦'4+<>t<ܻK/vU#]̳^:AF2Ǎd[+/x͸"d-zR# DRGE.c9Tdiv<5H$O>i+4Qfqj>/k.d5]v1SbGY{]jzO?ֵ2Jwy:Q`Fw;n|3Qٓ#d''(UeEW!&Z>MLk넞)hƩ\Cq!{vыam}ow.z4'D q:ҙg T)d]ڎQD1Oͫ3V'`%%EⰏ)E;Ȋ9+ j7F*~I[Od7(Xd:V{].LCwq";5S'6:Swp3 &qnJi'yUCp[hvau`I)%J`F)LhJsXU>m,bUYT;-vVV`IKNlأR_G^AQvE)s/>{Yz6yGq, 2:z2yL{u#r^'~Kyq̵-뚮Ѝ}OjeaB6CƉ}fHdacNJjl`q/cӒhnTnOܠ΍ 32j$Fr̲In={_+S9R`EKaS-~䴇-$JfBY6Gr( {MЎPfG33Y*d.z2:M7 E%Q ԛ:ue*/nO݅Nv7BdZ{9MrgU>Y ~$,AL( JWIԐ-%Ilᥕi 7Ae;uJ>#N24@G*RK ȅ_O6dYΕM_TLtb#ylܐ$2߅y .%\&.' wfҝ3٥L$LC2լWE_8}tgUz`M]x Kc[Ec¬N8vܓ-$+'csPN^l,\T&KLMm)NHo5%, |^y5ۙ\5,0dyJ r.ʶ i qxtNMle2&xIl)!Zvjp!9ʾ_^RRg@)&YhRwG,ܷ Ke&)$Ýz{#80q=&a{|Tރ)Te R8 lvN&~׳EEX߁KGٮpLV;gY|2'Q! Q;W~R:f_^E˨m!`R5a} hvYogd;H9F:U sβjp3FIqRL)%+np.N".X>2;y͹MЎ8I,3,eb"CVf{9!mA [ԫj:7`Y&|2H# ;K?t T3LW˥t0w]E-YًC2LhC#RS. `VrNO&@:zgu$-5)QUɤH1Q|g7Hɱ3IڑhT13aj I=90LJu @=3S }ҝkZifUr0Pdy89GFMN`#BbA0NN"V>'/3=Im`FC=G"zD 526[ؓOԋswA;E/ 1-[XXZy>s«tF7pHa-LQ K 0)#ep1m[so06#j Q+;I'cn*L=  ϖ1볢yh-M E;vd7a!|rJƙ|끦I&B#RGR.od8:m eIVXgV`H8=y FBdYJP 6 x)Z&r6{8H{&M̰;i+ Lbg4%6XYY[1j18'Ya O›p@ZՂ Torf2kV,w}ΩsEElilbzFEt&fbmDF`1:;b4ǟ1Q58!&f*iϵg}NTUu)ޕMk=<}%Zygq vEq^̰ƪ'­4I;±ڇ+f3H{pA~ptP:U{^¥ (@'0WT* Jh3cE#V(y/Gf; ̠!<"c#&H(+X`- [',ChBc:}Q.-A;<@Sl5"RŮG\j[e Z"[l)1GLj@6pPPcdVtYBLim9ɦe639t7gy9VR`g]09n2;5k.  
Vh]}qZ!< kKY4)kZJe=W( .YK0'bil:拞H0n$M$YgX:yRtaT-S9A \omlL ʫ`:*EyCަy3*@*m#!U\CsKoɖq ALHXc8l?騪@=չYT椆|k+>:tDw_VނZ2{$L`L3nEl:0ኸH ,IvHYiaգUs=+ة,h"5J wTkz2tF(v ;pĵ'7.vq 7- nje %묥u֊601nY%Zk0[T%PsJ63FX-"+E%v9im\p)̓b,dXuD8iꡑ9Uq&0Xb0GfMC[80B=r˪Hߐ}ap/,bjK /} i-ܴn9)U/|G>z lȠo:V;|_gwww{{[Ο?}kG7Λ ᓟ_QIx@^\x6;\ȅ5ʀ)V9섊bn͖1@ Gu`38yA t `LҢlTdqX\! 217:z劖KXL@ӟbQ TX*~"h!MTLyd#t3Z(fKb!ȲbAU,[w@s7k\Z{pN: KLP ljZ#ctQ7A{ /]9yihd~TnHNu[Ua@^VP Khz^'Xl*423"˓4.Wj,l ŭ@R ~NPSՑP"2`0zo2ǎ%QlfkiHYWDoGϗ0het2XcGgW٨pŠ`G~"̚i0tN+c&4[**-,Ңʵ  .Cq$T)Vh&ZX|s,C/ZV7UA9ؑEQk3uFgZ;% k*d-EK8^Xc3SB*2- pfӡsgmtVQNfFrBnjD% IK,UteERzuD- Jj!Ks28XJ ]qbf$%8.:;F-Mk1M#dXָEBhWWclVJUf7kB{=q 2VX-؇d%XU*~gJ MЮz(ofV$}d,;v(ڊ+&89,$Z߲v)VK"Y'`f\TDR̍5y8UzDh).9GD1td~؇5^"E1ޱqIn~Vp*~W{/֦#q{{F]؏;O7㸺 Ȼ_ -R-==|?.I麖EzVr֣2+XG EP6ˣ< ` J Y ݍ&i!hC+4򵻧Lefv6( khBjak'Ɠ$/ 8Šo;H2 W9Tn &j}ʾ"Eܟ*q :_ S$5 YU.T}I 35)1g(+67V(b qBL%|o>C YӇ n)H $*)0i!޸&p^+NA튱̬%cua-2 ۺ!Э}]Q%Qѱ֬SUӎ*"XQA 8~j$l Y'Tn' ,*&kE[aScXpR*_V$/?NC `n\q:6]ǰ8v25J}ڣ%Ac`D7(GcqdNJR uŋfJs|f4Y<󘉥hkYds1 v!ZXS AO$Uo 2tX<,2D*P JVx\qP3ڵ,FJG $ N"9^ZuZ FSX4S]%V7 }KVMaR5TJV+Yؠ s#+9R[60OۀI;h[sk\n$&Mt/h b% wt`,kNV܊[,cbZGzXf@զ> 4#M#4#Z*ZfDd]ۉClP`-,V|jc&В4}aaѕIvdцǺuphZÁKɭ6ml-QK"ʈËecG>^I 6SǹM?A0FsO۩Q+lLֈ7B?Gm*Nn>WJMmaQK 禤~-a-m#-`,->4BR< -)CLޛD:uc]7l4DID??7a؞ok#'~'~~o9*V{}Ux"mFI???BE57E/zQѵӷ!jvr://+m3p ?8,FKg3)ΗfkXl J XA~ gRSY5B Y09(jiCemW! 
/!E0Hy |fes|Iz Re@U 8ɪ*~[@25)ʲv4n"e)',uZei-[KE%k%q#21؁bL"ΝZ^f)T Czhәw]еL=bd7հwPx@Yk7 UjoﲚZ s-<Dc-V#=s`Wp;\+^C"e0MDž4μZ@,Yk,h5ͥZE1] xki?e3^Vp@1+UY`klֲ^2[}kh`"?Vc}"%zM*#d=Ř\u]*kWKHgX:-ބ.56a\MѴ,9ce7aPhKVj0_ b.)L8ZasYv0KudL3c1~OPLJdcr,Nq;}k"Z"BӨlzb,ÏCaҢh YR&7$ˣZlnV+lZ±as"bfP:Jk+Ir+ -&Ʈ,(-kaEbCzzjTvڿȅ'XH;Q%}ϱH܋&}{4zB&]BK @YϪ5smM8D1g 7:g U[&jq=F6@zx/W%NN@t*8Uq6Hü#ctzHөn=JJ)!y 'QjҵgR ?|t:46Z*V.)*oucύty{[ߺ֎7e4!h7k8z^wilVVysʞmG?gXMEOE)Dv11kS=mWEJPPN#0JcHT(}1WƝDrZ,!mL&WժyK>ԏۙݽ菾뻾Mi˄87*zD+:Z+>(1VwOdzјSG&֦yptMRUuPT-ޙRU QՔax 뇎qWŜDPUscAc1^7UYeAeb ԪAS3F*`Np9leV}!*,V1n\C34XXFT|"n}r3>4ԭ^e{'& IDAT]^#T1 ɮG.EpvSM8=;ԯ[WF!󄌲S"h +VN*(!p 5XUVl&Ro5E 5dۓp|kuo!u6he$k#aQme͝i8鿛NԽP.X_7:]pôK^Vo춬<٘ k Hh}<כ__DObv^'Ї^Wl; {E ^|37]w]ck)}=~~cT5Ml6/zы>яzȚ E&]yا">.SvЭ׸]h,1E1;l9$,pULyCKw9?K _ ay0iH${;vxc4ήpwe-I1M$3JN` am(U!TX+WiH=pMя;:̶Hf%kw}yj~wAjq-w}M_My-^6 }Wϊ]ץ#RPo/}K… hd[/yK/b1?=%EakU"W r6O?wլVByO^ گjV- JWVTIp,o"UE0R52:^,`Tѕ*tIQW=J(z8"> b `s|q}%O.bpn~ƓMf04ۖq\7| ׼5Vt…ɟrk˿O}uQ{…۶ueha{m37׾[ 6NXO)N|Im ŁGKTB]Jv1O3A:C0c}NsDI3OkWHMVU 7awϰ[M&?0.)q Pbaޙ _lX/ ֧_;J&Oe4Sr&{"z=ajv񃔮-bT*u?%v"9K"٬JZʁ:lK3zQ"S}QQ{sʀUsG9sd@3=q7`Oe?m=0+E2e_[IIfbPfufƁsh`xRvRϊ<G oZu%ۓ]Xik|A:N~lyhm'9@THC"_p; M=5꽍][ku%n@MgZWjgggMv7MO| m_oGmۈL&x;};9v3I8u{mf %OM?Qb;8jQvg>__NF1wygDL6|zBi]s=s(L7<=F(0%^<`5RGt+-CErE-eEYuQ"FluC/ExP:X6:|D leH>3[T?#E& ѧȽրC:~V]Ikd}>?"fte]IYr&"2؁ק_eYȔz:.e?h^k4Gpb|ޥGQ+[%ĚԺ42̠f:1GW ="ĊzJRKIUu~*yL$nzZC" `Yڙ`V;lm.euSM CbKuoຯUnf~GXm_Uz$?#?2 M6&Is;ޮ;NSw߿{Gtf<9ϩH8n=!y{}Dݚݜf'4Muymj哙oﺮm۔Ҧuoq(Q"9^5z!|9ۿ?C?TW)&N͕oFшSn23{DI@PJ:n[l #TozD ɉ, Uk[Khaei̛dǮcΪn}(W$Ϲ)J!Nq`z~~*{E,{uuk[8wdoڵod7j'd [&'֗ģ~ s XcDAkbf}'JaN@:Nt enubU2KЬUԯX+"*{,]M8Ⱦ0`/a_<|.Ci΋S!iVQXɔLdʩIa]XD4Mn y`{SNVyGa7>rHX`Qi Fպw܀|n]XZY(GEQVHfzߐYs%9 >z1}@b$? rݏL33Kf1 6o!"0R6|aIDՖh7ɘ!%3q&VrUGf(?~Jq{ONKGk)`i ֟ʝ4axltB)d#%M$tɤL|ܗ}ه:yZl6/DkR5F25'u}Qk~.ٮ39UO4{Or? 
xF{C!t*[JMn]IױNVz{]m@mwuw~wloCOO]wݵI '6eͷ/=C ;DymMo1[gU;bX@ޜ7wʕx`)>cظʌY&B ԩ Y2;Evsu^_eNu /Dѵe^wxɏǬ i`b4X$+Ditۼê:\Bs7e_aN=)`yX]]'o QnXl*Dk)N|S'uRa~aSEψ(&B3vRþ'5%!pL0= Z405m$& 2 *蝆F 4ƴ, <=7QH5x㤦~f%sfTD\K'H !kHEtEV '0&"i[/rojCHj%4lwR%JG˿˳UU'Wyi\+ob{ZB0㚋"PXD4כurYDHj$Z#.E/h/,qp1@ Z$~!l](r9\ m"q]鿵<]E$=4R@5PTk? / pD(JK6TXs iFC;zS)+j$Bų Hr0zf8AB\۔-r$7[&ܓU_+}a g6R+ iu9]3SI&yԑ@SDXEg'y6`x\WsH,gv7- 2@vw0G4)TC5I=▴x/bH>M'xm"$g/son2?{B8/ R'#Dx\y[+xᰥk;|"AC|gfDY~}t^\kLdչ4qAJUPiiܴ6CMRH"%z"Q b$Y.&9ijp^E( &ŀ[HxBő'T2!}2`/#Ť*·[\rd*QVoQéxDSbe5YɩD- |Reê2K[FJ0x_TB{BR X$ɲFMb5՘^N#9fVwo (o7H:ڬ~>|plkvaÇ}_җ *bPDJo,7AgO<i+u>—2zr7#G#;9{WLxwS߄4}=̦_}__{}5}//rU l֟Nt +%bldQ(mH5M*T)/;Za@7\O$dϥRAu+ncJ;."Pp-nc!q *%&x s _j(I*YpC GVx+`k/P^2h3QT{:01SFrY|N|2V*4 O]mXcw0?+Ζ刘G`ň9Ϯoz:R0O68^ϥ=]fHB)d-%P@Rā9DL 6ޜ2f6u;8]}8_xV_ Ncj t`q!Mo] [s)!PO5y{] IC}_Aa7\*ރ# IDATxdKNaV5ȉv.nW2FDasA^bRi,r)((xeB4z&Zq.FCKX9sZvp@, \4œhv%M ,4jkKȐ~ŹvcSbO650 )1q![͜$"^]}f<#ѾͬP~x<[*nqɟBƒTs p__vޘJMTr|3v8JfHCS*֔#,[]nAN??X]]-%۳3+^U_T\,DtDGJm_X TqrNCS n/o񍳿lO(yyN!Z/&24:Jf V ѕ ]ktɄ;A_-'3w ;+d'$oet<#VȤN̈́R&3@"C 4”zBJjEYVUDBJq 1UqhGja tJe5e|Ej!VUh"HCDZTYAT:ZҲ9fOW0IUup::ZIzک&B0DD"kUmhf}] ˔ +pWVJ@/"N |ҙ8$lX<=\KXECzRãj{bZ IVI$=],òЊ3Y=!UkWeZKq9& EoZL'DJS$x":Y'~h^YDIi|o,8Vnf"X u= drTY]vK3GRAB*f2)2zIcOʬ j&&t }c+l\N=w$WRw䘩{nxaϢ0xWjXb#Kh^ Hw :PdlR&_ъ֮-B/ړGH񪧆MR정1g图UmE~053R&*/%W(څ3LNTOhqd_!an5ɮJ$c׀oH"}{`tFV0n FD,e/nۭ}{cq9fOե_ްҙJp-r7l?֭ww|Y\dD:LIٔS[o77RDԤv{-3_p|f뾶ʥL%붰'i"G?!O\cr+ף#3Yn$&oJ1s5HzkW)^{2T$%Yg/' ZVŇWFʈ;ZP[ 'ގPօPGJ,E9*IK;Cy8r!Y&WgR{jK<(>| ;AL`z#rYΒ08z*NHd'bTXYK6f 毰sT%pOe&%% 5<汣p?nxn9ZT՚Zg?{3?D|WC&"8%"Ki B"-'6ȞMPLt%9;~bZTTJ=BZ՛ʿ8SW{7KB&h4!GUg4tUynm;.0xMPHw֞5r.Rer8ى R$n8ל#|qA*RگʯT&껵|ݘ#Xǎ;qDD <2[|\pa낽AOUzJG>rر[bn5̶ {ѣ[= nLcH36%?OofveH}H&+m."~7~cmmm6gY`v{]u w]'gߺ2]GJU%2QNU7üdZZU񅄊gWF䪫psH"x_ڿOM̕hEzrwrת5<b/XiW,*Ҋ&UO n;p"1{Dd@$ +l)dXf}Q2|J 5T"AGлShꎨ8Ģ6|,˰HT=l0N!L56*: JEROK%` ނWk Id VBiB&ݍ-tk%Hm٤{%/zhL_5LS\+B &PHA10͝^H[T޾~u׾hn> IHvb$*hX*5-Hj ML-~{NXgG8)fdcIW~M=w)wݚ."b\ (t-]Ӌ,*4iEMʒa-$41(KͿÀM!=_FWՃImȪƝH?nc1PmjH:{z8h:o"iy ՗S&S..5ZVb+o Vpv1/"}} 
_~fFrOO?#?=ٮjۿ=cֱaWUE|d6[5paV۫SMlQh>ƺ2.X.Ή/\z,"̚mo{'>񉫲D/ڞkG|FIWܦ>32d[l'[ssvv1O_-զS C+y`%^ږsȳ&O+MM87rn]]@bv.ϤBI:srjNH;h<=r`] Iڑ膦N+ϳ^RU4jef,_$ki#\voSt?dz'ng)`CDА*Ce݉^ʓn)C#*CܧCtXBn.q4[,lzV3{%KbR6 =NKjyN/ }ՏP!9HǑɋhp8j'h$2=p r}"ued:6HD:$cviڙD⪳,Vk mܳV|{tWu8ċ-Y#pLʊH֜t{\h3}ulir݁mTo [C-k:( jW^&e\\\B\QHH`Qw}CWo7{Νϟ).]̰?]ѱٍY91~>,wv'VwݫgϞ2>N׿z}UỲ~G9 @?D`!M̠`PP'lrTd̓;`zMGa:&EUio~<ࣣt,| G І蓦7% XL>$cmrU Z  nC+\B6ِY䱖'e.74n񍷬s3iYd}AN H&r1HѾҽ=\(5~Ro3"=K8}l0H= 0((#BHh6ZM"(i#H*.u,"a%}Ȝp^vb{ahR#U`[t{zpo<),Pry]8H_l.A &W.28/9Glm]$%Xӎe8:HsĿ7,㝌n`?O F]$Œ]Œ]1da߷d8l%D![t^@!Ɖ>oL߰dIFLGcw`b'+poJ<ᑒt)rx3x34"ڱqч1w)bQ2lc])'.* ͂t H`E7Lskdk-"7F.9A\4bN3`@%7v[; Fhu9`{6YS nKPڋMq(hBɞ{iq %ÓeSekq4:_knH^heˋ/s$wȋһ ]GUg}^8r)T"",*/sAm34 *%0_F.8xD$ h MrK*A~spx>b5)VBtq'E/7շ5M69{ƞ# ܪ @Z.t@&AOFWRd|)8膸8( ݌5>VxB~Ɩ{.4v޳r>( Od vw2;QMkG)1U-,ѻ@4jx9WOtw G \FXX kQTV5º_k!/4F?N\]$&mR=oX0l/rF߂8j\FJ_JnS37_^͊hӜ-wf_3:WPe[G>ٔ.2){,ϹTN'͋$B U{" K=+lD - FyECN"A;j{Mv-Imʢd6K+gA9Տd.yg7OO,,,lw/҃>~l0ӧO" sמH}?7&^WVV~G4-gwٟ ;cg1T|3_7Z̾sCP!t)g OFh8Z*Xz"@、R /$ T&7LL&ytte=[Էy_ BEヒim66n?> q_.U9>WkUAznB6=fD䦕o Ubkzh$L#jd+ ]Hƿs7^ IDATe]߱g91͸l8R|*{q効J5 )9Ԥjxٝd\bGEtABM\t O5L)q o'=c屏с=:z`cagBCojr籯:՘wYHcصnaK X͆^ Oxt}MȞbJ:cv!ca$qy˰&vЗγ.~e2,.ss5ͩykf8SIB!9g/=mf"E㴮e.}ޑ}ɾ>Irĩ `^9|\4WzEڤG҆D!ԕS#" SA*يCuiryΌAWUn/M:p28iqP99$O ;v<o4eg/НX EXE؀%U9_p:KD[kI5|i,.w{d:"N!.x4yd_JE]R|\OwnY1?̻ ERdRy66Ζy "r8M@$xψy8刲E`:Hd14tKEx6Xk6USiwp*LfV]>68jD$6pׄCKTD/?'N~.?q;_-22XO_Ls|{m\[[sU*+ Hxpi=-8P{cx!8}/44[&hNc9aOsN#gz n1N`!2G>>".o&({9Z(\(Fz|8!dJ-@!ՌB%ȁytr3rY f1}onAX3!)DOp_q; KEϜe] W"g#.,+>?ǐ wy2BڅI" Vх̋FٓdrUC]F'vH|Wa7<EB{8)Ed/68Dd '.{r4IJV%wx!pRkʆE{C&r1۳ bpe`u8o qtXut#ĦӁF 'Ed%낰sLC& s,¹ '.Ej %kD4Cv"I u㢰+|oaʦPXWܟ<CP DvĒ6l@f{;ǐs;IBQns?uAdC]%D[X@z.B@t6- @OA|G֮p\yϔX@:5NI 9vs A.h,~mSƞc (㔞,578^FRNҸ6'3Rx$HXYg;зkbWJcRiC. -Zt\Ǖy=` Anκ3Cƣp- pgMb-K3{</ȌDY*JmkH ZV BzՐfiU-V PR@Q"yȘEy~szqQT[EH2wﺻtR*/i Z:ZWCG E|܂uCr_0TlT 6V U֯ [ċ1q,"(1 }j=bK^~3`Hh,{, dq>*nF$RJʸ6 e6ygAck0 X#|:XO1 {`x+|r3e9-} ʿ NT"YR IKaw:im6Ff~TN{~ %8l٘ v X/yaFBld t7F39lUg=WS9W3?hCZGύ'?7*3h܅BKy%d)7_v.ށN$_y~C%HB zo&v2yUx [> O?`O O? 
ʯ?ɘ?38"?'O?Hp#~J?a|k_$m>0 ??oƓj]\+i~JiTW͉o&r{/c5~x|&{ xXݮ8)-xƉ/W)qKF~DPtt("c"bH F5*>gL,Y}Šw%oƜ+^z) "y}joJ z*I ik}`F]iE95K̅ 0p| ^0: aSVZ&ub²?{O(gdѬ/Ë"]Q/~}(1Wkz@݈#`D";-l#ǹlzK ##՛jef2Lg/6e=ҊK%:CxD,LJ͜#hLx 7ጘ2="dD<'A8P#GI^l1PC;S:A M pѱHiOtD<,d2ӅS3D4,cȉ%&, mԗѰĂͬ1V` FëM&&F񏃷zfbh$Q#6O>f<ށMhFtjι 8. Ͱ^54Cv,P̭ 睯+ ;Qg( z  ?T6JjQ z8O>ȥ.Ŋ_.x$rn0 }zjy>MnBr#\Smf;p@Ds! 9ܹ/n*M^0"-s ׄhZGn~~J·*>nnG /+ϥ2Р)h~Dtv ks"Gzw#pd (U-#I%W{a/M }Lf}v|GK8P 'w(18!v8O NڇY2@\ ߇$ºnWSY1y]| sfON )JRn 3-3TbƜ5K ֿhElPzHbPrPDx,אW—=FC;Hfb9f"sRR*[W(rY*HK ҅ک' 2j~nwRceDTY|.*M+RƄYGԟkm:3+V&O&ژf adaE k1?# \g$KUp'c:R :OQ)CsΔa0_Pg~k0svׇ~ c(/..nmm񯅞>kä:T'i/bZi@ݿ}{ ۤM6 zZ`sU7Ua;.??7|s^(R9s-GyT)Vʖե_ [tuL>#"ׂ7<&rKn9mET[QXNuV֋\]<=NjڰulP>Dޤ>Vį; yC˱ 2@AP.PBEG+ş3օ棁Pサ3g5?to`A~%H23a7ouaHNBiޓ|6MbJ ׌7]ꜳf1*1@4_ǞD{9Rx]yc֒O1>ވc kpq=m(ΕhBhJ,tpYÙ1L((d)S|X`lGĺ"L 222=|3a0\)d6QU5*i '%JS8szT$LX `4+B6an5$çvd``^Q51CE`x61Kv-6s/A_Gʢd.TS Zx ]$Ky} , ncR TxmSNΦ,)11Q :m9Π&W6r DrDD/!#)؂?r)pI7|*q9يE10K~ٸVβb:Ү?`hROp ! $s)S XwrQL&Y݅mGD^K[`VT;-g5~kp"K>FORhJ5=tg_^NCz䍪(Hjyi%ට;ǴU9OElDG(b~`2,ށt˰LKe⦳!v4X4nC.L,R%?iߝR 0X ^.JJIs{eU\2iBNM|X[YsN-сpUf{_b%JԷG㖩6^J$v[iҰi]NSp.|z-aCZic!k=xg {ONN>p{3.w's=E {Ax㍿Ta3D2\0DSJO$~׳)h9UsR#/[JM2<6 "_ece )h6J=_^r)Qϑ΋Uv#?}IEZ^$-we!b%^Y;{]5SoYy}Cߐz6W(J=\-nUinra`twe#hcъ 0 ČRѦ0I&ud4句|H5|9$>XTv >t/<I>.A+"*/YFza)CCɄ)~޸Hu[8B`+\B'[ q<ȧu\$m$cSePr6/ደ HFfEI{NlH>WqTq| Qv›`G7vX].Kb+I H$B Hf>'F¾!)ϣ9-:M1 æp9n7]+pKW݇ZK;LS@idBe/0zL5CNHC tkx IDATm9p3Ֆ)1DtYڑfX iZm1ƄѠA3X&aIsxpQbf)P8 Q<ڠxE{2!C4YVI,X 'WK9.R;cjN3v=3)ҒEJ c"#Zh$%9cvpG&XHDcmoJW,]x2 .^Gq7y-kmLlZ|qUڗ=%&~h l}9O;BWat` VKѸ}R:˔ʮ68&-JC a\7&<B2,95, /݄qa܈e[N68PǬD@C,V ~Qfź3־^#F[y' E#R*MiQh=؁mXE|!Y]yF(Q`a)h)jč0K80Ӎb./AwuާHY`.p$+ys8s#7~nLPVI%|-+JPʰ9˧y(}©~xnS&:cg8//qDZ"_]N׵M,ac; =Gẇ/Ž31`Wősmy< .VaL'_,̉-F p#(90py5Wa 2>z ]DKNDI{22MLEQM0&C',xFTNue*q|}.<JDO%g"z?cDH cܺ{/gc^_gƫ~Zj.PG˩,5LN|dl#&1yXa=;u}1+y8V4v0O I|"X|]J>n1CՒx2,PJ3qR_4DPG#(qX8 I5͞5aIsQ:"[!g6-i>Ym TfDD,pI;LPU<}Tr\'0qzV{^I16 a:ɡCgWRS95p QM,80 ]IN1O$IK 
Ah:L-ERYM%<(Rz)(=cu!]+&)睻DDZfB)Jp-tJlם.,򨥨RSd<}Dp.|i#{$}gN-E$?r>[2nc4,-`21&)t4AaNqr,cINk֑!F#a3h젏5Zx381H83Va p&t4z7HĬ8DiET.O@BD'Q#}$ѸxcM]km-tU|pL~[۞.)eH":yKQ؃D3GuAYsܸfh|V-XFS-$IiF(! FEbeoz=tSd0d?3a5hw6J 5vnK1 X&cĴM7ʟmYgwi]TR]W^jrw,}D=$d#&$p!ٹ"Ihcа,OGnj'# mE~,Epo5^ $5GVw#fV¯ s~▰J‡S)/3o{{{7͏YOo|<̓ZpZ-X?|x׆(YW* ߃~闆󜗴8ɑ\H+hzU^Nppr Nl>4g|]6qIYu47 #e|׊S`S2iN[>hVE:<;K1o}}ޘ?w NSv{"eDٟٷ탃#yRZes]oGܨp.%n<q_$^I;Q r+7+-}w~sE7y= GJsdK|c <2?8MbV7?֒4%GeUI8cbcSy MsmѮ<22Hu𲖉햏M%Oc bUt[;=䓪֣PpP7 qxK=,#o9%ie''A[J+ԣ]4{"\`Fiw: &~ҁF)^sӔIP)S5~ivs6w^ζc6z~i%bQrj 46R: |W96aئqF,L4֠iijS8#gدTfQ@QOSx+'N|0i&[ gpiBFcS=8(6>nkc~ ]EX;fD÷FA3ѵʦ6*K~QjʃuXa$޶ks-)(>TO9xԳ([[#'^bXJԖ(m۾Xc^FFHgQwz#y:5SWktW(eduy"F9T/$-/~\~,yjhEN%|82. 9ٵvO9բWZ;{.dP`I'ǣ 6`Y\M8/]c' bq .:5˧t}HM oY%Hx *RUll~&%si4*:.̈́H È:G)vz0RbtȉqJ1oYЖ‰y[10Z>UNnȁҼHz?qqXjZ廖sDX[u*L栉eT-ܫ " us3=!)OFww~dGF"\p>eZj)t; 9Qn}dlH{<4P6'|Ru4ZhE;&xwfdPFi[gbݛj)%\9aÏ%Nsj;q65ohw2ӗ $%I6ڦEd*%|5%>CI*zoYSQd\VctEх2>(£ $yީVص֔enz #HB,4CVula ST3^?:潰}/cW%Z2 ]3Ѳ퍆5B.\l~p?xIܪ88AإWϏh Y{6Nt6=Y`re+gUG=ā+i/M A{dvF 8e2₏~ mVL&Jҕ`Nm=1[XOQ-$q;2;CU'h^h Ҥm3J@`:e3 /l+g_V0\r{vΒ5n`-:~W }xGap&f::0Z40 {Lʄ3X5 m۔A7}%j"hgB)XӺe7sQ U}3Qlx0ocCT>}f~w/ 'WדB7'Si __Vz__^ts\A,Ze7X E.:^Лu*Q ;; w1s5đnGv36i2,v3'F`9(Rg֗PbՐ-gH/i?k~7?ua1?=.>z033ddVGnX"1[ ';f 9hqX>$>΃9 Y k-o[%O4y H7UxD#vS4SӨvhviОUu?t/'>@(VґmOKLt>"=g(olq7mj1KPF&%d4ޫx ^lεڛ{rP|(LPWm .-84?RoeGCgG dlK 7!$hj=/@-Eu'7=o7|GUl xer˸Q*&MfoCIA;ft[YNeIf'5>0L"S H,"o<0}\>'RTc[#rT){`,Q)Rfn/4А=7TK,'Fòs4&<i'BGW#Iº&%wĎv !%l@ϋP=3 XZ{ZyAxg{\Ezu[s߭gٳX5y|SRt~|| 'wm#9"afdLQOU dsmL}t^LA O~-tVKmE85J$t.vYjE#OVDnR4i蔜y_s=J[vt!YLG+5Ce^W1WUQL2(d&=20WQu~,c@>NEJSK,bF&\zQvԎ8g0 3j]d`cF'sK,&p"Yw9bLXcxN%&3ZSN2{okY}ksk"EVFFfVUERHE2iDԀC6  ˶40d)؆iG;!jgoNO\[VOtu ]anFӟʬRˁ9֍[.W1nb:,: ^6epgشp-ivU+D`10WŧDiKL}"R΍Faw8eKXdlg,:^w:T^ڬb>6q+ H߫ "P(f3n^40'ËP`cH&" $J=_ .ܰij4}Q~O]7g3UdSj.OO8y/ߘ-Ջ+| [=MkQWڸr:wg~]@4-P6~?s͢YT)1ma#f%XG`١$whN7?xLuEJmbq;mQ&I"ȵ њvOzh.uz.޸1{|K ,9Fo|I{g lN̸nBex| g 2,JH+ShdzO(Y>,:CyУ_6:u|&VƊ;EL[#ݛ8P 17>U]ES')s=^r 
g[Klځ)MG5&h.+_FS/Etla}b'-Ost$A$&ZyݚrѹAl:g+'!em\''laY'7b[pumRn)Xrc7VO*Sb$"z`FTID%!l;bϹn~(K7uӊ*e_AC=hRҨFlLp4fMWDU-:~EG7cJq[Ӯ jDk NG,lC0y>![#.Br3284֕[Jy/4 3ݤ{IơfS8W[1jK\,u=RE&;Ҽm~W:RPJ{aeXtۢeԶbm/G\s?ˆÖT?_L`+)F8C[ 8ɲ쒚>8tpHhhޤzH$ebSHd,REL~$ tJ~ ; ={p?ZQ,fJO`TI\l[]D\,qa&N423ġ,4FτœFeTRhWlhkbu=\nʥ&ns*iHlR.4!]ؑs[^%6ѵ-kE@* $ނ+sl4ƲXZv)YӞ5mj؄:6[RSHb~2<%33%GsŹNk8QZ::Q_KFoSuxQ:{"!表 ҕϩ[;X3qC-fuqϖۭ]Ō5آ+=^ ~jF.ۆԵpAvTƆ$H*|W]Xm <-G:TlW`c{ IDATF~Z׬77jF2 Ц7 DAF̯(fӰY -9a6i S Ng[('avJɯNC{ԃb3@DBTb-L5;'`VZ.IYn24+; TFZ0lۖ^hyF'Fպ$+E}݄Ʋ<\itaYy!yWqЭ__(rz~ ~}YtcZp }ŋ]Rk`+\=77O>}h4k|^Fϼ/E~?/UjHO-U~ rg:MÑi햏ܗ#+1@U*`ۯ:G/]< iM{"|[;ÿJ[Z&rE?su{uWG__C,,͉V9iIQ~N6exJ!ؗc$7{9l"o!xLٰYml},Q6JY!LT98V.$9&2""`6 qy$U"E\$.yX"&.UȇGtx ۝[1y(,6ܩOlEOgPeH؃b%3W~(=Ěd5nd̒ i t&U%z2]t(>ܕitbF'NN9uv}x܆[t+r%yC7QT(9i!^,:|7z&S* S6fhii=8LldK]G 5tʀI:] g*6m)&S9~9'3?U8Hl~0~t }3 [<OA=2'Ʌ_/<)Lϕ ;cA#L\G߁yLXIx 3&=۲):QֱU4V֒ `y,a< ïx] N>s`g-'/巨Ӫj0^iּJSy?yrFIVzZZ*:zlNo|sHȩ+fXFWdLgW׌7.|sCOȷ30=H#=#hCk,Ɍ sHsF:N4Ay+~[w1|M_/o|C}/ ^x7ğ Q??gYiwZ/!'s6`J~7p0ɷŽ o#xN3Y 2ƭ֒?jS{/O][UnC}0گگʯmKQd,Br5 ~?"߄0{I%Tî Ě<!W]acr8Oڀ+$dZ|8$sP- g"k4 d6KN5(B] KYm6GGLa64p~UNa[^ڙMk":SGqu <'V2sD+IXK*kA~}Y8ޓ|fڒ&pΡ;{YڶQ267ͩ;:3W_USRb,4y*ZbP-ym [p4m2ڶsY38 yt66#3.) j궴.g{NNJ>3u[5+[TW8Tn\S.WM5^@J#$CoLݒ"6;NBhO4۔~yEifv-ikqMSOӗP,ջ  6NYhi3X W!> V[JSXx,O|iLB֣u݈w2!IJٜ[lcvjfb9NDN_I5Ȓ=+և,e}Fn9Oeo87uWua^Vof9S^8?$bU8)AƲlPf4Ôԓ','+܃c"_ PtF,|or_ә1]؇Mͅz*[vafyUǙOv4 C܌؈xW E yJdTv>e}XT~'`l zx>u,g N˶S߃0=rOux ̤gh[VşSmr?hn8@$5$Oޅ[iՙ6(_Sn-Hl,I:/>=Oϰ}\V8ħ}Yܳ5DPf*.~H7m|uR8kO9~h>xsM{S~>W᎕4S+՞P2[ m3,jd拪pF)G6iqiQm>*_a=EփO~ُz\>`dڴuNG4KuR: ǃ=PRFB܍u<ϢLFi$K]JY^! j#P:e>cH%ihG*LEC02(;QVՎM!wӚʯ'סƪl#(aCE@m!&+<ab3v\M҅'_[)~Ah?_?/{<ÇZ;grWPe'~PKoͿye=˿`kK _ L?Z~|;q_. 
FOaMɚB)20,)X%eù6Aoj`= Z*88sJjr݂`7vP\kwu=[,YXu&{3h\<5Yr۸''?mYô5bUx<_IGҪ5%'0 B3 |lQ(qŌ@عSl؅X Ȑ:f%hOahmOfӂߎhԾS-:6-3y?3'b:'g#*C=۴-o>XaEY1C)$#t5fꧡ5raT9N[g4pjԧcj~ 3O%nJMpnǮ7= q '~`9KTr [p̐tSespq4}ͩrjc7-/>*yAғGKCfA;d ԃ6lbai 38@<‘v2LsE.&8t;mMl3ǔ #Z+ma]05Ѥ滩k90Nc],^Z);6(u|Tَ각KisյHzOh~S,7i>X|z<9T5QC'Qu~ÛQ>혒yvl8hJ5Ao4Mt5eʚFٷ4i95jTzLY-9 %z8PH ֽZGdΠ+HnS#yCHyY^=w,OQSg3CSxw~gm]%Z<2 BXs9?/Sz{9H#z`!{vHTS!zJ}cQ { ܼjgH-(P5nTB@6rm6.=[J?.*4QHH~|?!.|U\;7;y`H>1buiކ[p` o %/ԶhVΣrOnWR//M꿈˞???gD _A2F/^wR~ww _Iַn޼Ykzפ3o~Ç}w$/-d_xȐo%#rsjȒL6&81Ⰿؾ'%ʎQUt\RN1KƋjW aaA_oƿ5dD.5Q-6`X &8M"`g5PTJ4}g'?Z{X$hmQ3?F+%?C>FL77MoYۅjv:*͍!s[οZ1w~yJm 0go㯕_P4܏. 7Tk^E<+zY*!@7/m,BEY6 3pQElvV>l3YIhUFJ,퍎wcm<$wNpsؕ5x ?|<47tUV,.Ye+}a 帝NuQz֝A+XF r ЛK&:0:g xߣLK]CXv(r3@-Wf?_#W9[ .|z9C/d8[lkmhR3ccJ'f3q=*N_q5 |pG^gX}WfVj>B`3ubC2kO+>븉} "M8sB#|D&ntkwE>rdG:\؇FD0dAqg 0߳s.JtZ(|s!^,.ވbW% Ű3CM&.Sږw%4 [0uFk1ibY,=)Nc 7 %7G~>RɵK7)`tUx&G5[wU(A(a߲}T Ä8C  J,)C!_u7}4Rs鞴|mUJ!B9.8^%0˴B԰)|X!Fk@\*9,(vs$=G*d4*n3CR31|ܓnGԀ}}$6N-H`dE;a6x G#9jRb!z}5)̢4Uk 󸀎b/0bJ|<f[ Q h,z}+Z>Ѯ0y@!(h98'#Sp8Wy֠miϬuBK6 VnTz|Rӆl=m=62N4 CJ.=Ӵ*tj[ĖegdRtoZGQ+QVShb 4b̚NuMŸ`XZV "Y;Ţ.?svH}d8toTF03E]EPnab{,'᳒]A4g2~A[cMzE%wҮah3ϹQY*t0s M9D2Kz뭷?~|Z_7;ܽijle{{~]h~]uKmN?|I'>e 9e,5 }M7Ru0.(q3&w(8"`_ 5pmsObYn#/F.az^6,ǩvso/P@.rij} 4Ʌ6[TsuM7dhʼn -u ubN\MI0Te|8Ap;%(i1܂K,fдE%θQi87qX LaܯGr`Z9R;CpdS Ɖ4p(. 4v&*-%#aF=$n& d?3V.sJ3| I7-ښ{e(Er̨4 \9URA2TAe,, gN"{,ޣQ5a2A/Y2VYM6?G)Wf$(c3Hr(797&%&z ~&P#24IZj6徙0B ^x'=瞱Ewv4u sFD$9> Xٙl-Wܳ@=/IU7G q+9Fuզֵd){4jf]GL2}2U FQDɮoFXAѻcV9/̂JI~踐7Y!0Qco%$wg!' 
]eD@<4ƶ97<nEh %'lHXBy(- AX8IXaȂDbS;)q7jZϽUd7{=Յzp\A< g6.ѿ\R.`hxgث1 Y!g!iden,V`=1 љrWnlyآ7}IR&̃.8EwFcϢ(Bjp^pePY2FOo4;4j=yݑ ^&;Qqzp?,CŅڄzq ǃpr"6Dl] 21cYn(%YOZM2IhVcW<S=](.j#uva6x1*FҋGL"M[hcFHb3UègN3sLVW&$K LۢN$EXM=y*6Щ"DqILY-VU{r0!TQP*ޏ|ni7wzb$Z@cj깒+tS+Aq .ú- b$tD]=E#"RPGkb!nNmq2n!)Fw$T*IEI]}!-?_E2Pp$@$\4ͺbCZW`41D$_l)p|gC ej|(m\$(#_0 N@ܒ_ {Oq7t\ sy )-4|}ߊZ03n/;#'?h~8j ѧ߶ *m~{5FGp8\uy|lqI!ۨ< `fwTr8 &FH޻~_ 7:Y ϵe>e;Hq{}^nFص ݙ/ԍK>W^/.OGK7Fv_WVC8]"Sk25񚡦)#^/QCEk v*1++:hmJANwR=5I ;7rWv PG=0ZۧΔgtj] >HJg^T,?,[}P-;ق9ckZ AS6QE#E.!.- ^<.EIGuc]Xu576 u\˜rXf^y ?y3d):fEyZ`rV}yDp\j(1'DT ']*EcBKv7-BC}5 ׈}.E.u{;@/zf! S>-8^#,y9J~J6QvFBoTN̗)خ,DtJ9-^jY-6uY:ǃ b^80L,Sl^fpjSD}.E)yehKck ߘ`FΝO0" dLe4ǺDֱѮ3F\s}Y,ߤZvhۡ̀ePaWa6vGj:f_kƢ!hk- 1uT$ڶR&Ԉ.GI"#idL3|Ul3TnEwjvw7E. ztlYΝ;m~7sUmp4 G}Dھn6wk=~՜3yei+˿ˇkn~a&\x qye-Va+ⵓYmǞK?:- ;DwkGKu$x5.hl] l k4K%Z7dI0b'(df *2z߰CB #ٓ?@%1Mb{0bQVI/SJ)_6*}^z-G߇K1) lW :o-N(3>Q&%^5yuWX&C2n$,sGD'}[ 5._$4ԯE=7ZY$Ӹlv% ǂ?%8Z4p,_SP/]Xka5ufnٱ& qJYɤ1aa5-o^(s^/u[[6kf"'`sT`|V]); 7K61Y,\ӫ cJܯkVq`?]UzZBM<"-9򋰅^H~|R5qQI-^U"KA,lfJf6gCt\"|1|XM]}oU^1fs[]NEW|7pX6FٖbNNrfrRr$vHdM}q5MwfWCѽ^"I~W1#w?}g>~GʨEl>Rr(p(//L2땵v;~~_^ }$! ;'5_z,9w?:@\,恳)"x؊J6EMb' \&a.klf5~f_,ԔAD;y|IsRPQUQKJ@8ekm ዔW =c\%-bl²P'2#vicW^~ 5bx$"Q\Lr(+=S@ѵ/@0LHoK)UJo<uŸR<.$@q7fP6 8 91%nRI83߮cen8z=8# v)8WI5{ ^teԯ.ba<\On^0B6r<"TY˥d3Qo'c3|1LI5AǘQ-[ب{BfƇGRJ+aM-rM{?(2A*ӂIxTnU MA}6t֏40­鸐A=ƢDYoUvP䮟 {C$``$9@%6ft]Cp,C M7䁮뵠2<>p}yE'C2o#6%7ncoub#9W+W؅ Ibnk@CE["_+fjm߶pLq_6@֣I m>Ï?D[l婧jbS|>L`#_ 7jXfG$VcvhQ'NͿ7;YkMt?'xQ*~]%0!Kbn$R\UR,0eMB}BG2xC"9`uDj 71 {ϩ}+{.E>*2zAYؔ$?HCkn  R4Zw`7 XM#_V9{ğQ0;`MiȿEa)._E_s5i58_L13 젥3*#zU|Vsg>f?5;=XD}裏^J_w1UdjOɊT W`+g㺴+k[mVAS˟ R3CH^&[d" $ ŐR4Q*4ڐߜ^zfq+:=+[ !a 0a4SgxFe8c!;Χ/aWqRuE,u}`ymOZTV2i%%.KQCbpy_~ŘrUdVxn,ay$07)|qav{IViP[X8ߒTLy9}=pAtX6h4([eHRRpON%ZFd*qmdzE ]qO6MohL*`SY&Mե4uq2b^qBD;v]C}:-FgC-9G^ȗ5v)jw<\D:4C]م]0p%8JـɕFPZAD.җ*FrHTUTZ̛Y~}lPYH=ǜB5b;0}ÄFV7 j97r5 PvOvԳ ('p[T `UBeDj/ bҿ']Mg f>s^:69ԋ#;Z2!Pl ?o<=o . 
+؆@ǽK`@-N>bҰCm磇y#wauh;VN?O?ٿhG&٦Qj;}:)=9{nMTg%X.x^K=??:ª#Fd}l{ xO]ĹhfDs3O쫟ϻ4̂=??%P={  J)1}Z8L!B(T#st_f`ПvDK#'m*7KnmN=rCl̬3 ֔D-g):͝f IDATH)b셸c-TmN2ԀJ%jsIPs]3ܫh6矙-ܟ1N)tS^̸Ml {)JjdXըy/u嚕r5c!O~Fvb͆j4 Ya fɎ̰T* WT=zÆE8l*BTF12)!`ڣާ茘Zgdua;1*׀ 5F l ̖tTӘRLT,r)JIüg 8/2oOIsAenڗ6`3ą)Pļa,QO{sy #s]Ia tt ="\/?P1%ۦ#ng|гoL=3>|q=܃TN+H3]n mwU cܞsu +=6StPx,'O;M'd%=Tsii ubg 0F[⤷{_rޑ;>6Az֦3=s;~6Ԩ0O6wz*Ⱦ^BfJE ) 1w߅QS*6LF6؊e^EhVIj1C7\zqs QʭoD{sw:~Me-gmDA1TZp_nġM=Tַ>9ijh0L\t |ɿ?cp&?otoV^fZǸ5Ĩ:V*4:"*~r?3]l4<{^Yo^,܂%Q:Cs`oD)dYG݀VO&n,m|?T&|a?;y@Sn Hnra>jStMBrZtDēp@iDDe,d[(EuC=c3 &Ċ8n1'9ROA c9J=N jtɴ,P[-5L 0H$[u Ti\_x0Y^4yT3eC^D'zҬ dd;51t+0+FzulSCbjXQw#n=]LŝsץDN;FR,Ps=7aRJ޼Dދ/؍;Gq|J\8?=N'ʹ'Fx,Z$VO؃mi܂>2i%gM_;X+<~jM\P4leFϟetF|6҉qB= 45lKשm}n)تC&6ƀj g쌬W!Jf񬉡%}1L;3:)=ޛz'ᘵhzY߶z3E^MΝ;9睲~*$Og`ϩ+Ӆbx g6ɺc-EɞIjz"t[Lg:~ًؑ ̰=,-8%k69sbVq_%&Nj;~h*ꫯ~?c޸8ݬE8ljXl)slO@6N$k$,& 5έfBQ(>zEj[4;<\gbr56yFD 6bVIy渪S;Wzא4BLLb3*83f̲YwS ϑw83&*‘5v>D\tFUXMyo4(T9xAϻ }lD63["Ā-cG:18MހS qN(Q녪 I4MNzݗ%Vܳij*5ikd`)O#V" b5aqTx#VISvNco!Ιmc)GըVjXs{Gs M'>YC`f1qf*!Dδ+pd@#SFd9'R}9J̉ްT_}$XA"9 N3È;-dA1p+if7zʬ*"alQ{5J ju5ݢ1nn$q.9/Wj.8%pٜ0&xw/3^4_XDF1u,\ p#F.6Dt# ^zƨY{JCA5^ h/LxJH)6m׭q<[\c9xޅR}tJ bZÈ`Jh&*>s ^>Z)cFRYSf=~Xxo1DNMkHbD͆+č0S*#r50++RG*ޔ _Y8*`kc 1ccl #V6|jgZSk;~v>~~oիW[DL}bl>_~p7; jM.-f5~˱nҜ(nZjp ulOt*'pk!n_B/.K&Z:b}p2Uw ? ,^95?SCI)(٪[kG>v^립w|ӟ?C8o M49 (օNML(=lD'l&dO/a)ƕ\qlh.4C1ٴKfaB"rPͦ\tƞA&\k f hڧkfAӵf㕘D`DR"Ubq$< h | ܨ0/lЩi(QQgҞ%K_V`f^GkeY|s޻vjݸ11&[.' 
~$q?P`)@Rc.( EJH@HdEON$Wnˮڗ{c̵ UTUݻ1|{ަ1*Z5ɮFm+{5~F!c e(F&P=80Fi LMidd7#[pb YK܂Ғ$aτcm¦YQ!=dBp2HVsRvFF!jWkCy'wYN\91|;FKl;5ۭ1iVy xS O-H=1Nx 2uΙ=;kwŢjf8 s07FZM.X=so3eTm>^}Ɖ J"K&ensq`CHmΉ$[q׈chY'05Tc5n38ѩsۃ(nhBkIa [Mq[6刄5]`ZR!l JEH;>*q_fX-Fbnzȭϰx"CY!$3SK&A}Id$W;$!i`Y00yǙMȳ̯$[̂5bCs;}qXL4d/TÑq,aխCil+ok[C~闞|&|7}D9uK:::z{я~t{d{R2}ą ?OOw]R#5ybQ5vY<;g6b9NAuSCy,U ֠e݁'kuu\}OOn][x:88x'[lQ`6qpbP&Ao| y:,9.%-O68}Gsl4qb=|id65k8NWY:Qk5~S3f6% 6 mP.mNSNa-1v6wXԼf̼I HV6­%u 5mn9b=ƓE6V:!4bBz6jRo 1}eڀh7O^<,LU.^2tFY )b`ś{Kuac2vlNL#F9Jxˆ͞\xB"TVE<'vu#ga L$nk|[Hk0M hA4k)  bM~!ٗ& /*+(3u kuV3фFk020(:5vMC*aϘR-R sbm2jQIQqJ 6Gv]}d _d"z˺:laI•(]b،uS:^hy]٤> y:"82Tko淪L.ݬ Jիh<Ʋ1R羈XWn7R^8Hu:SBQW_upƠ3vS֛g(+:oS^xmmKr΍f==U)6BWj\kf=xP80u&> G?c~-5"NxffJktW׼t -^RX1[eᙡ+Kλ_̛ oz'_^JH٫O~zOͼNJ YxpTt$QDFH&7Ejd\u1t}"N}f&;ȩb ӫIj.:ozqκgM,C KX8΍6)Sp(`rǡj(9).1[͠G1K:f/; 2>shṋ%mdOۤg |}'mnؼO;͌fܝ3'%&@%%^%$Mfܢ}0"NO*Ս&Tuξ`|.HH^8i0.1N{.lܣW_^:݂\aa蜌eW[6Y,QW\ {%d $X%sg9ǡkg2U+TM>(h'0;UHN|*"N5. IDAT7r+oCAzڬf OӢ2ֆsa(7YӴ)̦{Tn4@.#exjOQ<'/ېVUDLڛRQ6vzEGt [7.Ӕy.3N,oYFO[gz霧[f:y}9ns}l%RbV`,X" !,c- 8 M ߝ;vbI^\ j U4B:[Z"׮ZS|^|VT,{;7fm(q|.tmБlHT=S\NTH-<#)w|>{_mmo8 0$HOqhT]NwN*8v^,#et\\?9O1FX:HK2p˻ '5V5IԞ'?q Sڸui֛W JR>kw`)[_ :nR)FX 9Sd%3)}iv"Wr9%c˝u*[fg7pd Ⱥܿ 1jP ʾlK";@ȕ2&GZ^ IM/6E?^&pG5RHX{Yk9gjTdVvKȜd;ԽhLb8[cu:X+u頣9$#dnmpOIMN#AtupH2`H-Q !22`,:҉kd̔^O0xae7Ys\;N<aGkIX0Q^Bv0 x&'`<˨ ;uxu?^u2ըdAI%5Gֻ+:V0zU߭HK\MǪ^Huc}tqx.H͸ej*ã<ʻ+QR}Eҗ7̟} ͫbQIbN}պx8̆C'w>H:?¥Djd/8;n`՚`HM%[|D̾ebTӔ`M&:n:)Zltjo7sIzOQjh,Hťr/p5~3 e,6mF^ڸKBO{DŽ UqH׹0H `t W[`RyIÎKo符dV{IjɶjQ.me8 fIӑDTYXK`~,#\I/;'M;c̀/" ԵaA5L^[pՕc|[㻥=xYI?"{YWsc5[3ZbʡaYUhXbdzXOw>"_[=ߋ)6nq$ IknN^ZODqaMm잒ɷmmk[ۺ7-g~gRJ)S}]O>ASꁔ0 [~[ۺg+2~3X識t JV]}':<>ǫӣ.o$ "wb<'5ߌ̱aј s3{i|kOd=̰n] ^Bx hwf =ΙsK{n{ΎH̬ sULSxSW>dNfy, A_X![Ɍ(:E! 
Ҧ0qx 0*^^Ց배X`DcaXS*acU cx=2^3Na]îsvI{#VKw=KK6oNT DH9$%Xq/ny y198Vݩm2cox%78TCu{fݗqZ^~X _$}cƩ .%o25 n2&7X8 nujUκ!|}oee j@81Qd#Oԥo  wSĦ;_4m]w048N1w:j$c' ؗcڛFNd^NT7!4niChl,bԍ1C=xThj~snaTj )<;FN4L%KnX&[ډ 6 U84|CX}.GMͪSӫqah1+X6+qR |F!-s!b3lf$ ě;y ttI.DI3v;pM) }z{?lacӌ ˕;GAfX:KXre )tG .ժtb"ƺc М80ue /?&^Z4vϓ]![W=Uц=)ֶկʯ4QeSx}mr!h?<pXLuxƪQh/[uПS.nBR?؇kn&}}3-qQ?Ʉ,Rgӛqr[o)v7~7>яR9ıN` {=؅M~~fav޽Q3',op.jUCWs$ *oU# 3>@v-`U,& flfӈk8;{͆FAy$u91~'鶬8pEv22 ֛\Hmc//ɸNgY2 7@RU5 | sY0s#2jm^FZ7||'fF!3XOji ǒ嶽q$+dFM|hҋɩp7^ FHĚ.8OݥC|_:c+&Ep^<_c썶D;{6}?g ]x1ng3Ӆ9Ϲy%ewj zyI;NY$"?v1á5- pA*"ո77B2|U^D41,$ TQG5סZWn߹r3p9Es)߫6W]W61jW?4c"$䖪nG߆ELH`.P1K *Tbrl/\g؀Wd1G4bk'Pguq6BbebX d}a"$>O/)2[uL y'vi`I# 3`lPjSy7Yܕ~0:DXb6Lp,'#N=vSj|]Dxd'w<簙S564;L];eyg]_=0FoF47&t &\J==֛ϦRLěM7#Sx-Oс o7EjP %X@$Jխh2}W@;EA4 wA' 1:@WkWNu$0cmc43Yx kOqv&yE6%0H{.3,u¨Uޓ15X±q¸Qa8'fs-}$47z8Ċ$4.vy zę2豳EpZ icC肋p81MZkiݧkس΁R4#7(G3OJ BbC 30GYfKFߴzVIáq^tok#pH* Sd6t*N *ڄO5Fr - 0XZ7fΈv|Jt\1NBoDF0s#.M_zSKKxhOF$RsәZ!.psED8a$ir ~\.K7ES+0k%lzB>dTcW TŦ 1Z|_(>  9 6t?˪yU$o7m^ޖmmk[V{ҥKw~~.Il^_v~>򑏜;wcb`[YRBsaίOm;p֕#aWYb6"k!Уs`"/ V ZΡ9 N4KቿHs)f֛PM$Oĕ+W%4a=>fCh2lv`w)춁ѾNF>kvX*da4 N[ =| ;ՋkZm+`tbkEW{IxuxD^iP6IV^OJ}/뼓N7zqYt"i\F2E\Mʶ<z}: Fj#-S}=W*+vW ?xӞ/ol~b4¡ZkM^nDlŚmmk[s#L߼{fbzȏܺu`b`[v*!(?S/xs0GH̬0H"+7p'Us2VJXQ"vn{}-ʯώ9`{1mm[}~>m>웫d{d39#4f@ d;M") ~IAm`wŔN>Gl (\ @c([OƷB3؁}"Lsv$ցP1gF^oُ w6QdaSa>aOUϓ/ggv%wM9tin&ֻ&g_Ҭ3vnxg|W 0c-w va+X)伍T)ԧ` νWiN8l4Ρ Tm{:lTg\&\u]NF`مO|&z( lu*M,YτcN{`>l WB_)<vHI +T_x'ЋQqifD-1KXq(5Nj1Mt!D IDATOG<.ܢ1kmZ+XYOz2v·o`p6"K1-hW73yyf0eC &n{ ,4ha'&nhPiO)ǘ 1Z$5PayĀ ffsHNPb-Kf=tԻ-3LJ) R2$#ȤRvO7g amX84˼D3D;׻ۼ;V_3B33$wDde~ ĜhĪC/|z0LJ\}ğOͬ³Y"$"9:4݂_ >_5[ GX7v=M\>ٙ52.Co|"*JʊYլFM C4ou<{j{gm=4n+dN4 4]8dxC<:jT8#brFuG06> eڠEtR K6idMfEN.ǘ% { {kCh<@ƒ̃Nt ](W_EҘ EL5-K6%Z!E`Iĝ'FEե\g0 zuוNU 1r8랅qg :HIUpvː$y1(VWbLt]DLnBqQ 3\RTZoM<8P:,w}jNn7U w =h3[Pf0;؋HkF=K9jG c!C~2&d tE{$2k+Fr |_MF*(W:c&t0̻N-LȔ!J0?39Zkim-mԅ}19Uu\ǧl{]sztݒ$UJiokݐRA??u\Im󁧜ZlC 1q}|nb/6:6^_b€61bdݗ ?g6GncUx5 J⓿~IU4%d65vvMa01\*rpscM*Wնp4*!U4p;oEEJ+0v9mA!9dڸU0{9וvTk7*-bՠրtv̦v@JUG%aj6\+ 8(˞ 
E L&%.ᗍ*LS!pjGi#4]Vk [K34mJϗ|8!v Q!-0a:F?XCJ&G 9ca=S$@YzgehssVGU8B lW], s{bU?.8<'ЃaH "qCN.K6O&x[-[[ͤa<| ,'80;h80l0P"A{xf?,R/R:H0Vwx8FX9k҉Xj+: }6tpJO01fA=٤9I+ czhWLnrqZA-]fUڌH" lmM@ܮ6Zo}q΋{֙b.f#iiTShg#bU,U?243_=4 Dlg䨓Y׈_9AocK(OOB<뷱mlconJ)wooM Ĭ_jD:r?￿mlcߴJ-'mg簥yI3F#o(Ƅ8% hq"{b?ʛ[Q_̯\AΤuҦ_6QJYYݯ_ mocCr81#sb؉*dɖVAHc{S_j@bL?Re1@ڀ*gI{U Hlv`ňVjpԓ')_\f`ӵ)MGҴk[CoUn}J ۇ /8Ύ̔kyFQfvݯj%D%쮾)mK Eĺ)`b)Q4SP ,XB8Ğn<|,Wݍ[>) BXoIğL֏*Y/7Гg}bjJ_jy4qB s W.\'LY:ml֜~yf VJ+2Zu{V)cGy.;)uN{#8J<< ̋lbp1bxf܀$WcVNV̠V_t;_Q5A Y{~AZHfCiGԱ&nME)Vj3S1M t39A+E2km_ y7֬ZkvŅw4H8% .+@9Vb;'͆>0{q_9SE"r#/?/S_TF3W<,J.0ћ.;KDV(T.IVYx wKo'MuiŸGo}Ӛ@놞ƫFkռDy{z:Ǿ_3 4 -I4(|.kITm G ^BTf:R)d$KaF \$iBcM Zmx^e57<`*LejjDoip8!:;,Jr;VP! To}4ΕnUh60UcqWEB_6/k`F&8 WDf(1KP%Xa fBn9<-UH;&ё#Q&ddt$SA\.j%*shvӗdO2{U~p*b&QLa?1Tpa̭LJ)U>`N#%x#JK' XPa(kë{4`Вr'VV.ʡzn$;ʶk(Rpj~S+ӳ=fz¸he% 2Z`ы (L*U+0Ka]vd;:*[7r4ڂMm o.jLnBV0̊AB\E&b?ȢhdEלP2|H"-jJF?h*S47է & nNCRj!YGOB, ejv0#LI hky(X 7;1`S|b$/V wIj`ҔRU$5ih7ÛSf6Rĥ#ngPk P}=ݮWzV%-m`/,lfES fPL=w e1TA-5 ~<}Au5]ͥ+gUI3hGp>"Ox]^t뷱ml㍒7 <5Ms__Y'M~|0}۶am|S5LJ x+39St|mˆTspZRWVP^R=-\{ S^t`c.'osm!6wZk<)A#lr$`ӊb>hK@Ԯc gP(idkLT*N-dC6aCo<$UYs,slEG9Ki LDnb4^v,pNN{Ҁ)9~kҶf!iU4+̠(YJce]m(U #,θk,~/G jA sU͖jZ8p7*XT< :) @ɩc ԋjQ7'N-<.C I°P d"vp/thl\^:k5؃Z¡5)7gNB d:+{A`3eF9۰V.ycq N-袸yQxU 7ž0-zXRP=dJK1gJ1Tے9o(%B3 fGoNLD4"P6\Gsl/(yۆ X5ޤoc/2C51N#&F-,u!f=-M;sI.4I:chW)pXeDs.% [Q\' ֋"S2&TiVDs3hLQ ))o 5Jb)naKhZ{=a$R&cTrŵ®̓E(AcMAciwWNU-6339q7eK$h `G8<UKjkc*.#<`-*7`1\TF_*1>4 ߷_j'.r6+"P7lw2 G?_>O*RI6C5Q\H?뫤/?~;)s>KuXTPua_6[V~> )p#XJÿ_~ً ׀Uu}//r}lϷݢY&ЪtZՈDf0)4=FRT0R֩R,nuA־pp}U{ni zV|LZ5 [ ZΘGy.lـbTzEK4*UBAZ0#j>+@΀4 y3¡=`cinm LNE ^eP U#%"0[-VMR!rً:">e4G~n]zLLV>b'x!qV7 F Ttqj)vYF-%;]GyƟCca4NHq$4Lמ0|iGӈ*cA-2шc&+PԚf-KYg,PoYx<?+?A+ScN1…W`&D"&f̰= GGkZ%?`ji䨕v\ /D&.baGw;SòSl-52f U{9qGW8vA}CЮ4Ü. x5 !Ugc81ֽ5WX0:Șñ%Z8+U/t^dhan@Cf cDWD'ؑ7=rc;{6 ){]HC6 ]c;* J1!3 pvέ,Izt͹-8ZVgpLO<7n9o7%JPnIH??nKsJvQ!-N CL'^Ȼ H:.fp/"t!~2ip ٓ\9~~C=hZ¶Zx H8%Ֆu#_Uve(US5+ؼB! & I1`E86 . 
3h+Y^fpz2z7Nn-O1`M))0}>3L m|ð ºAID^~;9~hSXlx䃚 ,4fkY򀃍N.ۭ^ l3dj;,1-աnC%#58]E 8vCƃ0dbX3k!Gkdb2{H ŚCӳǛ5e/ 'ųE 0߈Ghs 7h/5< _aHg.|]v+| EpTxB= ӯ$! LaB $v;o~mS ]P(+A8Ow?N1K3Y8X:?6; Y^GoC'h _uʙYfP֏]͖U.hvQp&>L@4;"NS@襂E""-l][_ZIb80dKaOMC9SC]Py_T- Cz.wX6 "QvJb(o  9dX%NЂ. K VTEKqIėNwfiHNolSJ@;iC^qіw IDAT7f#a|/*0o>Ҩ`F|j6#MV$ǹɒLV;RoȚYlxJj㾶,p_}C ;; 4A L$v\rK(D^|hhPS9M꒙V꯭TY'r\W}S؇,=s+c׽{ThVzXX!%ᯈO7tQy z:+ы%lw}gK4KL 4'gmt>kq ,2x ʎ"V㒟/]JO2/dVMhZVl dhJ5]6.\N6DP*hWsWW3lymPȾ5` ;^M:iǙ9- J>:sg71K)h+^} CLլ3;Kޘ vs{cceyl Mva``k{+*BGe0vvbs)ѣ%L=ͧ$A1D@c=UΕa$FuQqܝõ-X17ZEN&VDH0qk[KY"5oəNsgc=3S [Cz=v m0waǢ-Tf]QY(##N"-D:SpW?, dŖ%%v@=[G*+!\˷vU%fMG 9o}V~66kDbfxZb j8֜5 ??3[K\TAuUc;r;LJO~򓅞3׶V<=lqJZos^RiPY/:j*/Sx4K -J'1ի{x X:$QͱQ'O|[ ֳꦣ-# eXkN(aV eՌR- iI!y"Jc"[UV5hFr^܀*aᲐ vD#s^LTUh0%<,U<&N e7Xl͕R8fl,U<73r Ud;9ZԘJ\[QJu6;1Ƽ(ZHj&M`kUU-Q,%bZJ&>6!, CBH=R^BVRQb wmX #!ɂsL4hVI+ A*v&=$!OeL|5ɥ@E⅂V y fRŒJ"%{(JWbA6L!⅍\v㔙vձVNR'vv }鯰 /V v3]+%"[ Ji/y?)$²hwV<|Oa&zڧI}S.Xsxo_sZ#V¹R#UK8HϮ!\ dd9G]4\͗{upyexѰ\؇rWѐT:!MH(CBݖUO,jl 0u5nRuo@ml f~뷀mo4OM_G衰grz#n|%ii8fJkEqWRz'XMk~n{1%\cJ^XQrRH;dnRvl`;ؾl_v?_{0z~fDCyas4Y|lf;ƞl5fi)1 q`0]G1\wd</$b,ehB w nJmD;PV}j~ 4pKRC(s0OԋY g@{Y3T-~NU|*X С>eưd ȡ(fE+Q 5xJ DQ kݦz:jbç Xơ&v a'eqݹ|8V VK801XJ Ypz8A?1,yٱ&6`6mlpvE菚Y{VufP,d}//Uي|`C]Sm*s_*e'=㖞5lux\d貱˰JQ(EI?wkm30kwxԹc-I4w7nܨQqgAA`LjaUK^rhU~lHKLDUk}bf(b;9L w =tW2FL*str9&F<@:;O8@ng.M,&fliROX8CM/IUE`ZHk^c /v`^n>?'|`Vvj h%L;Ild oĬ g;`T11݈=q, R 2,C}5)Ghqf8LɔMF&1&)5F<tNˑɶ̕W*QEn FȔ+kih)Pw]{໓e#1,Tƥ'm?_'fF'LŽP̴V͐=`)H)֍8w\xMc^#.q&C5 Bު'P(Kx,suD9ӏt:/ Cg(9%+XZiB3Cwo']1 "a>'FmbEPYuZ6ﷱ+Xۦt6^'Io~wwy^S4 6 'q\gbׄp< C/R=҄yڍ3jR~ɟ>WE-kum$R'~rIvA aC Yeܱ4105&ANT% ?ɨ<` lZb'醳~k^\I6f]}lf0'Uf,Q%7;0`}mYW6}1)xIV $J&潓̠Cbp4L$K>@oa*va7ةrМiWɍ`H;jBΗZv SQCU)MrV b<96N'zT1GFVEf=F7Mr);P(gƛW7X{Ȭ#]<*#OձtD|fSknUteÊr e|7݅=lWbN_"ȦH7D|}V<1INx[;(s̓4i[,y^S%pNjncܐizkk~F ځt7'MFDMk^o>he5ٶCq NP }D5x!OюbN05me%W 2S"ZK;S1fihR?FQCNOMRI  ל9g|G߸WZb ޭe,{I<K٭3:'2Ȭ3:85H‰)kzhL" xe,`FAjb(ܖf\v3%*Ejи^VI[H[adjd\ƾjstùQ +"<=m;76FŬ'l6??}߯UXl>ImVx>Wg-1vՑ?/|V!+˨7a$QT,x"4ݰ5Y8V\=|R>G+~;MJC//nz09O&#"ațL%4@L^~Fk 
\Tf1dyqPJH%A2@m݂@ÀTS`ؚl?Jp Rh5 [Dآݶ!4jYm,Euω~*NE*D̼'OFر#_k}?4N#Y &1@Z!XTa~(=*'ž۞Bj䆋ZF#ʵWO3jQ48CTNы^Ck Z6a\Jܴg@g3氨Pxɇct ӈ 抂"A:6Q;pMfFUG4d[ & !/[*4ǖN $ֽWpưgcrJW3:ͻ6dTj,.T-\]wk9׎"g'̺S.UVDHa)Yvj)--±:ޖQ.%*>Ӕ?D. Mh,qNjPڄ *%*MԒ)JT0P50]/Mo.c>:G{cί idrir40ZW(n8tfR"G)áib|sy۔83MLtN̪ɨn>V:f(<,T"Ř݃CXYu+XRO;;0E6F'I NTvҠn/8[/ԜD_ԝ|w.+D2\\Ի^=oZe^NSsA%5m 63݃ H}D37TބJ³{24K<'fof IDATq$6Jph.ա8y"b9+>'_ǃ3{gT>N.\^{Ye 񨕏0N$JH[$FnYys~U~+_w݌8Ӳ;̗T-oyZDϖt9mI[TEFTBd)ugr3>cM(d9 d HgA?5qsbdV5R0;7{F[޷ma4tAP'zQ%[!`d#fX&+ldH6T6Fbt4ݕVO$>>Ë5:du!X!VR5X/ _Nn5Bӑu)eCHż5ڄ>X97$t?'@h7޶>!MݖubɲMy|4OD8Xe^K Z2W`9Z>t)U#ٌyƘ3Νs88x=<º45hLD!:z}2a"Ij+Y2érYs<,'ʅz~;*$r:x[bi"G+fsm4X6f00f#LL%@qKGfak~9U]jy\hYp] ]uf4`e8a/F扻)]cFm|Ky?R'}Hywcl`ta=c 3Yl0_gnO3:ΣKg]2 aIk>E=5akV06=-yS8O}Z~uV["-R[W^ pR"[/?23;_d{0s1F(f{_jɂ$0+j!XQJ]cܮ{|8x/g]clwX]b_ZԞ>眧-oFńKP;k׮/k^stto_S:ZDp$ㇻzܨO_;֨/hu T:E*ã_K#^W=T"@|x=C]Wmo{[ hmldηj JlD:|׾.5*2[eu&| f]>B ,DXJ%ECHItl&͂O!RE:U_ՁsT4#DVxx;n碌5r!>*W2[Ȋ3$)a@Add1A̙YwWWB^kaJdgq'-l`tιM9~-?tdh NQ}WݰB]v*ݨOHb WqS!"x\9"u m9ؤT䆅&H$ɒUo~!]{3UIum!/PRpyWj$9AGw& Q+]>z->4|Oɮ@wb X6\9x.YO%qΫ4kc=j +a/KHQ@!01VEy fcsSmNr721/]j,Y"[,w>tOވˑffQ]P IcYthv-%q NoVMTN(S\{fZs#wgǼYÏR˷D}{-{%U[:CsgqU}mm }[aq92**D_!qʪ+PSyi4%st0 b|FGŲFǤ#tB9_I4J.Ĺ뙷MR,GƭPX%KIУ3',s7+$5EP/51Ku!AXJf :/ K CorqCGz?3Ri]|kY3kCA[3B䌧!h!1'3,ҽ)@36}zef )lmG3{jU9(^ū;^ ؉.v_#7ϕNZo?ɉ 3Nw{ u2oVygu6\_J95qCsrrW_ Su&)v[ų}E h{k7^o 䟧_ż1\El\x]PR*&Lj3j~rK 3Q:EBuR$ "\@C4Nu" SD6GXev[rs۪kKA-:N{N pE8F*}{zI$l3~;}ARo E (ZFz/3M46 qak/Y[EXOY-vZ㚏 ,ih_p=SUF*n[@(FQ =eqD|mZ6+M\щK 9*D'#EOb6 MIߞw"kB.@T6eSWgcL cՂ[h1~X2+|`10؇lk]י,"EBtMMM sq/d}Ke:KX<&3t^Oev.홯y]ƿ`9cϘեx-xHp;'#Y2Z){wD\Qt`.$!6 s?n|<~uy)]뼯Ip2"Smp]\?$f>3! U`85$PU^CVV [XؠmwDrs,e݉Sq=Yd9D*N1!7J]7v{"Oe)N8uy?Pmhkm:#g:&֒D"]kRۡڞ]|w]鋛/[Te7o|k_Wp_T]Uݪ-fȳ+l0UVm%;93bXC=Tלg8.??s?sväVui%UXI9扣?yūzo E_ތM2˚xo]e"MR1b[I ?-# 4AgꨢoU K9ms3blְ ΐ 0|Nj IJ -t=t M? 
v*N l,L t F1iЭB qd333ٸM ͺ&ZVP?3]%=<8'%,.[-8Jw&Μ84ȄՀp¶p+!ELRxJsPv=1ܕHxeLYh݃gRDzXr(r0[ZٗĢ0S(id0QWg{͌bbx/1Sʈ2hg¥eP%1~CBh!Z庿p_<]؁'-BCW+XFUb!0# h66ghvq ;:ԋ~,ňā3Ԋ5vƓ 7ħhN @G>fLHȰfq ¶qwV#5Ckt3q(.^$JQ/Mmrq#" `^l_"͍8guK-Sɫm]V&ا̌8,]{#0j%eӟK:Nl(Ejcfx n *'DZfgx.1|X-pTͰ/3v]b_UMo3:=̪ݿw/}Kgb+*ng#T˴dfPF]~v )R,x\&ДW'''@->w zIh]p9? %"z1&yH}yS 4Wv<Ǔ0Nv4z$WyJW𾔍U[U\*Uy[*:UС1Qu+w8MVh1wtqe+JsC%4f0fDrQIċcq žة1kv?X-yT?I垬j[H:b4oVjrH%W$#|j 93sME&th[ F}_[0!a1Pt],L1_CP:yeh2܍.EŒW|1P3=UcLiy`8-bΌGᣤOgP6JTh>#?76Csi$'[f75TX݇eY98+Y=ng9[-빙ʳ ^@U,%WBwD3n(r\ >wKOwa V~m68|7, ÓwWfB]^t v'iNGnHg[+5/=dM潫 \2_jF?ccwKՐ|Xs1k_?<ԑ|nTGX'g~_OL(/wn[8fN4O'dfRY9L+WJ` "!mXXC-Y(,\"PtXDZrLGzT"GeUs ru9/6qTlw AX.=1O3]zD.GO0‰u69mPGDX'padL:q'}Xάo!%oD] ѡA
=bo[Iͭ%,+;:F75t,):όO)])"ov [p!O|C[܁9Oۯyk 3K)m6i/}oo3ZshFf_oى<8wA ۘ;x>WG 5t]PWQR(@ՂJ*z=" 0ܕYx ~D#Vp_Kaa:8 ; g*iAa|ZFX:uMY `-:zbcv c!2",Jpǐ'Uu{<Ж8e s%o(էΠY76Lx,~ܩFabee{H*ګ]ݮB!Z5Zv@Q̞ӌxh V89Iؓ/-%6c|g ,; (Xt_Oi* FWulUjۓT`.FˇIϵ|y#pW=pvػև#7бhGXYOMqWF/|k5jcP6ssIE \J̰h,y͆j)Vg,kn p@UDp5cXlk \ ]}tj:<~ X?4g}uK2:9>>>WtJTLEikEV CX!ւʘ ]_Η1 E^AYož.-LuEb3VJSͳ/^ ?Lz"E k8!3~k=ȥGIZYtXy!EĽtJ<^p*}rC ֕ȌTH"?ni4sa~a꿊a.v]|ّRes['{SՊ|ɟ?o~w+ O}_򓟜@+s_*!i*@dxfF O)ӊafOŽQ cj% 0TJP8;I6<38зaF`u=QSaS8 n`p=8V- v]wGX+sbYf(S":Lq$Щڨf KV-Qo .TKAZɔVf!Po. ';xCtX 3cjk棗&O$0Zuf89=tm& LhL]o[oFIɽq4@k<(h @g>XU a4Qm lqRcBe:,&[Sq=s,eXv ]6;0j+ռD;FJn'n],lx1R'C,U~=I(]H!pi@􈹘Q,YR"4s͠AI [KDP_؇C*\aI vdg|2#mc!RVtFOlZ$kG>ǃfH$NDQQAݲ Wx}5|k m3zk0$9eK(pj83I,2DB IDAT#1y+aKWR ma6ĦJ^B;p@&VS%0d]C0Kػ >gE.̜o[E./#6g 9*\4vWso]b򵳩,4++/{˾$Z_?Tҧ?o {Ϣ'>7???LL=}n!QM#&U|]wϯ^]1$_*MZ//899ޙecJ@ą;ct:(VXxܔn1h`g93c]0%Q NaBiEZDc٨ykzgvak鶣Nq<i42 5fk43RmGD-:oP_ʘ(ɧZq=st8#7N$u8@(`Q-Yc}2|J2v ѱa&ʶWmQO^ctF0̆P9uc1tFHJ!DgE했ixm{Jm}kXLӯH @v5IMTڊbP9@vK@jjS)!:b^fɬ!pk!zǷ&ObԔz+z?yn@r[5&#|-"8jBbcMƔ{oNڨ#DJ_C trS8f9ء彜ygF_8/ ~K(BEщRHh.cf)6-^XZ;?d8ݍDi νZU oC|^dVTH|g=-u7;>0+F1^͌8Auvϻ nv]iPeU8zh[LL3+w|u{'~'xZ3[㪽dI,:uvUrO(m;J)Ϳ7o}[)o2;|wn d=/{~׿ۧtbooSJ|;_mo;oqFKōU{trӟ5xZH1ps ( U-BDJEj&cL4 SWi:2Յ3%;/P.Kt︢&ˆ$/d^Id1u& #4v\$,1Jzbf"A¼GF?,(M8c}%Bk؀âϪ(^70k1sQ[C|eIiH 3,P J â1DSޅE^Y]+pr UE<4&9P:aFpZW׈Hz!,IIeaE[S =]Xxt ږM|+m^ϧqHɅo9l#좿gN:t7/9pq?LfNs4Ǣ|kWW~gʣM:SZ̍CoDCB:(,7M|{w3>}]S.=ڔԥY_ahns뭗> }5-/p+;wPy?e`9:SXC`/i L,-/R^zTv^UI kBQfCIA+ ؠMlʠ0kTFƘ[ kj s7фd1Tz:덓ͳgPm3-gMЍ|t{E3>C%Ljsϩ1b24VagSP39CQqa k4 /ERyBkl5 mx-/[X_hPJtb"ra.J6*[ڕ{<@S ̶LKP c'w5PF5I.9ūٟ^^ SܺEߎX?cP{DFpMJ)or!lLP;}ҧ[=jEvhsʣ% (Yk:2hhβbwR{.v4Nm۾/*UtG4R?K/}}ߛk׮աhfp=>dsA+4ƭiڬP5ַկ~o|#<—$miOo6vW. 8SWx,8"zi4 De]pGb o/d]"ٟ׿~~*UVw 1sy@$Ie9 %4 {n޼x;տW(e ~ww@mW$ϴn6}W mo{_UJy&j뺺Ї~G~~~TιNi3dIiĩ 2~7 o׿M2,(Š|Y{ 6tF,jW8ϕ̒i_\b%q}tMd hmla,W*U+B.fٔ3}[L^-:ӑ-vâ0Z6p.Q~ST~unsYrcڣZ8 U2cVα]YHZF^`o)nu[VD`z\ Zc#6n 9ZYZbVx B{$@Smbo* =QIU>"xHLUTC3{(SHuΏ{I\5\dwQ4+, ʑ5cZL5Dil޻r]j˹mwNZH!AK@? 
ʶtvN΃ejnl abZݍօĔDQ%<}[kU9~j9dG1&7׭U|l])8qXװ5 3gDXn tFfR틚"*]wE,NdkW^rmZ3~+\3:զuj +Sˎc,V} ЃB5!{$I.u JKM&W$ec%;1eapGos)+ ˢ3:Lr;gf(m\z"ePHkx+XjWEhvQ+*s:CjRtɼ^,CTb̺5 A c"@w}^w{a7\7T|i_)PHc"|G*we~PqN.c_L }&%㋈`D]/p~hB#,xM)SSy]B$g4pmkߊ۱۱߲1ˣUD̗9q|+ܔ??˗G~G~G[>k73з&UĬ#wq\>c?#?É1.+v}b?m` ^BfC2K\(͏L&EǮ v[X"E?I2ւm!XXYETH ,;, xde}4ore;1N/n^(!&# Zӿbi_ ^k'+hv2). 4YcZlñCbmon" `*h̝ '&uB;)F/^(sۑ9,H{`&OeQq o VL3vϔE\(9&R[kS?٩&MzM.MNY5S lm5FvcHyiMJ:+W W D悱hW/{|N{JCB^Fșz/!:k>u/z N1B )L I#Fa@6EʾWCԟ;5X%)Ն'5Q p v\ P4ˆhe GЈu z;+KN 0E*I4SicZl|#jF c/8 !Y@45=hni) 0>9VIy_>_XY"7ՔZ˭]bH$ 4k|Y| }޿ꀈe/ B4$JNR;/ZX#\1QbW|JD0h,e@I J֚0PO qUE{/<1N5YryЋiۭW؊۱۱ڋ㉴>cs2j>|ӟ'?Y1/w89OuC믛>37z׻.]4kv3cԟ^Osca:~w?O|b3vSnus|dO_k-ۤopDJͳ_;vݘ zG{=C@ݷKs֫Pr7n7OB\tXTBln]]㒪+z ZH pYcX1Um:My3ͼI*xOĀSgyYik:GEEP)Xى'iMUzv‰&ɖ"&= cۉŵ4%:*vGeGNB2b ʰ*^Ʊb>ivقPYB5<% 2 &/m V27IaDEW#kh61 s(ȢPNLN6ublƶMmNe)3fOqlrnq \iQk(ӆ5!^{, yb ȰCqBjf pjXLFY+et?,.C;x)UIj'YyN/|&4}}^C&F7u:ճʿ8+i;718OY&k!s7\CرlcnZ3e'"Ja bbɢ"%o5mx%i)6j-'4h_ aʊ,['ÍZ6ʽ간M-D30TCDa\*& 7C{7ŅH^8/' $zgJ.!W'v;`/:%쁭 H܀#"4{|zD*gӛ=,)89,K6h2DS3s 7&Kf|;bvlvlǷbT8Cu}|$J׽QNt}k??{?C??UU4ea眿կ>Gy'`2n2v3>QNl5KGn+6Ѭ՗96*z׻~Uq~m6gOO/‡?x`{%Rt.~뻲ȶOٛY%OG_~H4lp){=#b22"ehۨYrvY Gs"Ny9J)2* 1p9iFFUKiT +fce!BHZ銝vW%z͊%lX.XTQhy0&:Xˇ`,ec-iiҀ-ј%p  IDAT[`Y_K  $n5]&ґBU5q*gOu' ъ^L`jz~3N;xr֧ 8G?ׄTs&5ӊڔPS%QSUvkX%jՌ\5Yq,rԈM>a:v5({h0 RI\!R`X?rL\ ;%ᚑUW9TsFVYԯEqF 6ǜ*f<|T꼆i is95_3L3+ePZ4SXo5*„aYܣ!;ٲȈҞEns'b ZB6:KX<1-;>h ʭoZ/$A%KkH!k,rJT- TTQێkdf3'h,놘(܋4׆*!e|;?+ox^i0s%4n*%ڵ#dK*f/G€Jyp2HZQ>ZtbRŻ_')^PkO$YVQ̄Va]=gZjg޲!k)Yr,P \* [M3!nbj9Iz,MHߴاD$ 1RFcT'=h0Jh+N)kTP(8QF'+fS~`Sv;m0zEԮsF-lna=Z10&"A/֣ެ# DS!u떈kXlLSS&FƹƸ,^Q0*h.ٺkmsiAQ6X5Eh 59 ,34^Pifbҋ3a3t A$A:>"vF.wGJ\2#tN1q崆ZIjqmQ؎؎?=\sk׮Z,߬2|[}g}}C) .]+导΂|<ַ]^a0S??́ϛs"⟛fuZ|?OzsߔS\ -ts rsoE%pbYf헼gJNBAxBfdz{Tpۡx}PQRcGHت!ҚCB _8Wؚ4SEgQI kɥIxFX8, '&XFSF+Y5Z#E6&*32Ք YRD%曲HƢdn-' Y9@d4|@ WCVv@^lUn_ 4ϻD2b);JX Sda^"Vwt6愯:*ВOYj+ԎJBYBRc~;9A`kh%LZJ-e75n{m1Vؓb0(Q\K[*K-*ixtԫ&NaX RRTR&s!CTZ[[~օG0.,>l';"gP ֆ0NZLHq\RJ}f +yݧ.Q>\|cY=Z}Z--ZBotig(p]w a~~r ‹5݉E`s"^kiA ijHe=tM@ 
j[Iu?o؎؎W_yU>{#<ׯ_>7۱y˱)!R׫.==/›9ng Tκ{\I`~Ʌ><+^-s^."jz'>{ٖO>}ǔF0og?~y"m s}_~}84 CJiN>Z/YlJ*7~o zћ-`$pP6geB3WU'=Mj.1b _` cL9L]i2n kF77D/"r9J+yD)5{J\T:+9tiFjKh&dJLKXU`*n/\ ^q!rh5e5qu+5 g WgmuX0#KH7D%kBi&)M @XrRuz@I]J9]DR+RYȄ;V,1s_蠘YndAd- Zd [ =^LA$XCGSV#!G%*wh#KI؅VxӼҠuNJ'utPkާV՛6֙:%S2pw< if`3qgd]<#23ӎZHa] @`Zvʹ)]JAb}ZFkK 8MQ{8>:-|), (ĸ ]ace7)z p.u`qJek! iBoptJ;;Nl؊۱۱JI)ُ9緾7<99]mO緼 XƜ+W\/xwY:9ͣ}wu惸^ŬݗR~fvʕ/| 8>#ⳟ+WdQ~~'~杶1'=Ԫ//|c䯼xۭ_җ/x*9ی*~Cv=@'~'v/c!\JD;LE ]-ec/ž" GN( z cS4*/-&GHLDlgdk$NE/F8`v`%N`}PTy;=6?atOh󻱼ۜY$)3%|XYtZ1tJ(/U*%5H\F5iUEJe;ESi:׉‘-X"\X{>C@g掎.W3lrtG-: 3Uwvgr T34w4;.*p-Jfa忉ë(dX0Jd-d (d,g/YCp0.G@'5GTf͊Քu7K. [@gfF*-k?Cߐ̽:a HXWDЌ9)URkvs e 뉅/J";ֱ%/?,h5h:u=Y}q 58X:}$-#qp%{N KkK]ܞ]?ZӟaulWP==!ށ(׭X۱x<*UEmo{G?ѿ7j9ΗY2,SO=u?>=o|}ozӛ{^K.HUw{ޗsիW+yjLN)=|tt09RNjg.7ķԾs. #|qQZԓO>ySquȘ]z9Αۙ̈́qݿw_/>%Lk~]#ͼxg6!HE6 'Up{@K}(`I4%Llѭr0I;'"pRj 3}5ZE+=J5, CXx;>OUtLι;vfC#soW~Wyl:=&Ee6m3*hX۟S624광7)~[xmWh8+%SUU}5JB[IA UOoi IH3wvǦgP]VُОqSekTZ ɍVru#Cp \6̢  <Y\S ^0+2X׀SwY虣eiWT+JwMԹ";!q[vlvlǷfyݪ_?O?˗c[qsf) })myB75͟R_߈7o}s}_ ;Y,s[&zxI{ң>ZE~꼪Vz._OMob2V_kBc=>>hvLww\#Ї>7?ϴIe5%#\}b_'vKUt7QUg*5!UyCh -ĀaלݾjUV,ffZTz` &qb4W=W)uQCc|mx(RFqոGhN9-!npd! >q7k~ oy^*K}Tޱ X*ؤ[n^}ӫB27!7F-وgWrFUh#84ܯYwҧWt~]1a`Mӯ Z#%`V|vh_/_kXNF$7.]Ԉu`j/EVB:؅pvT[@^R گ nJƦ uͷ&jS͙UT fQ8[SiH>uNDL_O5s-[_"1s1xc6bCmvEi0JwX3ژ{1Ȳv:q pahp9[&97!zAR[=no'fvq:c;c;%:,΄ʯ[o[Ju-MY&̜so99?L57})?PY a^M̠;V|G뺪V v|{/*KtG?xքnl~9E6"}'x'䟼~,ݮ9Ͽ DBnqA'zcGر#Q4ɵ$'d? 
!XB KF\I.3#A N:,1 CYٷxz/&S< Ө(Z!\vv%BnlPcfm MMgIv`rrP`֦Q2wɳW1\*)i*'PnY IDAT)V+eb02Rb Xr% 5FeQnH~IgrbO18u) Q?ksoI^M/$zU rY*8?N [ɀQaU돬WnX>='"\3 lau˔̱K潘B*xx)%TFce~ܕhkEV-E:E E$DRX5녻qPL׿Y0iWYj1 8WGvy+혫&0Ѵė`ia>&Ժ< [zB  ]I J^K*A&21`us^DN5Dzb',YMV3Z̊Uq]iJR-xuaܽkNT)NfZwLu˶؎؎W^P:-O<˗{issY4JɨNZsJ=Qa3A{~bEߤSf:T{?Vn80};comc^fA_GR׼~r{=v|1kR(?~WySX,y.'&{0V{3`fv*Uh'Z FDA t8KDɃLf%A`: HʶV`ծ S|Ui6#ZC@%S;;+ (;cbNi%b<{Sm$MjUvVw*ԣih`KbIͲJҟq9žϠLgg(L/Y|p*&SQlsrŎ=cg{E-8nKЫZ*ޔܽϝU7߹  i:8 ULn1㭪z$ĬFr:~=_Ih$nW WK:'bMUc.}%"3ͩ_:8YK*_U|9x^r-uKKB+XVhecDY,ٝ #18B7#v>8:0ٽ3_ԘZm;0 sKY7AfF!G}DjYdYd+bqwoa% (DY)xX{*3$I-}ܒy&!Gnav¬ۊ۱۱4{M k?`nA aWEioiԧ96`f͹;QY{y7г{{w.;xG?[Fn~/>iSSLjnM==}:ܢ̢v{u{g25$O[8sʕxϏOw??KK|aRnТą L.QWn) 's+YN}Q2<)pQ^QGPؑCottt1z.%sEDȓ21U >NH 0T']M_î 'H&2EANs4PnE"f7n֊Pv<.]"_X;03u:o?WRrFIP93Abru\ ӄ`EC88tMK] #aB +c֖+7?̲; EQSN@QW,2{ҮK nBEi YWH M9V[1˕(4Lb9#.er+` ,Kl|hen{eiG yDtѲKD>"(u*#T];F(اu=Q^JmQ V-Kk',e1J/|L˦O nPCTÙ^cSWWԄwq= .ʢao؎؎̨Zɬ/{챷m礖٠cqG_Mm?8yc\,ݭzlKksǸY&FjQ3DŽL&3R8/ˠ[(*FIRĂX`o+W, \|:“U SkE$snr4K%ByMS}cS·cqum_̹ \~zw'b^dtFgZ&jCkCӼOl&]yg&&r[X`h>2;cX\ެoںʎw` Ss|#ʝgg}믻7A;oLQqG+:^3Ei(N K4ޞv*l)g~q&4zzV{t3ñD}؇ Ү5y6h!J ,D1ê#47*UCMINBHS4Ḽ],^tAPZKu-硶7k3# Vj|k@!ShStq%X_qǔԼzm2EDo ƃR:qC\׈%a%PZY.EDP84a9K1W ,,rĠX3\̎/%¦YX2:lׁ5(^g'Ē%""(Ӣm]xGuNnh^NN`Pju >3e9x[º#K7^ڿ9`u`{+((KKYE-y޵_Uַܤ_a c55wJo;c;W<=Uu3 %vD2)@â([@)&nlؖ9D?|aH2BM)0bF#*X#ssN}rUgd gd-4zzNS?y[(5’~/ſ{n J],q_n[- uc{m-W(ܺuWW[;ahӄ|} ֶDpo#+:gY  Lܿsu?};6znW*C\u(ql8T<߷SFcO-V6lL;4}K_'?/8meۉh ʬ:GtKf U-o[xa;;Me1#"4i5Ȳio" Tyu"r)5t nF{1+NX1L첚*{&͞f6<7*BgVFZ.hkkEZX G#&|]lzflBY-[C1UpXLb%BRTzavP/Q o- wid2hksm!)l:]z9Cs,E4N̹e`d7xv`ھ,G72uBubv`T4{-x-Bmr5%" WIN'%+sZ^%&FT&ԩkȰY8(AdhK45ܪ(b4[&/d\+!W}BDH t,&q[[-\ʍc0=v鎚IRT\@Z^PjԖ&_؈qwy@(,;v z 3w0Ő,`EBYn1!kOHKR!Q旼ٌ,F{ndl 2U]Q.:MLnp;r>iz%tUI,Ť9”|rܹ߸ύsNQ 8W~pWI9ȸfv0^}}ޤ;~]h*9pb[H4J%V 6m͹M iiWT Q*BjA*,_o7 ҶkS]·MJzTur..Z#Z];JU"ԗ6?__zoVml؛`1(3"&\yDKk6looܧ?g}p^ >f΢ay>,J#+ Km[%vϚk6qgGd6Ls LM%te*UE]#+%#/fWc Mm9=„>BN Ns;QjOZx5=hTۖ>7hU4G:}Xh nJ}7~IUwB⍑P2{]ZnHT׊4V@(vKR|Qtv7n[GS]m DNW>hjƑ+29kuh•ᚌSϒ5 ksm vt~ Kcϝ[qo-u 
mrAT*#\W}UUJn//Aku@*vf'&cVmB)\sWhm'\M <[Cܡ4pmFlZ+ddk"EhЖ+}/I[pִǴ5qKY:|g.F׺$NiG-3#TeR0!h p-8i7Dk'-s. 2G=vNfE-f_R:՝s~O|{{;~{f uݮjb=%P0uyIxHs7n X59?oͿ'>^j\#J,׼HiRTrjxu t=%dWtUj;:aE7":u^Đ"Z&m4KRa1qLO_ $MDnWuHȪM [MWIm(Jq#ߊ#\'C+LtKWij^`+L6wDd"/{fǫI%LCa9B=ښ5+nuSJ, aFiM-2`  ~J}ofn )5I8oX^w\.{$-q˄iA'=̇;GqUpͬo2~\jZOVԪ')%D{'JCU- qʏ.Np/ yZ8ė,+%&ھJ1Gڷ6*lg6sghgOF4͆evy^9ǰO`,B\|S#Bu rmp;ye%-vUJVꨱ`U J11 8#;!؛c zqNÙq!< +OyăʗU.׻z_+C2N!jL~p'^^t2j2ea[x(_7䘷Q_z Bk|N†s+XqVmVtZkSd7}+r<\5!mէ?&ID?~g~jj gL2Kptڱ{rMrm»KwٿZt(xtOp+n &;pXrt!YU"pus]߽? ?廿ϾϿo/VUS0bز"TIE&ZZ#Z/>3<̧>;iǔEZ;c؊$*T\kD*VXDZ)3{*M 8Q/oQLMAK$;<^)"wY^'sd_h^bJ(2ߐnm@ Tf}hV̫7O|yd I$#l)Ufg\_]j9n :ӨD/M/mlFX V:+ވ瞮9ZrN)bue\%AM gpx9˨ \he IDATIlaˢr9W$ڤ6XfchoGDS dDJvD>ZUZ|Q??8~WIJjȻX+6DNK-Hg[֙m|ٷ!~33|&Β |v4&n]zRd5M4MwgôXgp,|;7?[?3?5ŷ|ן/}zæ09ɍR"VDs6~SG#?>ԧ}ٯ|+I\9$֩LfkyuSVܻ;meq;;le1Hg\fn55:WojYL;xI JYL^0 ndlz+(ZbGyW-E͞18Fw6%h:`v5-hьVFt6kui4!'iZEoR,!h1` z@7T- 7j=syF?TE$Vw%o<1 Ql4uK\Bc/kJvfInƖ~ uYeGU.q'*bRL= CD5Tܞq1ZW:fp)1aT,B]$vŻ!cTea 9F8M#=B _fF [v˹{OaڱXPk97$MQ=[v?lDV]Jа6\;,kf eC{wMMڀ5o,:p%`l&J͊*9T4͸c}Ђ/L#eDmQ 5]rlhVo$aOk]1K" KNcj0Iu_~:k|nG*yL~HqVOZbPn*@D88oєL(D;OO^т-8޲qxY~|߿\dn[upj??]]_|6,nv([Zg0]Tꤗ$=8`56x4M\P b |7۾w|w}ϟu~++(ko.tQ6+dej"8j,9d_~gy?|s{^w7_WjuFQqdTe31wxE;ZN@m; ye֩5YU AH ۢ Ӑrh)2sU4Cz?n6'LPx7骹Kf;gwvUwDJu"lRZ;P!0}^Ghs5]#Vsj&Tj)ˬ#͎cd2\]LVFf :6oMTK&HTFY- "\#y@N֟h&gf-vHSwQ5429$JrYj rs̙K!PDTZk3LI][,$𔹮\I5ĕ9 Iv8"jZaébU%"al.ɇTs> 6?;yzkLWFk+{zP0J!n}O=v _U#닉HzGh7fDi;aط 0>?r-GVr+ӊOaorVsXzbR*;̎9&+.^+ڰSo Ԝ&57|V?CrտWG[h'}3<#?#?|Skz۵ڌ:5Z,uw QU ׍N[٨[pבW:|WaBkVկ*aV{#E@W+I3S6nX6Klڬ"gԓ+W/|g?\}0[j ]o}۾l͟SzLgw}[V&gw^)s}_/~q=L,i(o֎Mvb2iŞr55jfR*(J]y -G P#m7ΫٽR*cUTLD0+bi.LL"3kRSք'kߪzہ.YQ%4.L\P?7vCK?9wZ5ɤGhf^ C,X5%3vP`B0:/[SLMB%~[@]G3X8JR o\q juÖe݁T5WdqIo6R&|Dۇ '~2[49Jg0(Vv#!NZⲟ[B=hD7.#9mŕ=YU(OKh-v~R@{U5*>ܚy7o캪N; Lr#4*Lo]1֣@k^63(޻ZC4BJ/EbyN>喌;_gWzmU{K%O4˛`LV-z yng}2ǘ9GI޼AI:RuʶBtPA:e艛 [hW֮,t{NRdxRo{H&G|֥TPfR)('Z88I0j_Woo69h4̞z3,Yk8]2SdK`pE"qPЁZ%axKtJemf[rd!Y/Sҷߒ{f Nkqߦ*MƎ I8AԱ|w6;+:Hae?1=)U 
pgZ.@ca.\դkܒ$@NʰZS]Uv%Gke+8'OKCߵϤC?Q6o:D0VK.K"Jg5@$kZc-M8X5z>9֦Ce,`sW; .9,PD5<&[X'HanTә~Dܨp#8؊-u<${SF!j&;x~OKzd2jq3kskBv#O `J-;!ɪ񕏔04$l1glZ?M^5LjDdVl`հUɨ56a>;888baM&"{WW_+`+FSJiߴRo3n۔__;?ӪT~;~'~'Xܴ=ڛ<4>|&NMzUYkrECS` 5laReahO25hOm (5u,ܔƲfp.hѵr9:I6<ڞΨpJ#;m`*W57ɹXSk"JaX?FZdML{w|5F2u8'5ӝÒ(шwh+vh U98 ]ܬԶ0N#?02ksF=pdڙL&с?\Y^}IMuvkOae*~.Mfsk ]+*K#a檉ս=f_H/B\KHts:IUX sn㯻pHߨg'Bfd>ˤu%(">2NN;O;շ̙YROypZ 5_ ZQP<'};SC6+3Gh"EbfmdM]]QINf3&[e7 rĺYן}`~g[>]o W ([.]J'&HA=Cyhhq7/|Ylo"VA?G˺C)U5P[+}\nۼ~ R[}+:gHY4[+<5ߚ7r%A$dVp*oN.` a۶e{d*i@TlLm~U'ExudEe^Oge׆Л+Smt\ j7 yzfZx2tXq6SɌ8Xin߾я~ci%5a>q[1[h~x5~-CuŶ~h K)KQpoi''̮1"=ʉ%#)$ˆuQS)h4皎n^0|+ l̹H.\9:SDS@hXrRtΨZ6r39uyֳRDu OӴhl}ko = nP+2."쬝؅F5 h~1x\$'MkHuv$ hpblzhL4pn^y$vk.&$%»WZ]*=\O]gD+/onP죃'bcEx)'.Z榇O78[,eHKb \ϕh_Ę9#):hȈy9ws* "~ndҭjD&5&-asM't*{jJ:o]|OaA2UӅ77Z>12\]|7Oa^֦p=s%fiUL][ɫ#ZkդIVׇL٤UQv\u[9Ŵ7/Z*+"8x9oϕMVAzmUX >O7)aeq:/&__^*2ftJ;4/ݝ;5lE`TrIETFFQvͼ T,3qiM]w]]؍{Nhay:H·[fw PLm$M3t~E6lX'ZkUke[YE1Ʋ*T4"^VzXp]4WlŒ"qjw{#ea3TaKɄ&F{m̗m=yZu߆S4 R{7!, )ܞIGpdwmVmkV2}JIPǺ4'[Ɯ-i_몇jbE:Iqq?MfȢZo߾/?Sz9 =*v5vW}sc8~?owR`WI.}P5vVƬhj=łEѺp1wNpnXd6Aкks%ѽmܶns|&WVʁuO8R,iͰ>Yx7HT;OclJͪ"f.ˇT:,\ +mc 1m'.w;b _x`å}EC 4]%H%gd^6Ԣ fU#ݯo?RI BDC7g=I^\+//p\5d>!%%)-}5x͊"m*^Ap_}hnS8N=LpXZPMum{" W^<PdEVjlLPe7dpAw} VDi4[<kujty&3:p IDATw!51kt ,8 6+ }pZ8҂Fk#t%Е[ f٬ӝ1TOؙޒ5 DFJ悚ǍVgANdھ/4[rڜ$z㡉>ڡںОϙZBGfgT qRֻ\2]VfChWri{\=zc ‡̛8Aޡ*cv .J$c_n"ER6p&΃[p'<-U X'gfih5aO\H`d,Ͳ{%("!Yi֜ЦMq(QC!/C]Clpi [(qqG3WE_ЇZex[Cbbx|<S+Tgy}n{6eOc߃lzߴN^HbЭ!ngjÀxޙn QVc+;Q9úKQ)A 5Ź_=ukrsjɞ kU&{Ё3tlIuuRq _KYwhۭ(fM Hx"7ZZ`SCs8AOYNh"%bC7p.n;C` [Z6E3q6fh+Rk=3ʭXaK9 d&vppQ8/ h-͓擒qqqЧ\_ljSd;㓟w^xᅇ˖x'q|5F%>}GxfRJfݿw/)$Y#3*= !V㋆d[(a9Ti0"1Ŵz>o9`gv=q1W v`$ Úz]h;SF%4Q<%R8!iFT7*}PX)׵ƀr1X:1)܂UӲް9Kwaߒv1HEEmѡ2PI(v톡Xw=X&й$Έ{QȦՁzknVQKX'O1ECW}5R/W4]\S{nz*Uz ց=9%E/:TYA52ZyRY^F䛵͌Fbl@R=IXJ@7܇bj\fR9VPSQ[խ<́->)x!if, Ldw۩Q]m?%{[IgGmR NGxl:׳?Knz y$,t /Ʉv&}$ie3W7&juR݋z^jB:y>ѫ pWJ{Aa ⌇]z/t[T'ckvװ]#Ͷ{bULu @3=,Bx!]_Q)n;s%-k)Ug>C!N ^ WtlI7x%]MZGNIXȽ-zd*|)PAM^E@kkJZwjLl1ۋO/TǺg,XR gO_eR6V4NrpXrM?I;|W $2L'.>I1W8 l\z>?P 
ǓNJW-u33-go"oة5u4J!DꁨF{ya<0gӧO~c'?__?KHl6I0ƻ1NiOs?s>MS ٿ)x<~ӧO|/~Dž-m=)cOPPEk[ I耎&2\cREjz ;M{*4I?kpTXJ$J5b2N2Y!m8ݢiGylDsx-ݰ{ݥۧK23uOy̆@A>9x֦eYz /]I­  ab>>f,9]-,* !ݧ O݃wALd[6He.Ug{[:h42UD0ʔN^KT~,. m+Gk̵DQ xT`T~6]^4y2[S+^j^p\~jJҒx6xrnq-|2퍒P;1ɊF2ee)SвYD$S:#LI.J H^T}a˚AHS8u97 8B}slƍpؐsqa/P/&9B͛[߂+vNܭ~^LJ)χJx\pY-4Йu`*ϮNkϵYN&mcxQo>(g*ƣ^X!xb -9(Uؚ͛!\AJU8r_Rkz/~t7+J_%zk{Yk1ze.C,bZFJ4:GW͓.%;:¹E^~+/i7 n7p)=$1IhT W ?0n;Z;E~˷|?'~'_˟gxJ/eM{/ܚi'-ga֙I>ݳk>W߿n34tbRYzi-Ȑ. u_4)#3c N_}=}Pε4sc֢#Vcw٥)I^+e r.Xt)Gw^&R&껬OV4gH=$0m' H/PBϪah r!L$rvӥ2}z}ܒ-~d=1󺹅[d׺G pIN njo Uu&:ے;%[dW`Cz,XOXu1S¹a\Zogqt$[| $j)W}HKAr2"l% U)Ɗ⌣9KJ])gfPxhc{q K&j YTzZ+|xE+aQULw$[B^AMp&[`*rO/k''Ҁ+Uy5`I5#Q2*?osu\/ͥg2C#q:^X0xU$O~?O~ӟ*(23NŢpf)_(iٜ&ꃳxqԥW\ooW-7wH_vAuE_[h"Mxq"/ #诚d[57wZȂYD3yը{$;Zֳcim)۱< ܊kFg':|9jM}"OkY,_!:t׀z2\gr!t(ƴ": r=,7 1 SWA8ټښA::sSmݹh}_|߫[+$YL,x9HNwy~ahf0䷝e{Iͭm5wIj͙8oyõ>/h+Ύ>#*V,i5Ǹ2"wQw}/U ? FA)v }@FnCӾJOֲrHt/ i_)V}U/[bk+e}+zrN[-~$=xT?fCc֦5)1~l7+bKL=o*]];;ք 5bA%'0:_G+pqUE';4+xj[pؗs]V_ފ4*i3{Oqx/s}gP8Y\*5^aibU;]h3U |e2)ichlcc*R݋o-yOm$^/Z9CĥR>YJ;7Ğ㌼ ^c#ѧI9ySy%&Fm"9'[fzx}6.RX?AߍZU^wm׭jlr&>B#b'ů(KKm=#⊼v{̊0<*n÷=o5IqԏrlWM@+6G `f䆼4A7{CmQ{B? 
6E~gYƱsnV\$ >v◿X{g 5=rf:ߦ`pR`<0'tOY6]/ȅ wmAz_Aj)fU_5R6h?"9q@[k~cL uaХS/^.f]Av N~g‹x#rԏ)~ 5d,VZL[[ңEj~xn~|_] yLɥmW::9Դ mnfWxK`]'X4sDa˸.n55U.>F(uvi ;^U^ʷLJio͂pO }i7M5TdBU*J& ޣG|K)fso,EV[LWr Lо,wnnN2҈KTugX2ka*~;@#ha<14q xm a#{ʊpAFp"Geŗ% N4֜xch ѬhDV΁bG>6ͱ#/`(g:is!.id!(ZSBE$o' (O"sf &iae;?{~ WQ߮y'9fLs p<{a<0"I/54__K~]⓿f77?{K=M[8'h4˲Ĥ6OͿ7_snlrt.ƶ}r)vV9j6fv4h_8Ţy<; [鄼Jo8qO-rA m[ >}YV܎\KTVKWu#"=8/< IDATG1q-Ff*X:nbܖRswkT^c݉Yo:ޜx}}rYIRP!ã@օlU/ҩc(2:ɥ "|i.zC7`yOrd ּ-4G&#3-.~aL"ǓHwf*3g5袻ߩ1#bO{X\EgrxqW>^AB]-Մ>-{)T1XXļ"w!^(\143e+WȡUuoFJ\K;#G;ZOp(6ZdXcwql9;bPYIXɦMXF=ƿծpi{Wd#FZ(>-fВ-XN|jFNؘ T7fd*ZC#&pLIj5JaAdßv,jċIs!Նjxm 1c7 e&W/bpr~aKބpsys [Jp?;ӑKi2riFYrV]CCpv3M_}%Ń7|sv@V:'&ů5ZeJ g}ʱhE }+|ZyW @}M:Ԛ|j=t,"ΔI Y ̵HI Z7&Sn6؍L<=M ܮeųƻ 4<[厼[8%?w[i =NaNXVeRar'ɛ֙Pl9$m\źθ\YF%>J;* !RfĤOh؉`0d[<ݮjs GPݑw5GEF݅i?Tjz~ >!J2k֙X=1hYozH{8}Q+.[=UUoc|B;kE5]υ lɝc)r׃ =(\SwEGRUYȱt???]5lg;bWkv;GNr>v_ÆeRD,qSl<'eλC\SF;SO7l5%<6&iY.ph7U$Ug{6e_3hb\Ymc﷝=qY=]+^ UV&xIYYrwܦhZ{6_X0xQEea|3?#?~klɩi?~a{P?Eaݼu3$7|7|'O5!Z}pVEJae\tr?Əfg\Ot  tW`k4ɒ";4v\n{kg8m虚[/,Mƍ-&^&k:fh˕X]}.A){3ĿiqKuֳ;I!]>GCY XGb/n3О'cSNqc=OʄтC7Zn8;O 64pCY;=8XIZpKoDEgasw'9qF^ܚvGE 9݌4Qql;1**qAQY&:ܧFK>97|J^BvFdi+a}LsfEAM/3q^/z>BB Bn,mlPa&<FKR YOt怆 R?rO+"&XװwɖƗ,kW#N|o2-Sc!+ʲF1 $HSXprh,{M7WG-w=$i(Per*z/A]mS2\' 6C.Dg؃Lzί$;'M\46fk#²7y3^t[E?&Yoq_7I)|ǪUKD3i.Kx P;{fKjl1pb Bu>?QҿO}M'B}'O>~7~?R0>4e hКEl߶,3M;n4e,KpxJ.kūW8`n;8k;ם9 Qn53c73xәc+pP O3 l=\@]xj%܇~樴{dFl@ax{tXQlt:o!tEϺw!\̢|%o2x'S=crHH.}kV=g.ξƦ2V#.ԙLq~5Pi&z۹׌G+=h=U13L7`^9,!)+z-KO"94nlhqzTy╈`cB{O'дCEX|ޞ4iՠ0 b'YmeR+gQ%ܢ7s:< jFݧŲMA7Ԃ&"ʊ Њ95\AL#ɞF8{&*TJ~}u&<.Z@GI^Sr}y?۫*Cջ_Z`* ڊpGbbz|i 5 U 52Nq49&hJ{[v^r;0DU >V]8܇0sb4ve©*Z⓴kZ6Y^ .eądC99ORI i{ |#\j_5ҫғz$q6h)<ClTmh&i1uFӉuڄx͚ NkxԎ'~$KxF|(k'}$M[kCǴ㥮d}cO}SKy }w#0>J2<<7џz)6<}P۾O?я~D??_'6UĘ/[O}ͣ?o"6q=`wCIX]@v6Jg7u|M!OqIϤz[xW6[S#vp[x= $ ,x lwp su'zh=Er Mёt CK'XGnĂʃ| Ɋ=hI-t x#$ V4solB>M]:!_"p/ֳCz!xBۮ|ef\).Iz̀촕.-"8;9M 㧪a6fN$M pȃk61a'n}&SƜUtRd8P7|.7?]?{cbbAOǣuc.t{s ogu&ӧQ񦁾.#kW>%Pt薸V< t OLPj]PTu)Axr6"5U1,9)PZ=-t!&7b=`wOMK ̞i-%^>dƔMu.Mm9fgG-e⯄>pW Wch!.+{;5V}z cfKn=L 
[binary data omitted: PNG image files from the neo-0.10.0 source archive]
neo-0.10.0/doc/source/images/neologo_light.png
neo-0.10.0/doc/source/images/neologo_optical.png
HF~:^d æBVtx09|4CF\ v_&80*<&jZ1W+Zb ILˑ=)eF G+&`q)BaBP6K2DJ81,9H%Iw {' sgk/3eÇn*-)*bi$s͡7leRRXly"?4-QK7R)[)U~gnwNi~"bQ~o؉(x ɟj#0]ѳWwx^i}!Yd,EBPn׹z_2z#[42cڛ \9$PGM^s!W.iC}&`v`-QBYwRn_ ½[)>@M*3g!>ĈZ(1h_Hxۂ]$VgMsUF<0 ֚(ާ}æکcM}iQ0sXLuR:z|9rlk,k#ڈךc"M%kyy2V>ZK%ȽWK$1ab}ӵ#S6%{8 s,zFcÆ)ɾ1'12w| qF`'r_ 4b[(DfQ},hЃ-x;(ͳo]:<XْėL"1fߙ诠܊`A5"2MO=۽Dixug(c.XМ2+R̴̼xfC]M{ Ճ 5x0+r2qƵ>E\*؍dM+ +mq1D%k]\HuBNӞ"h(a$l"C1swb 87F9G*1ʢP{Θcl))s^5õ_xZ^_ s ּ|ֺAj+'xΓdc[ԧ>şr&bƇ)C43/>}L:Annnġw>}zG?~yi~:p~~Ώ~濉UJ᳟,ԧgG7~%)V_ޱSՂg:(c n,̎@\ $ πa}**gӯsjs-sތ`OUEfT,-bw6qoDYq»#5Vs}~2=׋g_^^>;_~7~?ǹ}|QfϞ={˗/cÏM`/K[w¤Hޥ$b*y[Z\Ŝ68WӲ6fiS q>J1{Jơ}jl*u5tfUͼ,bqjXRѐmF|֥ iK)x ]1. xnl'((%jSYFSُ di!hbDzhΧ,C7bMGW_Ke9f>h8ؕ-RX4 ԈM6+sYğ*rd{*cKWRC=}ѥq9ʩw+H*qyFO3G3A=Ub@L9ؔT]*7ʡ@'Va{O<薕ݠ&ǐi[sp%lLa(6݇,&ӆ2^9q UmvR2v3R{߽)Lڤc>o0owOW=(SygR{sSA?%+;^J'E3KB>c aTF%=M;VI؈n$ѓyXK1X4|=wC*1jG-(KlKugK4c#ELkKa(@4r,DopRvW >j W׎Y4* o p+$G5Fly W6t0&:{)g 6i:jHBen!Rz2_X.3)r/pg N_mĽ9mOq#F u5ڕ3̓ƁrXdxV  Vr #3h`QE^HՆhqڱL22u i\{Ӽd*K=dkCFO؉nrtFwgubZ:+ύrq]DlC됮?& О"*:Blg+}[#v2: c>Ø.m PLf"1z y ßaÌ Ԍ)4谒Wx!& xY)cLvIxfoo$V rB9<\ nׅI2'Zަjir)Ǹ٥/"$^#:߫3{I`'I9Xj.2n Calw+is?rigrZ/Ȥ\;SE7z]Ck հDS9/[}̨E t\h=Xʊ^j}%r8-,y;/,Marܛy:(Xލ8KX>JEwN3˱nZ付Ǿ ;QFFh^|hx1slkt\O=(}XԑӽN\wg~gg?^ԏ?g?g?1??G>SNҧ{كu~[[;> {i>O=o;jOck6oye͙P00[0m[) قCvm,^0esOy]n+15 ԰`|(vٔZC@J\'^NTZ;32|5еo6gOx_/~%翋'~տPgN;Gի&RYrsOLD1,w)Ut[ Tvk\WA{~6"ȗ)ZՂREv ܞ*SqSfMR x백]`f*~I bWe\Ϡ[b#5(sΤ)C#C}8TN`' hoG q7=aw*vRDZ Q_V8Y[Stʼ1.y]43)Bq2RN vnfQ8p*di9=ܳ/䇱WQdcAq U(P33ǝtPHSə.kTG1]I[QWT0_䳾D]JR7`Q}"m[2(㾅z뫠Ν~ԄYv݀g]*+vo]K3c\8qϥB\5pkm/H ՠ\dsJ9$AbI{`ĕ||ifFwA9;MRls݂hLjv1)c'`Vq `q͋>1EX-t&`]QN L)6#z7$"YDO؊X]0(=ZVl1s(sYVc .!ꯁ*A '3h ޡorV8 :@ИmWFhN ru4xl;dvҤR쁻"bՋ`<.+5*恷&+%"A[)QK3㲪&TY"unFS4l9a+sʶi4W|'F͖7vUWr!$'Ű]S\)JV~~8$׃f=2`!ޅG-- kK{?F6]LRL+Q`" ~n]drrpNiݾ_ViC1f 2 n(/\dg "c=j7Eq]v*mj/г[bvy8)] k2rPkI c^3uֆNqb$ڙE\㩜nbicT=?!/r1oֽ[Y]U^ksC0ӯ7ڻLMmеBF9BA9I&d?HR 2(hknԿjeC#W.h52G]$ͪ8+=jٟsMqpE=ϛRbnrV{VuؗMk)U9t(.)NY&G$~D36Z:K9&;/--{z"~Z}z?px8Op >{{~k :q_Ǒݿ'+ YkbZW_۾u9I:M/,ŜrOOAO 
lvVʐRLql*î)DUoq8oJG/Bwo"(-,݂+[Դ1Bt9Fdq|ctߏ_CϷ?y#g87dMeھtnrU=QL7u5&/G*Nrlv@_NFbzT(/Pβ/_fAwݨL'4[T /oM־ByU[T[6÷!sکlIyx';_AB?;#<ظW}/P:L0kgX Н@ٹ^l܌4/Կko* M>pKbЩm3_y0=NGlcؗ#il;a렭y,/*n>fXsl"3ںQPr%[!xڧrl]%IȚ=Ч*|t=W'#C&9ĢH{{. ]?c\¼l,ܓiD?zC*6h~?d m03UzJƛQzzp-+B* u0>C ̖Ew9N6Y=$CկzDiLN|^as08q"DB^ۦʜhR+t*b@t0T߃^-oZ6mt0^=0t)#qH Jm#9VsYQ_z\ zNr`a `  +s1brE8 v R@R|iKE8J`2n xғ;6j, E'9E쌶ju^7\BnAe!F;KQQFqcmbrPvs EN5S G.} IDAT[lmW6@wR.gQD&pzo*aՔ55g6K9paC0rʩfa0i.u'- AcEVd'ҋv.jjpKWYŨ3(oe uyI"78 J wF[%o6W,i#nJ-0v.Dw&H=@rWTdH!Jil)\kbW|^oG5b [2a2WAk ;7-ؙ"<(>*kh%sٷ} ^+`Gc]N:ʍ{>;NSVݲjݥЏH⾫7][_ĵk^\~İ;A1jO&kܓ$Z0oxi.`b0[iokf8V"7Y"nC +"[NS9m+u:Ev7`u.(g1wɕk?COX:OǡZg?kO|A t]MM|~//]A:NٟY\F<cq?s}1{<_?O5:?g @PȎ Y/'?9K7X<+3~>{Byةa[h1 YC dn}-0fg~ *c  {8}'6/afTCڵ̳|nݫе~?Ϟ-7;Q6 cw >/ˎ4H e.A Wkv'oT٪n< q'UEALְ]@]jU؞r~(sGn_9 ^*#R]U ]=/WTCz:4*^E emtWΘm:\LhYP_:qҥ)=T6Vnr,b(dk߻,ւ-{ q+d;Mwop/Uv1Tg>:I67)p7eIKסٖyT@J[%/)tHkTS,Q=gRRGn,] l ȃ QBٞ6z{ 6%t5]RiXP&kRsS/JlR ϡX۩LVZSx!5OS2' 17e ںi]}ayt3rڕ]:辇N}>y!dvy7VR/M6;TtM#0lTn,s衚0UeǤj7O`G/=xű'Ęy!ʨqfEF8m!nMov&XYdLJ-{/҄,D)̫ڍ2<fcO^FjXbIHc@@PRJnr*U}v:h&nr5镁lc"hct($b]6 N/#! eR`3#f697n˴-!|e"pgCW x'@ AVLF +cQsJW  qrpZ6]ܻS-e JBKVHrfS3VSu\ bDzI[ĉ^,L5h^}9's3}>[(RLRB>~MX];t/+'Q4ϛMJ{׌i^$KT\k໖oXM$[8KP18MM,`P,ùM8k]gyA 0ޏe+ͧ:j}j{iϴw8y\w@/Vi>r.U[,MmA" 7wך2ObЧ= ]'kIXww_q1mN=cgM;gm 5#.)睽b]Z NƸ=Iy[̓}N|jEfg$\vKC=܇a0=l洌8n"y0^Ϣ@n_Iǥ!i؃ϟ{W%}kalF|ktB9Ӝ1; 䔐 &' 9eIw ;^l1.X:6MVi'=8dYǑ|Zk}GF8VN10?Q#햓bਵǹ{_>E#u=~~A[+C-6}3N݃k]}1=QfʻiSn{zk݀zu~n^8S5/kp,U\. 
D O`Lv4>,U8x+`WbVtx̤6-}q < , I[s֮bJv|bd>Yȡ`Ow}xڔېEVcYMD>IGlPԎsy^Wmoo !G*ov{K$LMY\ `+^}\HE Byw}5k)$3=Le )-~6.,mkč^+Ϙ0B;l,^ 8G.*A $z3e~ܴCDѩ^iNĕxɸiDL[Jo z*z62˴&{fR3^Yl]q t|*<ɰ]i?`A%(B\@ֺM!Kh\q|~V*wu]f6Qx%oq r'@v/⅕[{vR>QƊ+_s 0ԆG;9" *@bLcn JK6lĽ}0NmJ1 @R e&6ʸjWlC`upP%xض@\`L Ue,': .3;c 4`i.эH5oIӾnȾ;ɣK]u1ex9bc>r-9ל/EJ_6]Wn)|cF`Թڠ~UvN^3(3n!n eU|xM~ב7-} 5*## `ufNS_i1D]/~8+/-%(#K)s_'1Q{ 38/a399ĕH8{vm"-M[ge)"-2l ]Zd߈ϛ90Ȯ̔o݅PTFE:^YDX<Q'.nF6Y`v&DY&zݳpiP|E`nRcK9dBVSn닦H@vR,gEq1eu;E h 5B)mQf{-;@x19w7i>#,s"v{bUqO5~")M~iB)3a7e8(ʡ1|aF4 `QI_"f,ػ\Ơz7>?Q!l1}]"`dB$ZO| \W2*>a]}zJD+rȵ۠lzN;ƓQqGם9\\ns7E7~#_җŎ'0Şs\)o~ UO{L??̷}۷ t%c`iRO_%>O}ImJ??///L3/[MW*& - Q֩Ѣ2yC6ʩPBERe[ ԽP!{W*Q*0se-U]4MOojVRs)Lt6d% gZ+QVң2).B@:R]`iRq!,Lvc*Nj)RX-O NK}l+A\!gDJAj:"s2tTk4O΍oƗ VR.רy$=7ϲqZRu,e{.QѺmQm9Lco;m~9BryO[㠩ﳁk40e?rWgo~V ;D A}#/[T=R7S+lj  &kU^3vD[,7ʮOp6y|$`'_}Vo hLr:'6 D+DfCVY21 FJ菜;&W{`Bs[;X942ʮ*f(mfr'Hhv-טXI|fi+";0+Z%z9~R})*q+yߋ 0HFc Ku Ӳ:rI O歯$hLjΣMzF6.T`C /??HaN)ʰVIeo+϶MY斓ͻY'K#܈W! \^(k!cQeD<˼8`LfBYqLTZ$ͳsI-Lr{ᏙcŴlTF:C9Ђ&2?+|볝:9ǺE爠SON&ƺʵX;mP_횽[ #u {c l(֛Cxq$ʽA7ޱdڣ5L` 6/>t髴/zf{oڃ:N)/ 4uW祉J hg4k=hBQ;2zD(.MӼʣ밽B KK,3祋D7_C߼} UvVZ>Mw&"u魦kB{łըX+e8 ۷~cO_WWW|s{[>:ca$~ǻL??'> '07(4<Ә8u q~|}6IY,)W9_;_(1|?5/|=It?_ES~Pb/ ,Nn>lrTlSm`0W+1b;qZ3sR-/ ],W6Md0Tl, )+eF>{a?c>oߤ_4$JYKUX:]TDo*Y'K>gওb. 
]`L ˲VXp%˚tImRؐ&0N+TFc8*%B6^FҲ:1TМT & }R_"}2"{jNQTpy-S)!kKKAųz[e zz]*_<݄2O Hr^nIG ]"a(4߂DG>厏1T?e/8EnimyQx!K+D(4-k\zUWIX;1C)7@T)Bz(zSF !5i[8Y;GV!toS&mP(R`>}Vuwo{@'ԏKWN@Dp]2ݕnփ*PMpxIMICydM )Tײ-O'n Ftf_uU,h9&ɣlei#BI<^7C'~Q>zZ}x=66KYײM>O%ɶ?{FT̳=Uѣp`X[}53] 6&JB5v"a19NDPaVr_|6*xu,sPֆC{2=x%gq{檃s;lA_=w'(r8\x8/Bs:ǣPZh]涽{y<gvĿn[ۻk_Gv&:@!d6*,mG} IDATX8)۹IS5 w$xDIE\*ki=I}!Qb,(xU$ٱ/V3]:$P6nxm F` 6Ѿ%m_q'm q\53<8Cd\F._SvQd!܎zBaGֳ#{^+@ݍQjሼJl=!y}45/iFu/T0Ξ3ˁ/;j]lJrLG*WI9MA2HO] o=]ѿ9Zr'#^¾>W"+(Kc2[˂/CspddB$yF"Z~j\G"0]e\J#D2)%@Zc~OGcq0hM;؅Os~&/fFsAjCGLs'F9Z>~%kbL{$1@늵TzRNk6پ &Pe崌xh$ XgFW"^;'J5*[L!нNKD?k7(5K!VxinpX턢RGZ犵ͥBP h[D+&/ 9GUU-h1vY7^1F7wk]y]y^y;?qヒf K9뮶C제OO/iU??u]_y*}[oe/{Yocܞe sm<ַO^Hk|3o|bsay6t$[C>L6QtKv顾` F!XDlڸ,.> Ҷ3:[:TNԾ'z~6d W&|UJLZ |wRh )_$Tq$!Σ3%zECjg"8iPKk=l互L `XJr|wڝ mTί $b@b`HpT|' Tc+lSVrҥ\ĵhY6[ۤE:Ai! As66Ydm4V j4>)n[N+'* O8!L0 ]\_S5cٷf'h%8]WHWb41Ré66V$[0sIAѩ"SrXegcƓyˑ c#v'":I{~$Th;IkV+@uծ{쉆zyTR_; RAkcHSHJLATГi dU'$giL`!i3U ~EV]uibII610Ǝ/GR̞;tYУn4&wL4'mBbɊIj$.ŠƊI,mI4uͦ *22XT% <*jS{6}țvi.+J غuҊdyTVOV x[!f P֮mi[c!V%#AUi*0sFx(I&%t!';~vXhҋ3io){_&+5nȼtL-N Ҍw$kzz6p->; +r'ƊS~o>f^3˅qR7?8uE.?$F*>-" Rq~0ݳWZZ3cSLub( FzI{^9gkn7W(PLZvfj9 va|sk%]ȹ\z l~CoJ oxC;/9xKJu]|_o(o_ f_>SY). 
o{.rn,?UW]cH۹z?hl A5|L!!E;Z]K>m\9齪hͣpinbMsNUTQ?9?wOqQOv9%Wy;ctm8uqh8Sli j-LŨo]5蚶Fgƺ\m(V=1ӓt ^#$9 z5XYNuIIg Gž Rt,'& ێNIGg,Vt;-⮔Wڄ_A RМ6 }\t2{t%cԥ{%BԀS8$Z2h#' Z3MK$K֎^;Bʟ9*iZb`5csLu:$£POp zDWXT(B40zalQڼ h+#E2\eI4gmc*"!6D ih\ӧ6}I!N rOOe1/bn hcFv%ZsWauGRӤG zy Iet mY\jD*i͛9RI()@2ֵ.&,P7 l8t%1;lc'w$ifg &?43$vo \N-@Q}XWTU"Gm7Pi\v CbYE1Bҧ #_ʣ1R(D ϡ&vr~$5dvpūkeF(5M L ?é51 :VҩK u/WnXAǖ!uKN&dSVѺZddXm(ʒ$jy#FϪ ,DZW`b, j >\mCbzqTұ (V(RONy d:B3tJl%"E½M7:l>E88bVQt?j1ĭ:\@Fgvٍ&^if1b6ڋ!F4SdM=&4=6y-s&W$NcDn"` aHwoil=}#jbmϩGNQMZr%wlKQQ=M @c16nPݗCu)s>#tBY?DErQ7`.fb(S!J^3«hHPe.o OvCmO`ėq0g?˥^.)?r.>+)~:ty23_WxУϝs;9Eϯ?c7Y$rwo߾f>93a%ׄ__#{gŊO}SX3/zb ~nePM `:gݿQc @ pLɯiO%=(NRaHs0~nsn}{#ujj{R@~^"J u`o%w]I{rt}QFa-g?z1B,E5ް _1`i,WCsp¨Y *qѣJp⾤x6}Am-wgZAI lX|6 ,Ubh'۲ S8bP$7O=e`U| )[Bw>w5XOasU]/ bb:7\3 [3lo6Uȭk_Z7ϻ)!?ooB Г6(A ݾ򕯴T9g+grUE~]wo|タ^7qsQ19>_4!哟:I9(3?#җ]weo`~Fs,_OOw|!??߽{>|?P}xw(3F/ðeSqRu>ԗVZC]{ߟ}O [ouC~ ϡM}}{uRi_`#kbTkZ6K̊g"T_W3 }X 1b݂*+ɹniipM":*NcZ$Ή`Ť>sҤ5֙'bER9@VLM>Es-տ֙z:$kmzD Ѡi3v|/s5Q0'ZzIŅo; "rt[ 5 \k:xg,Yʲki(>k- D7Tf3`D3` ]IO ti^n]{h +Ș3d4";Cy8,}K$,&Gk`k±Q]Uf  vsP.ldƉU:gBzۣkN8O+mݸq41kZ ț쐛(~j <I}9!PS復Qu DM1 yΘʤsE/u? r-,//6ǏgϞ=-05ͬvwl489V3,Q=6eeӛĻn/k{W^߹I*}{{D^#Ηy*6sL܈UB? <pZtGِ߀j3 lntG%۵J9o$3]{F EA_4R6 snj^=!+舂#5!˜ 0*`mK_۬ -sFJƀ|_{4qDjRtL֦V{+5'?SBlr..o2B"L1}?sU|o'}6(px~$X ĺtg$9k㱣Nb**,&\u[; >S7 5 }> &0> |tHV@ Oԯ%]&Uv}uxOUPǠ$jy-iu[9؄ >Tq2kzוVbV8'yW˂Q~~unsIpέ3|mq5\`ܶ%&z?F_ר7s0~g9t_/t7x#`bvr^z)|;/˝νos|r\#D->mcXr 3nZ᧕%i8# X$-H+G9'rIc8f Ȥ?D)"V r\bǜ0#&h Fr3SHj@5 rUXz0Z2(h:h7qHŦ+. 
8J/N[ABI^ϗIyA> _/F,섅*޽ ,A׌3ilOlB*);iPY˜PqGL(PAf"zDI8 V]?{ 7*@ Ep QS|__ꯈ1N/.IkɾhtvtZGs\!|1>*OW?+b\ xhuϖ~gv.4 IDAT?jLKY4x>,*x;Jfuq>t{}رAnYws*WWWRG}+= ?*CõvX 8nGRA39&_ddhNF4QS 4dݷ`@u+.7äpA uB\L^7hq.S79M#ĆA;]^=a8PIu!tt .}X + 6#&AˁKZe]fp0 S|sԧ P/}'bޮpZa4@'1O6}I׾?yFUUBݏ,smx;7zns; {+_r.Ȳ\hY/XS]Ov./SsN/2ylAx:y/cӹN'Sp'z£yԺ7eC4ԦG06pM!`eG3]uPϻzNZ=zsTW ӥ@9&d!VThLP#Jh7h%-e,&ƍhﶢAP~u؋y+4DjS wQ{ٞuqO:L{%Tuεys~d 95—]cm0^6~o)"$Qx8!Ơ,[XGt'XlݑL͒aPi/vhksYϗ|H =gaVǀgJ=K 09z7rbcX/QYvⶲ onuW.T+zw1Z;r>Dybz+{m1FE3vnxшni]w텼>u??]d> ')Yi>!Mz盔ooov@ ğ;IJuݒ.@t;7欣M!ߞ4QjUQ,Uhngn9R;Xx}G#p"E#{+Y̡{ T^PP3Wtn؍"I3)!;6 s;C0;{ͫ;қ#X xf{a VUDb|[zIw{ݨsi}47QI\wB}ɷs".h ܥoDw[෸c`Aq+ȣ-y:}=OggGGEp2vFQ>,.=]Rh+垒 |0Y{>l9wFwJp+}*~l]FM)OQUǝOBL[y+$ex]3='ٴW/hݸ  ;RV -.qɳj5 -SZ!c(~6/qV;g5HmT3x`ȈQ}7u 8RGKf} ?G#ruܜ NAtܭ.,6bYoK#1 -,A~÷kS%)>˙TRFhAMbhz=/.$@(k__s"nDSwTC{Iվ2T".#h϶ٱ. J:74%PSQ+Rvy cf ݻ>uaǥ #+ppstR<c&na݌W=:X_4&>'P:eZ'cc T~)"kV/9Ks8@Hp%3՘NBɤ֟bƘ*}! F[9*o]l斈NFISǢ~DJMЙHX!Kx]@< /h_k+8HQI1 %3(8*zCXB_^?/"hE9T*Ey<_..^`トb~Rzh#鮺Qi(o|-g{z}Cs*OAx|^}Z7:M;y'NO[g;2Ncra-@ELX|2H'>*\>׷6U8YIz+hQ,:sDD3["1OEŒK\?R3{H89hr2Ӎh?{&6GsFo6>OL4w s=95|m! ג RUnv&Ksٌ%KE;WW;Z. .28?3>Ovm4Ms::ki[̩=zC]w$vnzݽ9[?-,,^Z]74U𜶺֡o%[sS-;R_>ljy8./v??[iȟjJ~hk |cJ޻{'>f3QSBu؟m -"=&_o0 iՈg!ǴhjI%|J.g h9mJGIȩKDDHdaQșݘ 2Ukڌ '6G"\ x.P_9#ޯ9l8ft1P^# ;xp*hvb q`4('!Zp{ }PV;9jB\]R#&q=9خ8ZE^޺ʰy9)Pd)??mYڮnYGuúOX'ў:d g5SX }lcs 2"g}եs4!I؏*VpThқ'!h&QK TdcqꉗxQFhWL"Xy0bWd-$ONmހ%БGCB 1Lo9hO@#h~2wvo)Y'Ҡh!1#5mN] xң*}jخѡ8C|vAu*FsN g+ͫ@.TGE XEX3Ph·gOxPmVҲ""zcFWXspRID20⥺*Vt1ӺC7]wc8[%mB/wE4LGjSEVhȕ9RF3ڝc1A s~`' 0FSW qX_TѺ1f@i.xJ "+-D697>[tb%o4#C+ ˭p[f@w KkX#,LOkڽMXġA=DM/I_$>k7ҽg궔gl+g{w_ 2<75\&0K kT)HD^k;.\p]wQUU;&s9{S&7My{߻._2Wvbe0tWkvӸy! ﭾ/?rblW[M%ٟʮvߝ9ooovh;6sӲ?U1LL(I3<Ց p-3YKVB O1N*[ xж.eaۈuVDRrQO>Z` 8L @cW< x^C|ޔx>pܨ4.A+q1Xb+;yMf9cKu/RP1$qQÓ⥵ߏC1 mW}n\"HPXБ%Ew8TұH(r?ϰYUaCKu]_nKM[ aK٠R⽨"Qۉ} 殹N3 ԣtwT"JcݮS;M#/1]#\*7g!>V$ 90{qfLokWA%Z 58pLaa (Z D#أ gȠ"bj@cuRwdĹ79g`[n. F*5:\43#Y%H Ҫ?I5Nv@,'6]KyMA@"\XEk#k@vQ$R5Y[I x u?܋kxuM^ /pF@l<M . 
2bѨx:$H{[ǼI.Z(MH7ThT&Pp_Z,z2R\^N"W0ėWAshY7Z9?1x\Z F<俗 xrHB<:-똩 B@^] {';`}]z, [;d+"Q?#aL:JWpigҚ'*D$P j%t=8u%}),.RnG6mR nc:swB;>߿o]zEHyWUU͋Zv2`]uFxl6k;r킜w\X$q~;O ;w+I1FrСM vf-CgR ԑ'΢---+<YO;~h w)lK5$d=ܮkoFJ`[g"+*8.uOBe#gW FKZ+tEO'tcon[i\A@Av^PfM }wGF*:l;8N ,WFx3eE;2W Jxޞʺf#:R(*hF{qo4&u" 7*5"X"RdS[Z14I}d,QҼXE"W -bN+]sKԍ`tm$]+M<絽f>G R SWDb&t;ᱛg.kDHI5(M+%z[Dq  %g 0GuUA ]PZ.+ PEx6_&D; ^8i(S1]L`u (`1v#HbzE)>qVGQtw"8hR@^33Q2f#:n  l+Ft&eׂv>DWIF.e[Y}>HuĐZZ7qQ PA|R9B;yTLGǀ:uE(}fJh%j+Al@BnR? ',$/;"AC) |@?'Wv> q]9 E<3}]NDtx4Qrg3~adv/ Mkih!85Scug)Ok|=5ƛr0];J=IWDs-ٖ{bH/ߊ&!Fqp&Tr]KˠOh/̊z$5qg[4A?gEKgPgׂ^ |O`- u#~Z nhL+صR# 1ڻh5\E%:n4ړAGݠ*pckS&sD+DaҟDB'7g$XȘw+"B i|Uhq7c0cv'ĖϾk4 32 L/^MSR<  /AOV7#Z7[`6lPۡE.Г(P,zAg%:QnAXaJhRg)%bXAS p=kۡDGùR[0+T ؽP8pEzLKW8ثFIvcލ"'nbd"W'8{=P'.Pn%^T=fBX5ūrE9 8!O1r5g,"*華+O ~kM'm8 IDATk:YwXPi.vikPM<WR$*Vio4>W{h_M,8SEO3 ["@Sivi.$0>OfQ4Zq+;:܍&0YYE&aeCC]׭uc} qetLjf⌿+ j6  X M<ͅb^ '=q5>!0Q'8rti )qp#q0h5eIf&ŭӈzE Χ%YC9Bbɘ;3{8@?hviV@ :`3CWԊ+7h*X  \#,{'>G-'k}P?+T$IWTg6)Y\0𙯭;[j+ xuD;=hLE?![0OE95=Q5XAŨ:AџYl?4z>OxOEm=?o$3=/yvկUsm׿2(X5{]3rg7M7ڟd?^H)mo{[Ͽs0~n;eh_/wygo^Hi</~qwо/pС!3{l=@GJ'*w򼘃s,3d;Ł.Gۈbٷ giZH]~=^ 4<aY"%'jK|ҧ:,Ԏ9U$@~XEg] 8aI樚6n}MjeUX9q_ rࠢٽ'tUw B<2A'i5U/^/wC]U:YQƘb5z8l}J]AoPZ^qNC#/oU\vۚfzEHݬq '*Ci^уp5C_Pj'dYq6@VT CT,9j{A7O<*}4g5$Uioź3m]kG"c }w^Iu{:%RɥQYU`({ /Y(rAXj) [LSuW kf#5wQc$E +P=p=pEC踜]DUBABtP3t_ÊHM1bP(PM$Ak86"m?<+]= '6d ZIJ: GFrkaDw0;?[XkW Â'rL]@wEH8$U2Ki4&"PCEHSq`RJ@k/4xJnCv2敝NOIsh7'?u4SUo̵w:׾u v^җ?f6g9Y@n q4 UU7swsOs/g"WU^ׁ~=fy6mFJ,;,!1=>! *JaΨ3=Mvw@uɁxbl, KYH RӀ"w^bwV勄E.O,iz4{C{p{.\=CW>'F :Sl7頻% _> f==@,=㾳 R%VM z"*,*1zfG,]Yuh}952EO\ 7`"PU=py׀E. 
Dk=YOCzdI( ;߻Da]WdYHKZ!_Wʻ]a~d nCiB"Ҥ8էg%DC.fAׁM0]ߏ(L a[(]9D:RO=ߝ/@u;м_MV:N#" ;c𞻃&`:R@x 䩀W"2,Q*`N* cb Dw1X8N/^M;BF3]nF?s?n?]|`1vd63//GACaPOOY?n*?#W_e~~?7wǫcj+Yĵ|Hߗ }*rigTz`W,ux@(q>@ݷA@F']" ߄bsQV-*"#w~4pѵ<~xN~ϳncc[[!~p8!VX >k+Ⱥ)Q@s/* mg*X D]N>nK֌IvAsAT rla*×FV0b2bq3ur6G."Q" MؽĈQ?xTͥ ع'c"&p&>3]n}ZlrH)H#G:8.cLaϗU 4lMEt cxc#!;^<"IE>ݒ`x>2+#xm`@!m0_c ;`[*&f-äIBC{2h 4Bӣ,) ed!ڪ@i'AִRMBm,=*JTK|*;5e7lVDaмېnDB3aQj)yuj*/6X]ހ@=)wa.3 {[w'7&ځӑ޸UB)Tz70+X{a6[AL@h~<{);@U@;;.B,asCb1|`V3@mJ#Q' ց{ AG56K3D= Lة?iW*_+*)PFDMF}AP]lM,h(@$"5WmWK$=AC NPnkn60f*l 3V<@6읊kF@΂=,5Hx;)BM=zOh e:tBE = FDtgևP?H!Ըˊ;AaB<7z'*ڵ J&4ob Qi= KtxU@+bNPS;iTGNQ !)C^Am5ѯ}Hߋᦠ 5!;=aŨe3jb0CBy [п^#KvFʯ~/˿4knR_0}X//~E%s1Hjgۿۏռ< ??}d8~Dt~0F؁Y c#i "hȀv}]\"Xhnal];%4_ v*pHrI-{;>Y1@E%p C{Jh #0ց PbDKw&/KW`*) JB%ĭ' <+Ϻ/?.Thh ilYэ] g(tb7; 1SB5 lh{FL| ?ɳ.أ~),B,ؽ!>/9,`D?W pi"zj,Ft},a2vAѧ!<0׌bFuR`]J `uq: #t7XhU&{[J\>um SБ\Csd \-n Ʉ1J̔WPfSlzYHh[tX`$!G"4"g_bE710S\ Ίy$*\J{!8X?/a"_^K5F`3-ޢ4t)nڽyla A cPNb?Ҳ*[СzDBf&:i`>]ޟp&[c{䳂TqX K2G7`W oXb6=ϧB_`'#+W1xm˔`q)%RnCPDЌ}쀶-S ].j^5g@G 3T6q,AFު}N#f@. ^$c9`Wh֡z9!/Ah'2rt M18#9w[!P3 :MA3SmplKl6cI gYfv;e | j(˅r|zC -Jk{R b@'Z^Eifj{}`%o8c!M iϐ]3TN4'ĵ@<g;=>sCiUdmrKУG̲kuEPb4/’/LGD!]F7m/ $ps4Їvd\lg{{-U~R H43\[ZfpJY|:JpE P JD&#yaxsDeȹ [PҺ!9-pxS 49`^uet\xI!6 > Hh#`ڹPLiXg{__0%L@-T\yB`Ðaj"fRAmpǑɄfnGȟG5g#aNh _4?J03"ò-w0ytLV ?;r]~.\L!|n7$@NL`qS?GkK5V7޺XB D@T!`\U`QXq0UB u! P. 
JփB0i)ρqiW۬E%L/d"QqWTŗ3O{󜣎qn_cmޖL?TnbNdy_5FU]{a^];(](#rN@T c֩?yŠ=e8u1ԫ ZX%L= 1yEd˰3t @F{~Yk7uYH$5*ނ)X2(JӐ/\I»P݇V^@x*7v+\ Ġ"7*pEϙf`'!<$F6^_v+vG5qh϶ĕHiQv]@%m".ipېmWzE9b–y{ 6C"~fx4)k]e{FlBl nF\4(a6yN*XhLfl}tN)g.Tv~ج"Il|| <Rd6~-&"ظ@^x }rs]qK*&=CHpէͲd(w]JBm ]k[N a PrJW@_w] @@鹝3)slGʁU^YK3\ށrHCZmkUUDL<ҥta ͅ NgotϿS9|vtQ tQ @ބpʼn:{m6P Ĥtj;r aB`f76} m@5_KJ߳gX([C䄒pht6]-@[|xvTF:= :6-(=W'"X_=]).#O(z&&Q^k_fDˊbSc`Ú'^KM'MD"<ސUn@x'73ݝu"A%Q:bvTbϩMkÎ/:4)|[t!ܬ(ZlRg<# >U;B%vOC9.a0/ǬJodx˰arȱu>  ȟlkvr 5-@< S=s5RLlX3Zw!t/N@L ND' lN=睸?TBm~ӊ JVKC}{Z>Y^䳂mqB @OdN4g|0%OJ XB^Qo|-KE!]^ JJ5XEU29[»TX F#ޭg u)`!йb#,(5-R |`S9T tJ-@dMʴdQW=?9!mGJ `&+$dϡ͝A5iZ;HY+ P2c q%X$=c% C; cudB?LXȜHIk|X5Nh>pZw =ݹϼ5гk*Jw.\W̷6:y &"dzpGa8PsGx >(zꩧxIv)__Wp߇_pÇv֟?8ǗGGǪl3uQk}@U{>=r!Kcc]RXWHh)oB5hnG:Qu{u2='D}g-]eY Zqa cZbKxG ,AX HW":k.iw<?;wQ,@0zB`)W[jbEc.3d^x!i`!7;U e…h4p VEaߙL}1& @36"@,(Wxgҁ퇖 ԕ՘m1Љ aP)֔62p_J+™YF@߭h:7W\z"jD[KC) xBv 2ۓOp<9k?#UT|ӈ y? rl(ok@ޣҔѬO W  nֺʭ0t3hZiW!)1˸[vk-@A}/rU[[v%ύABsahfC i+y1T3PoA⃖r!!-iPK0œm <9?#pM9"R MG[&˖98ET%ЌE+B~yga2R7ظ<A]cbL 'w[r6ntE!*e"ځuMz<1 o=~< c$~ T*AFnBh]o;/>ї\ۮ~+aBZ[e'a& 6](㠗 z \<,< nɟ^yQ(r!qS8th3#IE/['c)>_E=;g~My{`PN<]g~F7b2l(F*OAH:#WY3ƓqIYPd:=@'K@bFx+ 4 !܄ɭjy^1lB 3[m@hwy 7d)<詌\/At((`gUwϿbMwx߁aoD C*pD;L{+ ~&B7? <>r!~½m/ú ,lQ1)UߏQ o- j!C":0JOɩ;6~6| YHF '^eЋ-wEwڹ?re2Wҷׄpǐ w x'Y^^Ν;㠒? $=W}77uy~9BudĐMe~kt 1An`{dXS`WҬf@Gʠ n\TWI_q uOK% 'ܖp0sXISPpeB΁ #QGsgo<zJ?ޣُ~=fzz;;;777^߿YYY `0{=,355빹h.?w~L]헣so^~;zMѱx4~ll~7?/~Ϻï^ќZ[Gk=y#gy_%N"RXhVEP4s~(A &lNj(2?&ȩ/|BD:ȩ@hU^}[ \V3rQ:%L@~PMoA(YN@Yt3WM5YDmGNQ EIth-O8-m]m":gNK)c̍E-n m둄"e K.4 i7P'<+\iϐ^ ,(l\<[ .ʊM&\}ʓN  $,hdi 1tm6fwKЉvOun[ECނpW91 Ae";7WoB)^e ]&$ y`l*: iE #)E&moR_@&WJ*!W. CkqHrf{QᇙkUP,:; M(L΀=at<9LTik'#SRKh'2| `(` z@3L e]3ҁ֊^NԵf؃D !DdN:vo;pзlH(60 !\Ql蝎#n {T ('C1Y][SK5邝X*EFN ' }73lt/;ZK|cr}[ibV(NDXUlE䝈])0.q5'{hJn yO)^rA8л>Y@Ň}ľrjNf[8! pLH; 89u P ;P$A,4$`},)e?KjÌCm$EtF$ؤߏ"fįʉ32,$lms7bETS`QaO[F @;pK쀘< KB|۟t&yWZ.@!,D|q%y*pCz =8z9g{x. 
7|cye&y&Ó>) lDʭLo:"U3Wώw}[g?o;\(.׺V a'z|Юd@xJkBET@ 'P$H2dskpHs Mgp S<Ѻ4pn@ i{$ǓjwaˑrPv 1'/(p }A޳A5dfE;Xu "+ mhoy; ]u"Ȍ!^3q!v/N[fFĒ|#܃pXpO`yn 2 J}:2Z*wUB!utGJw~7p ~mzF[Y{.c/Fb |<x&/Tp og6 ,%B] XC,w`$$7fh0? ,w*59(E6wi[i a mBbJ<(ED3ݎH-lHc;5A{ 5߯֠F gES.¸`jBE)~V%@D+q<&dV O{}S|1z!6ۉR'W#f0jf4 ޳PAZ`ܑgs8XYc\[|\*ؿ3L@{w“кVt Dqeƭ]`w<_ ~-aX ʥigF4dvޘ2F]k>V3F3u#Fy|ĝn$6B0"BQAe$FJpMa2^ ``>i;J Oa-x5FDwgkA НkSw9oB/|dA+37XI 'H/ğv!~m^zR<;OyUw__FNcw9c<)ܟ&|_ ,͌(s9}^wșsH'?v(:oS7 = *`Ǖ1Vq^\"AD7lLW/^_bu-,քɈ^4'[d\v"aEbzF>y;oRٟ۟L lț2r&9$pUNXB}F JGw9*8 uϐ.Bs#<$Ca-U*BR6a1 {`[:ѳ/ϼ<(r$LX1`2=#\p$T*3(Q&4_(uD!wP @l 5G`I[rK 6K[NRbNF` :{Zu.H]3h6r[R,%x)|%Qͻ- NEӡ4}< @ƣ̠  ؒ~}͜h%J2\ AvaO)>'\2eYv"<XUr<<;FV"\HeW?W#6q  f+%*z&'̺  /4A~ZA BHo8P[NLH}0K.]7PaHF>V4CʕH+_1ֲ+Я$#<}`܉a\=oZuȅaѾ!Ϥ%+ $T' ㈽# UW߀ѷ36 :VCxCȥ"8iocʒՀC(zD_rY WçLPw t618TdBBpɰ|)' Z kyqr(rXKqF&Wuf<}Э y82:;=i:ŒYJ4_&( ,:Nz't]{۸+D>_s@7(v?#DDu^>o.0o|_ 8ky___I;m}~ }=nݺqyݯ}kى];RJ'ww Xz]8_~ZQ%HlЭ"׀!436盙3pY e >iOW N"\4IБF\t)Rl65^s~ v9_/ ؜6$: n~-ןe~v-8k}ڂ{G?M5(i;`š{O۟=1=?kUY~|Ӫc1ܳ洝/qv<>}g?˲ҙ}K/?}>als鸇s^}Vdp ˗/j9^+ɳl(ZC茢+ iMPs t SC؊c򟃽Qt cO  K \tp*ݟT yIk=sS?L :`L VyX ˂^ ty|6"ߊWKH rH%%4g#Vf[4npq| GW*@;R% 5 >yP\3ʗ:nzJ{/ ^ 5+pCnlA; _[cϒ+.y]@C+M&A'3,ǸBx3 ו8E&;0`7klSۮ͂p /*T NB 7 K.ʍr#đ ]u+xn akz xM!Lp־~mǃL$MC AQ$h#zJAs^Dt*T Al"yl# c\J_:ny#l93`:H96ZCنa0Rp# l`-Fvr4KA(#,%Vh&2kْ`H I|, lDjtU'XÀێ0 v!/t2qRudhdFz C9RaHAfJw;j[ 7aK+ (Ó|@#jBQ54Xנn+2>p`#@Ҙ\Vp$OA\l+2.BL<1 jۮfS[fA9P)i(2A+pSnkFiQ ȉ8#J ^RhAϱ~RB(# Eho1t#AAݭaԈ#Bq!{Fj(s B:y>5.SɁ)sEf2L 04)aFZkL2%|E-[슃3BD2q. 
lCx@`k%݅Q2d%hnCUn'rXPfWcǖsoӡ>g6/)< 04) &ɒPwq[ Xl % ʻ6@.B—#  J$òHVB֡Z0 6pTG]Ȝsx4V奀z@{5 5N'<5lX DtV`Z='8:!p9c hߥ<"/yv8("J ͦpAq$M(<@fKR2✑(a PZ;p![IAPu QL5X]0?BH~pO`%:1&*f}hqѣ2r ̴Jh­PhfT5PC(r0|>pt zC2W@%Un~ F^L w/5.zw[hC(W jބ~Q%EJvvu95!Zl@acv >H 3 K++`;Q(IL4Zj^:*G k;Gԃ"'toT7o"KFQ@"}z6>FKB 4fʃXAk{f6CjJ./˒f~S{ Vr )!/i@Z4j h/3@+G %6*B&HJwYh& (a7LZ!ހyѦU'tH J Ip;bN-FW}#mA$; `:6HUw+X0y6&|CPhzΖPF0_RNDс­Pc*$ pX@?QBD-qv@AUC8yc(֡^n[ C&Z%:S=(w3y.Fs%@B4^MO';PkhՕx(l uԋǢ,A*ө?#$斒ű  rAZ> {F!@;aϣ2zǙĊKttLIӡY:>.ğ #Cmw ߬ȵV*Q3ҡUc_yP)@AnN4uF+'tH Sn@*(d5#GA~|nps@qQ r&9-,f}h^"GB/мRI~n>JOP5D%!?sС1h݌.gt`$ 'Zt-LEvx4 Cc׍4?: >nK\[ |%& ' ]#03p‰@%dɾnee4bnv)B&fc^M7ީIIp-|JNz߅*xٰ&rHY~epXO G'|} xTNn>ރ<P7@>wIE[L(g`]4! \1I1kf`h0c:QE֠؉(ЎZ mFt{@[^# #Eׅ@F.)op߯W H &P#ZJ*~F>Dݧ橢zmu`ݗO+BD HS0v ,FҍL@y7R=ɤ@3{[-syYNW_}+W\y;obi0\o}ymX-4 ,,,w_C: E__~a88~}ZM{{?v%Ccs}ߘ{O?'h?V @Yp| bk]S,n](܇b\6v;h*.^ cVN4S8E-].0P(gg):-KuW|Eh$dpBEơr4$!LA U6@wA:QG93|˓\\x 8X)ؚ!j&Ä @#4mA,b j ZOni5}D4tG^ k2a^& E|I pp. 2 1B)x. ] FR Bv!+!|½!4el#6&cK5r_ &w19j it;X~'L + P !t BH뉸Vz-HqA6"Xi*Vp臑gY+3#R6[899, [/4Ѓ!^"o&MC r?Sa#y`Y0rвvk3wƚ, y"6cIJK$wZ2؂8v9fkؾGԿ$$La c|,K``Hsq9} H_iBxA84ByM:a>`76nϣFF(GlRWprn\`'7QբVSP6p9R*u /$Bf&S0+EZG/fz< VS{9'Pǁ)TC]$'(OA|pΖ%W Zf[  `݈@zI%]D͝#S;N̨"vݐ@Ί6=ఁRhy'n$d%KȷJ [n 6m(/w1jp~I,vgL}|r.%l%;0i4%#`~^ߪ=P:r7q̓I_Kfj]Qg Yg_a{̟+Ȑ> ]T 1e%땊~o kyBVbak^tƣ+iؐ'}Yr]*aCUS3粝_&"uwsy{܏N_W~s*_D~_DH)\{1??>~-HJ":qbYv%Ϗ_Wxwn=GnVVik  r/:ا2R_=SG z`O8.@+f<,gd!cxfۊ`n퓾n=#ulfi-}ͼpܣ0ZW%+vD$gvX6o v 25*ۭ%!!n~[i}FtҁA-؊cmDx`ϤoaIDvgpSǜ"5cHseC; EYw8^W _nSB}ry<"/ex@0 (z T;N2=+A߂OXx0M'#"z7y `O\QH&pR`-FH gV,F>;Kί><"h?{$I$-rr;Ҝ:p?AF#/lNHI nQ(X@@`iHgkWmG$y@&;C84WݮKfvB0' ,e{V\Z JHbC^-8t@f=7Bq夃FwSa&V`XϺUsF|p J9a[ *qB%4'Dj@glQ}z9&6' e1a_XJؑ mXѣM>u@PRM~XR^Z oqsTո)q0 %Ϻ<4] N v1x+z^Ed@2/؃dQCjT֜\lm)aZ1xsIq2GlKd5e!g[F {NFj [pFC3&?V۱ҕIZBh  lMD֐l?p7{MZq f6l&~~de+q2l1WˢEr2&ϳ3p$/D"bFVvFRAOp#AcAXDOʞeIK (N ]=\AtW:YpNig6_8-V\U9k/TEp?iqoIB9_i-&r*`Jځ mT͋(1mQex%AAI:.%`WF`sF` %yP{ƃ *0XW14KrOg]b.ټ{:=žt9}~g|}6s}bbד?'8n[[[/>::zvqΞ٬榟9}gg{:Z~c<{gOٻOg%g{>/cE!qY糮IA/*VQRW]q%@}}ЋmU ˁq 
=,ʲLT@qG$}]"mvl,z^UǶZA- Zh+aB*Н  \)ƘR 7褝(i HWK4g8`*r9rU(E&$[Q$}a=0;1W=zTM$!k"q l Ƽ/mQU{N>M]#=gu" # 뵟9ӂc%(\f(6|C1 !y,$%!VjAΜ WYnN 6<6Xm]f̳> BMs@n:X9_Dz+;Ϲ1d23 R! UR~R u/6`ˀfwA / Y$WÆXLf2)"2{9g9MU,R 7s>[!'%Xa oV z3`n.`% 2]Wȫ  7}?'UB ԭ ;/`'k$pZ}8RMk,§_{<q^NVB,I ?Px-nG+FיnDΤ[ 2vg:xF-#B|p)Qa;ߗ+&`^=4qMۇdSTBX@/d2)DJR s`Zow!^SA~xux6E.^QF D L\&TaKQ؋1i 7 \'3X Fu/#tUps9B>YΧ#' ;BY6~c@WN!/H#=?26 rF\ω^޺oX $ 8mCSgVHhBh \}J(@$T+Ӵ10`I(-E> t+73L|љa,$нpSl'x 质? 47FB|7'jN< g}omž#hp( x~N\1kj Cnm! tj 'BU%p yK]zVWzdӁn=1:55(Fˆ} xFLJ&> A!¤# mٻcEOŕ]e &oy{<˼0wgTb*F򾢗:Gmql)&-^h\TY*C >wt P 52k {ݼ*%"@(: +DvBo@/OtvnR?uBG[WD t-Ez# ĭ;ڈqd@rC|n"~GְWWz\-'8@ Udh/2 "6,\aF^"cȾb*ELt(+#Ge5!uUX [M܍-C_P/m+ 8g'3]W>0BȁD:_/(ADH?fʾ1ewmH['{r/s [%(V{9Z~y9˶l{gאPa$' <#W"v%:C6];e%@X. ܚMqA2VR&(c'p#+Fw+cȄ"2K,10CC7ck 60Sg·!A9kQ !Ll:uB~ y1t P/NI9(*7y_k ZYKW=,BfQ`-8:bOf'S)brw=t%Y6y_ oc)ݴd /D?؞H** im ӶlT=4xWi u>ֹm;ɋR|?Br#i>i l dp.*.-RX%HC 1UALDlgU%2v#p4Oy{ \q0AE ^ vQ A@/ϛWN}#%ͥU}C{a -ڀ(.[Ȣ?0΁=ܹmWmФDۚV)g-ۃV"\8v^v`pes/ۆn幬l&42nlλMp tH^bK`w٧@aqYm Ĥ k@Z`Aa76U= Kb~ V .% QGAM2J))~ajF_bTU: #3 ? G w9ѻi+ª9X8t83{S{&3ײ{ Zt?tB+gQƂvV T,Kȓ8#66 c!$\O8; VseMX5 4+qE^w ݃ 0y~wPlua*n]Q}S"X¢0F J]J2aI`]A~ fbL!C/:.?Ξ}|J&fg4ə: PXFnK1׫_2#5!E/Dn }hXiveWVـvbfצM #fEX'm#㹮ߣcqb :4Ֆt ݌}6ƋzIYZ1x AUT(ɕd?eBP ˙Lt<zGfrVv욡zխkt=nFnBuE6| x%IĄZpYҢ IDAT%ly(H9)mPlViD F638PpÞM6.ļ ag2׽6QV'0v[;ɑj(us!9"\HeV6tBrM Pwvwϕ%ia\2s0=CMnӓvOuz+b 3sOc={~ws}Gf HR(y↠g?GlV/yva#_Ǽ*_׎cG_eQC9yǹrʱfEp]t]G۶%#_1F677y׎UAb/2_u;"%˫WC.۲}Zӧ9w+"|}}`v?LO*6l2Q$ dhs`K쓚.bQiHRQlK@|ȏҡ ߁<^%Kٺrm>wH~ΰ }7#cX6dZP :X@%LJN@k)b5p)kBϲE<^<=J"<w%rbB0g(Z]X怶gZg{Ryǃp`* OF"tߴdjYU .Gu:i%֧kXm`? 
_ȿnu+0E0 $ewIɘgLgB|S[]-\Y-k{0o:?fCF,8q6Âp_o6lO&8%3M)-3\Z#^ 7`6dz^"}\'=}koPuM-Wx@> L2z)|5drnDtCF8!Bw P{4WtGUBg~@a9A~">:` -؞~e*iӱ{Ԍk( _k}}E` tʲMc3n\Q&]El +EblDWۮ$$e+>pp8 exSљGAgn:zZ0IbNhTM@fR14܄є_.?ރUqv\.-:;U}UUS,C~ )ȟ,]#B~ERCJBDkf@W1;k+,g{tsoo_b|&n_xeaqc9grk :JР1s Vf4TvN󆝌ibрI<d&L#f S'DdtDwu}'nEtWۑfk=#q`h(L䔩"aPLEE*E:GչsHVeel ]\X!?)WӊuS80,gfF|[y~Έt*t;?i~v7`]RrB?Bճ>F:=& Lequxź[tו]$_^+6,\#CP΂{u 3{\ :v SŸ]?́G;1@fI4RPL\!byWwӢ Fs`]d@M8zUswuy:B2ZM`'ak.l5k|)YX%9%ϏZ5ä3ֵE2@*:.oȸeRyV\ z.RrUȤLM\"U +SibAy1ODƙ^(UOXAy<6pB.%}!Bj!:[<)*[ 5ze^3!U bp+K@3h}_&0L{$l?>axgO?4׮];2ӼBt]GUU??T~ic5cW~WzXSkhYLl^>w+We{o#KB .[';n&h#P%K…ݿL~0ag~¶P5C}~'k~,S-/z+,{衇|?>!Ag9wܑUg<ok*o:_y]E _{]-u|P?-):-{-z/]s }m^w~--~BltEXHk4mp.3 .g^]]ƈ%# @H OW1@(&AWe8'zA>`D"aP =(8$+-zT!Wrx"qUV*hNF6wFTO&A@/[#ʜ*&䪀rΞŗZՁ^XDH"U A& |a;gÛ̒D\4Glk;tEyY#mh *9'-ԒDPW!Pg;/t.#<18#46Z$ %ÞzMBtp-C%!v"U#p/,P5%e{83!C9t9cWJr^ԀHXWj53šȈkBר32 Z 'R|i&ήWM*6%O3aB BYZ Ɔ<ng&JRfl!p7AHR:]5d4^K3i? |x * $%jSUv&"q;#G:Xt`p'0\T `<"c*[iO He᠍@WF䨴]E[|oڰ).s4ǿD{ ]M/݂u-G(^.:MABY2҂.NWAca+Ԩ=57]~lZ\} K& DkݹG@TLBcf.,dz*5 SyV).}P!L<-/ك:ԝhĠ68^Mf8Xi7Iy"H {R܃Ft!-ϿFlb/Cs¹\pXyP=2jhN%H[v^ yeOWщLS؇ ְL8lmj-;&5-h3<"jי R)AjG^ܵP5P; |/M! ˥ -%+wWhmff:v4CA9YO d3*-x<"If%W:n9T&\ۣnk#9" TX68+Ygzj j\EHA*Ϫ~I,B1|!(#*:5?;j@ۈ%#I@W$/ReLN$:a=POl#v$ݮ-Ur@Ue-"nd}x;*W L P UaU$ (dn6odK(Tِl^4U78 B0( i^ 7MX]9.XAfR,YRd>s{vq qZ߫Y^ m†j58cp⥚t@B|TNkG F~jTO7RS-S6v2q}Bp Hޅ%C~>}_E/e[-z\o}˿A^}r>>VU/|9Nj SN|rN-ۧ5NO=԰'|^Ejl8po!7&S֎<JCDgɕYfg{KtBu"9Pz&S?V)Q2Av+W `\VNڇ^iM#o.OI_^piڊHոqվv(g ύwGgn5I6dd7fϮFGSй yGbyĕ}=Yt ]PrqKE_aSg ?Sp_*3JmHX'cm\RJeB0 팢YM,.ٰv?|` 8Hbe>é7`"B)e>+K)H^!TC;gp] iroCZ/IG3c0V0i@[؂:ݗ𴏻\Hϫ$\ T%\j N$t*4UkD{(oWtJ 3Z')߀͚}p}LNN`gY܁_Tz*o-e!"5{n sVU UI?$ \3˳)J%bU9sCcm>g&ij 鈌at"Mu(n0&1r0®Ϲr^(܇psGUph]~>FrXxOh{-+\;*S E"ӹ K A:k>湸g3ʑ;r ְLKS׬B]qCq@]|ﲹgž3uQ(莼1aނ;-0[ %I+EċZ*4LDhsQm@Kk9O(<, Ɍ2%Bl+. FLhpVI ܁IPL#g#GO+,9>?3T[xE==5Sfs-rxޯutBu'Wpٶ?赢h/5yR`vdޔ]!5lD~r%;txCZc~EBCs)lW;{̙|;(|V=կY]Bʙ ߹OY:?2v v2kormَI)|5z1>PXVgjIͰKy+-CwWM\W lBLG9;,. 
=L+u3\Df?SɎDWNUΝp(l dB.qPl+* NtozB3.vge+Y= Kt l*@%%#ƾ-%/vs>(@Y#B*H .?@_@KBC]B\1s[]#ns(}GvvFPD>kZOI V4kTRmK\~N(4\Ii##2EV1L Fl=X zK-_[z [IEV]Mě>z. s1资݄jmt͎[s;%݆2nD2:)kJ;M;ڌnx+1:B@$sv"lW<jԮPuĩyUf,>uf- Δ_WlaB8ˇJ--F}VtKٲ?bˋ/xp t "k;X.cPSWcVF*VP8 dv; ,pKLQ=A*)?M]6Wvt'?S[#c~ZySUM&?Z]cD /^U}Fg{%Ik~Po ngЭN]]`Xڡz–SxF= i^ }\{`x`olx\v˗/DIaq~~m9v,ڢʓO>ۭ[Be1$G"o> +P-\* };\N))6 !'8rj $1xX~^r|}q:廷|(w?ދWҟ:ukgΜyk\9_l>sO8qEY]ϝ}1~<.ͽq]\8voO&?xMA6UYW8aWWt|7B 9Vǥ&iaG@"wu 'ScCcf?:i 2yEBB|"'._ZI?RMh2К&ÍrZ; Cm8cFX6![M{\bO66B ))tRsFt :1@<'.*pk R᠘8J״cLBr{Oq}@BG*zgmکKQk/*@@=2@OA4PT T=*Hl*UP; X3qu4b$X#tҐB3@m#"V ɚddGd?  HF7dNN7))fSuow-m+bL2,6rKx k`}1隇aty4*=!b"l2 \5% 7 moއt wD")@J1_c2h@A)x3y@@$x[NkJac?79q:{kb2Q>z-]%=CzFo4Tޓ}dq\ gDaLuk  8D!##Σ F4(t$p2H.۲Ĺ'pcA :"Vl4oJ9}h vWȹ*r{w_L`+2;9g n?=Ik6p?7w?oX}ў]Ђ}wAVKZ}䭀e:iUT!!H(pRF{Qd.Q*t7:A./=ultT>j2Ω~A.U_0 .&qZ IDAT؈ȮO QFn~z);~uZnX,B %MF\# opk5.wjL f6f?1Yāb.w>A>- X lf8ݥ9'oVޘ!GD5M(f:d?D龢 I1IT7q;& CC<"ZeڳcX?>A_{+=yu|7E!=Ŧًhxp}y)T~!vWIG7N` 2[(c}.Fʹa]LNh%NSeF(ƼķPa)`Sas+zB!.[=]v`^1 ̣x'$rYѥs#@KPe5{-&^0T3"Zt9,07lE+:ZAOW+ݿL~V(22F~G ?K795 '}'<o2\j"i @=2I%u*"4 pK gi4܊}G? 
bBUdWY4mBJ M`'NM@ { ٖ1o˿=kw /.\p@O{dGG\I)~̯G}zj9SBxO^~dR}^?_D??8?g"+23KOPdlYnOB ^һ3ţY> ?5x.xsh$uh(G/dd?.;~ HiDogxР\Pb=]gB+Oޅ/™Ts{PeM]p74>-ް;nXd>Z2(z9ȖvR@Ē@؀xH2<|yqwZ!|TFCh9Ȝsi#g%D\8 "M{[ɎY4PQ Z|`[Zz͇[kkYCT^ RF%|I1} +@r1WlI1c+,ʃk?+[)Q4 =#0spnTo1` PQ_tUJBk k '9 N"6ֱ>V׽AWx5b+}]]2vm]dz=藥y !d6ħKsGvW kLҨBFC`:U=%Xe;1uwZ_ e Y=~ }$]A7eIW${v] Ak*V~u.D}Y 7gi#2|񶕂5+Feg7 `3|VN7 o9AM= ],H\t=!"8[JF$eG&mE,Wkd5?sÿ(G{o3r)nfE*pcMc,K;92+^ E䐴5f4cWB{>fBx xkXId#Q+7 \sƤ5AsIЋ)(a&TPGYk?@I(2RM;g+xbIb}'iG pOp |QXr!*0dmm-=e\ .w>SF*DJD-pq# qqT|L@E\0LD"Nt }26 ؤ`"[N+2О+}YU"@T>0L]3 Gaݔ Jmr=e-2[',DTi~ ,B?#0|GSЩDK\(Dyl3e2B8tع⋇hƏ瘻sOl??W_zid5c6vo Vsδm{q/}K wxSO/}骿r_*El{dϾeú뾱~[_O>z/_>܅ +}U ̙3[;_ii=n¯ѯjzzmX7_,_ׁ?qugr_l6{1_gn^,ZuY}z{׿gu~֡WuM_b6kѿ0 W'~4UǴKaPRy#NuWT,3TPMA(g߄qx~M9[avK [,hkvUR>vd @"WuY^U +|qRuڽB-p/Iߴg%ejQ;U/WatB;4^.vT7/U> >sA\Z 5![HF@oVWXDy# ##WR<0fjJ>-vOT=F R3sܡ+o!H) P@Baz6c zBL0QnF4<qZ W[}PdIpn^HG2C0_oܾ 3H>)-`ЭȝjS(xEOow6fwDm\wVaX/ת\h kK0H,m7AAa vYIVhA$ cE{x0ր:ذKR޷kҔ 0Z7В ҕ=ԔYV74AS,4(.$rM%j-+&l&oB5AL &xuQ+d$+<$\{3`^q%> ˾;PCLiYJέsSQ7b5N;.h=6%d~ߌfIS\%D.*7 ^ %Z#Z"KZ jT$|JخN!Ce$I^G$d@$VޗJn©OzN̕ Z AQB;c( $[3@C+?c ="GFrRe#PLэFZY[tPK.Y5FѨ5O˜l҄(3 ) L p2-h@@ WB׳%Ki@XbE;E(LjP+9:>Nۗ :UWr-"'k6LA\–}IcޜC1|dY pmV>!T76r·6) wٟ}]oJX$75QgfY7)%T__> 4:RAr,9?s?؍moR{H5‌ij%ѿƱQזq#44L4Ui\g)RotrC;OǀSRJP?/ 2,/EAiڹ^(CICS-B=VfFTΰ~J>cI9H<__T,M볃GwcXPCƒJ-','XP.@ ?/l&:,.PTWc$=Ò$pxljэӡXYℭ̣}1B|!﹅ g *8RZ K~*_C`-'r_2vR`߯?j-IP s|mh4s10/ytI| fg_AmF_.v4]G[+%d -wA2)Zߩl3L1YJb Xj-V{ #4>%KSPVrhh?K!-B,sìgo6X$L: }(7t뢬[)Ήt!y#X,l1J`S$g*SB0!ѰP 33Yopr;-ʬ5 `עtwՍw)bffK a:&:T_eXՄg?ef5As\`rllcſWDƊ{3ϐs>T c_擟l$6{~W~>tF6U| h= +١դ"L(c}WWI&10HG*IqK8)d/.}O Wf1ƄΰH ;:ҿ8@cTMqC`sp͂ЁB4'CC؋ Cv=!= CۅR 8բM*KyA ڔdwR|}+`4V$ ʕ@eg]Sh(v+>g# @8 2 *HQDH^TbN-yN)f??FDw{nA{p e(U8ӺQ Kwс=jލ40gk ::8W,ͷcŹ pL-S>ߦlJP &\  ЉIGC!Y~%5Bjlc:㌖ ^!”4UJ,Q!{ B~t7K+; HdCaVhiHO)BnxoB:4 Q$&u 0TFA/;^&Nh_$=_Ċ v޴CbmDrsx lU[sUg9p!ؽ`2zxu\4~ nGVW.\[Uq$GH|8l;;Nk=ϟ_{eoO/ xUU~wȰ`3?S?uHk|UfFX_OkY:cUg}]|]~W]}.+:>O:@迕.]3~CjM߻k_mw\kׯz׳nuX}u=׿g2ﭷa{V_-[6m[iӯZ/݀)o;jfcfxzϗ9Kf,ϝKy?pXFSŊf8mk5o?p( >iN 
ߺ*}_nkk~a|ND|0Lm,M7? 9 :r7&+eI:^%cН5k '@5$2 C <;y72{QCult]:?xcw&?tt=.׸:yꫯnca֪}s#k@V炙я~cc;qp裏Sw穧8ub6cf/ґ%9/Ao\6v|WWWۀnӺZ3g[{XJ/KŤ=-)"x ;2#'DkmΡ/<^?kq" ^h1w6w4D7g0qRR]F")*|iR~ TQfHx͐ `ĺ'~=P)#f*EZBI>g@W_} Wk^wW2G^Wc( 2cPWJX/OKu*x& $l:' = ^$"A^j IDATM.NL;8aHj٣4g㒽a܋y‚BB܁8?OWcbI:Ѿ!9 ̘PUõn7 ݖ !{nGxZu]/~K?1.걈_4"?9رݫ>J^xa.\cWw~wD(?c7{4*n˝ˣ҄v!~J% 6j[\/+bs⢅&5'zCĂTbE,:_Q-Րo64#;>NY"T_dL.urB$D)o{ ÈRyK֙QhhY D *U`= i]uEK.1vyrŌ[Z8@!u4$ 9boO5&c  x{BЯ 6q^H=4vt˘5@x Bh*;<z7 JqyˀJ $rH8s$I2"€wHG`h( 0s-pO^rTɍs %Hjf eqd%A"hD5Fs"mI?EAJr¬D#[դ`B,MҳJE4S'K\-(I !V) ^քAH6\B:\r+K?GG٪}ANT08B>^$+ |oq@ ;407 ĽzGهa(mIGR`s2xm|LtȒofq`c=y5b:%ƮǮu[[K?$qBjf<䓇6vր1:&.(" p(g=2떙Rg~g6 MMNl&+=}p ԏ-GV)`1!;iex/Fz dq@V>דxhXdI 9EKd|\r\ :Б?qZaq\ƛv>c(ȊĔ|j`dwP5@ Uq4f/HLߖJi)1п鵈(.9K?)I<:J2$%עMjIw|Rc~´#/\(š~` R)N(cwdmMi0x7"_hm#)"izJh]o}Dr HЁ+0hN|x\ahwOa @Yٸ]AV( AgTb'FuZZ&7B=k=Ղ{i`@)*AAK XoL Y; wHPJ"5O;2vI#31A;Vo701XH{Ҽ=>(F hl,yI)Eު;V$h aalL"J04Gn87G֫q¡d:6v8Y[?6R֎:]*M؍]ㆲjRJYa8ИGyd7v!꼽Jk_3ϐRTQ:P%Xodx'?nث z5u^U~YۻY'O|~Yo&k׿̙3~xϺ?V~jz:_حz_{=^sv>}U~7xM˺OE7C~~9C7@tAНXuaR-saB:EDoMiÕՖH&cJ'W_ZDx@4[: Q+`V 3k6'2iF>=G>>EN.|%9BdNf D!+Y"+vZi&ހr{TS{zr^_w3NB I?|̸D\S־ oOGz$fB^tD3a&48 􄯷EFQh#<_#-N ͣAo]4\]fi$ ZDJ!ժAH2<۽iѺIKӄ.ʔG/L|."/"l'܀<^Z#>ɩC %\C=<ͻ's6_I0{+hf),;i32߾ &xAU>6Iq8Ρ/;C$lq'TKt䮁 @|hHMv`D5Nhxg8%6!mʽBVTPuYq.y/%$EO Efd r&)m^x.9<.ă:f2-xb {tLۈɀhֽƥǡ4;rtna.lh6,^K|c׹ P?1+ċ/ȥKUƯq8"»C#qU@jclSU1.f}8#4ħ?օyKolc5nnXXh)rZY&9; :w{'B(uےK%ރp;L&:ku}3t-wAHމ{s+B9Q ܐCCJ4Wsl)T{ٌ#DxhԀx"S7zC0/$„};mhL@:8#Ct:c:|iQɷa!O2B0!I'_\t%ՐPy}`9Jmj!]wEg%kJsTǡe2:(H(BhJ"QDT^>Eh} |&r6" } )*[ck ۤ0QiK?Vm!ןHϰBN8|N7CQؕT14HYARzCKJ P|6usxUJj2$5g6?ە7[ nN@H[Qv:< r8/ԉRu> hW!X)rea-"i@R 5t7T\~آgr % .ݐW G-d>L'Ň .-m$14!4ui08)~&zʄODm2ZMv -uC[׎Ih'"Ӆ $Bľt,4``}{e['hXJ7]뼯(CҖۆJ5˜/NS?sggǟ;fuGXYdxЇZeaDF } 6[{?6ɉ{FsT~Ox'O\7oR;n*9qTȏ&-8胜3? O5]lrZW黁GeNNJžN=ѿG[6'Nx|V)զ]>^>^X_WE體Moʅ .i^~}'y7`ۛ8o~ok͗}굮oGok7s _5eJRB"0k/Gfn7%aV luY98N~W!OùW:B0Hb9ȭِ7PC $IĶ%M)up) %gGn957%Z)T"v@'|QH@s]s~Y\; # !1 kŞ'@gaHJ0L [˙>,b2Ws$,bnIaN *y3P渹 df璕Q O8g60,q2CtTfi }_{s.GO5{ʥ(`ZB{q4 1`C> ozu!@`x5C ,!Bg&CzP#^r@7^. 
2؅oApQQ7~AnuӐfvu8?zEAHH7v%^!XJYd{K=q] |7{*Hv|fc#@6|, BƎ+|1c qkk>sM}q妢q =t ?M[Km}\M? (MO yvA~_xv:\I vN Pd'%> V$RZZGU0uL50/sV%%g>SG zcٛ|3dꊧR&6cR>ww pttğɟsEMlbx HݩRk??@\Ml;-, ߾[ Q]Y@RԡO *ّEޞ*]A^yFÆ<F8 œ;s1>)tߗ'Qo/A:X0LlףI` Qx3|bQD_"ÓnN@.^4|s#ȸV`ÏI]<,}}.^(' ó=<~% &;Iw(p Kt o>dд` zoYS[ _Wxt?),/~.^ÒP{_X7 *%LSy1o7R^PS&+hoKQU'3Kpp(mC`ECfzwk{kx\~~GMhh^B7%ӝfw~xa:lG Pu,r[Y@9:Y4rN sè\1߃^Nd/sjRH7![n8~x\\QMmZWVseO }IU)d+_S{ R[1zSbM* ^_f7ii p+Q9aU~SU&#Z`(Pjk W Y `W ~lΑRAH}7ío:R5*:^hJQ'd֮@}nxHˏ9QHaV'oes ?Ex}LlfG߅2n'7u5i v])Zi55k|b* ָ9 H4l-"{$M;QG0P*ӰLA7 oG%3qG)RG0lA &y.,8EitZ%a^Ƚe?n.'M2EIȞO8f) Cůjai#Go W+R!f.M-]B d}"*I`Kf ҹ]CVyR9uJN9r4$4;={>=b*$!+l&nVJvK'*N=ǟ*<*8XH ; i]Tc<49-5u(0 @^ _ iB$X  H#;K%s22Q/''A&wr3s@ߊka@D| r P_6x-O&[5߁;w~fٲ=RbِssptG$5>3s>Vb65Rj )m49WBQChٮsKc\'9EG E #z4R0  ɺ闶#f0B*Y{F;)p=rDQk'C#TIG:ݐ4!T y 掟::R:[Pu;MS4/{ yWSXv8{( tznϵ53Ni[`'Y"6p&eL&~#2ZsVđ U:軠Ǽ[ˠ*$r],5lr7]*ރ 0$ESSkH1D,rȨtsiO+T4qi78N7e6?yLlعgtc'  Ƿ~?o1~߳>FVY}c|{i}Ǻq~WY?oʚh8~(qqAٰ=)A?@ zu# +b]Rv RōN;fj05Ǐq7A!w>QNQC#`,6e: ;/(PL5-IOs3 ~yडۍ-[j6hިN^pႽkI='Ѱk,SrɴN2(t5'_hLmK2&V/! 
DxC݌8^?4Ƅwٲ:^؛}B,[dX*O%; A6̇`K06S26Q] *[xEWa&;Qtf Tl9w8JBEC帆t}Ю:&4X+Z4f]O w `\ (8g**.MGj|c,'#u`Dn腊ϛؚ,fR"uBo'w1%8Óa` ,d AΞR0a tz$ P֦bOSH]  W&nѿv#{8ddOۘBg{A&j|1>$Ĩ'6![81 'ɤɠw[x(ex*DH&fLunpQНL:e0\>u]b>O+sTN35H fu ;sZD_Ip z b:i+SK^ؐu-r9C%qpV XuRCb:,d@@* DP6h1806lf<55?WtWDl-Oq| #;= Q6w5EJP"ػ= G"E}q{#ḱ;z82nɰ+V\bsA6*:z݉ %l#o) ނtܷv֐x@ p1vKSxfӱj<XhyihxR DÉ`5rM"jڡՖB6T { 9y'8K58+, ~M9i |=cGOX6TEDXb^4{ߩqHށWj1'FS:QiUx4t8w~é1CzT6"5w*YkXL~K!j EۅT$'=`V "*f$OoʖHLJp񬀤h&>#=c6vZbu'd J罄?/X|+aE(K m-&su(rPߌnzӛ=I{";z*p V1݄`>oT^KW5]!L(MB# s% 0Pv !Ɋz{NM IR} =6!!Mva`6lMlbt?ku|wR_ׯ_Gյ6M<1ڏ3~/),mKLdr0iІgȅق֥n|8I'#Q$8fbf/2['8J0V/x+)Đ+-eoC=h;sX@ت݌?kOx&S!M0 S`BV*s=C IDATҫJ}P\`;cW&pKЁ!$4@-Љ`Q8+L45Xn)~1Sj!#?'[=ءwٿ2EUwtfVmYuȏg>s=+r纮&ycDF=:vӷs EGIF^y?x '[zzzy7\moozUڵkw}}8{uX,˟>HK֜.~s]D_W_=uǾqzZ/ױ2`=\޽{}=[8$ַ߮nU^o{`Skg/ 4 4֓w$;F*;%,%+ɢ_&{k(5{o>u97⒚J\ǖ{2BZ9 SgpfC8H|0sЮ\jgy3OFdxX-U7"RTbۻXILSzWɽRO7yI#=`"#o(tп3/v/BQ\t -(CggCMeV y(Y8[*l'|d:ecahZY4.槡2_8R"ڸ.GoFD Mlbx59g>^lP_e1\&65JR#OO/a-I^^) T\ 0PT @VKNΖկ/ M|#AvZӀÐ"Cl#]r)+Jݧ^A@NciVoU  xwHoMwحz(+a ךJGDn9 @^.A7:i=wKhpQZuڨ/:lgC )ض?2kݡ !W]Oe5Jn=5+  >'X[u9lwQHvUY2Aޑ<o; ]+*uQyW 8h4!2TЅ (:s|TsWX^ؽp`KmKq%B ǠGg2~U8td.\Z"媔4J񀃭kk1dZ?:J}`"4a`Ҥ)S &77{7 l9DŽ^st ނ,uj0((L]S* ۊNj]I wMq1 q B㻤53,`{M5&6,|/~]2>}??gg,MlbxPcW)5/b2QKBYcB .4pKםtާxڌUt H27 &'7~5Ěu w []wIr, r$W#g:xR&!]^ @X⑳JbW!Mc4~PN>!o9%\F1VNp$UU tl!GA7D>n;|xٙv ~p;sl*<#1,] $yhl񘛤)!TjăQ05`c1 C-IIz`gh8 Ń9X*hV ?QW?g(|>WԐPn#d TA`K?n "y fJ5$xeWӛ1tJwnˮZ`[2R\D,$Il<X&}] Gcyy}Ke>aLw԰"~~NQuCy=sE[DYXZ +S2x ;`+ qY7P}l̩l­Mh$ٖ}ieH,j ~33D_殘ߵ ,DdC"%ad[IOsm F.A =RQo$~S)Lo&Ng. 
2jjiyCdBB5r%&^b zII,%k(%Ѐ ^}|-_vؐ%=N!W*vjL;'M2 ` v`?\!q;5q =[Pi,kn0&^ (d?lbfh S{N@ =Ly3 V(V -ەa֘fkUA|H^!$޲<7gcSH a ES</8v1"ς> o;z;U2krfzFVO xp1E[ +;Ƣt,Cٰ[ f'eEz)zDZ(&0> h4%ܬk4*|(֐9죷ao18MvT]Ŀ0j "B Kz.]>!-oݪ벁dT|ℿy,cy4] @FA5q9ikhH{_ɲ7\+"-oYY\Uހ 'u 6G1,~E垪[ &gAD+!1vAWSF> 3Z8n>ߛ V_{ٳgO^{Y!T%\6i|޽o{c~yS!᭎q6?k{6?wӓ}wwM36sΩqX6+>` /(KNE!,*qs 9j|*eK7 51TN))~ lQq ͽ w|XuN'|}0\- pH`/,tU"wlbH[$*n偩AxK8V 2Vl W΅E#ภ í`LBB(CS膛Mp~$k[.rI{Wo}8!o6YQ0Êk.(^%2K@hIӴjԩ2ڮ7 x&?3ûUQE0WbtHmU a7E3| n({BL 5Vrݐ/F^,S( %&dc2HwA{Zߧ^TYec=HȹDcT%C2&u Vaz1KqXc|&h^xPj`@;]FvwB:$J1ld)YH.Od-_ 8Rh-EfvTaڥઠ1_Fy9 Hxܯ7-^R&Z6Z)qyV[CTN!a%G&(yg6qk.ׯF~SӊcA=(QV> kA Y0L@ST&JV0|9fxvz`('Ṯ(~EFN8А\ i>݂tuYuZ* ,=͇J_?M',$_[eߧ$=aƤ7=콄SݫjDSdHT .K+^hm 6V,m\HxGmȑLJZ\!+RU=$W{5ϼ P~0Q2zvIw$43(m.$M$~1Fdxoe'OH1<#]u, wDΞ֩1[ u&`x=w,miɼWK4DvwO_؇i  :mRb5{q,ZdHy{{0bJQšU;TӍ5ZɯRXwAձ>$K 14V9~HsXu' F%ZE~Kp :.a{Mj6ZkxK5Jc;>t)bYP 5z؂A;YI#G&]̃[ @ M5<-F9 6.+r%z>aֱm('F~C-_ oSX>wS T7um#'aA%,i~ҒN >f&!@s(C F$T%^ulYO%H /y_@j~{yI:0M9Hxްs#r7cW jxQKHIpFPr=( I܀=^t 2 ".(,]h^ȎArזWʤ`{=Coz!*;E!ރ e4f/} wײ6ń& Qe@dF3,:Ң}fsGqh>H}kEFUCA-:EWf䬣9OxN6ȝ>u5PBFVӨ-JKSY2C6 #vL π|,11ǁV+niCi'9гFQ^EZcL$̵ޑ"kƛOB F[Z$Z-un|^ j!{aK )DUO.wIط>N0z chj)H4V% JEӫڣ)Smc6ޙʃppp'?Id/~{1t7|'Qqg\^/ړ4Ny `P»61J4$(ɝ\OG2a:n߾c=#ϡatf N.쵫sdVBN!_[,{UQcpow];ÜlE\5Hom$T ) Ѵ 3}oH;!y -O6E,o*wBs`1쏰390'rBp ;a?~N*J9UriۑQOBS:}N&ٓ`u>@]1yћ3y*Y>y Xs7-G>˿ s ~e,=a># Ar RĐ(D :Ox:;s![lv.`10V>v,2/카StyO&l<{ }]^^JY #;:ZI(Fh5~oeVzȜ.i:Ą]a;堁I&OsAg8ʅxjӕw*eP:*nDBT({?9^BOv#ń]p p?5Z7Sj }TTn-L*JcĪB.R˃ZZÑ8XU)cP+Ҍ`g yo6'@2M#*.|s`BN)V`.z?GW5h܂tSVɊTJo5Xiȶ@炫== s EwR"c,I<M4oI Q.@{os$h8t,^km|80iuqtGYA c6Wf)`c!9iC2T^of{r?#'= 䄒)얒A/),kB+$bиti|̌](]Ye湢[&I;nk4#IA 1e@s<L -i`D`ܑ RRPdh,T]gcg:m#-0j\(4GgڜU- ƙd^)l1]0 FjJ(UIuM[,3띯/o2#b&E!Y F_3~ l3k`OYglkX"АP^h*Ԅ;6N#J`7XS ;9OG3+ծ6=p[[?>$p/={v ` IDN]mlcxs?e>|۠źn4p'Zs ~)߉O{"Hk(DҐ-Ge#'w;tln~uI`жnaq}\$92J(i(J)B`{HM]LIVUZ1 ?P˼( dBe z`vCe ,ع|,Ur6׎ScQ,'豳ވw(X`;U|.apPR쀧`x6C#d@ %{Rh0w&`O]0z芅|CXSU~9 WNۊX.^ۅs/=ʫ{Rڄ L `wGEa0AyɨbPVa?qgOr*oVþdSi*"V7;q;ycWFLl9Ɓw`qh?3E^^:ʼilO]246HH=|mp"zя, Ov%ƿVIoۦ=zve ` *:[+y᥈S 
bKFmSi ( ~ nv1F1RuhwNfsܭ~¡S6P 󲴐N\P!wAD L+ldu>j4 z)N%<6w*BбDBL^Ѵu@gL(dE[)0r846MI;Tyc߿2$9P8 @ֵ)ib+i>zvdAs+tfj 7W@^k::` =V=1/j֛aQxRIX N}iU/p*?&NF1<$ E)ES~\ib]`D{'-Ft|57?[>#bwb<_yt-ЪP^k%hV#k9a}} 6V~Aɰ"ӕt q_+,\˧ҟ>1pP1" "pbQ9s?|"Rq6䓵yZ~6R/fB 3GoCy8 rP>Ѕ`W B5038'„ص`%)4,s G>wRK){%xo(`̩+ѬUדɲn \zRȝ#FW գH_Rf ۰X0G(UR} {s439\08ݟƘW]J34#3 `^ZHȶPy)+X w@T &K mtAVO+ܼW4GS[HK8٦ m55ĵ|5^XzའfPOX h؉AUr3&<5_ ^2H.2?#!0#zeHWbc ylvH[eMEBg`N~oFq'ł}u5(IVn͊Σ̫FrB@^fRӐ<;I0aAs`~= ;?ҁOzt7ۉk%uĖuWcZHm΅ %򝺆81U@(yfV z٤UO6&9;Us}@k .ٖM&VH)MKv#,wD0RyG }xGͺ6ms˿_O(?@ ~_җ?˗FD [mlcr MEM_\v1@pit+?- *p^l%4OC7?k6XnԢLG`{CGr2UZImy!dP^XA"k gD\BT*5UUi=W.%׋ox4zBfRmxXe#~$N阅m(*=,(= 5NBt868 9 æUEo]Ra3gFu~L֪i'<HSE#s숗%Qj5z7D5i:kTGq*(ʾl"uV(v$H):mLCB8bVc$F1vݣe\vEW#`\=1}'C;yWInx#Z}_%>d ݤ޷QF^ЫNG!<\۲@Tq-6%,@KV !@),C>pV*Z[eޖ6EI VHgB뀅9.q.Ln)Ƽ /)>W8 ;D)kr~Fj_s5-.rsf3ਇb[Ҝߨ ]nq eX&0vl$襁YƦR'^Nw9~84*SkS! n &ĄƂI$kUòzm eㄿ) &Y6U5C?CJ.aA]ͫ[>uQBe&eE`9:K̋e0$66_wk02[XUa/a1ɣ_=q݇5uV$ttY) sL Ī:Nn?<_B&nN)DC i-T\9Hb)< ~` vl?C9$kó~_i+} g=~_/< : :w%PqlR,c9g[n'GP  xѸu^.]%`d" IDAT9_C)S :mG6ml]{ڶ_%8nui]~??ʎW%(mlcr//sꫯ.+T=6@(W#Z_ K39wAǀ6x2qh?ҠJx+벘8l1߿?mo6Ň>a 4;~Tn6>]Hwx m*cq`d7&D`+P i{/+7`TYUj)x4TOZ`nqmP~ d kҭEWU4偅N,>, IeBX'!NfR)QY U_4ϖ'|W;P=#a^^s *9?!+*B2cku#L8JOo!6 f^IBixsMNC~5&8\/Ah:LK3ҫ-|`^ Xm]]m U"?L`c&;9X{ba0-gT{\ zlҀ3ȵ8~ts:Q5L㭱]C!aבslZ*h)1=ހ: Ncm}&P2KK6(U[ur[ӬAP3kBѬu_a!K4?d`#*0n`* ߯TiUf=k&Z*9`h {VXjhnq[Ȓwx׫ <2EnS :hdG:[ 69$$$_rlTHAs$3^ʋ˰pCi)`*,ק{ZqLA>, ݹB?M#;tXk=~xD46~M 1Mn^mz1g$mRJ h C鄫 kl}OF!04(wiPmم2P2`ԕPvc9H7V~6^xjv *_``F4Kx3[ػ;o)O_JYΦw6mqc_I\9^2j@2L@Ӑ^ۆ| B29BN] Y!aXG!/.KE~ ny ,AfPt*d,T?``P/ISGW9$UO!d(AN% Y{|`X .m \zWz}gUhK(G;a9Ɓ&H <ҡlϑ&ZڣPXЦow3\-F2e/˳t.9~\)+) 4瀳BzZ|KYi_~i9kG`v^$kj$NK>%7 @R~ p?"'T60覲wA 51df!H^+H,.3u$TKl3 `/ԵH+kDJ#uWy^*Rjh%(`6& i(!.7bC1CRm'֛}Zf@O|C~+%֍> p:vs, -4ogJaM|g`w@rԆ$]QSJ}jx\: 7Ftb34К!Np/ID~, E!a;?%虖/x1~whBAZH1>><,B83;!T})<(i?1C]x˘z}<zc7̕=섒S @;u+_t` 3cj4:=*䷱mlcpnͮx?dc(P>9ofɜO)m6XoBj}s>9_me>1z0Ȑd6O ƮI)s۞se)<>(pూ=]/_oom<g $&4Þ(:䔃* Ei_ ʋ.P/ޡ a~BAʸ2`'V]V{h: 2x5%d3ȅK؊/+Rm.苊N ]$ʫ >>pF$z{H1v<^p&~Vccy{tS,`iC=?Ru**إÝFڊTXgݿvb2p 
sn[BWAo5o2N/ a[qHZ<MMd$ wlzV) BY# 邜 囎N 2:4j 8{~UO@}&iA.(C>,`Y&`>ձH zâ[9xf,rr MMU]-p{l4\bDӒ+kKe ~L0q;c1~oly!&-KՓ  YoRE?LQCix+->wJ*~זzI&gN@^rE9PJ8 `"(RT@f]0^'RX,3rJbMɻ DcDZܖ4tcQ)8P袱Nl |P9%|^ߥ3Bƃ\&`8ԜC[`a;6"b{ /*3Fu9nk(v|~/&>3ŋcljcei׉y-k?6ccWx؅ `m|?i^iG?ab׼"݂6o9rE2TS-Z:\f;K*~x8h('FRh4.`[m 3BNb5J؊!Wy7}/95⺇wHҵ-;JP*҄ h)+NO·]AH)lIQWW()$WI4$zyȠC:i%,}ņxYc5 U"Zuu8v-B_; Ky>k(Uҽ}˵ 28J sŻEht 3 f=_ R;$ÏԞX]]`QkGAf8OQVFl#ʨG*ReñfCmޖ #M&JhM[ %d{t_ybJCuG.]JvJGqJ &2 rs_B'1 ڽ \0rӴ4yC4R1(aOR;q޺61D߱ W g~se|-%i#5Wſ>#!{In_Hw+ *f9j??y&>W.@i˰xTP{{2"#!N[ʕrv'ڝSꢾPhv[ `]cԬ$Sr[fU<CE;($X4^$3X q4z(ǽMC)f CM0;=*ƴf>"-zKk{XYWUݽ/go)-+̣ EAaYAPF"(p^@;z`&/ ۀ-[7JfsϜ>{UUusH3}k$wt\$<2"18 X⃹ZTz&6#a1#$Lʞ*ձ,oF?RDW yA~sU`& m$*|97K&wmY*Z.RJڢ9S&Q`D"2?<Ӫ麲jaɕM}OVW$yb蓰+Zd-kyA>%lm_/;x]R^y,nD9Qt=گphA& 5nڂ<- RFbR,z>1Xh啲YBwȹ|WP x%ZC[AZSoY\Fzoc6-Et]t'~'|^xadRC&xõl~___ &RiMmlc۸>,޿+=7.*UJӧ$Ζȱ~o]Zml:8ېN[XHK3{a{p1; 1݄v4),'K~E~;llYd.ѽ|. ge7אs[]ϦduY,<κLo^Ϧ|;[ $:! ua>ꤢ7Sci5B._9Ns#vp'' JN4]kZp  }cπj bڇ[vF)ۣ3BΖzgVՄD8H!M+,Xc"brޮ5FۋT),x \uEHC GBn\T,/S*2 =" :PH OIPQMj/a`nd'<󭥡kdX04TmϲaC g+xjȆ4$:kHQt YQ+97rWy;sK {rB{fI޿úձ&(Kod $悝 .-19i)cZSD2#sf_ Qt[OxgEQeP(ƫ7)M`N͆aOq{IX״K׬,%oUT[ylLkNsܢ{dIw!ÃUǰINݥ2UɭcU)9h5V?nXV^8ф FOz~457@;q8|1uOsVǜ&زW++N~~}HyxN_w[k3GIx ~ !ࡂ\HsЮ )sKSx \3'6|Nlc˗V,{0Hf.F;,>b1vSxMȿZ|G)$fqn oCD."Y;A*={5Scqdy5) 9+sxvVשGtSMݙuSyq}ފub1ѫO# 5+,>6l>2kvL=D_Z-О%CeT]0g? *{hm49XGkİGgY(԰TA+K`axk`@P̸Ű;*RO}*8 PK|MOO07FT4ՊZ$Bx*g|V5TSAYu<:nn-zdgsĴԄXA]QN!'|B]/vLo'3-d(&]i{UfO5$xOP4|Go-s0h=X>l=mzL@]ݒc/yȏPpnBz=_ zϛ_i U?7°̽E/`JWwpCaGӮ/w1/ ° ^1=E>[Tk5RVci,]`W$>{l%3@{cTriAb=8Nj |w3VGr/mo6!XwT;HFdA$2|;.WLA%yE.Vy h gAx VoI*Q]J$#H ȯW8{'}i,(?"QlJIή%c "\(ͨk0(1XPKqx/A_QӚ*J&HqVo`Hi#AJk\wy!!q)cSAf$nW1+R*>>e *KZ?*RSCΝ"Qk.jy0"ہ8H)ϊLlpnˍX/Jx]XXz8 9QH:U_gwvG 75"ɒbxC H]KO4<ؖ!#od,0R ^nk> J׫c/ZF酽 !r{Ք3j!B +#q) ѰTk? u4MߐnfN #hKSs>䇁> X\uǤT-{(y;D\byD{E.pX+q~ߧpX_VuY>`M.7iCh|K!K03 "kҏw={!Q:kYF]/'Aڭh `OLR{B7<$$(<6i&S;^E67a1H[[yz |bZ\VEa zPc) W4 se8=8{Lc.,eDvf-bQA?/{WCAB`IV1$Y~h:WI؀D} /;3X0KRJ5H4z'<$ã<\l(v$8+*|x:c-gl! 
Q ~[<,\yR(- mlcO<_$Qo%~/pׯ_~PU2䷱mՑ4V$Z%K_9"=K?%yKG'P 3; H3R PX"uSCA^V8cN$]z}$5!llo6!O~︠1 D2G*T/%HI ߯Y9O [v}=Z9vH$м ;LȵtFx^FCO=z@DM<VM|еi<"B6ed6T5n2N,e8:Y-\)Jl|c)!S=I|:LL>VČ(!@J!q <7D6A٤Krwk2:@}*"w={aw: ?ᇠ\; |J|kr@`?Έq鑄MK1ȊMQ$ŠBe3z?5$/E,-X}㏂>(:Oo+ZSQUةSiU:ܤ/iK誑n@ dz ,]mPv 0 88FiKYdisR'1,=AE> XKM)heLOp.Ώ'b5!]ø\Kj5o^b9W{S/Jѷ^s?9ff>,Ri(/ <*N/qm.@gjiGUwvSK3'> WU59T X[+6XH̓5YƋ2TB+49T:4yTPfX-<*(Q2)Bж2vuUC{ q8188qA au=>ߪP+ڠ5=meRJߕPzKHXtug=Y;r!YfER Ֆc:5vN{ c^ƾ~oJ!iQ{@CD'*`s[7-C~=m4U_:k17ZTá^ *FcoA~)vÞ3BuEgXAˌeWn8< '?#p3}S^Vf{7g ">J|iMrdW$bu q`1lWZ2w:ئ; |OX .IKQ1*GfcY0LxEa0sI*'D-2R0<ƥ8fh VgހJсBvCsإJ $橴qicN2bX, hT }:>ӂfȵ.C^e/c=P*^F iƑ '|<[px %7D%Hk1.Dq kiH[ w#pD/~Ղe'AS o4ac#R^2TEܩt:^cTeI"|I'Ajz/^fyN!NENs>ڑs&6 UH'4S(qChчjጵ ?`HpObك2!]<!<:mA;,syN(ci+R2N*1o~Ԯ NmsӢ=~`G|Ȼ-jFֶ(C7/ks.cQL?MXwlnaב(F\|_$7:coh?4iarRE!}NgJAf !e Fr1oZEGI轏V̲PT{Z_WEMbwTʢ0ZmtH?A5}z 5wfX̱됌*+3IUF@)Lt5AC?>anHw^!Ϙ TTtZUB&B8^>oB_?H8re%D.i $Ev" }ǻ=p&ղn=z汞ԣp(QE>Y2r !>cr?8y$[b oc6q>K/ś&l-)%/ꫯ6ڰI),/?./ISȗ"a)W9 f-&C91t/bW '$&ПC!Ş(*dz)3#᳟hhAy?E~Foc01wliqi Hm \V`fp NDRx**خ_, ]Dc0ulx wC17wxA)DB<Ⱦw8 8]DJ諃igO!5`0טq^^@@KN>|(^^q_S|W–N'q|wH`aX/O|T-bQbK )<щ;בCe)вξs( e`RX%>Լ:) b$uRTnzi], K^ \0} [UHBVQa}7SBF زRH5G!=I40/㎡K%$x+HB>pBGo*BM]ټ9 ?X§@v6j7;8 ǀ&PlF$vcLUB'p!Sd RM?akӾ0S 57ql'9:eQ0(? ViajݼR}߯mw2x-8unp!^.+ "*gT jZc=[>(D0R*:5ytSeBFޛսle-g5> XwާܽHQ^CJ!OHy0@ti}A9%s7&jVWV2T.vџvpZay?PŐeEZz GI+kܯع-pT. . 6)sqY)=?Lȕ ׁ' YCkgܢeӡG엹 ;][ ䷱mlcwC~~|+#@=`~d ?S?ŷm6q0a0.]'@<~6 FRx;WQ 9 ?[%@:ygjy+qv>͑ :X#x T; k# ,G+__6̜7k߲4dW,ڲ!gb ׎ I9x%XrDBT+SYy_u,@$R S0B9V%P zU|+Ob-LA9>ثP3EAXI–{[dx+:$v`~*G)M~>#ERyPlQT9|?$>{_3Uk/C"D 3׳43畢Dt`6 A37fWk+e_nG+EQwP e ę78/ҵ%B|Ξ#w@!z@>X.y ;|5#bpͰ׊}Mu1i>fV0MpG,UV1a0f!_ [ {tsw Nzh ;ɚ'WȱIA&H!=.庍$yEE~;[9CX/$pБf/f|6 v 9~`h NxcMA3GTѕvV ? 
fde#- s#a1`6O#^(2 E™k՜w9bs$ͦ=H򪩌[)s(Ŕ~3>ӜdJŠ(6IK%wיa GH=熞Ӱ;9BFJ1̽c[PKm-asS!g|+ ?V.A4Ow0մ6m3?3\x1)!yw,~vmlcwuFA:D_/ {1?#$- 3$ge 'W| ˄~Gki&C4c`73K'-ofKxIo֤E"v`L<|h_n~V̦۲ml2 ݩ*"gXJ*JJ=Qm 憔=Ƞ!%/$Ɨ@&!Eb 霢J%t"gQ^er^d> uu]T^z s)_go؎ƪ-(R V^_ :y$VQلk$wb+hxN E ̯辢 )25pU&afuHVHBP(L tۀX$E,T T)p&x}GA&K)F܏I^cƖP t!}ow%__z"(ZsHGP'0[ؐ)'|^+'}6/`2[3`?){d+V(R2QHbXx0GzҶ4e^ E<6_YNH RY Y[2Ƒiwprl(}) CaߛG$&XhI0<,#/p^y Anf`~Jc|X%R݂XF{D,dZsN*I|K.[ pc+DjHlĩJ>o9 ݥr̗CRݰNnj 9Y7hGm[M0ZY^NԤt$/S IDAT)*VQ[d]𓊴Ȗ:1= ]fҍNZ{;iX}*'I&k4w%o`1v=$ס#?yj.!_$L{'p96Ã#=,; u͎zMU=8,;F <9X"MYW17h.rDQYgr7vk2  ËcTJp=nIM~i 7LI.Kz v$ *{[$5{X,@p/\=Dp8񰀐]8˸_]yBP!(eYvH}+]6m܅ /)%RJgW@B۶[0~]ۯE>f!OEվZo =j} jA}9xk8J};$d鰿Ca:3=`sEz> rM ; L%ogƓTC:HQt }@>)4|zJD ?Z|]H;-v>J$d/`w <,аyQC;=08(bCQxmΐK6X"5ř[K+5~}ht`rXYtp"U_PVUDן!z[I1>GN3Hȥ/+bgiزgкF(hjRZ\v&ǝj I(SX&IP퇅iQa_[] D\[>j\lJqlt*( ލ{87GY.mhٻrPB4Qj<$X*L5t$ԟB>`*Yte=]BuWPlJ&!-$EߖmH/ - o͵:Kl}5C'%8UXB^:ycmιg oc6LudN]t/}K#kQ??k_UUam𶱍mlñ?N ނ-}Fl .d ҅ \o[I@GH}750EWg` Sд?mln"',T]u!%x%U/D f N你 hecJI7_5 7"7| ; 䪬I&J6@iNu"v90k]#$4A(ƛ:(& `}&$Hgl&}iS(8/wr#HOj)T ݾxŚ Ӫh!6 6 sW`SqIOK; Ie*-0k dB] 7 +Pp*σ<7&!8 {1Ud()-Kr rxj&hX*̢JS 9R `.JArȬ;ݥ[>ej^vav VoVKs<+vгw|9_oJ@y7k`i_Ji$<'qŁ:9. 
jfenZi14Hr{;pR~#>{S_\1 %P~ )b"+Yl7 6j+@򈄎=cn3t.1.#}5VXyoO؀4iU#kn+ENIٻelҰ%.]1fqB!4U]H;Jߖu(bcj.9,tҠ x_{xuKaQ4ȪX}@]>l:ZⶲzQi,**GgAŢ KY;$^]F]{RxgxB;?M`d1[*VxWSC@~  -e (rP~RW#oYHρ\Z9J袁Vȋͻ8g"62v7~G5~z__lmܭ0u;~Tv$q'h}T'=x <>U8FQW-IHɑ`5]M>13]GJ"t7S3_).u)9ߪ8msV]n'.[oJoJo7uSZ~R{{{z[ɶo~$׾UsZ,ϛ9oJߪ?엷{?O~ h`фHC$y[ 6W$eŀ<O%ygyKInz!>v7Kv8H@H>X"9hg:W'Nn^e5ՊHbe|k,' sOȸW됒8= S AϕW(/ .rIt[*4 i_UWlpPһɎ3k,&/3BU==1;^ؘ֑nā09;GN-$j˕m Ta9x\seG*+sN3,9[?,r,0h U:Pn 6:|~!w>eB*}Q| >3vd+t}wUe |d iqmB%| 'xoh75|KMЅb/_|?l:\;y(47ÓJ_4Ԅ L0X'2bx,$#Υb?J.R(QLQ!6FRCÜF=>^;2Z.p`%L)9Î"@K1J!屏KX5ܥLBR/@%a-<@}q&1ѿ;K39cM]=.$"#0Rd1%;ۓK^ k?˄Œz!_j,UեN <0T ULm\4(AI% i2:Wc-$ @0]ԐyYk0Op׍,'*7>̡(6q|ձ s'83|W2r ~w_fM+BgaNfF)QV=ߛ6mlcwVC'+_ ]2Qk|[ߊ wLpoc@gȔ?y~?y]ADStEd]# ud;E:317-~k={sNU.Sx(Ӷq+j%A@ M7&:%$(RXO-:Rn-+݉(E h6C [nϳV>gsr*V%{{}~?1Ur$7K )6 >B1 Mnbqj0~V6py[lFb˪%fzm]+G={j~J8<%`cSt7~ғwBcx f iG]5IeMG p"@oB]Z^g <6LXrJ|c2;06 gj=HK<{$ @RX#"ӰAo7[ĩ4tlw\ @2gz.sOl884fQdxLDzt(,Xhf3Ed[>)?f;Zj.莑43 Fcm-vE@3@?Cރn`~6Tef/9܌ q r}FbNXQ1LsD$Uh>"v\1Mϡ|LVTj"JNE8O'[DCՔ5ɑØV )TOd) e53$ .X/e.1 ρ|xPZ4LU)g T^< wӰ-a4Yd0(Z+ AuqU<=u*TbfbΩwh,[,LtHxP(>Px f]k?YGyE cy%} zfk:ߟ*g\nR K @=opPH*P%"L:О&FB,'q/q(RǛoÐ&6M<!?0̌}cʯ-,bXǒ|Ð&60ϕB2r_}Bow9-#y7w%تw r,K xT2фivr mP` 5}Pbl0$X=J0> &^xؓkC>clGS fZ_TYV*_O@cI>pbETd#C~\'9ǫqHz|"[T#Hm)ofU*{kY^%}y-GrKG:EKl28~ ,|` bDڇO8֋S۫Ӑ  S0^Ϋ #XdZj zf8+cH}[sNїhO)N*ė*Z[ t _WwV^c9U)Ȣ,_Rʱ޸wq1S:4#{ $aDeWXCE1trSfݖc Hع`$qG36xgmg aΣNgQ\Ԑ?~Ы .ZZ󛁜y4^m=vȉk B @7AMz9_ ?Q(f$WЇ :29DKXy0)X[b<_CXb-] B,:멖2㜚Le3@Y{s֧2))6WCfއS2&U!eHU ]8e@hwwT#:e(d:k [b~mvHPF(UBZ%C/+y GV25 E&kײʆ$ѹ_(EH& 9ST W(g긳oqxTnUf/@:;1v:y|P 7ɦZZ_yA^TK$&):s{^Ÿl#ХP:e%_u^^KK$x$oLAsX 5qgr\|3gҸ:zG&6M<|18<B"ex)qQsNJ,!wMlaT*w@#u9gp Ȏxw‹ٷd]Æp+wv~{40^m%\YRMIE +P wǓa 3#f@IBn,KD~M,؅+Q}?VI,A}`_-Ge0 `[|>l{VyO@eXA{WTsݮ ϥI"rgM8}DoM1,GH|{b}(lvuT;} #e貫d.F'Aޕh r rl³@X =T_*I-d!}+oyػwC>s,{} Vێ·4Ȼj.1wv}%d(/f25D6mX`?<_ b Xr+ WB>DA3Kȡ]c[W M_/c}Mxސ/)<%HٸрhA;xWs>|P@(Pa4!22L9XhU٭}7#QZo l%|+N\F1a:I7M-k qo _%1H(GԱӳ.bumsq40*4ի>-}H[ _sIF}NO[cblу-oMCr, R y5p*Ʋsreo!m'/'x:֦dlxQ>DFOizW"h?÷[oCa aGW8K\N!/͑,y\u"+KIX 
EAs\3Z٩%S[$z͓~>҂s~]}̒7%!Mlbx(c!siBc#"|?ۿFC^j%Awk^1>:zs; b铩l R αAZ)3pEh[\}vĬuՙs{۞y9M^?+vG+-;^.4rY/;9lmm{Zu^_/2^?;~wH4e:oVܝ=I:[4яoX30NH` Wyp2\7EVk< `LZnŵتzj|}v~ ^hx"!k9*@ڗu_2 0/henc~`8>z `N7w IDATPIěҗf[;e?I6x7^$$ȊW󢌒q ~Byn=^Ʀ;BzREg ֗@St%&isJǤd Hs\ +zP"r-rzsd&#᭭MBaڦr2a׈]':V Bz5|sa⢞.@#=o3ux򬐒RZ,K@̶Dۏ #AzA{ԱGCF1Jo!Rߍ5ܬ>ץ(Hmb2K!$IÎK~ePZm!Us[%WvІu,XhM|1yfP90餑7Jy5Le!KРu;` (f 2@UB}XJ<Ɔ2N9R&@6c _)7r$`zL)aRHX1ȥ1 3%~'鹌HM"Pn6b#n<8b Ҽ+pٱF2G¬kP\kb+`z- aQUr&9؋7ɰ[է fKo&6PN`;ov|nB>^}7&67_ |wo66V_ރVp \Wu؅Ѹ)]G?M%ob(0~km86wQfl ۠[-/g  yFa^0ŒMYlނ=jKܝFql^yl*%*׺9R=bT"!1*VvTEPR|TaȄI Z*f'pL 39M{Gv/VU1DXYҲ&tm:=ÓBM]$4Xia>|LcTZ̡ <'l?;6NucŊLhا'*U߇D uŴĵ@5t`ȍa,JM_a68`('Tҿ6q 7M7MlbeRJ#[hݙfaFRJ771Tu&6&67N/;n58|[+>tdʥFHMHRU!hwڲLvmC XIE7WO?ȌP?6[!N#LYA]Dmݐm-n]Yˮ+YAuDHB!J?cT]uo_ÞSp,$S'PmI ;B"j`!)z\.n l̫t+`zeOB`"nw:rd3Fzh |k-V C1Ls%n_k8>r)ɋ6cv&'2H0f1wPͫm.H4_xut>@S_rO>> g>CgHi .,p4FVB #G%t$v7ӱ 4@-wOC,CK|8PrrRC"i t%Ω=HxB,V$Uu!f=(}u I*CZf}O 6xe}} a]˩Crҏ ոYm 2b{vIy|9p6(2j,\3{05j`}= ^[P}}?=u#UimlXb&o8F{ cdl[CNρ;̰[q7l}}QY`YMlbofW^|ſEU-1k<5@K ِ58*pnkΰeu3w_,ی|}lSכ7O?*жY,FyȞheOU?pێ>Ι`ٶa`OsRHj;rFyTTp2VUue8p ̼br= p$yx-6*F!]2\&{{`Dswl:}9]kQ7=|G=pVa[IVreh@^&+?tsOT@S}h+)/R$ @_̰V- l=.2GN@gX>:H&M@ʡ^~e>0M*Gj.Tm/p%}E(H_jˡ6a[П-pʢz?|}, ^ ,uu Q o")ҜPQHJ%U"*x'^Z#:>-omC ~NHЃD}`?%-Br\JϨf>aq0jUB!t8fn1xZ0 qj jrAWu5ߍ*+ C˚G[Ve3/VJ@E+2#q(>wt'N[^^4><\c6*<2/sxxV]q!섢AC:@*l< ;w^;K#1tm Dor"HiK sʬ[,߫&n;޾9X,>S/apϮJuL״^2n_j!!n-hc[2l7u}E,NWF+ͧ6#Ъu/O0I@p*E/29=܍T?&RvؼYb3F{j'Wܸ(yX_!]WT ё!_-ؙ`\h} {9Y3c9WָD[ZUy{>rBY8FS$ ųîZ!3e MF{.![hplw_4WObtz[~d}!27({h@ c2Ƞdž\/9 yAM R ~Pm^'K#ov4,m lF)%ZkߑΣD0; @+2ڐN"΃^\CasKN~FBl&ѥ_Y BYֱ1tL<ŝ~ " t mG5],p[bhjlPln0&jX #IpXHhr[@y&ep'#ڊ+=/4)[&sGZ $U*&x8w]j-<|:c[md^1rF桠ѧ :\&N)kc@[ZnDy7M6t&⼪7=ZPH8i,3r-sLZ ! 
;-bL}L&봚ĕ9^ 98M.:%~Wc_ ;UMj7ʎ"^yUl{ط̠+)NY}ZQi(bC'w?LquxaAr:uwӸoU&6MaI;lip?S?G?QldȿUꇲӍl~03>R{ڶ4Mlbo-n ~>QٙDyn {5aV$d` p&6rJ[rւ+~ֱ'p/A7 ~"i;;`j|gɦ7W~h9zk@4yVePv!cz ȑ@urS=RvPwIW{$d˶!Aq/JlNw>aH: u198 ^cS}ԭs0BaԬR;췏hS΀j(WxxN[ЩêLV]3ͯ P}_, @g컑x 1ZXX}b''DF{VXgZx_$;~\F0OB2 \p$حKA`bmˤѐȉ tŤZ%X-ϑ`P+XxزT ex)\8Pr ^mPlX`+{聥Gy <(fa[1_ '~<muhyxxv0'õV6mg ߋ1<[%Xw|jbYh~/[8[Y  s_:WVס #qP,E,3\1+eAR]ԩ*KLb4am4ūR,IHbuYutR^EMlbؖH`X4$y _nW @7/HOVC6Ǜ9OMO̴;%W:p|ol*}xGJid`ZҧUz^ Hf't{[f\%nGN$M^on} 23 <5B0YGj/:݄I2 cư^ pm}do,Q6_dܥb˘@UW'.ʊ 9< 1&DT]nZ2aH2XXUkds0QcV'&!nPwTnz9L?',nWoæDtOӢ4 uCKR%8NЃH/Q=\:A ؐG@)eظoss{Hz8!R IO[YAI бprڋ /(4q' \=Q_`]gf֠7aBn&lJРg z"|n"= %##>5W@NXt`^]hI$/ g8 MFBEwzCd>1e^#GGvU뀄1МD P1*RI7W w|C"<ꭇZ <|]lZ3y@ρ\9G{W8䨲sԯ1񣵬l⧜=ۀ3+ot.9DzkBbyd%D٪+X OϤijU8$}"r~\5aBS#ӹ2!uEbgSk9+M_aû@DقlR?6BZ% *"+]<_K emZgk ҢAw$ g)-- =xֱ )\uWt_N*tʖ1hh1*PMsxOXw44}a˦on>R iҬo207Mlbm)GsoWX\>O "/"&67t fëYHO?5^x,-_DtP?#2 ։q!UJ?QU/Kn0 /DY?,DB IDATouݩ%e?__T}]n׾~r]?اe^~zܫw_zOxݻ~wzwGX?>0*" :X4peA_dAm=6; JӇ׫OӕkփNg`e%-UM*h-Vhp+=J__KyeV}{o}e-ǧ]NˆD19gP hVD`Բ/k`0F(0AyܩT@lD0\cT]J86|xAk>?_aJaz=!TؑGn9`E=Mz+UFۄdcΐmi)N) 0|^+Ú$&Jim9V44^Y|:ӐhƵZX}Cf!N[#pJH"Tj;% @>1z§Dj"^Q<*l^=- 3j X%E+%WukKfQix-c2 2+ r^}GF[ X˶J=߫RCØ7vy [$7ڽ/ ~WDSW67ռ t!ef7i&62zw<==&+.}c=in)|w}GGGoMlbW$*[{?Oa6)_3LpҌ]^b}{'DZ ‰(!]7cρ ؟e!r>?eЮ!7;?qM3IS ol]UW1>0T90W &,V%/[ lCӠ22F@Xl*W iJ۶c{58e\0o,5-H; ;nU߀ū;N<·VV<þ]TFS˺ F<$IZy}!no9*~^^>eiO S{s2ɻW ʷM6dS$gGDȜU)zydHE Uba`-[l'72|C}m*vNm?ߑ<>'o>ڵdO _)_3{xYZVuc1;b0Meg<`jF}HK'5\,w셂(l0H1&HӤh$%Dms <%R4~_Hn{Mt hqX8(K=eu-kh-F%/pAZ6X eZB:'Շ9@Rh*k^!9̘ %}qSjnaũ.\%m+,4# [ ;)nUn}*` &ҍI%w|2l 2 LdU2%M<ڦ*ڗqmJ7/qRbl݋?\I[XGYX~IUa"żgƘM*We')Dygz`ϺSSKd)ҶHJHpI A& 5T%R_"L$<>Ӑ^yHB[fc8-r5Wj)4aw`F:=!eS^fE%Q&'%hqډ݆yH̓Iސ4Rh}h1aU&6Ml⡌XJcN~W~w]F~بU[xcL7Ĵ[?/2߁j}S=Mlb [l_7~7(UCnؤWI| σ|^#H' 28SAd =Unџ>-/(ipK3rBgNX* ? 
lbЇ>4:n/tv@=a ];H[X %!٪!tG 7II%b3Tpo}&vJ W?Z'x=Wwʂm<:Z̥j,lp V07%/*eFMF ؁&*?6555CqBhWJG^\n|ITrI`C*ن $<8);dJ%d{h=Jˆ~ zgWOFVfgT8V~9*F?/ eFtI~Gu`_إ 'J\;NX@{lY1l7ZXZZP!(ʭV~^є)鷂02(XȐ= E ka;hP (#s2'CB\^+0W%>Q$޹eQ@SHφ$8 ƌh,QP{^| ;ejm8Hf[s$6/hZZrꡇ䊎j! UE{1y *>(JƅA=wlB0Y47#}U zb]F%u`MmOi 7Xv nfABxA.]a&=u34 4Kl[f,P!L) d%kԹvp|3=~,p"xkhLߩ I#y"PkË^2j5]"17 5GZ^AHyX]ya_:2cICHC:\Z$e5~N`,{uJ@]UIN;\Ʀĝ ՟ZW7ElMlbě2RJQUz)>яs?s4MASӟ->жXwڇr6&6?{R_>Oqڵ[ƼW5(׀?S m=-;z& ~ӂKk'?Mfm27r|<ƿQA0s )!+H)}5[ ]Hŷuw}Ʊ P+ai5>sGsD-.< &|S%3N ;p< *X=( X];GCRk&LGydQ~*jBC}jpZ+k7*xN][\- FE{vo:ސ0|Z05e88Vߗʍ-=KzJuaSR1z۠t%F:!M8Ѩr<91bQ >\wQ s  X`.+2Srfe[FB NwAɆ~bEH<9a_= ;IWbA왞"Ǒsx1Ѡz b|C5y_uP} :?xeno'袶X܇ӗ:kmucyW=ď q:1Z耠 Dbc-!HHK|OHiGG"lH,ZM"ة*?}{1s9V9u˩tιךk9= (sM=dM[#e;a{v,~ x yf\( J)J=7d<+ISºu(.]ǏPO,Posp\i\yq CTW_os g3H\ywk.3|Tu mO" >tRbFk~зy8tjcT֎U=U-GdPuJ9u0K1s`fz@ x0mxwye_M$f,%.g+UǜclD_ԉ{9 :/U̇xepIk8S䴣jOY( a#*eЮ`w]jWoI.k͌ɟӟ4O?FTAu}?T>ۛK/ć>!~g~~)ⷝ V^ծv6DxOO?f3 `O+Mvѧ G}zʰƿ8tӎ|ÿl#^c{Aa /O}X>(&ImǗLs/}ϋb~;sVķͽr?Otky۵6Οnv۹sk~9ovM=J੧#")/l< b#2SGt ʏk!Kڌ4BUcRk\ݗؘwdYkXW Woٛ"Cī*V}zv5!ekj$x򏀽XfsPzީ6ΣUFRyT9@a{?;h#ps`l8Vs-XΣ3Mw@k12״O{؞!`=aOw%reN,皤sn5x6E''2[ R}o%~0s愢9eܼc[^wkɲN6MOׅn83,|-Ԯp1Gu}̃뀴wKǖq;R yp9g[԰_/ @|'Z˃ע,;|;1, W{85=,mN]:)?TqO ]b؞1^X9,9t]ήvF]v]MY>l*ׯ_ggn᷁P Z{[|cݻ- m v0ծv؏'ğA5[Do)ׄ,}MpxxRi/ uepntGA>n>ik׬&*Cv }B%fכ-V=3޵{;0~E"X&LJTfՔ\N]0l`IArjZ--c_հ!`K.#'!b;\yl^zYR9b>ko%]}pϓdQcީW=y.1='Ui|.s}vedWd,aWaOi+^* Y@ߩԹO(V{HP  ]HC+j@5U9}:.Z .3t)gi/?']u|VwA}m; +yO^Px7xw@~O(O{/T8ZevyY-2TuqxR)7;X9΂[I ؔrhP<0Oe{ͫoHرp[brs;Cq_~݃u亢7y4'߷>9yW r _0sj 3 s[DTT#-Kd/|K[rq֌ウK"Ep>܌)i(Jtqd^Җ#2qm{b&=<0ˬ6?_Q/; G{.P@E3݃ Ԝ3=)z, +HkDu AIVa R.utik>Ru0Z8 mm.>1//:$YB(o5E-wD/ȑD=xmB0#@- [FF=_ڷSw7՛ljծv7c] jZ+O)>яSàVdoK)^_E̟3|H x6[;?Lծ^5lǿk׮mwۊQ鈵LtbS?W9z(2Xӳdi;D}c> _u^ezmKum>{>ؖ2aca ϚP!/]mu`}ISύ}WU" T/ 4s\8W|q7T&{*c~ *UxZ{Kj7u{8,,ڻ#ԗW }ysVH '頵@thHHAڐ-}ndj^d_@fv$4j*~m6P-^ӟ8>>(f^؅W\sa4&O>5]vzzz֬ܳRCx&unOzR)A9Om"`ð2=P{JthUgpu{]{,ٛ.r; ґK}Α*9nr0Z5=U,@f"ޱnǕZ pofJ\77 +m虇j5-yPϐc6`~,f0-o\'C1Q >mnuM z\Q}UU>S`ۀ퍼jW6x[?`c5<¶= &PP s|b6/%="8m~ꡫif|[5`^ }PE7 jk] 
3[HZlO[nRY?&',#(w56-Q[Ͱff3ߦ" 1`CEfF^|$p@@J_a?$ীy YZ RPj8Zb'=U`4Mk]lT!p>mfȋ pfHI*>5q>luMevͱ&{Ew,[m,ʽg NGUĆ;76e1}pezkE+" ?2[{d ɨTGgkr~] לE8~/Nu`óJw L) RyKC!bq3Jsk!.#J;[ qǠ}_=1Ȩ^7󶓼2=.lX4@fIͯ C .RQ_6 Mb5ݼ9$s5Tu9eXGLR%"/"+AV 'KV9 |S(_߮E\rm!'sIK|=3k;~;S@y@ѽm2]upGv/ξcsSc9Y| #RBsL2/H:dC^ _Sx1׳ ǎYɚfWz.C~Wծv"s__KXnVmͮZ6K3;w}|3<7nܠBU݁ծ^7ưZ@oo%2]]96dēX(9QW&гqdv3}&y}c?c|Spq~ۯ\@ ǹn?oOo;}rxm;}@\>vwa;3~X{ݦgΉvx9|>On(9w]G y17)y&nrJ+3-ZVmS|wE TFX:h؞ vH1DcP Z;pȡ {lDS-q AZũզ"⽁#{{=bP> ST4Bx u ;Z&᳔M&>_0-gxˬuDrd;eBz@CIF:ԉc;61PM hg|.<[QTanl5If +E{XEV8n]TY{Q:hf_kiꔩ֟wlcSO2&A8WA/ (04"V,?j{U ^¢GzP*sm6fF$9JhJdOMRM2'a%:pW3N!J<8ϿoāügN?;rUkj'gLb=g+& >SU])52<7.^ǵ=ރ%+";)*$ ;_Nbja걥qe%To!l񇨛p0s9P9=b=">m ߭ @xFdlvNEm_8x_8Hx=; 7ڥ̔z{5.f;}˼UE1 R;g>o")B] 62jWoծv7e5`a?ݿc^oX>, )頑@aaQ~_eRUvjWD5Q0s?s|C֭[dBЕ&rA9p۾vX"auXXb°;o[bv'|rc}15k O@u7š7UrNUT#˶W{I1l. 9J:m}{$Ts"nז"a\kSlߐ;fɛY^M@Qxؙv%k#}@s}mMz^ Aan9B=4cL-H q}Ca+~Y27jT_R 99'%@Str.XO-$|>j)94bF~I5JUX;v4gsBh/ 7 ֏e㋏N:˴g=dƹ2J?TKr&7pDDMNQ+Vȗ PXN? j,,z\ @_>k;`ùt՚wfֱ.8J- O7dgmL}5!;2V 8y_*`l=N)V}>.ۯfqЭK}\ }G:{s :\d/ XiR(EPxܰ;܈g:A֊ K/5 ;;~B1} O4D;M¨A U%ggJl1Mٯ:@) ~`GF!+!"!/WVg[ž)e 52mB%vu!DR֎-2D%k5=@mumNTr]$SYu&}KJ|FctٍtRYaF"ㄠvfHNQ꺇\{}$D;JL᫩9uco5Ayc^jĘZa7_k~JOYfc ntw{87jSxɱ/B3wA_ӳbWzծv])kM%Ozgy0:Lfo͛7A>O}K~;y iikldwF2ڞwO~tz㿏㮍zU]A$Sjh<( ~\՛|!>zAlv3ޢ$`ce<a bi.Ƞ+`i'k]d=%ܱyXyjTsT΋Y(p\0UXs&HM{A]d+PPK+[%9|Q}[}qN^qf@OW j\Ϫ&q+)P;, ٵ! YC]/|RuJ5'VTy*ԄDDr+55SO [2D|[CrHD^*=3'h>Ԋ-Ǿ6 |/,u`]e A:\r50E֑U `%kwiZšÝ5~\л[, V Cς.853 kG1vK"^[%px8T' pN|i6&[5;8w൧8y͗ rqNw[pgFw Nr+yO0~Κ`j9Vl /Rzns~d|p%G VGKw;U/uER{>lo1=="19o ^'$6'YB >{9kj:,0osǤb}XGB WRY='Wc;P"?-R>Ke{OI8)M<4 omF 4g5׍P/SõؙYyf?!=D(3OxQ*_A N67+AM?o&2m mdȐy£F՚g'=IXY{(zc}XۻSYhX.$ڿem?+g5]=ڴl3˖>2OA G>zehA 4=}CCG2Y0rE?UҲ]OZt`pևu;p{ꗀ/ ˻ zc˷W5q{r^>{ue J~s&/BXk>%oFze| >p7<MOߏ93>'R$9 o/Td×=EW \@E \Ru`)k06yut/ambˊ-ӊ+p p_}Er]pB}X,N`r5(7yR*c<\x=COH\A*p(ͱ =A"n9ct{1ڎ}$Wd<EgƘ0ka\6l[ g+#. 
V_#~F?6Tƨ {YȦm\sH AD%a(Uy&mt#9$Rљ!{Nُ*Q~kn89˵ (2s!ɗC  .;C$9HPxIe<_P yj{G iDG;{>Z-z^ደ)gw \|DCJ9 )f#FExyU;v],w;O~ri ~o#S>OSOôMw$27>"N_՛>о9>կ~upko+++};|6xؚ;^vu׶g;~eo~r4o~>/36x}mr:kןzꏆWm}3>9h{eVD9=\̶lt3w׶ѵ\h"%[,ng /%`a,vb*e^ַ=>~6҅)U %Tr#S>A̽u,G `K'[ g'~s N`]:J¨ ξ$ $Ah er .ZlTF&9˃#͒ZԖ2}̈́qk%2Pkn +{PT _ ZU^yj"/ p l |opn@/w+e/  rwCY;PŸRߩz~ >qoجzͰofު)~p&[䖑02.9NO>۞'X7 -]:tB#yN5H1 `a~^18 X1jɥuЦSh =Y!-r|%oӘ*ؕ7I;3҂w{Twp-);Mk&C,8 $plpJ ٦KOH_ T_4B A;m9dd8 gA;tiE|a`{=cg)h?11 WŃ4{GVzd[eJ \z805FsskqL>Q|8O$rM':U0";+P*,*r~b ʙ_N@O{ <. GW}b[NTͳ%g؉|3ި078+f} VTG Y)& dq+z_u~g\7xm=~]nWrWծvP'> ~~` Ҁ]|5@oo|ɟh?K)ѡիgDjU}py&CT la)e߯\¿vMݙfoќ~G~dv'IǬGer Wd0x I=U ~P{m9ެ?[Tp{zSgI"Rr}eHZI'"C(H{S&IxP߷¦]P9-N4eLm5sEI,6u#!/::ڬ˳ L4cE(SI?Ȃ ~(&iv1m?ݟ CL؛șAI5L:F8]g7,nqxQ$MT^7'^j=x28%dBxh|J8NԦkmC8'b6lȺT"@"ovh(u]7TP݀T..&̐.1=DztO bN8xgХrAI/"@k }NjDa^FWo*mGwK 8j]PJM1eKGI\[}??5"h(#ߨuC. ZRro\10s6 Ӎ^į[BOe + %t3wa udˈ` To $ m=_O< *ERXNh ) V,t޻=qG.>UY60E:vT h T_ zȋ} &-ू t j*r``4z؄왅 (V@΁5+R$]΃HenA6(@]#`x&q :B7@ޅGH.[ly>@ULܜN6,鲏4@>0pWxpDVp RIFYwX H:Vu,;2!3 P RH%=Ȩj7O8u !P ڧӗ%k\-E&-BdP! l]+~_kb~χ65 J7ՙ{6u8qЀv O7 *𦚮$<8iin :@<]]>6o{% ,mTۻ$:/sY ,̓0SJhJIp ЈYٗ3]V*rAX# (LѮCx3&IƽM? .g#/[񲹽||Q:Z8: n9܌wo3_v~VE3 !{rՑ Y^\Etw..",5*Aי#cڣX˭PQ(Sd/؜K=nCP{,T{ .ҡ_ ߃Qr>"v_, Ju߈E#Τi`{Is~^DHmjgY]jWz(]s ή.ڀ3|?9ޯ.Yo#g3o,TsB 8sTCvxLi뉪<۩t/;Nzi?Aq{lQ2@o.So9|AA Ww s>+}ücÍRPN>5Y-27wI1da n=mųr"[v]jW]Rܟsݿwwv֯teb@y_]z]mx < ?3?}|k_۰"WU~v`CV9^,}{c?s|7?:Į^?\~m[nqL! j'xeX9S;MW5[NvM:v|m 箮J`6eo& 2Sd!j8=4~P%@[@t5OArvkJAV3kMX_:yC<uA3M.J u؞Owd6׼熝:v. 
, /@ͮ z%͜ToF #>% 4WK|S.F&9`CW{ Ѷ*4Adɶz)|t"rv#CWcV5u^+%1jY`,c1>+[o-qE{,H@%Vu6jA$w~p7#2ڰLg ~$QH=0M$i/\y Wsjc:WV$-cEA'HfimZ^:y˔Oo~.*Xg 0Pg+_;K|nBe N8yu=0HqVwCfkk`7"J=h$5^Vm=@:,Km1\2BЏkZ[$F).4wSz=pWM|gPo7{հy}5_ Jn ~Fb%3},fy/;yXs*Mc;38~O8C\s`48ym$X*"Aa5/R$ٲRG?D]^:Ofa6 ؁_V<}[g}8g3菈P^to_=ޡZM-"#|vdWծvPVˠnJR ???]|; tmv`37|'Kĝ;wv WU_W'>ww~w2HS'?!l?cʯ >j\$;P~W(W zs(3MqnX Wl{a3OBVJ @+z57=G=:r5U5_{(V`PTo%ɵ8#:%SPFAv)2 7Mr7~&NJscǔ򶂾] %vAjW l)\J kwG߆5,&jl#(_NF0^j䄳[ vmsd hſW"s[u.#c$5*9tʁ%65<(ti?m޷}~b{9 ˾Huz0?#5aB\kUf}d'926āZЩq@mHly`%@9g $ a|NgH1#o@a>CJ$^ =ח"eKm$d~v]%#3%g>܇)шȑcT-o&x;ZLa h[|z6d ׽j l}sJ k eJ*I1 5_Z|X%\qHR!5EW9Pln7s}K^,_@4]F[}?l'>J*[@Q`B}l[]CcYsVy#˗ۇ?a~~m9v^?~` ge \26+1`*G+AvvwAn+TȝY8LLV"+wh-uY WP}pO}:GO/&JzPJؐ@%'wpYmjjjy-W(&_ZF(Tx)j NJ6_\j7:7XD{[ZS†A%gsV_;Ҧ8|,/iXwkEkE͌UlZsCg`1{X{G^wX_F_7V@JnkskdURh Ra Z!LM}E7|*l-#{U5쇿I𩃺̵IqW3sxze{/r̾* Ra{X=ff,-V:Ɯj_E$%cB,r8%2&j Nۊ<h c<-%R[WSmmI-ٱLQǤ$u]Uk1a֬\(yp]{WZky['ӳXilx/ ^l(un1)uJb>v [H]hY_l=jA[X`$ %YT_%aR &/!t~ ЊA+#*N$Tb'Zr2ʢd%&WҰD J7^O* ]*֕FQT3le NA'肺M̳ɒe9$G-BsnbӶ'wh)``Hpo{ːN nb7nTlA+lzT5f@~9c=G?Q7o2)y2v*pttįʯ ???`1{Zeōz/DyxRn?wO_+oWp(<'v ~~O}S9M?om|o}w߻fxܵh=t[&4v8Zozu{׺6ۻh{+N{^X7+fy#d'-o~$dba{R > ^r>L 辝sT[ԌdN0S4(ChmY?O%|o'n #gŰK>ذ`9a*C;W- 2tJW$lDzpO~x}k^(/V2"s F+o/D{G,[q%} ܔAG^Xf@igRC8k`ml nܬϵ=t?r9m;HkNjo*xeFCkb#;$2WaJqv6=pe%^5`$.PlF_}gIR؄GOvvr+}ou_'#ړc9qei|jQO )5u3:qbfs HҤS!lV},{:Tw̧izf^0Z~ 7W"~Y|_(KA ͐`Y,8Tv%S\/"(^ji7Z\Fe*?:䟞uηe"ݎW HCPhy/dy!|OVbyц,Y&/*BRd"QЭSP B(늬݂\+%@W:䂡oI6c5咫^},v ιR3ËZ*BR10sXرy<ƾ3{]B- i!Xz|? 
, Q΁ /9ޕ^ &a~_|kY8.xR3_1s1s,{$~ul6~=/”RNy~%]۾7/"~y+_I)b1gзl9=m{; O}G<ɟ }{yGyg}0f~xdqE;kk<|C:Za (ws?R( e %7ƐMBԁa]x|: U$ %p#gw)bBpZ6œ!+59l0z+xFke SzIŸ8 qsT?&_rO3'mydvұgr$p =#>0X xU!L36R/1 ^\djO @ŁYy'&u/M<-'V^hPZ]ŌÜ!xlq${,;Htϓ샬 ۾D`saR-\Y 5o cq6{JU a踚J2O`sm~ [/z_:JlbԞ>y>S^,#ؔLB,W}eА7m A> ?h~} 's q̀] J:t(Rx/4dkSH[Ӽm{ŕ85g[I0 IDAT>4UƖ|e` k#Ƣ&BUa;5H$/T[ +*2qiufq`64U,<,dgZH8(6/3c؋N08UEc_@J^%ʡ"`!wqC-yeZ*4XR0ͼ/ Q.`:ixE ZX% spA)tjbNYҾ tܓ+xظ 6DJNbV]JHa/R/uvpxҗ};N 5+2\*C(DQ}ŋd6\LRzkD7W6, .nS` 6~17B}%ֳ&I!/дAȅ 1~ </_y:ӡNJԎq;(\`r ~*mq^o{̀s1/9޺9ga:%f//ckk`v0l{s< 6>ߪʧ>)>Oo_ສX,4n~Y߾Y" XZ +WO|~x'`"nYV*9`#g\yԧ>^ݓ*}wcO?;wzò'eY$]or9`'8smY,y ^ ܧ!I$һ@}UV7U|(|ac8,_ ֎[eᕭ..S,$7F5r 겾KΊf ى6úGFO@KT41+k^*}pEc6}I6ݟ?!pPHnuK0; y*!ha$sxFWAE {I]r~rS-Pԥ$;VMel z7O*TP^m@YeOJÞ<ݸFy(em߉PJu(׌,s=KL ݋`], 8pud \R | JYcٷgI1v(S4]Uy \eE°2s}wcQc ܓ,%MUZ'q܅/| u̓ qۋ,lȥj`,Α@NnCcq[h~_L2Cp9Ƚ28sRa*_'ºnc^3ʾaƜ]E@]C?5k\]vT儚 hXN󖸗߀ť+|~޳b}s'qu 'DhSkјK=/ ; p-qB1`M{W)pPnXt,ʭz6r^mE U(3DmXTB\_&ʑ!?-&Β>\XG42 G?֏agZ?@Z jL$sﯼ8@LK! =^)'^.xCm?r1Q^pO e P*ǜ,L\ y 7x%*{}(-J{:lem( in[TU ;}h- n$/HU/&̋һLwx 1s|mȫz*WU >c$d|x@ۖ~:ooW… [t_he]:v??ӟjͶoԢ@TM'r]_vieRs`96 lHr-@ғʤSe;~E_"),L Fߩ?yjjG--1X%2[q:_ $11uH3b( ,ͭ ֙2B1hH{ cod^fXgҕB>Ih8k* E`7 7~~+ctACȥî aW5Я+[3%''[WXgԕx Y曟ƊOKqX]S[9X̀s1/ۨӺMW}C -H * 9 "A7 l ]m o@nbȳ@96%~'iL˹n[oBNuuӠAN02r+*}0TK乁|])aH7@UЌFBI^ՁAfļx{`l\ǵJejC?!uN;2sm0 rtZ%- ^@V]kLRe4@ҨdQ$B:!:^&L9%gldzlyfDV-0=d@`L}El,^h1DZ톑 'OqʴWFn]"*&3 |铯׵qJf^Edԅ3Re3>ٰNY|uUQFn7IIaT6S#wGd vMH_1Dp' @z <K~E#"ȰP8ԥ"K܁ 37s1/tA# 8_Aʊ^?瞛= n"жۮ/s Uoxk_[Vz!? oY/r|3 .]'U\qЎ73Zt|Xc~G>>Y[302xŽ}__'%RjRkP]z}{2fOon_9.x\D2,(˞~2{?v94>Qp"K 9/HFc$u᠄)m$ɞOgdNG~;RRetzH}D{_]|݂';%@  ;9q_-q&+Ml ])Z( 'O@$j-yzJSYCOl F0"\lmX;Q@J$/Kp(K[a0 nOJ i}cBvXeB\pֽ,fi@62[;c>ߏJgIQ]ֽ ˋxBq#XH !?Khxkj}If؋UxPQ ^P5% tc'N*, nIM LD!A? 
GeU ~|,کTvt Qz 5Rm T9lr"ga,dPCo$c5phɍ̷ H٩Vͭn${߫ӂTDž=66k,+g#C"=+6!x퉱n"|QD@]olZwdR VE>nQq欹^춘ĽZ`~%J ,ϗ/Z'[Kd$M1X\ Q\X@nt/7FkA }@BDRo^z ,/y _S`+koʉNHLm$k5π|Ak&l&B" /SZs0+ {}mn];V8Q,"9^!(UL*>`P{rQs1sL 9ˇ\Uɟs&ZfU &wڳXwO>$O>$[EΙs?<Mo;u_g@m^b[ks}vS?^kO?4yy'y?SO=u{zݩY`SO=ŧ?i7eM2yjukc_qnjL]R3w;DJ+|]ܪPg}^k=߹;Gln{^y:6c^[{p,$cY*IP0:Px_x0Md,StσAItҁpM!gȁ{ʪٖ!-ݿ^%m3geJE UCS^]")?XUͳ¡Ol@zUH+"&l}IO&' )BZ%t*Zʘ֊kB-WKkW(0+}'!I;R/ 6', YlP\ߴ뮏lV߶w^r!#XqM@D-+R$c { 2 ~-\6ڙ e?)$(v^P] W*T2HQ=*L(ǿUX<.K)rU,Q":%եlޏ\i Y-@Q* m-)[|lI*~?c^#waT1#d6C|2Đ+Q{1!PR]oB+Ɗfg)eS8wn-y6|dT 3E@:]Yadۏ-'eT֌C*=G\[, ɾNJ60YGLꅲA\PC^8"4lNIz )eb>93E''cOt,'߇d`v* `{A.PnaVA 3dHΏS՜Va C(Kr3~EsH\&A/\&_d0IȠSa2*j _fo;YORlk3G;XL[sX|90_&P#@\RPq)d^12g3a A`E^ e# t  &B^%R+Xsy'{1rJGaQ{6S ݰ`zQKϜcϞ|"xNtÐ!`kji UYË<)חt! ?fsn 8cQ<<|*P倚:y!u[n"]Hc(<ɘEl[%s1sl>Cqʕ%,vAR GGGxk^c?cy0h~-^CDN޻s cۂm9 30prr'?ITG}j?q]'?3uC*yӛQP%o<70 =!#<5fFww,˹sqf6sC}Ueaԟ`gbV3,XK@gEBWLyTqrWf_MG c~c2'KR9ZH,[xA!rt U̠`!.n澶og 2!ɕoUveRJ侬c5r9%kTYL4 ]j˶5]( b)-ME1dLnibhoLJIv_%T?ht}at8 lR(&-SÆa<AӴeE|d~(+Oa*+qR ^lrMደ'؁ I`76:v.- = ?LV U{uŮfI˶lxIH?ݡ0s1sqV @k^A7S]ͿET&at.]d, xrμ-opVbF{_wmA@۷_Wx'.]__=Y.<\tyoq%Ͽ7fo}?2kqƯO~{lkѓ9w]Imȯ~1lR Pgu:Y1(6V>InZ0u$AZ WՔ@MbN(v^z'PCsƦd.?yRHִ; )$S} U kr9{&|k){]JGX;g* /.]d +2RGFZH{ei>ʐMv4V}M$6<&7_1;'p0ľy('\㪏'RA doF9e b$U?T[ۊ # =4AN߼OHt?X ;MǯEGLԟ2j`[ZeHa?d[ׅt5 ۄq-`c 2, áX֐dRAr0-7Z ~ t>i=~ d#SJZh*uw/0뙀ޥ%g؋~@T| 1eB &2ߋzw^m./J5B Y,eBR)~!Ko@Z䰝HWRal>`ic7$DVv|6gY}'ڤmKJJ1.| 袠W6꼈-VW.\-?Q\o$IC1M_uaBDnt_3 ?s1gD+_\*1#?#<8ՊaFٻymZ݇]g>Cu|#ْ^,}}׽׿||WUݿwp嵯} _x(,us+_ ɍ78<<$fƟTqO EYĻ|p~n|[Fwl>󖷼O[c}ߕs3?3 ̿c0U*İHAF{d&H0Tvuvw:u03dU-[f,`W9IqQنJZm0'#N?z TFI K:9N? `Z'S~'J:.s;4lxo-mBط:3% d}uP^0mGl* لٍZAK*"5qs9 I״F~ QLy%t`E^2' SgTgЧoF1h(kьrMmbΉ!l sJt3%0IQHi*3y&'9/t\Yx _Cn =̍t5pkO-%X4X`1t5Cv1Z9! 
[binary data omitted: raw PNG image payload from the neo-0.10.0 source archive, including neo-0.10.0/doc/source/images/simple_generated_diagram.png (generated with Matplotlib 3.3.1); not representable as text]
uUULZrk)S[t|umfgcYpmE׮eҳW.x1}})!M(32Ӧ67|u11`ΟOAr2#gё?{5<>j2Οa7Dw/_8ti\_~|&7w݅[X׼jܣG.-my9n?vgd4X)/ga]waagǺ^9s_Lt96Jٱ,wKJHٱJ2Kƍ#pԨPmɺEH۷I `JqZ9~* (^%=#Yȳ))==<;)\&oS^&讈eСmV#3S?Vb;ƮVZnA` ;;w(\jE*77rsc[I *?mEZ- SRx)5<燧iR㹔^KKkx5$?!Nes,ʒJTj !zZ4n7}PHjKɣ gg£ÉBFB3Tnm-5i[t6mMmE^ܪwꮂd~ Ӗ-GQj*}>ÇI޺>%(&M꒱NYСm.}it[\䓭n'dX<<8jcC@f8@+d5ט֋1n!YG$;.?$ -/^0]?d;U>^E}ɓw}[+駛ݧ8vX^y%/FF|x8Υ&(*VOb,lm׏+W=r6j++Y,;y><&˙4 UuF5ΠSd>^hYmfFd8}M`Z#"FV{y駉7*ZܟQnBZ?tsAuBCjL>Fdűv"RwBm4n< ψ}Sږ-ѨT|ݯf-D΄ k r77guT@S~>uѨEOP܈1ph F,KOVZvp2y>(വ\GnFA;vpwBUUmgj>xbcMTx=$a$B!ڹ}zB>688}vrjVvEEjky_f**,UkY;98gWscDzL ,\]\ޠka˻ )۷3OL~v| L3W_ /իCcac^k6 ;& sY6x]qB;8kggwht s4ޏR1OkztuuEG3gqick''* \giߴ%Ӊ3ϱQرLY|+g 70q\ l\ߞyp.7I*ĩ ❰0nOHhvrr5CXާ77=II sLfxxpáC93DzXDHO`qs;Ixanrrx95#UU0XÃ;w<Zw23y")VO}ٙw M!BqP!@G++[^nln:ۋVh,-7-oFŅ z}*49&Uח)b.@v?l [<<nTI`ܹL;D-_ΪG#"ψ׭cm1rL:ښSneXEa!n.Wbdz~\].)A5 O܏[h(׿>ښRog݋/aleaaJ*[ҙ縱A^ˠk~Crf,\[DYo{_…۹~-SfdKVmt;v> 6g7oF[_Ǡ~I&c_`sX^ܾss 339yrmr}lߟ?|FKW}QjD>H$IDAT#0wq #?{^uҒNcưO@0wߡoflʦ\KPkH\ %-"ӧ,4א[pqw6JZ%Xzxpn"w3gf)+OV::kSrrX g%10PRKJ4ʁGGOܔoJJ 'L[ZG136ù/(X]kb{Fedsܰnb֎9χ/۵kȨ;jܿ_g =77z(ިT*_Ƌ11dI+iHAA{ B+QXHBqgx{-/H#͍uNJB[BzJ?ϷϷʊA3fߣkdn3x0_{k9f |ӧݷ! xB?С۬&!n=zP:.]_Y +1,9|\&{#ܖjso8t)f;̜mߐ…3 zC++ 9W䥧of yz;wQst 5m'O6:@ؔkб#GW֭bʈ> `puey9*e]ΩKNoH+utp bTfty2k%yySƎ} 0KK^pp۔:;oqMZjkWǎ|7\%z>XT+VۓRa8byStsebɠ34h--1~zq^ڻ 7NtǏg-=X8>۶a'IpڄBΡ DCCNuvnOOgŋumjzcSSRcV=Ю] +T*֤V|IL |q[[O֭ ĪUbl|wwފN̽|1VVtVƹ>zxp*?}YY!%YVC.p (QJ/-eTx8J7 %U''ipsGGHC^y9ߦIb"Y7,*"x櫯nngBvlA;i\>O?eU,9v[  4  UӒ$F[Z<( p3,s x'>7hh/:*q -׫W<)I##M&Ilf(Ȉ];wǎx߰ /.9kOr^tr"gO~UO;PXuM߉lHO(#*qA;AD0ťFnZz8!!h7oFF:Gfw ۿ?? 
'ؗ=ѩSdV^,8tA[Ol,WFb3ˋuuz4LbNΞe/<(žxW5A'[[NɓqhBxo>B~̢[  4TD,ʢ͖rbAA@S:'9WTLhpCּS< Qd[sLK/ffi B Uawgex$bc)k&MG&c=1!!Ƿ2ZMwn\Q\HP~3jږ,r2R[S'odԊv/3sf[WKNN6{ iaAu;:#66񡧓ڌ#e|y\ے%t,mtrFsn{+6\@zA {뉎beW͛5p9+)ṿsR,﫯X#OH@;C/3bZ ?F-Y:> ٵ^#u]b,-?dՙ3uC5u 'mlx{wօS\| DPPTl~]}M 23à 䮯zJqX:K&I MO l>MLl`[PxEKJ@CCKmm\ 03r2˙EPֱU2Iba]ON:Qt^CΞe\dɕ+8+%Aof5nMF[\ί?358^ݱ#"X8t(F ؟P]nI oά^XäDNvq1 O<9fyƲ|YLcY0x0kϟg֭|&&a8tr$T*^u8>3W*_Jkײ9Ӈm?+=z\Yԩշ/GNU|3Ӻtajp0uBnI +>>~<ƎLǏ>OO8:u*^ɉ}u\ꪂfs(1K}Z@$Fx{+kHAYr9eOB;xϏv֭ rsdƍ\+(*Ή߾FP꾾fzzcuᆋ >s0{YX4+RTA-A;xn.YJ_QVV-7 AAhg-=99#)WZu1b[Nt Pf׻cnNH+*^,5Fr9t|)А];-#ccI(.:.O5yzT9IjaP ggn|(+3xxZAkΝarv6e#G? c7lJNΕWnߧr^ogɰauWTNM.]4/_ ,S]]۴ (RHuާŬ8uWz=숍Oӭ2y'EGd0vrevO=j= 'trBSR-)Ǹ0~C\S=VcG H].%/>ML(*/'kCigd*^/}\\))aK`f @{{l>Lܙ`{{trox껾/a_lٳ0M$FZYBOOn0?aU@kkx?μnR=kfƾ`UgpOV]x0< wNy9/DG).VS@ss.Feng mƆ (,a*V/203bܿ2HWY9"=oOmB^iibJ ,jMMeFyƇ$/FjoggN7eO\ffQTRTPFXeց59F::'ڛ/PS]ȁKHt{ou;]LH GnAh*K>S.1ҲFZY+\qL$RL o}M]..扈$4odwwghK.:23`k;,OMEF7r*2ZOH`YJ Kx9 03pp0x+.|dWjJչu~Gml ŭPTvcb*+CU?9`WތKL'޽LܲR|5|8>u}Xs&& vwoߞ<'}x:4xɹפ о B ),$|LZQ^,\][lߊ2$ YJcRFHKiMc]ơZR)ܹ7queWV'rs~Ot*--y7+U*NBʁᖖҒC`2__^pt̂ gJi).^)_z{Yܘ$Ikassgf80P>=әlg{nn4BGG';[*y:ޞ=<0kk~?r%$0sN۴g֦2vP4dekTnffVf4<=Te֜#ؘc(S(,*RwryG~=k9[k]UR_d5a+;vlz1uͬՕϜDr2!T*cb;Ei 77>>ĂC8ʚsЩSʰn᥿{xm]qqhdtA;!me\N-/z525O=ǝ;Qǎ|{%|ڥ 4J,Y?ҁIN -"'Nsg̚Eqӛg6n_`s=<2{ƾ_rSSIfϳgO>ЁG{8jcރ9WL^\AjϜz&&dcHquttkNeƍ83E,]Ճp8AS.p8՞^үn={2ys|>ᅬWp52(ZAnZ#,eкyYXazzǃ5nBb/^`p0Vj\ƒkHĦ@ZqpF$X[s%_$%17!βU~HMemz:y]e$5p[TT,T[oYNN ^; xEӢ77f:;B{rUf@qs6d̍q*5Nv9;ϥKY9zt3a1^a?ݻc˞x/#5jcXy o@77~b``ww^;(,+k1WbǥKLHPG덮 lJ#7s5?1Otxw21wR%%R(*@B07k BS B ^ZʱnzH[Xovffģ}nR1o=p E99]^S"_Ei)^|k11LZV]R_ḅ˗KZ|R/\>0SPseg3yz<<<ó_*Aڅ ݼ¾ӧ0`$9kH)aGS:Tc{-ϓ|<'L pĈ=zYb`iIq^N܌A{{~KOPv65/V*y:*m;6kksp`YJFYrhn.];nn4z ?yxFjҕxŅvv/.ꝟP)|+f2Ib cY{*s@Bb+|lgg^svƴ+JŒ$ފCR3U@336Kll 20`apoo>2m FB;1af^JLltp/s P^=;vP*W~f32bSO1sNƮ_\zm#<=kk?xl#$->#$bmhHwfVfwIݻ9x,6w`Ք*hJP3w$ Po͵B쌌Щ jcǥK**w5>//:0?Ȫ̷#Fhm]X_<ɄNxo_^ؾ]n݈`HN4Ax_(*/'Wlw@eІdF\NJ圙}Qc0g@^׊% S==7ht~+~FAH*''6Nn:-Ye~~Lk5/{{{T* G`;ܰJdI~:8cd{ 
F)%޳'e%<ɏ޽Xel2aPQ>3tZ;uz\׮Ť]ԣ]/OveGeFߍn=+oƶ_۷.ٿ?OZשLg3ˊ00<@ì8'}8 X6D]X6'Ogt!Xs{A,?m۰$RSS[z8w8AY`O6o_BAװ0 kxd`zV_KhNq>?_}Cv[yyÑz'FYZwkOʔJV_$ߤT 0+/::r [>>njq=TLΝ2'rOc ;k8u+?{{}*=ϝsvB\=ǫAh+Kۧ  GG{#2 ۛgjoKJx>:+tfCz:_Z)0$Ia%w2e7PTF\+RSYK;xӽ-ґɘ$;;2/!ZeG@@ Ж${{lFΜ(W* cOQrZ^|y8e r7hWv .+2y?ʷUdg[qqz9uOb mmvpWC|̂Dr- `3B3ϴxwfso3>,wٻwK͚3pƜ pBA7;[[JEv$Ոv}1#F`BAF'~ ^4J='Ljd$Vdki)_GA,3ߦOӃgbBNJ 3W²r~Wޫ99ջ^8O d qû>,('א ,,JZD}OWY:h!!Xm}R*9jX{{`-[ܵ [[̜k)P==xyҠp _U$~Xn.i x:*^&&`RP ()r> f_[砫OKͭ ?_ׯ'푉@ rqY>NLDnT8Ag2‚ޱ  B{  -doV24Nw=VVsKOG ~B^CT2q"tɓ)f /0aj\ucƍ[oAyi)x.[б#}O?Cz1|\&{#7-i'Cǎ][RVX+#>mnq.]8q#W )( Wc`@G!),-fQO?%h[ڧ ǴR99 2ގ+g@>\+< s{jl, ^2FG" 6=WbbȪ'MAyFGCj*dl|۔fwgf13)vv|()  BT999XA J#dܴ!!-8gooOJz[oI~t3m|N AZSmvDjjjK算X\\gQrKz4ɂ˗y'>渹[z*GDh,d)/xJN`Wur*bZZ١}},hh_:q ( ##Z 9}c"KXAAEA0JlDr^W$ -oLFg=`eKTsZZS{Zʴ4jkN5{__uBaTTeN;7TP0+6gpDu5ʊȐFZYݴffڕ٦:}ÉmYͪ3gΥ2;yBܹĶÜ9JXǐի1c>+VŖV..feeWA6"@(p'SZ`Q^TAhuuUO΀ގ,6]>?gkdfbAԑYݝ p[7eGU;pKJ0侮<:yR**xZ\??652,I[[7Xhiy֌ N ͮ=GNĤ(shO5kgCt С[]..fm&@( w; Afei-W n 픞\J_ZkPQVF'10mY;J%^(yj;x9$#=zmŲ:U-ׯ}8K\^eJ%sƥ"s_33.vΓvvHM,W-# ={+z2Y\B|۱c|t2EkkfKO''t(/{%1'ogJp0Ǎczzȑ|0hVĬ5]^k BѾB =YYFBA=hnd;;VQV (+qq|Ө> ۛQQˁм<>|oymsV Ȩ(wlutX<MTaa{PVTڥKr__zy/ŋ+(Pgl'c--渻3/}J 2*)YGLC~_̝;9B̌7Xy4߇q:ZZ pscaVd Ot\&P*kzɓ|t0EeeL҅G蝴e н;oKj~>yz1/ ٸylА噮]嗖lx]]ۗ$Ҥ{a6EDwl,j>9|_É\_^^,:) sx?y?P>YY޵qq+ q +NLj{gO>?vOQdN^78~A BA;L@N #{\WEAڋjiQ6%㹹sX[4R/_pvv{XƚW52T@żM̌sݺzܪDiEFK*  %ZpP:q[7^vr-f;]]!"$*3\Lɓw<֭CG.ׇLvqzڕ?}oG )7?\k~ןΞRV?? 
;wfΝڹ?~Hf'G{dv1;x0!7ث-hPv</tμرZm\]OAAhI"P ˣZ9)-I=( Xhk7qcNLLnu $I|\RJJ4%ы LRQwp;[-1ٙ66ˆkאA2UUiil~ŞLkٛx*"Tss..z6;tXN3/]Z*LYXȰslfb//:WˠkGce!mޯP*%ALO|Iӓ_Y},_}}mmfsd$["#y_]zA'LPR_kr6-vv >a^^LnnlbKd$mmVPsb0w@/gg?Ɲ2qwgС>6L90`*`mhHO''<XrJ]& 98t)[W$*2lNO#ں&&ۨ>N*~\b;}yC |9ǓZ9NWK`{{5M ݻYzjiR9ܹ'SRsݻ P76fSO-x~o&{;##z>..䔔ifΝy:ABA}PJ%nĵb;wRMp哮i%Z y7:YH8vpzMcAh)mlXƞlʫe_(B$%1sji!0>Ni,Wi<ͺ&ύ֔)p03-1`re}` 33M\qqeG˙TC+ ^T*ڥKsWw}}~m(x3=MM9_̺t"$8U=Mph(O۳61k m+=z0K6gxcnNYT^ =($̰40@iff兎\ɔF6r;:ww ;;V7sk-?p2ӂs- e\̤2[Q}ZAFAveezz\[]8aٗALݴAA]6JΒ$ehSǓPwC01aA̬v5$$p*?_Dżn?yo!\ a;tçtO94.]XA;/{Ϟ幘JwCEGGu"*$w__ SdEj*Ǐ4)I#ٖ$|kCC&nق… ] r/lH2#SP\9si)trL+/uLWql(:_ZO?7M_|V׮B_5>ȱS0nPtUgΠ=oƿsWr eoؘĜz_ظQ}.:z058xO?Ν

GETO~~KW&ǧMår޺fB*EE5clUza8~n;v^BdøqWA||g\@SkmՕ")7²2'$0GLWKK=_;V~e7?vİ_~!1'/ްLٻ73{&!;aa{7VV<닥7plʱ 77YGnZAHd !'rs)C 23_j};Fd{_K[LJ:PV\ͅmۘKQv6eEE3>{bnj?IP{ҥ}m=6q%,ƍDFdA@vfc[Up:\]IٓђZSqs:&d)ZZ bz׮DT,|T+…foBv6eaKBQVuAG.g{LzYFa!ǒnrd2վue2Jjq5:Amx_L FK&j[rEEL R/s21!2*_4+*+x}jƨT vwؘ'6oĎ܃>ׯespzr5QTl ВD r<7WIq%[} B&Ndop-&Oi:mKx*?OZD^{ {{nϓ&ݘVYqo4_~Iѣ_@|﹇/HaFii\s24oMYa!IgR\Ъ`ү6\[.:fNN]J\eɜ.ǓFԞ=<4bAN.IKװZ$ZX`Uuuߟk,/Wؖkocs+CoT*ӢVZ"X,Hj#,- D>LLDRpV?GF}|y_I9%iWPdKyɉY.u9swf&8,''vuŨcsW2{.anNzA_8`wwz:9aͳK!!JM3gu 6߸yQ\^ά]ӓfljmhȄNx}.$ĄB_K#ts:2}AZZtsp`F^9w!k|n8ؑ{(m s޳z5{znn )_ δ[-)Νё="x[7Vo/3vdѣZ[iJK5Y~!Xٿby9:G5aci`zKIع3<#bY9EG]\WW0˗3d^ HGC `[};Kʊeaa5| Bkк? ENi<5rJb''xRwcD ,,pٓ۷eEE@#8[Щ?0f61{s'܃+2|GXd[OO9wFeG;;cA3ٙEWhaTQ V\T+P[Zߥh+QQjt9ڷ8Ǵ46WfTT$˩n7}:|G7n$ 4>ۭ1|t0bH>\9IP[CB(,+hPud<{q46&W僃عMٙ642?++6G''90iö́GWKLؔc[732w 0S'zx;Y JH*''L\=p9z+%%% oVd򭰷'Mpf ^!Cv-.ݺED|h^ڷ WW֭c EKW ۷טu8ffS.lK{hς]끊~ZfË/Rϓ7̞MfBS6l7G1c4z4.ݻՄd,_λȴػhBklj flۆ$**8ARII$x.torN ZFTdG=jc0%Ρ+5'BjRT11䖗&"r:σ$ td2f] 5Bbej*oőS^^ v76ko&i {{{tҘdhȜY>jҖ-ffrxʔ;2<.eZp0o-|9.0fxP%}|0_?Nk!oYUىς mPne;A6(+*gѣȭ?t(eh(">p⧟ȩEƍ65SMVT2{rrܹkd}o,h2xLϜɵXXٳ߿#ka66PY{F;‚vvJKWL"*$?N|< ԁ 0 6y֖)QQ$ב=[u=fewzxci^y9OGE۵keU?`ƜewS--yy1bcٖ1g87Wᚴ4֧3͍;P[_ÙsJ%Avvx <(W(I$տ:ڒ0nɜ5H2>̣n?t(}]Q^_?zdOg('g1i l!r.Llm1sv| cdm͵hbd૯[M}ǎ h[af&g6mc[[+K mttӓgk-/罄lB0TK~~<^c*9ll$KKyEL(90Ύq66-821 r>Q[[&EFrZ6hMayyt =WWpqivJbyj*/ĠP42?V::v,mP .tw))OBQ+0.3y_Pjd%  BDP8[87{{>_&%5;K(,ԘNW64pvl dk$H()Q/r9w|hd{{$3`]z:^Ǐ,%e+@AAh7AHRq"7V2 AboO#Z7IID6_wx⋊##%ɵF[닛~Is2ۦ;8X.gW û+ח]YYfQP0=:aa1@sӧUgΰ9"Y\r{㛵ϺJMeuX| e2 CAں?!;i\vũٿSmd. 
HsCADP6J(.&GXf($ M&$@& ~Ɔ,eJ%SZ&sM饥<N]x5&e#'6+VPPZFKȈS^66 ABAdO w52B}  BCt56YZr`Ov6\ޤ~%Ib/59T)-I*ɑd22Rhr*U*x-+<3gJ3c IkcCLHѕ:ˎ/:~&8[TVvG_ZS[Օ&?{X1jߟ:֨;2Ң&;A6 d^ڒDY -I[e"AAn͇KO'FF x1&,,jAW^^Lf+ 1-5J_0nA -BR"5bbPTuɭˑ:<瞞< bzr9o /Wf&h\ %pq/r9y e'mOg ͝ G3)(%|²2\LMykW^ՋVqr._Ο`ĉ tsÇ5}XrN::ٱb(MMݻOBAG[[{/}\\tu?lrv6<AЩ[_߱<Ϫ1cnrAADP6:TbAAAh&,S|w&=ΎΦbbԡí %RL2 xŅ~x]hy&:_˪Sgh[qq((jlJ%İ5V|gNl~磣I--(\d_V'N+]\Е5ӻs\ Me 5 ]]32H/(#xrfl 3p 3woLLHcC8CC.ee޾}5ڽ?<ѱ#[ƏG_<" -FAnJEX%FEPAD;;MIT^FC |Svvx4!CG$VwFPl~-׮1TI򿰃04d[Ihy(Lܙ`{:ssv/aol gy#::B'[[WlC'cQ7^O&{u+]>..hd};K ӸpzABA$BRcLvKe;g%Z{0?`d&$0eÆmfFtvFI||Vk x56;6o=={z\zDG3̬EJy~1I݄,$YR*8.| ҡ>zzٱ#]czt49uf+Ldӵk|m=a r>)[[GGs 'VVsٳA6ߠЕ$VdXhk?t者V:T{]@VL i$I1[[b{EGNJR5zi^&&yA2_&%ɝ.P*0n|}ToYKJFQQNJz9;ǦEi`WVQϿGF2͍OwtwtD&eRP9ʚsЩM<g%uLߟϏP, j́X]S}kTnff= qtİ,dQXAZ AM" 5H(/zSogȑcqKT~2{6+Ǎ֭|1` Y=pGW/,\MKbXi׮:m5O eȑ|q^#[W_ŀcQ{j\hcoӧi׮,!C8bPQzx0Ãcni m6{y՚[M $ERRjȀɑ 9-rм<9eㄻJ⇔P\$w}}Bve]eeETKWنk;qo!W_x{skW47LF?WWӇ2*?gQy9Z2/\h8).c14Q4NL R/s26&kzw|EeeT )U*6^ؤ5e:yYXŅw :#;7xvFF֫lm~AnnlҔc/*/U1AA<-R6 ׸WR-7t~V62=z6#}ULHW_qoG| [fROxӦaᆪ).\ns!~2CKK.]O>ё\u|MK)`󫯪ןٴ#˖駙z5>(=e?ÃKy2{6׮1y| zNBie |]` 3u&nڄ}`-O`kKOZY9*B龗16[ \))7osP+;T]-')):ښ]ҡe-Y@nMx3-2iO9֥ }|0< kiɅt$4%:9.^+Vɓ쉋H:D죯%Gy))䕔0͍²2xknv(w|qZY%$$R+{l>g6$F::3} gDF/>we0[#6ϏsW=tmb+.%ǎ#6ׯטW1zVVlHb"))&q81GG|n9E$$CMKS%}gOj?8>>|Û|݉{uOoJde@RAIMARTD4(1Z/JO>O0wBᆪĉȵ(`š5of)SHߟC_'c I8BK蛙w"tec2DcL\`lxrRR0up 9\I.GŬsx/{RFss8A+$%Ԫ+V*y?!o||Է\ϏN(=%%-]!.J%_Qt315QZv>?I(.V/SZۛg훥T35ezt4_“?OVkhn~md4XYF\+.]7)[SRG3Z\MzZZZ[c\XG$ӇyxzKK7q"nH,\]2{6畃mTJ% >w.]Ə z5uDȤuЁ:vdt}:dzW->nڄSp0~̡ϳs=wФt{ ;;s"c03!)6r83x郱 ΟOmI-=w))ݻR'<~֭VyN\&&jSW8߽x.RX 11(e ']]~ЁVTKOgzt4 EsɨМlwsrSf& SBx V"NDWtKօ3qRgάwGA,\ȪJgA6(77WA"(cծU O?<G[ ƶLzt --5VKO8'T*+ܩ^g2KJ7A3g,=={rZFbs9,ՋzF]AhO溻cPGBhDItrF9@pojk'rsFpSOOKݨVVblm aX[U JNɓV]5MMia5tr6=EEiӘq2Ծ B-swXLvefro=ђXO0VO66vH%I46EGfهк\((`J,eGf))z3ٱ#^Ř&T :u7]\mȠGRiAI.Iw5Z8:sfK|'5DAAӻ mYXV/ZtK+mmIMšSZtm9A8tCNDڥ1WſFKOt pܙ[4͜g,̸lV'q ddPTd6*JKo>AڪӫG{7Ȉw\]yH&[/.hI?#kG7ۋׯ=,j\p88Wh`G$&ѽ;oPAEpDCC9'gў7._n! 
4g@{#S`AAhD mpY*^"${|?x'3Ϝϓ&$I\:|O>Ul103)8CC87=m>fK/1嗛t^ssed:= roQM b.:)UAH[& ooF?\ Dcj*άLM%D]rQ,OM9n&lHNUZsOOMcЫx+Wob6:t~yo98uXyx0p RQ)na-]]9r%֣Mzl,҂&][ϞMާ b8Cy7ocq֓ۛeK11 nrW^t˕kILMyBSP0%2uʞWj>twuV-*IO;8p9##9S@x6]/ wb~RAA$@I#n +P(0:tS]JۓRa8byY2{6 LٰYAX8>۶a'IpfOZx2swor*Νc_V5m`M}&*TP&bH.zzMк0y端Е4KRmJ .]lBxY&I|NN"VۣXfNAעcca!%vv B+2A[t*Ĩ  yx)) %ib"Ġ$I|MɓP-CQ^e%ryܑ:n-\F?OfyFpAW;u"g$faS[M9Jk.5V.> 7$IU*22Zz8 ڮR苇@A, Ahfu-\LA[?_JB3IRf\Ta>Ģ+WԁRZʢ+Wxͭ}e112Ғ$13c](.^ʤH*:h-zCttZrx 橯ϡ`HJ͸8Xn.'O\zddz<[ Bk.*P9+fՂ ,-IҘWl|YKAAhcutƬK4J*D[r]WWVqLݿ Xp2pj` 11di՗X'*we|%&Z7Ύ||ic7Yek dBDyyJ%DG5#X hK۷0A%odAi[>AڀBՂZ BzW=Z_+T5 gbOZ2ٗ.5M׮kzz oqm؄!1A"OOV`u҅<<В$ewF'O+3Oh?Μ9Ü9sP*7o, εo TxAƓ*J?( -LG& //j2VGss[ [[zhFk8sm3x:*9rKKq -/N|rr*CؑwE\xŅ3ݺАHd19fR"8B3;s sBAABAfPTT BAAhqXZ2̬Vv q IۻVRQ]Wbb-/Wg J\r_߻"pԞ&8,BuvpD׮lڕnnȡ=yRˆ,(#(++CkLAACAѕb-J B$,B%XRɷ1"<[POiiun##_jU`+ikӖ0r)\&zֵ+-9J[&]77BvK_*bAA,OI2٤I۷/3|pRSSmxD__???-Zqߏ$I۷8{,999?cccؿ 1c¡Cn1ϟ?tuugj*&O 6$1i$vZ|||ʕ+m  m 4:AUddĴA<V[r%`K䖗k,+/gjTƗ290‚664+3(@# n' ccNw ʎ*gy0<;>ƻAll,㣏>~ ,,_|Qmmm-ZĎ;x7oy}YF͛<#L<:i&\]]yG(ޣRxYnse֭xzzr}4'IsiԱO|'̞=]v_bmmMii)#Fw=zw}WmTT ,`矘7j߂  w ݤ$a%~ Bk1ݝ_^FI"/w#^Q/SY̿|O==OZiFIW&;QZ*+c V/yz2ɩ][}oa˩_8 Fɓ8;;ěoRD&aggҥK^>}aʕ̘1C~_~Yv :Thsvv& #G0x`Î;8y$ݺu`СtܙEdɒz-ˑɓ':t(=zرcۣGj|6j  w+A(ЌJJje$XWAua[L&'UXxK?@g%sT}4'k;'=[ڿ2 Ɓl|r@O&cktvnן ﷴbHYXԹ^\++㞳gy%JoaNGte˖>ڼkWXkYJJ {ݝ )//GP0p@n8r{FkPP۷ogΜ954  T# (Y?ԝelVhKwFmkiD'1zx@vRs=<;|Ag'%eZ /;9ᬫ[wqQ?g}E7\,5,eV[[af^ne][V }Y~3 22*3|3C9y?ިՊ²mF&=JZĐF[Ƿ$lr] ;:g׮Ϟ]5cG^oEXgtzlnJ2Gyy91>(CeݺuL>&&OOOqjva\RSSjf?K,"k?$֬YCBB!!!+:6@Z7 !BwBaCjbU0YP\s栾nhY5^EhFf4YcǪ?]tz( CCˋ{g_I 5`F`oI n6mԬ/kڵ|fdl裏sT*&OɓIKK/ӦM#&&nCB!椂P!l(FQ mӢh0#a]؈BTqUv> ZFk '<9<Ak)NCtF0>ȏqqWW3), 93)?߳?g>DYYY`0駟d/ f?:u9ܹs_Iڪ#B!9I ! 
r!*-FfϺu1t(I,ޝϞ|zpFy8X_fdM717&[oOr-?y}{ȡ7;Ϣ}aB~YݺjB[ܠJ۲wܘ^֍o̱Tӹk*>s9[sh&7l`NRt4ݛ,1=cʜ6m(.6[7ӯ\fGE~~=ޘ1ܹ3:vd٭/kjk1mhz}3SH۲UFfGEu~!hEQXغU+P=ΞmТZ̸'4g9ܴs'fg)^hUr})* ZfcNiV ֞9C~䗥lܸE70|pJmԾuРA| 0˗O?vZf̘… /zFaΜ9 :߄ xgYn?#'O~^Xcpҥv B!&W,Ba#:\QH$mSOܡCA՚%O%K0e ͛G޽| ǫPk zY_n?(3gs-X &O'O x[w}[$0&_m}_J}k=nϺu|䓴у_z?0WX\;vՉȲ" 339w4'mCHHǎ|#Z?ܻl]"9ZQQS_i^ np_ſM*KK nߞ!gpҥz@||<ӧOgѢEу}jB!=YP!l$Ҫ*?,\Haøc|lB/ (;M`@Qp CHD|<αOfcunzܾ=v_~wqtscĒ%(*plzUZ˰ L8I[=!Cغz55tq_nZ4|?> ߞV}b78خ'nU߾oFHTsrVZS'S7$0:>}ΫT(ϩݻ ĉmуߦ ADF6\Bq^b]n6=/rsrTXڦ C!!y㋦3UT 98uNttsgx4_۷l?r5o_:yΞXUeʕVh6AղtR.]jn& NK/K/Ԡ/gرc;vE7o,&: !B4w2MU!l$RAX?ǏS| j$j۲)9ߪάo̍Ŷm_"BQ7`.>>* gOU~[St";V :'gO>\sh?dN&}Vɔb uTj=Oi~>NɔNnJxB ->g//<ϳoJki'uH1WPL YmR'SR0tO!v..<lPL;~}#oGSс[4 ȋii ߷+à EQ:Q[~SRXmM!BQ B+A(Sv>Ap~Wk5oL|3 mFo3zo.^4?UFwav2wB_UEm֍xҷn}8R1z*\}}gXн;CpI۲wWV8 !%FFh i^#+ɯe qxۘ+Tu~QͿ@~#@Tڕ(X߸<}Q(B!h0i1*6QQ ڌ q\N]8^etӶ?j€)Sq:2ߞE`؂nmu*+دj}I7 :uk} tmw/gO=#5e#13gسnϟGpMB\ B'O}+s 2.gUTϣGk}RX\jDSqd3t]H#ϏpYo ѪT||?9f-G/6Ql,]B!&Ba#-nyi48䭶>p `ϗ_6j SU!Tm|9GtdS \v6i/ կ{` YY|K3ϐn;GǔkB!IP!lE $A(B\n񡏇YT'^HM9iidTT% h6~L0 dgs@RzlRd/5{.!k]<4ލU*g@^ϐ={q:e3sL"##5蘂9~+W( :Ζ!6;ww(B$q4q$B!EQXкuӧWRbu%USaa$Բaa*"l)EEܾ `#ݞȎBܴs'**:ff :ٳg[%Jrr2kkΝ̞=[B!*HP!lrB!=<<[*YF mS`^lٲ=4fEFRyXZv6sΝtfv..ѵ+L\\>n!%M\3SUU^',,.]dLziv^?!B+IBau%ݯ٭B!yQQE./?LV>"C#Z۶^dĐZmj1iC`{֥ O4Zv"-G u:ͬTFcm\3 SN___f̘Qkuڮ]ХK_DRRQQQ8::wAii)+WdܸqhZEaر}B!$I ! tV7@*Bk];W3W⩣G*5TcxEwV* u繹8w 32u:єU}xY$kŵ@v&$]KQ 's'p Szjʕ+9|0}EѰfx ~Wx>&M"99ŋ2k,LXQFڵkyjTXСC̝;$֯_wq5q'ȭJqq1|,^d֮][ 0Zܹsx衇=z4cƌ?ɉÇ^EQHLL>VZ2e 7ndɒ%SYYСC9s&7o&99tlC^?!B[!X@BqKdUvCNGo*Eai6(b5AAt$'McS'[>fh4\j*NzVDGUM-G:zYYVͅtܺccKt:.\ȓO>Ɍ30`-Z0oΜ9DEEn:OZnMnرc]t!%%{~7 7o]v'Nn ,,~󟦖111m)pBذa{zzr}1k,bccMN>#GխJ?۷ojT |_HIIaРA0bӿ[j@=Rl'Ba r$6PgQI6S,ۗ.4}+H+vtdrxœNUZ iU*^j,6糥Q  <|PIaauIv06:ZWU1`.0\C-Go7mspp`fmڴ;шNC'۷o .. l2[z!3fE㉊wrf>>>ݛz{֭[2d)9I -Lk8p111dff^t|NǬYS\\6l 11m۶alO yB!lA t: *Yn65ȍOƏORRk֬!!!^yzېO!εB٪*N֧Oic0UοM|,Zg !ĕ2%" (ſhvP) ƺg yyL[1-"B53m\\Hڕ !!))ZPФ]@Μ9cw|IRRR~y:tRrrrضmZ⮻"''kZ&,5b P1VUUE~~%WJb:tTƏϴiX~%#!B45I ! 
bEڋ6eџ-_{e n|0~8qqԡo.|wo 7vd˲yo^ܙy;[@u{L~}5fGE1;*F h "Ea;3ݬ*†YQA;ȔtUQ7}{S@؟ZmQl,*ȭ⦝;Ur4<<֭[gVYYɷ~k_ٿ? V?!瓥(B׮]IJJ4ڵkGpp0WY5c[n_~[n|fk׮h4`X/Wdd$sLJnB!MAs]B\J~Uն@WQ\̷II 6_͜I^j*=|g//3͚}/PSk zY<=9} ?_(@ɓ|c׏?r4sF>~1槟#8]~cy !? ӄuJya!iAei)sxq(j5?/^=ĄBQ(Ge z9﷊[I 63G…h}8/K0`:̞/=˖P| +nMLח](:>eW :0Ν>~3Pn;^y#.|#iFCΑ#poFG͢BԗFôfU]Qn}+«ZqΝmz Ξeeݜ,)a]TV:<G:vfoo{'͍c_IU%0YޏWGhxy#..^{ͪ-11nݺqw2f8yi]wE0 ?! ۣRx饗7nu]TTT_xb%Kh43o<\]]?~<AAAxzzzj|}}quucǎVcL4e˖1tP~i:u*s֠h5kV!0awww+ ׯiKҿltB!CBaTzH&*ܹv @־}kz+E|ACϭ/T'"8w~jJjk4tgϒ|>og/ V}Ɛ!$5SwMMEdwh0@Hǎ,&}=уeJ<fK}NMo'npqoEFiZdd07  Oۣvp=(.].kߞ^LBnn.GRөS'>>ڵ#993g2~x***og^Xb8::ңG6n܈+cǎỳLq zj{1Kll,6l0%U*o&=}%4441شi'OfĈ1rH,XpY1Yjy^zol2O>G3}t-ZČ33f +WIB! % B!" WWSr0=ٳ6@3xos*O@_Y {每¶%;^bٓ{a/mR1֥٘N}:9p/*S{=hY{k7`@yee{Vn7SLo˖ Vt:JpMAKy^Al}}hݯmۚʬ#G-9{yѲW/\2Dq9T8T&/GEmvz`Gq1&}ǝQi0*4@?ŞቫZkmrc`?=p;v0U+&]UzZ͂ b/۷gڵuꫯꫯ^\Fb Jr%&&hk׮lѺ==cmر;l[_c4M6_Nkgi޼y̛7l$ Ba BF rC#oh1BSί3gG<<8cJ*,KgR΋-PZcJX]GɨZJٹc;?j(%4.̓` 05ٙMqq::1:q-@wwڻu}1~*(h–B!B4ܽBF*zP$٫Lnvq5%.(={wgOO#"x95n ōA܈9Þu~|'a(` @uJV}ٳ' &EYa!vCE(+,Eu+GC {FߟsOsxDp=Ǝ%hD=s"T0| T _gOO}n>/Mn@QKM͌|z=ǏggrHL+<{G>JpǎƢjhMT 23Y|M1e=}_~CAI^ZE^LEEqIgg{B27o,kc*ّ|Xy90&(_˪ >twOzld@Ѫd>|}]\lrTp~@ʹsަ N_!B\c$A(T[ B;J~RSeRP: FT>l5˴wDΝˇ&Ǡ3'Lv0rr~^xӧq!S'zt!;b{֭-:gCխLS~X(.#(^+4˗͜9|裄um͞׭g~tM̘>G77~]իQkFF;th^C7}㫙3h$];͘Qֵ+ߖ/Gcbb^ohĨclD%.t)rrpM71p4>7OW=ơ+/矿WXeSt;smS/EE15">A]%6q:gfδw8lzU؁ٮ䧥"{k,B4 (RL=vXNg5.A!W"9ˋ h[l˵y*,텴4Z^svY$myE I+.v'$j(ke]B!BqMB4R(# 7z%%|=gǎwޡ=pk6B!7ȄÇռڪNj5/haυmajDNWVf3v=PRB9^VfnU(|˄fXM)^Kggڕ3y1FOi3K !BkA(3k]D\ߓF|"#yq{!ӧjQQԨ̋'NUYi>`˵ZfDD\ji|#0'-כBnݽbޔT*;vd@#[: j5G`v9ÞwHKL!B!RRA(Tu>A(BGNc>@'WWj5ّVlYEdX:efdՆ,AAA(¢EHLL$,,ʕ+Q]-K#Cdd$3>sܹD tB!KBH:BqzI̒~zvP`Pᎎf .TVXx\j##7/8AUr={4MZj -]a'nu=z{7/8q%ߙ7o=Ɍ9~_ڵkyGtvٳ%A(BkBF(p.y:S8y#B:OZ{"Xn6P H:RZ9;tlx1-o:wnt\0INhezNgQF"#=daz:;fM rvfS΄99'0!,9:s\cEEyq6>TpvvK. 
>6,,|N UUUJB}B!ĵABHuV6y$WEQd޽G!Tq  (~~E\&;f.TR1/*Ǎ d(/7[۳g]\L'7FRԲY@u2mrxu'jMtqwNDhU*iC:xhj{\mc}ǎuqyǎ˪UL TV\[oEFFFǦ2e{t:dҥN3gQ\\L׿E||iEQXx1{O>sDFF2zhί^\\?g}ӧO?$##~ n1Z9\qƱg}QoNvxիW_HJJbŊdffCYf 1ƍ@{dʕ;Gcȱc8v٤!B\$A(T%mdR o[!DFI~~Yh~8{/rsͶ)-qtZ"( N`Mot` sOxY:^;YY^KJ XIkVם:| WsN={(w'NUV8;;… *&~'EdǮ](,,{ݝh~'JKK4i8::ҽ{w~+ё`JKKYr%ƍ@բ( cǎ5{>ڶmSf{ !ݾ^4k- "\ 51$7z`IFMZ 3,Nrdn*u%~L55qqakB#jFwϴcǚ$o>:wLBBO˖-/y|BBz/Ҵ 6н{ұcGذai[^^[li8M)22sÁ+lP%.BTK!I G/2o3Zh=c7 >?ٳgn:Zj/S=UV1|LƍYd uXfΜ ͛INN7{!ΝKRRׯۻABQh4#V~Zq. OY$ ٨P5&$ ov#)-~~䠸|˿ZB͘ W[Uuc4i 8>5kCiӦK߾}{#<[oW_}СCbҤI ߟxS|r+nv|||P]FÜ9st̄ xgYn?#'O~\t)))):l!B\z!3k]G/O~~>)))1 T*Xx1P}Sn+VXݐ?|IӾC aРAD[xx8oߟM67ߐBBB s,\EZnM ĉMFawVѣ/l޼t3B!Yf-vtlOjz:5֒5 gXV%? fSG 0=57U~~|+AqR}(YOlJ'99g}'xsJiӦMxwxg>}:%%%t֍M6]z{-I&ɓ_XP7x^zol2O>G3}t-ZČ33f +W !B؎ w,BqM?`y*:1AAvjFff&ddd\rcDzuVkڶaJVVA_˗xb;fsttn1z7uV8|0ڵO?eTVV{ѣ1c}6kҤIرFz-0m4n6vj֚BѪ*رc8rM9k>T +*L =pjd#GXbѾT^kӆ?a=D(ٳ +}}}{DFy9eGquwCnn<*++#**~_|! ߟVTT$-F\#>xyy~a= ɿ?G}Cn:OO^zVԶعj~,YrE?~hfœkWw(vw߱pB6mڵk:t(B!ĕ!W`BHu%ŕvZnf^y Dnj6LJ(RRR~֭[gsԤRՕo$E3R];n`i tK@YaÆ1j(oB!_*L!IW WJYY`0駟d/ f?:u9ܹs_UG !hǏSiQgicI""Z'j]Ǹe\8׳";cB]A|ޡ`~҅qiV5.\X#ۖnnʹs(++#99~cGj6sPxe+vZCjm ˋ877v3**,7{nii̪%9x:t:B\/j & !B w,BqMg_mSŭ[DX}JIhh(G!$WO3|+ ॖ-ޢ={u^+G/FiNwo]ttueGBB][WrgI Q G*`{#u`0g!FIB4J R%zT!tL:zzm* ё`Rs-ZbRxI6 Ax?NNMvsA7y{7z+9ӓ$9(D 23]K(B!g BFrT( %z} !>^>yܪ*j67nݺI]&OO~-,47_系Nnn_(LÇFn᜴4^Hޞl I qQAAAfFrձ|FQjQ7qpswNHgY^Y[Fze%! 
B!h$'mE{$BFFy9Ukdq<ɀ]̶iN} ;Z|{,JJh/`OuVmt:F8WTj4|ձ#<=txow~Jz5m+WWw@B!$Bk]mF B!*|ZzͶnCzˮ],LO(!B\$A(䤲~+#-FB3ǎY]9rr2@^̤Z&U)F(YZ]S'%9(M ◸85j}9=;լ_Zm;.I qE;9ܵ+S^[RX5c7{{+!nnVm`eVϷWxB!f$A($B!UÜvU*-Z+,j 5 3F7S^NШqJvtsc!ĕVIТczNǍ;wM\3Dr.<d]G5ݻYpK(BB4JEmJ$Ah3g$NkDYJLL$,,AͶ\EQ]ܹD ,WBS  WXf E(fTWB+5"YP!v& B!h$G{H~SPPٳC%995\esNfϞ- B!UL2+*̪K-[⤶l:j<`TXPbcFDY w+u%c\]1.UB4gw{׮iVm^:y5JRGCC9.Z.ȺB!3I !D#9j+-u ^OXX]tɘ&c].~BTVUfT<`&[22l2~oOOzyxnL=%%XPРqJFS\>#!8.XK(/oؚޞJHYU8x~].B!@BHu$ YW SN___f̘Qkuڮ]|m'''ӭZ^huVwl۶ 4Zl -܂(v?qvvߟ{ݻ>}B.]HNNn IIIDEEHpp0wq\qjQc^B2I-&&[[XWWy{UN&&axz?'-`A_I qvtds.`8PRBmkƂ9.!!fTK8X%B!HP!"l Szjʕ+9|0}EѰfx ~Wx>&M"99ŋ2k,LXQFڵkyjTָ{!ΝKRRׯۻθGŸqOptt[o`>s/^Lrr2k׮ul Z?fܹ|m'ڶeet4E\pCT5B!ȵB\%j1 \%]t: .'dƌ 0-Z7gXnm֭֭;vK.s=Я_?{ѣGou]fgʹk`ĉp oԒ4&&-E.\6lOOO>f͚Elliӧ3rHUiBB}s|Z"&%%A#Lnժ=zZK!BsǏc ZBʪ/M KKMM8#iuNF/0#"qx{ym>ɣGͶV$j)-queT z'!wKKyվ?^ +J "fgs/:t B!⤂P!b7,ۛn۴iwy'FNN#..OOOo@\\ ,`ٲe?~C͘1c.OTTT[5cwޤ nʐ!CLANZ^haZM 33t:f͚ՠذal۶A- !>><䓤X<#@umҥm6Zj]wENNVI7K iT[AAA>999f۪Ͽd2JQTL;ݺu𫨨ݺu믿6\v-Fz"##;w.>>>8psB+`42AJEbd=B,BBp`ov..4UMUQ](LD( :vMB؇JŊvXت S=A a6v)P7m8đ#<~0:i+BBzhV&ڦhxy#..^{ͪ-11nݺqw2f8yi]wE0 ?! ۣRx饗7nu]TTT_xb%Kh43o<\]]?~<AAAxzzzj|}}quucǎVcL4e˖1tP~i:u*s֠h5kV!0awww+ ׯiKҿn !>^T0ejSL :=!?#A%㡡\Xh] Ԉnٵ ,|K.EQNߏh4U 끼*zo:p=Cmvx{5>͉reNq:t"TB!B*\T*S($hʔ)d2vX}ڵkGrr2PFСC7ozbŊy睌5 7zaرlٲaÆ裏RYYikիWo3bذa)ѨRx7ٺu+}nu 6mDee%#F`ƌ9~b14pq^駟xOѣL>EѣGmB4D^ϴDZ$Pa0,߭U 5n.N1d.bm0b]!ĵ.~C1[T ܺ{7+/ђ_^sn۶q|շB!Dc){",~#עVQ’6mIhh(6y5Yr%ƍ v_]<1ئnݚ+FcgV=RqW//ҹ;~'Ok03"}>!XYI [`Vd/tO=kNmWj5_vȍR-BF(** B!ZCzQ*B+(9'NX%Cyb]kɔpo٨-_AAi&B\Z9;O7Zo%8CQ%5)JҶmyMTK صK*<Bh Be68UQ!'(՛ҌQQ8K<=fvfdfb0Z5\3ڲB>>,oV*h&|Z~ܙh'2;!{pNgلX\iCCS'\j.LI5:q11|.!yv*PM, ;(E2+*XiVi:r_@² EQaU^Q69GUKQww\ V!D9|3f6?=K;d ak|<NNfEΰ{)B!.\ ! jkeb]B!BF҉VUz`aVʷ:23='99QKk:cG\Ԗ́JQxuk^o&@ꁃ$lƾ;E|uqak|<7zyYMLݐGI//KlB!vIP!lG^O!BTjYoeeYUoo{eSZBB.|N4&g2Z]BPw숣JeV3ɩxkZԉkCtݶMÅB Bj\!Gڌ !65 zpn˖պyXM@Ro:uY8w;x2B\_KZj6z=vӧ^Uxm[^kUyꀳUUݱrrB!% Bjͪj:-mFB9\Zʪlj{yNQ]! 9ƩS PpvQa0XmxP|/m@[rv6$4P]Mx|:&+P 7F##gvj]BqI Bh|T !6Bj*+]#,, 3#;wRәMhR""*`Ev,pr⏮]ӳH3RSx0:YV ak|<NNV'Np*B! 
B! _ǤŨBa{MOOO{uEIYQ43^tܲ{7VӺ5sZ$ѴM,?u T!jع3YYܱw/%jrm]\O_//ԟ9Í;v]Qa؄BqB@] B5b}YPEHXX+W( :e=}DFF2sc6;w$11B\eOKIi (3VT[s۞=()JNaaCC. *+6?& !*03"1#m~>}v)v]N<b(.m+)OpB!IP!l*Eߙ7o=Ɍ9~;:]G}A\iΝ̞=[Bʶs"7׼zPQ/ v) Ee~:U1m.,j+:&0LR=QSΡBԤ( /FEvvbM).۶m*-S͗VmYڦ *hkv~.(_B!*IP!l B( 8|0?8={$((0tbȬХKW^ϩK+)!Dyq,F^G8MMa\pYXMq-F'amnx{Vv(5Cs~_^ҖNQ|ݩ*8UQAm,(x<4o:uM6m@^]('Ǟ !*# B! g[6]Ǝ˸qP(BZZZq2bpsscذadddd&LC [FHH/ftM}jw^郋 ]t!99ϡё`JKKYrjQc>}жm[HOOb1 !ٳVՃwn!!VmB ޯf48unn|־=ZL Eg;YY\q]LJ]jz'[||H'1P=f,8y3+B$A(6UGS2^yf̘@rr2_\KFF+Vɓ :-2ǎG}ܹsQcqScܹTVV:SO=ڵkY`˗/磏>⧟~WGf̘1|8991|p*/`VzU?>SLaƍ,Y*++:t)yf͞C;w.III_oo[!.dzǏ[=7 /vu&//aqFٍ723IAɓTm_ZJrQ==;r5Փ6uCc+6=kVl&;Ʃ ^mʬKۿyyf23I//gMl,j˩'B!h.P!l$ngu:e+fӦM <gggt::@ڄf۶mjD+C !%%l[ocٳTw_4;&&̋5kVƿ .. 6ȶm4;**JB&c4Q[ 0U^p/74뙙t,#7{LU*6vL% 07ri3* ;2>(fd0!t!lEx<4lXM;w+Kb!͖$F,N+/oX\V\V5ٽ{w۪R.00,BKR_J ??<Lv8 3~xXf +KBaKsfk敏quWXvQx"4"N|ݻ1* _v@zr_`XKKB\V1eZ_]I;P+ [_ZY+mcv1B!B؋$F"ls\Ww}7)))V?/%ٳt:OZRm.$rssͶ[no*ɓ's!RSS?~<ӦMc>>TW#W0gΜ3a}Y֭[Ǐ?ɓ)((_~5.]JJJ iii[!.`4^jՃ<o&txe(ZUl:{B4O}|"ɨ}%%ھrd7xzg|}:w}7zM6]z{-bذaL4~aÆGǪ^`TzO?ă>m?'|B=g,Z=zx"B}2j`\p0Q :&+Z5jQ8_ ,BAr׮;8X% Oc6+f )tvs34R]<~T !=0^B\ksvKsFff&ddd;(++#**~_|! WW@?$ܬmFQ8ڣ-o.F#bUvU[Q=ACx;;™nCcT!.dy9w$ܪmZͷ;CIE^*/3@j+ E!UTT$Ba+j5juw}… ٴik׮eС0n8{&ׄէOs"9nAE#9<F#rsm2yprbK׮trs$qfVM&%ŻY#!>QQ98K\nD~g{]%mڰ=۷s> !⊒BE:.ŀjjMغu+Νdg!l2** !W*-eľ}/ͅ>( 4U聍gϒ/MaCNo6@ϛީ](p>E(zPY ۶O֋B!;BP ''BU-3Q5o˗/ܹs>XGŧB3xĉEklW+ 3y٪*M^o(@K''5YYjy `ΉF#kssy(8c !'GccǑ#,H WUŲ\\iwXGOu6^۷cGE!WI ! ;:b%9Q!"Ѯ];{ >E 3501$f:s%mi^+\]}jD?}ZBFQ+ iCV.x653UU,h $ \//]8UQJv^t4#B!l@BaCV70uѬ,B)++!4kzٵT*”f^=̱cPP`L^SGP^cRg$7&(ǎa O$c !/EQHlqS,A^UokF%4v..s*.6}F;p &KBqBPD#8Ah0̴wB!R9QI50.(f\="+X|~XWWVFG(  ~NIBX~M31m9㡡[! OeFD{Oql,juc+#_ta//o3qNVTukB!a BMc0Ad@@nUv/WfF޷h$1-  ͸zpsAjA:vĥªӧMs8PRBksr~/*2}?}ZB7 o;`0 yy ڵ:u³mEù|ѡO9kF.$UhS!&ɷ+!`GGTUQ7ٯ[nwB`]l:{֬ZMFޒ vU^KKͶirvOPvvZXcAoZ;9qzjl^iժ<"}rQ9A!j?1h.zw9}wΝ MN(,mӆNNL9~sr(}fu+B\/dBؐZQEkZwBErP <*Aaw5kϵhaDgtV^kۖ^^V(£fUڦ-5p45ZȩϜiBQSwvPznNZ3r5Pg""(6>o ܵZU!& B!!8. B!D[ePQ*ٳl/.6K}}i֘"p,Rb TNWeT!l%Օ?vUdy9=og_Ik qPL7 EEݹӕ O! 
$ B!(''P}sxB4*Veg%4_dT!JbZgLg67תⷷ۴gzfe$сf wrbK׮tvsJVUqqhz}|%.wKKS!HP!l, b9HB4NQehdrx"/{QU2<<|Ü'̶@>ե/ 6{=<+*~~8ՈA |q6Z-?qUsz=7w6[,FWy*O! B!GGt7Fi#h222Z7򢣛dvՃ"#}8wjQױ#Y߸[||elUN??ZzͨrhЩ# @={X&*Mt-`/Ty޾?S!IP!l3j{$A(h"plaᏢ"~((΍ .NWV2ttFwcbswXjE`īx)L^jQfJrPXckX?fg'8A]ЩF; Uv{B!j BkRc(0!Dsd4y5=lĸpŴ44 (^U ܱg9fR1U-/g *9+(mFMA(,iӆZ֧5c俧N5}`?~`/Ty޺g9cBq B wtD{d =%%fHF`jxxJs_~>5˨chÇI9wΪrI-[^ָ-eVEjDT*Fڌڢ:Q!.FQ^hْ۴jl9|M _w0??FFg !B؞$TBS+-mhtZ K,IB;&w$ʞ !͚$ jhjkW&ydͧ! !3UYiMLO@B0I6o 05"gϊ,iVTxi4ኽ#L;/-e ڝ`VXM;wZQ~=ݻ.?ߔSS=q`^TTu+;1R]<>85z\!\KJs'yUUfnV4Wee߹ G9:cB!hBEEE\WUӥB+ z\Tv 0-"BdfZvSQ6eQlڕvq=՟YܾMqqtvswB!Dq]TzzzR788;!SEiGqw/rsw st$G4rLQNGhr2Ezv'^jٕWc6 Pi$q_ը**MN6kuǎ׷Q !Dc,/;*ՠz`>ofjU1tnL/ԀZw:ѫ &!uSAx#_n0N/?2ge}!67<.Aaw+9gT_Api950U&En?}4@(>} `GGnB ]# B!E89{׮ܴs'LIBm~>Ca}ǎH.<56v=!˗[o\Y1I* n6-"4; &ޓ.FP "ā{^{onQTTD]6 {IGKlt޴=ymri&79쫮vh;%(']ooxylUUnʌIAA63jkUU !D[!9SU=C}pge%c6m"JtEQxOnS\cAջB!lpimq3NLD70۷Z>+ZH77ZYiEE$%$Bkz= ȅ۷ZmRi BavkꅇNǭmಝ;1Z,\]pB!D#kQ|-Z;7RV !ZV̦R ׅDBz*DV$ҮJnسaNQX<`>YyqhMS kI mP1358Z5̌B8-7/0$ff8XS 7rF(gQw歬,B!I.;Л^Ws-O#KsKٹf !ml7*@_ *$!687vz=WJU͜};u1۹:YDq͇@ f;M .ZP?JKdBw6YF#'lȾjm\ /-qfdhB8rV!ZAɽI%** n`[?քO1b8 df÷ضbyiyjMxxѳO&00#+w. \@uY5(0I 2߼BneUq1kJKm}8MC}DUUmzHܖ*`n*ʒmXÝ_:LJ` ?s^HH+E(NǼFUi& 6n!C.n0u:.ڱ XoOMhptB!ZE*E*{)HQhtQc1[Oؑ҈LIN m"cǿjXp~ӭ/+cX|tGYw%A!rB~'D{+7x-3Ӧ-(IL\ζI sq1PTVU) S04|EaYaa)M(޷/wEEl7uuy3PS燆ŦUXB!Dn4'p>tPk_gn^nv%l8o=VyA9E2E351bz:;*+ H4TU&x{sVPVa a̡C6*p_^]&mX8ovMKٓS;a"x__ڌ! IVZO!“~'#jMhz\Q)MSY}9qvow+ee쫩h֯ݰ{7 e. !ZneC`ciN`[E?;l2G "jǖ hjpQayɭyNp0{L}eEE\ڐC)SјUӭMު*mFI6[[)_S|޿?.:B!D%E3 L3zA%\&+:~ 4޾Qz23J P_Qu [,w_.=uƺ6E2~.ڱ+wt Ors Ot֍z>=f#* .rAnI~>s33*=zv XPY^gedmlRUJ%SRgbb'*fUUɛ7S\'CfP%P/B)ZZZֱeJnn<Z#G-`4֭c[LٶM* B&HP!4~BGbee#B8_YWVf] OAj7l~?B]={jR{p~֗THOO^.66-${4AAAo ZO!В(̉M$ 7UTp֭THPS2^^IsmVB! 
I !Frkd_uwcV !2UUyكcQTBӼwVUU=-0Ц2qGUFc)ZSqqa SV[Ri_>#:R aMǢ"mN$ B+I !fEGյd3sr::,!~..&К}ݮ3ʨ;?7%NQ84&ͨ??{i3*Eᵾ}>ƺ2/=y_NISmFS*+W]ݪdM1Y!DSގ窰0ji mFwqaA$5I_s]2C!D' B!pfY9XSӱA !:ԑfN dVa a^SÒku:u?̡CsEaQR>Ң]NmFc=<ޮXXSRҪ} !3) syIŜ};0BS..:x0vIf.,$BэIP!DWW>JYU߾H~jz R=(ƛYY aavZEG;Bl,||:o6 NbmY[++pvP^nm-6lOI||tu%^)/'(wwrP53T:,Gc41 U%6IpQOH@'I(̈́6$ Oܴjk9t<#IB!BH*膍"SG!ʕX~tscF><U Σ}WBrzqeAQ8?8Sm]܌ b%'oĠ|S ̪;vP\W0wp/UhR{`Nzb|ʄ8 J/Y$\t:ߟi۷maF>M~$ 6x0'n!z|z:.¬h!Dӓ2UϏq'jB 0Q_⭬,jj#Tp"#T!DMՃR=(DȢ&^""49t_KJl)IIݺ۴^HOE^^..eeIUY]R™T!N¤$߾evI <e^k* kjc>tEq!]^[+e՜9do.3f +_|z/;<ǯ{=g{&DF~,ܿCcB-UUyaaE=\I@}"G'[I 8]>NL$/9f0uu鸞 OUBUcQRyAMhHFTQ8x#az\ocMzIq1s` H.@ah |ˆL&ol\eQU 1Z,P=hn"Vɯeؿ*QQL6EaZh͹ PXتTŭڟBt:_ ) \[K_I8o2W^̠A,l^11{ӦTuYohό> 3bb͵8s&/pxyhV?-F<3Lּ5V DWz9T;@}ee,谄m6‰}76Ր suˏFM:ԿfZtϘT> !DWcr2l*pΝ,kB z=\]m*ڷW嚕B.[~ ѻp+py ۲c>'XxdnbWwIVJ g^Eii;Ül_f`̙f&?4__~y +5ka_u= ›o\[ 5pM7qkcwqwsقL~tGdd0#&͋|oγҪ ѕ߫nnMhblE;!D:R5*,!lUVƦ&ؾz=jSKeܴgbx7>X19(Ȧb᷒Vq{==!Bt3~+ "5m[:+ZwC0gt[j*B.wZ(ࢷߦGB1'hzls*<1c(>t5oEoEΝO.}}Ǝ (:y':x߰0'>޾m~`г'͟an`̟: OJodx2ؼ#F5dzYu+LayY 2ߠEGi*zÓлfiz=s8 hd~V]a![*+m)#R=(܌ 6Ch~-)օ)N+7S[+&Ṷ[mhf!CisŠ(\i|B!D[vGT|ڵ]\{)XL&Q޶ tcX惣AOֹzyYPϷ;$vacᰘѣ?zׇϖŋs|qd$ˠs=~濞g!DM f+;TɃeoVa a#hda~>&NvQ\/*^`~ h^ͤ|UPЪְOXWVFt@BtCQ<kQ `Xe ; j3a>,.!-uqقx{xa> sm-O~~zissQ-* qs p8ݪ/K,&jÅbVg뇱2w__ƣ>' Rqt[ꨏizͧ( GTPWǼJ\ҦmNQxHy;;]"HL $̨{:lWw j8| |`0QU}tzXXQѪ8A (3Pn2qͤUWkgf% UݻYԨ^!謺]Q71[OO?;KUc jJKQ-$a[Uy4y2NuO@2qwswweɽ#1V=B䦈ľVPf< xuoBt''DiXx53a52RZDUU޽ۡ^T fE8zۂZ))LjJJhѣBt'y nތbBmA]'o!CsDMmHiuu֖܉h-BhU6fps#nX]r DSm-b2ov$#M5{<~s:Vq[u_S3W`;&Q \_^N^+G{d֔:Bf\@_ `kvWUq֭Tʬy%xyr ?\GwJL!D1;: B bz:Dͬ,N|JaÃ:9EUr.Lv+=pNp0Ӥh(¹!!͟XXUR}af`Uq1\B{.*ANwk‚m33oLI]v ! WZ(Ӹyxھ?N #\ң==|1Xx.=cBغ+5զWZ$Dee9((JSL֔8ũPv{^~{pVgcjt1ڠ(,-(hzFXoLVW*F!jHs23u {F߾f`WUglJپ'B|$A(NQx.6|W32ʼnՃsdp*{D(\ѣ&15GjU~!E[$??zmuAA*XN @ofT!l(œ8acii*#%M<cm֗svJJq !ABɝ 6SUfR!4aXm^-=pmx8 K&>b5g',vTA۔ . 
&.E㬠 eֲhXӊBUw$-5sr4KغW/l ZR;v8>B!$Sbcix#+L{7'6Y:5I-gg;õY^`mYCkQ/hlo[fx__\CBB$Ng3> "UvM+^Ey*:[""-۳̌B$A(](??&ۭ?̢<%UBt2٦OCWc盂JL&=8% @*[֢oG[EhMA"[qQCFs3FkjZBtYn: H1*p(.*4@QZª99ܕڪB!D{BtcbXE8?;rM6J^nn]PBL:tN\}gVU.߹sE i-ڦ NU,nZ#s=MwogUP1:`ߥͨBX~<ozq VW39%E>9-a4j p܊ |!taҥ9ixav&>~Əg|rUYfGaqjNJ /69^'0ۭJ[MD?8~|IH5+?#l2t|3  7PF\[g7`󘏧Oɓ4\ٹ AXr=4]]Z7 #F0+!WƎgia<vy=p fX[~̈ԭVEƵMVL|1 |FNhbhee5^p N:/3̍s궨]}Ѻ6c9B}F7$(:EJea}|;w2} ޽7=,]Ogs_ql_g&skXL&ּ>x )gܗ_4+ObA>˞+I~ၵk9 ^NW^Y8?|}]4k61ǹlz5&h֭ka,/k%(: sI8tNYok/Bkvx!(D2g>-)hB]] K#W]ͯ%%MI;)Y,\sC;K=pj@´ oSRfNh4ɤ9B,'eR6 ;SS[Z*Oߺ-Z'3]ba/@I')T4\ !n -"dqN;Au5O3榛KL`г'͟N_ woOJ'%q_NҤI9QZfL|q7vQCЫB8\ڣҸy 쯩aiASuCQ߾7Zդ~&譑Z%lz`?1Eutb[eCNǻZxT-~/-lST6rR*+)35tBbSy_?nسfLzswϞE&ih {ƍ͘_dxͬ:Z(vSOr\SAX,&MO?ODfǯ/DQZþbF30@srlQr `]u56D,&_\[K `k-DΝ01N9?$n\ݭzCֶm|x޴iGF2[lpi} KLd:c{>q {Wwɞ+0.щ<ЫCeԿnoF2F#IZX9qq:iFѽZ,drfg~mhqTxvUQ/uf~e͍~ce[Z+0[q?Us0w88 Zf}f>/o6=!ˋryBGbS߂oJPvA qRd29lADGWZԞAQZÝmS+ZCؚdBtgOõ_kpv||19f Sl#!#t-Z>FqXY5K '8|?*]k3 EnHm'%1q۳>y }6^ S=H9۷z\twYW^9lUUZŏ3gcqymv !#{uAv=8 \^99ՃsݡpZ23ڋpsªיii쬪rtUKH5 $yyB^ÅLZ~~4*JP!ZDQKEEUeJJ Jr1,0۶aRUL*lσ9B۽J7ߴj?uFMśjZ^\==<HNvg_^ta>Fw/=b h{o1Yz>7~1úQ(ػp0EQ0 HIff|?L:}uMT ,pj@W쩪",!UU#5dJ\@"ԁjV859a{ <}PE_Մ(L S]MFMMc00rl!ΠER}|-@©[pբmG fR.۹ԙB]P=l+> {WbO?ͽ6k\QwֽW曩n'?[<+ls>v\{'\w;/ٳI7{/yy6u5Ԕ嗳H/R.gwۻ)<'Sy˖{dl79v-/fѭ {Ӧwطf W˳DH߾OK_ݳr% oK_l^%K5zhv._Ρں<ڻge|[XR/wњ$hD񡿗!b;ڤ.<\Dֶ܊upiH8֩*dB^qw" [P$,E=zrlX]$B]Q:y׸s 6eRx( Ss|cs#Y5go-ox rzɔfew8oUyq͢E|~x g(`~V͙é>hmgH2=(G 8&k-ח^bc/"N;$Cٴh%]]g<8?Θ]ohcee6 3AN3UxBtJzp̈&D2Y,hՃ8jA㥵&~,*&nP}SETc !D+b`۰:Էq_]Y))4w!ٓL22W23rs^ N!Djii)Zb*^&q#hB fQlaBvvvB&8l-OB4+ܙj0z{\efpRGz/=/'HWY6nl}rSdd$ܰu3):D-Hnd?( B!ZnKE7nb.lgeRR^E۲*Wɧyy6>'&rݘ!uBHËA&B؁6-+qqN태qEPJV\s{8Ϗڱ:+(Ȧem̦?HĽ5NUY߁c+ͷkt`pt :EؼNߵ_4K!D'W(ٌdj$vE>}+F+7yf$C]PB:*(pgRU (>tP :$$\:=0p^QHn4sSEh-m>ILef\z'uRIB2e۶/B!KA7&{HYG$9ĹƦI^LOH*SU뙙<_yh©-wHtsc5moUjrA D{xtxLȆSڰhPXý{h#}}TQAReŅBdrGj'MZY:oɌܸ Rkp-3l !hs .?smm_sh@(<ҧwrAŜAdBtwڬ׆3P[{Jq=0=,i{:l7P8IZ:01(E hXWVFɄ5P(BBцn"hyvŅ yPs46Pb2aJfoCuB!i1EHH "911Z'%D4BDV]QFi@XURҢ} 
6 8d?If !xm۷S2{ljժk׮.]D~|Yٳ'AYo899MFu&c*dĉ\z5k`nnA͛i׮X[[uV>#9ܹsKmKJ_|xxxCLB&M8y$9֖ӧ$--X"##ܹ3ݻwgҥVB7bVObB` KАϷoY͕ԔۉOJbc,~*gSn=yC>mv'1{m?Ʒ߽ H=#4]HP!k… ǫ16F- H+BZӧ}͡Rɓb4?W^sXYYaffFy>:t<fCV(U4hЀÇkѢEݻWi…ddd0qD144RJDDDussڵkL3^ʕ+{.&Lx9 amSܯɓ' 4OOOa̙ԢPTꠛ)>>>DFFh{/}{O]oiɓ'gakk @ZZ ݛuB3ndz?陙lȑZ̽{\~Zq=:8PY㏻ feƍ16гx?/\Xe.ϫ{4Xk}uvT4zGDdfg}6i.n=y5kX7np?GPc@W|y&Twrb}.\(4)^ BZ;v,L:hy >Օ磫KXX=z4æݛ#G?cbbرcy^^^2ׯ_I& :xvwÆ 9wKfرܻw7oxf-NFJ?ԩԮ]5kЪU+Zh}˷SNsά\ŋӻwoÇYd [laذa4lPjԩ1zhٳgΎN:86'^82ڍB=*::f͚qE=zٙ{bddDͬ(/^uЁAqF:v>ʕ+iܸ1666(BN8~8&Lt,\͛syJ.Mtt4ՕcǪIӇ۷3m49s&AAA\xRJ7ydZlWߟ3sLtBP_BBB ёK.ѭ[7 _ӧO9p@!΂ ݻwFJ"88 ts'N,4ødL S!%‚z={ЬY3K5HNNfϞ=tڕ{RV- _hԨ+WTذa}۷ogӦM:tHxl֬UVe̙̚5ubnnu)VZʕ+׼ ʖ-̙36mm^^^,X@9&&֭[3n8J.MŊ{.wlذ~q1͙>}֭[:* ]]]^)662^tiy!Z訞u( 3<~իWTRlڴ^zi}/C\߾}iժqqq/^Lǎ133+ӦMcz7oތÇqpp![H])_ԯOTL 74Ycp0C6mGque^۶ )_ƍr9+tN!ٚ2 EA)īY3OC31aAv+]Z?CQHg]ۼ9)غ3Fԫ۷5Z(RgY|8!Z=hzHeJv#@8uك.ffuue{e-zr%Jⷎ&5juho$6[7ͲSQ^ =NB!h۷oy9ÇӣUVmUTl7ݻѣٸq#qqq4U.]* KY 6lNq;7ۓԩF5jĜ9sHMMU?mEnic*dS{ٳٽ{7]veϞ=4jH]^U׮]6lϞ=Ȉדپ};j\7 ȑ#jϘҪU+:Qe˖֑=?;Cu/[oO>}ٳ'Fejuݙs2{l._LJJ |ܚ4i;;;nݺjQK.eĈ\v]ve˖ۧO7ṋ[9s&-ZtəJU!DYY9r5nVvlf]\`ݥ--YҡCPƏWK-Qr h!F{ٚ֕{ ݬ'CxrAv,h׮}֮ԙmە+tv/@de(L~cS\\82`@uVutdy LJ>>Hݢ`B7Yp!N87^x;w3S.[!!!lݺ08t+Wٳg/l_JJ &&&>|8dp ݖݶY xxxhsdffS!' ޽{_:Utrr27\:t@rr27n4ӋիWܯ|wޯamm2 ;et9弞eR5ۛ.^HͱX8;;s߸q}}}cVXOpp0z][nB!8}EǏ35_B!h666t҅>,>m&988ѣGzgll̮]_T\/#֭[<z, S!ߟ$l… ԟ{ɶmHOO^z׫iF[h i.uÃ˗iOa3ΝxA;wu]aZnM֭y 2 fΜYlhР ,Ԯ][c(lذ???u IMM(#k֬QF|Wmg[[}a=z%KЫW"ߧɕ+W*uVW. !q-VDHeX(  BF "22Uԃ5jΆ c.\@HHHΟ?OBҤe[x1XR!x* On$@(6l0,YB&Mptt$66m۶ѣG7n\UTy̜9cccƎʊjժ)7.ϛ,]k₻;>>> 0;2rH|}}ILL$&&2m46mu֘RJb#kkkƎ˗#--3gpϟ_WeL S!%ߟp{<X{E_|Y3gh'|¬Yaܹ$%%} ^ۛUV닡!5s ۷/vx{{zNpp0NNNܸq3gbhhH5޺uNll,,[޽{G> !B-B!hDGGSL>C7oΘ1cѡ\rZձxb֭{/oakkKnݘ2e fCLiٲ%!!!ԪUKjΜ91-[{{no&MԔ-[RVb_n̞=իWӦMzڵk_~eddQɘ !xe1sv+4qqqtEcJ"""~/f͚r<5s[`;wfԨQt҅ o^J*ܾ}!Cмysk>BR7oޜ}allL~ bٓ5khNmݺ5&L`…t]]]FQߠA4h'NgϞ1r" ={6#((ZjiΦA6l͛7gTVÇ J/\???7n'|BBB-z !BcaaQmB/ʭ[pqq͛%!({%ěKҰaCB7#KKNZBݼɘxA  !Wkܸ1;ww_ٲe6H!B녆ZBQtuu +LBsɓ|[#B!B!%OB!ռK B!B!Yǎ#""qơEC!mB!B!B!?رcYMB B!B!B!B4222JBgHP!B!B!(&}! 
k퍹9Z"..N]ɓ' 4OOOa̙(.Jbǎ4o|}}9~8t sss|||hCRRÆ CCCj׮ݻ>O4  qrr]v$%%pB >*>}hӲe(_>ǖ*U#Ў B(hqꌌ &N;TTc?~&M3f {=Mu-EF(W*Jm(166TR :T6@hh(aaaܺu J?w:1?̌޽{Ν;ܜc>?~͛cff!!!y'S!o]SGT*Xh{U_g.\\~;bgg >>>u~geqL5ڵx{{cnnNV șbt…}}}T*}᧟~ҒdΛ7333Gѷo_122Ã#Gj ###޽wTvi:wkժ͛5dy۶mTZCCCWhB!f5jW_}EfͨUVacc'22XΑÇ\z~1rH)R$B#3BVr.Nmff5|zaJ*ҩS']6gŊ;ur =9WX#OiҤ L2ELí9qcǎER1l0W?Yr%XXX{n6mJ=;v,nbԨQdddwrM"##7ꅽ~Ef̘_۷111|dff2x`ƍٳpק~S>szU {Wxʘ>B!^/\SlmmYz5NNNsMu-Z: ٳg̛7 .\N3ݻwp˗gsc޽ ԩS cg:uҺ _˗;w.nbĈ2gΜBNQ҈~>|akkˮ]fiҤ C6lȹs4һ]t'2uTYjU93f&MĞ={[[[uݻ/ZN:affoۇ Ftt4gWr ӿԩSINN&""T {NnݢZjCCC6oLpp0ԪUK].>>A_ꊗ߉B%''k?(5kKrsscO={6mڨ$x(BQ\\\@qqqѪ|HH;wN… JRV\Q688Xi׮(riP~חjgՕ;[>33SIKKS&NTTI}ǎ \xQmʕrmEQeSÆ wyG9$$DWn޼ֵkWP>g)KV~wU*mRη(-RtuuGƴ1{%x}6ה[*rцLZ7o(ϯcݺuS((:uRz(+&&&5߱/O7xs佟ooQ6nܨo^)[.tm_~EQ͛7Wݕ~IٱczjeԨQʌ3 mU? 0@=zvZ寿R (WEQ>ʗ_~J*1}tZlɃprr*xmӈ#g„ T\sss>>={`mmMŊ![ z6etvtt,ƍ>}z{ի9w%''yf,X1.>>Xۮ BBBXx1~~~\tBѣѱcG]-qqqT\9ϱ/5k'G={ DSB!BZFHNN&%%ER+@233YlYoFOOU&wJs.Ԕ:upҥBShW܂8sL2B!^S*]]]ׯȑ#i۶-!33Sb/54{|ҧO&Mȑ#[.˗*ի3i$"##f͚899tR6m瘗{7n~u`V!BIP!kLJбcGF/?~DMF ӧԩSX>̬Y^x#Fн{wup4'0k,|||;w.IIIeʕ++ݻwoooMFӦMQcll̕+WXv-˖-ooo߿ϢET: -44ZjѡCzׯ_gÆ RB|[x1EZPƴ1Bz暒ooo.]ڵkqqq]=+?hт޽{S|y2e *URX722F޽ɓ'ٖ3E5cǎƙ3g8s תL6ڴiέ[۷o9M0 &y.]B߾}9uM6%!![NC۶mꫯpqqA__)SgGxժU#,, SSS7n\i9 RнGn۾{W8Aڵ cǎT\L[J*EJaʔ):vHJJ k׮e/ fϞ?^2 !B)BQ\\\@qqqѪ|HHg{FF2}t[100PJ*4mTYvLZZ2ftҊLSRJbmmhB9~F~M\b``888(=zP={}uҲeKTqssS~wѸʯo.+xShU|| VB\]]u...ܼy#xsW+K.-6:t3b&NXb(.-Zܜ+WtSkD%o mJHHB!B!(YjTXݻt^ɓ'ٻw/[la۶m%!B! 
$B!jQ~]]]T*EB!7{Wׯ_?Wx%mڴ|gtsB!(NI7@!'-Z}}ܹ(B7#**DӋ˒nB!B!ڴiáC ?lB!C!3f K.%&&Bhh(kS|1f͢wxxx/\}ޛn8v7S%;̿B!lmm-f!_N9Bxwi׮]y1aaah~c<@Ƙ1c$@(SܿB!B!B!JZZ:::Z,ucoo_,uOWW"DpB!B!BQL233ϰ֖ѣGi޼9fffXYYǏ=zD߾}qttFQGxx8HIIIO>l2ʗ/7n 44T#@pBT*vQF 9<_cMRpss86==]]iР<;uPZ5_m 1i$<<<044ɉvڑN JO> B!B!B!Ɍ3oXt)'Od…e.\@#<<~ݻwӫW/uaÆٳټy3ƍ#--Mу5jf~GIMMU9<'OfҤIX[[=zзo_V\!-[$11'''V^ ٳf͚5qm7n.+V`,[)۳gOzի122SNmϏJ"442-ZiӦ1bnw}=3fCtt4cǎU[M#)FB!B!Bb̙3=z47lٲ&Lꔕ^^^ԪUGRZ5:ڵ+ 6T_|AϞ=7oz{ǎ5Cك G1h quue޼y 2jժPB֭[`3gȈ 6`ll %ݻwgܸqTXQ]vԨQT5kԯ_uuuNCh֬o;wV @:uXM#3B!BD4]QQQT*.]T-+Pn۷oI&XZZbffFz?JYQDŽrʕhbfN!knܸݻwi۶z͛7(}v:t耢(/냯/3f`Μ9yϟݻxxxhfիǡC:6Çiժ:8σ)LsjҤ *p֭BOOOgܸqEj/6l 44#G(exHP!k#;JOOGRI:yiBWNtt4K.IXXk \`M6Ɔe˖fVJ׮]2eJ-1}?~0B7ȝ;w3r'>>^O6m7nT\7 mORn{~m}9>666ilRl``gϊt>m׏I&N͚5qvv櫯آoI1*1vXBv/DxxxC-B!¢4^K^wׯ_g׏_UiӦ0vXZlN:x !DNܻwOc{666kfsvvо{;=JXX;vڵkr+W.=*J߻wO#{pttxx{ƶ4>|`?EGGÇ3|pbbb_9r$*TM6[M#3B6<==5r*BNNʊ~A͍1ch3f nnnϷnݢwޔ)STҥK5Ny)011ZjDGG4F9݁AMJΝCRw^^t JETTTc2i$<<<044ɉvڑa5k9VVV4nܘsiܞ|Ⱦ}U3^sZظqL5k֠( 5k,,777&O gϞ6 Bȝv344WWWKj055% kwj֬kرcy`ĉchhHJP>>ԭ[ŋk_x1e˖aÆ}ѢEL6#FuV;m:u 4 99NڵթlO>E__3gi&ĉGÆ Y~=ڵ+N!x'0{lNƍkʁСk֬aǎ,ZnݺW|lٲM6gQT)*USLaż{lܸNQ|w̙3uѦMLMMׯXZZtR߯qߑӰaHNN&88u/0p@vZ,/J1a„"3p@>s"##ٱcÇkf="&&)ěBWBVCBB+J*UK*Jr5jpRF PVrA/_xyy)zzzoPU}%111rYN~embee*5jPVZQfԨQREQ ?!^g{B!^kOTotFZFgƺij[oܼys 8tUVպ/M4Q'nQMMM%88X]Ύu "-[̳mΝ3'N~eX~oN166&==xNG}װaCuVu2226m뤥? 
Q ѫ֭C%22.]{n^ZlT___͛Ghh(mڴz߹s'._4w\fϞ˗թ wM!T̘13fhl2eJ*6ә>}zѣ=zw… JhhhիWk׮tUc[>}ӧƶڵk{===Sz/Sr:u*SNV o!)FBLMM5a>|???Q믿000 88t 9.WJ >C֬Y122*}ƍ+V/B!^7oޞh֬.\ >>-ZSΝ˾}\GR㳁Ar;: <ݻw?tRJi|z*ZƆ%K~V\ ߿… s 4ٙׯƍ899Ι3gFٸq#"$$$߾6ziҒ;ӌ.^zU1cҤI׷ڻ|@()3o?JSMꪕ*zm VBvXWW[ҭ$^tRk(_(00;8 |30_~>+Wן9sʱ_uV%&&*::Z۷og}zxOZ}^\\h}YY*++զM:Fhzc=&Ţp=uNٱ׿婲>͓O>b߿_8nݺIF?𾶶VgϞUPPPZ~޻'vڥm۶͞\tɩci…u]_l ??ɓ uXWQQ={8FQJNNVII^y-ZHWLLUZZzryyy3fCl;vw[W B@֥K:ڮ֭[fSbb #G0w1___M0A:v$˒yt TRRÇGrv 8PZR~~}ٙ3gvbsS)(=zT<"""^=zخ7{ly{{kٲeueee̙3/ kkk;|ÿK.9gWwwm_mWReee!L3F[lz[9wu:N'&oo'< x@-[>lw%3F/_VeezvZ?WNN4~xj* ] 7_hz'd6uiegg+**J4l0iF o~ڴiSйsg=JOOו+WNǏWxxxd2i̙Zp BBB|roUGVEEO@v]߾}{)<<\PRR|M7NSPP;oSO=cֻϽ{jر*((ШQ[nZv̙r͚5KZR^^֯_yi'O$K0`^{5OШ(+2dL&?]vΟ{[o@m>`Pll-[hKHHP`` &ܹSϟӴ4EFFjܸqzgծ];}'7nF}K^YYYׯ6lؠ[:_h\}^AAA7o^_l_/߿ps]tQN &?7xCտ7oֱc{ר1[/5gEFFkLYYYל9s4yd m6jjjd\wڽ{4}t?_-YDk׮u6!!As5sLub(&&F?4a͘1帤F6nܨjȑuھ83|pٳG޽{m6 :Tt}i޽̙35}t߿~ܭܹs5w\kƌ h^I .^hhN:Z g) 4+jI;Y ܵk>p+;;[:zfsߝ>|XÇ׌3~疔رcUbbb,+h)\)[@m۶Unn,VڋVmFmU/6wb(77Wƍ… ;@rrrؾz!Uh$_~$gq(Lv/_V֭5h ۆ1ZFQ^^^ pGQ6)>>^%|^ǒӧKL&"""t9m\'H<Νk_vإ I:tK!gϞվ}Էo_ MxfTPPǫuֲZZҥWgˡضm[M8QEEE> uaSxtq?_tԩXV95ϗbQqql6mf3Ax< Cb8=7peeeɑ/pj~RuرΓr]tQiiòΝ;7׵!IwəCQ?jJtںbPnn"""ԵkW\ҩ7;'@:t萖.];d7pzTTTT탃u9YV?C9 C}]+9,}s3JNNu ͞=[-Ҏ;nfh᪫USSaM+**JG<W=n>""B555***!CժU+ۗ9sFw;),,L бc$ p[n#..N#F|}_'Nt 㧟~ܹsճgOnZիƙ߳g >c?^mڴQxx>s]pAO<կ_?ٳ! 
%%%)$$D2d>&;挌 f*88X&MREErrr/I`P\\yڼy#???$]pAƍӖ-[w^׿ n~:u~iڹsU]]$b1L9s.\ 6hΝ/~MSRֲe\j^xA۷od?^F$k׮UQQJJJ;lE@p+_}ӵ|r*../]$^Z~ys_^^^z_w~'fԩS;hҥھ}z`bqXx XB ,ݻ-d<55Uo>jɒ%ǏWff222cuѥ}77p7&I֭ϟ+55UFQ{vM6iڴiJIIc=|{YYY|pٳG޽{m6 :T4x`(++KCu9<… 6[HHM-$$ĩcccm>>>oƾlժU6ooo[MMMkkkmնtۀ?c$ʕ+mlϷ/?ad+((l6ݻmlEEE8pm{yyٖ.]q^3ئLr6mIUWW;,F?7A0 }o-_dKMMmP4 .ؼ|=ӧ^j $mذAk֬WTT={JF]gw}'I*((P=0ѣU\\`OpkX,~3'62k77@vڥÇ+<<\ÓhbV:tڄW\$mݺUg)%% :ԷZeee:q|||^M2ٳ\EDDk׮ZrSmoexp-MMMVkF_۶mɓ'멧˗l674 (yyy3fÅ;v4J2ڲeKu.5&Ѩd%''D-Z+&&\GqcرڻwowﮒFC=6j B@regzw﨨(eee)00P=zh>L_^ǎSLL8+++͟l n6lؠ~u 7ChQ+++Kӆ TQQ(}?#3fƎEo߾:w>3uI7l49jذaΝ;uy5Js]VQQQ2L c7Gv[hl-ܹs_*==]A3g#mK9R'h̙mTXX5k裏>RZZ_u=S>>osqeff*##C;vPǎoSO=xm۶MzGuEߗ$YF˫^cǎnݪLm޼Yl;c ͚5K4uTc0dXzoVX hΖdRUU*Iڷo d{[Ww ժիWgŋ%IcǎU[lfonBW^hРA***3<\4j({Z⋚1c^{5)S8ٳڷo{hܹzHzV $_Æ aWպukI=ܣ'|Riiiۦh钮hȑ7KFkG$M6sϞ=%IC3+;||U_eZ+((/Kl6YVYV{wIRxxVZu믿vhq}5kVf/6kzUTTTk<'/IW/"^R_7s%INjժ4b W~~,eٜnN! pB[$]C_]LE>>> .Yvbbb={_WIҙ3g$I ӹsgc/TZZZg>>> Pii:nժ$ʕ+.gVFFrss]jʕNupoo!-tE?:,}@@}Yy=Ӓ^ZvN>bSSLӧ(Iu.]`08{}19^z˪uٛ^\l*FQ:~N8ٳgkѢEڱcMۺrS77\E^uY۷o/G}]TT(""Ϋk׮ g?SFF\Wz뭷-ٳO)wWVV6Gdd׿:ܙ'ͦFV)33S:v$ wAA*o@?%KSN ׿׹3b(22R/5k,uA|}kĈ2e~_VΝ5`FP||Zj)SR|֬Y#cΖ^-_\m۶ٳ%IAAA{[o)00Pm۶u)::Z?JKKpB=n ooo4/PBB5l0kΝ:F%I9֮](L&v4 WQ  L+VhԜ9sooڼy}}P={*++խ[7M0>ĉСC{nmVVZiuQcǎߡzKӑ#Gt})??~h4jƍz5rHNAAA*((PrrMviZj-tֺfڸq֭[jO۶mСC%IVJJxb͚5K999/M||ddpڷoܱ2:uBBBt(>>^%ۿ%wh9˙$B['p_ɐoh B B B n^PPP dXj_# ڌQSXXRSS]js1CbCIo4? 
neo-0.10.0/doc/source/index.rst

.. module:: neo

.. image:: images/neologo.png
    :width: 600 px

Neo is a Python package for working with electrophysiology data in Python, together
with support for reading a wide range of neurophysiology file formats, including
Spike2, NeuroExplorer, AlphaOmega, Axon, Blackrock, Plexon, Tdt, Igor Pro, and
support for writing to a subset of these formats plus non-proprietary formats
including Kwik and HDF5.

The goal of Neo is to improve interoperability between Python tools for analyzing,
visualizing and generating electrophysiology data, by providing a common, shared
object model. In order to be as lightweight a dependency as possible, Neo is
deliberately limited to representation of data, with no functions for data
analysis or visualization.
Neo is used by a number of other software tools, including SpykeViewer_
(data analysis and visualization), Elephant_ (data analysis), the G-node_ suite
(databasing), PyNN_ (simulations), tridesclous_ (spike sorting) and ephyviewer_
(data visualization). OpenElectrophy_ (data analysis and visualization) used an
older version of Neo.

Neo implements a hierarchical data model well adapted to intracellular and
extracellular electrophysiology and EEG data with support for multi-electrodes
(for example tetrodes).

Neo's data objects build on the quantities_ package, which in turn builds on
NumPy by adding support for physical dimensions. Thus Neo objects behave just
like normal NumPy arrays, but with additional metadata, checks for dimensional
consistency and automatic unit conversion.

A project with similar aims but for neuroimaging file formats is `NiBabel`_.

Documentation
-------------

.. toctree::
    :maxdepth: 1

    install
    core
    usecases
    io
    rawio
    examples
    api_reference
    whatisnew
    developers_guide
    io_developers_guide
    authors

License
-------

Neo is free software, distributed under a 3-clause Revised BSD licence
(BSD-3-Clause).

Support
-------

If you have problems installing the software or questions about usage,
documentation or anything else related to Neo, you can post to the
`NeuralEnsemble mailing list`_. If you find a bug, please create a ticket in
our `issue tracker`_.

Contributing
------------

Any feedback is gladly received and highly appreciated! Neo is a community
project, and all contributions are welcomed - see the :doc:`developers_guide`
for more information.

`Source code <https://github.com/NeuralEnsemble/python-neo>`_ is on GitHub.

Citation
--------

.. include:: ../../CITATION.txt

.. _OpenElectrophy: https://github.com/OpenElectrophy/OpenElectrophy
.. _Elephant: http://neuralensemble.org/elephant
.. _G-node: http://www.g-node.org/
.. _Neuroshare: http://neuroshare.org/
.. _SpykeViewer: https://spyke-viewer.readthedocs.io/en/latest/
.. _NiBabel: https://nipy.org/nibabel/
.. _PyNN: http://neuralensemble.org/PyNN
.. _quantities: https://pypi.org/project/quantities/
.. _`NeuralEnsemble mailing list`: https://groups.google.com/forum/#!forum/neuralensemble
.. _`issue tracker`: https://github.com/NeuralEnsemble/python-neo/issues
.. _tridesclous: https://github.com/tridesclous/tridesclous
.. _ephyviewer: https://github.com/NeuralEnsemble/ephyviewer

neo-0.10.0/doc/source/install.rst

************
Installation
************

Neo is a pure Python package, so it should be easy to get it running on any
system.

Installing from the Python Package Index
========================================

Dependencies
------------

* Python_ >= 3.7
* numpy_ >= 1.16.1
* quantities_ >= 0.12.1

You can install the latest published version of Neo and its dependencies
using::

    $ pip install neo

Certain IO modules have additional dependencies. If these are not satisfied,
Neo will still install but the IO module that uses them will fail on loading:

* scipy >= 1.0.0 for NeoMatlabIO
* h5py >= 2.5 for KwikIO
* klusta for KwikIO
* igor >= 0.2 for IgorIO
* nixio >= 1.5 for NixIO
* stfio for StimfitIO
* pillow for TiffIO

These dependencies can be installed by specifying a comma-separated list with
the ``pip install`` command::

    $ pip install neo[nixio,tiffio]

Or when installing a specific version of neo::

    $ pip install neo[nixio,tiffio]==0.9.0

These additional dependencies for IO modules are available:

* igorproio
* kwikio
* neomatlabio
* nixio
* stimfitio
* tiffio

To download and install the package manually, download:

    |neo_github_url|

Then:

.. parsed-literal::

    $ unzip neo-|release|.zip
    $ cd neo-|release|
    $ python setup.py install

Installing from source
======================

To install the latest version of Neo from the Git repository::

    $ git clone git://github.com/NeuralEnsemble/python-neo.git
    $ cd python-neo
    $ python setup.py install

.. _`Python`: https://www.python.org/
.. _`numpy`: https://numpy.org/
.. _`quantities`: https://pypi.org/project/quantities/
.. _`pip`: https://pypi.org/project/pip/
.. _`setuptools`: http://pypi.python.org/pypi/setuptools
.. _Anaconda: https://www.anaconda.com/distribution/

neo-0.10.0/doc/source/io.rst

******
Neo IO
******

.. currentmodule:: neo

Preamble
========

The Neo :mod:`io` module aims to provide an exhaustive way of loading and
saving several widely used data formats in electrophysiology. The more these
heterogeneous formats are supported, the easier it will be to manipulate them
as Neo objects in a similar way. Therefore the IO set of classes proposes a
simple and flexible IO API that fits many format specifications. It is not
only file-oriented, it can also read/write objects from a database.

At the moment, there are 3 families of IO modules:

1. for reading closed manufacturers' formats (Spike2, Plexon, AlphaOmega,
   BlackRock, Axon, ...)
2. for reading(/writing) formats from open source tools (KlustaKwik, Elan,
   WinEdr, WinWcp, ...)
3. for reading/writing Neo structure in neutral formats (HDF5, .mat, ...) but
   with Neo structure inside (NeoHDF5, NeoMatlab, ...)

Combining **1** for reading and **3** for writing is a good example of use:
converting your datasets to a more standard format when you want to
share/collaborate.

Introduction
============

There is an intrinsic structure in the different Neo objects, that could be
seen as a hierarchy with cross-links. See :doc:`core`. The highest level
object is the :class:`Block` object, which is the high level container able
to encapsulate all the others.

A :class:`Block` has therefore a list of :class:`Segment` objects, that can,
in some file formats, be accessed individually. Depending on the file format,
i.e. if it is streamable or not, the whole :class:`Block` may need to be
loaded, but sometimes particular :class:`Segment` objects can be accessed
individually.
Within a :class:`Segment`, the same hierarchical organisation applies. A
:class:`Segment` embeds several objects, such as :class:`SpikeTrain`,
:class:`AnalogSignal`, :class:`IrregularlySampledSignal`, :class:`Epoch`,
:class:`Event` (basically, all the different Neo objects). Depending on the
file format, these objects can sometimes be loaded separately, without the
need to load the whole file. If possible, a file IO therefore provides
distinct methods allowing to load only particular objects that may be present
in the file.

The basic idea of each IO file format is to have, as much as possible,
read/write methods for the individual encapsulated objects, and otherwise to
provide a read/write method that will return the object at the highest level
of hierarchy (by default, a :class:`Block` or a :class:`Segment`).

The :mod:`neo.io` API is a balance between full flexibility for the user (all
:meth:`read_XXX` methods are enabled) and simple, clean and understandable
code for the developer (few :meth:`read_XXX` methods are enabled). This means
that not all IOs offer the full flexibility for partial reading of data files.

One format = one class
======================

The basic syntax is as follows. If you want to load a file format that is
implemented in a generic :class:`MyFormatIO` class::

    >>> from neo.io import MyFormatIO
    >>> reader = MyFormatIO(filename="myfile.dat")

you can replace :class:`MyFormatIO` by any implemented class, see
:ref:`list_of_io`.

Modes
=====

An IO module can be based on a single file, a directory containing files, or
a database. This is described in the :attr:`mode` attribute of the IO
class::

    >>> from neo.io import MyFormatIO
    >>> print(MyFormatIO.mode)
    'file'

For *file* mode the *filename* keyword argument is necessary. For *directory*
mode the *dirname* keyword argument is necessary. For example::

    >>> reader = io.PlexonIO(filename='File_plexon_1.plx')
    >>> reader = io.TdtIO(dirname='aep_05')

Supported objects/readable objects
==================================

To know what types of object are supported by a given IO interface::

    >>> MyFormatIO.supported_objects
    [Segment, AnalogSignal, SpikeTrain, Event, Spike]

Supported objects does not mean objects that you can read directly. For
instance, many formats support :class:`AnalogSignal` but don't allow them to
be loaded directly; rather, to access the :class:`AnalogSignal` objects, you
must read a :class:`Segment`::

    >>> seg = reader.read_segment()
    >>> print(seg.analogsignals)
    >>> print(seg.analogsignals[0])

To get a list of directly readable objects::

    >>> MyFormatIO.readable_objects
    [Segment]

The first element of the previous list is the highest level for reading the
file. This means that the IO has a :meth:`read_segment` method::

    >>> seg = reader.read_segment()
    >>> type(seg)
    neo.core.Segment

All IOs have a read() method that returns a list of :class:`Block` objects
(representing the whole content of the file)::

    >>> bl = reader.read()
    >>> print(bl[0].segments[0])
    neo.core.Segment

Read a time slice of Segment
============================

Some objects support the ``time_slice`` argument in ``read_segment()``.
This is useful to read only a subset of a dataset clipped in time.
By default ``time_slice=None``, meaning load everything.

This reads everything::

    seg = reader.read_segment(time_slice=None)

This reads only the first 5 seconds::

    seg = reader.read_segment(time_slice=(0*pq.s, 5.*pq.s))

.. _section-lazy:

Lazy option and proxy objects
=============================

In some cases you may not want to load everything in memory because it could
be too big. For this scenario, some IOs implement ``lazy=True/False``.
Since neo 0.7, a new lazy system has been added for some IO modules (all IO
classes that inherit from rawio). To know if a class supports lazy mode use
``ClassIO.support_lazy``.

With ``lazy=True`` all data objects (AnalogSignal/SpikeTrain/Event/Epoch) are
replaced by proxy objects (AnalogSignalProxy/SpikeTrainProxy/EventProxy/EpochProxy).
By default (if not specified), ``lazy=False``, i.e. all data is loaded.

These proxy objects contain metadata (name, sampling_rate, id, ...) so they
can be inspected, but they do not contain any array-like data.

All proxy objects contain a ``load()`` method to postpone the real load of
array-like data. Furthermore, the ``load()`` method has a ``time_slice``
argument to load only a slice from the file. In this way the consumption of
memory can be finely controlled.

Here are two examples that read a dataset, extract sections of the signal
based on recorded events, and average the sections.

The first example is without lazy mode, so it consumes more memory::

    lim0, lim1 = -500 * pq.ms, +1500 * pq.ms
    seg = reader.read_segment(lazy=False)
    triggers = seg.events[0]
    sig = seg.analogsignals[0]  # here sig contains the whole recording in memory
    all_sig_chunks = []
    for t in triggers.times:
        t0, t1 = (t + lim0), (t + lim1)
        sig_chunk = sig.time_slice(t0, t1)
        all_sig_chunks.append(sig_chunk)
    apply_my_fancy_average(all_sig_chunks)

The second example uses lazy mode, so it consumes less memory::

    lim0, lim1 = -500*pq.ms, +1500*pq.ms
    seg = reader.read_segment(lazy=True)
    triggers = seg.events[0].load(time_slice=None)  # this loads all triggers in memory
    sigproxy = seg.analogsignals[0]  # this is a proxy
    all_sig_chunks = []
    for t in triggers.times:
        t0, t1 = (t + lim0), (t + lim1)
        sig_chunk = sigproxy.load(time_slice=(t0, t1))  # here real data are loaded
        all_sig_chunks.append(sig_chunk)
    apply_my_fancy_average(all_sig_chunks)

In addition to ``time_slice``, AnalogSignalProxy supports the
``channel_indexes`` argument. This allows loading only a subset of channels.
This is useful where the channel count is very high.

.. TODO: add something about magnitude mode when implemented for all objects.
In this example, we read only three selected channels::

    seg = reader.read_segment(lazy=True)
    anasig = seg.analogsignals[0].load(time_slice=None, channel_indexes=[0, 2, 18])

.. _neo_io_API:

Details of API
==============

The :mod:`neo.io` API is designed to be simple and intuitive:

- each file format has an IO class (for example for Spike2 files you have a
  :class:`Spike2IO` class).
- each IO class inherits from the :class:`BaseIO` class.
- each IO class can read or write directly one or several Neo objects (for
  example :class:`Segment`, :class:`Block`, ...): see the
  :attr:`readable_objects` and :attr:`writable_objects` attributes of the IO
  class.
- each IO class supports part of the :mod:`neo.core` hierarchy, though not
  necessarily all of it (see :attr:`supported_objects`).
- each IO class has a :meth:`read()` method that returns a list of
  :class:`Block` objects. If the IO only supports :class:`Segment` reading,
  the list will contain one block with all segments from the file.
- each IO class that supports writing has a :meth:`write()` method that takes
  as a parameter a list of blocks, a single block or a single segment,
  depending on the IO's :attr:`writable_objects`.
- some IO are able to do a *lazy* load: all metadata (e.g.
  :attr:`sampling_rate`) are read, but not the actual numerical data.
- each IO is able to save and load all required attributes (metadata) of the
  objects it supports.
- each IO can freely add user-defined or manufacturer-defined metadata to the
  :attr:`annotations` attribute of an object.

If you want to develop your own IO
==================================

See :doc:`io_developers_guide` for information on how to implement a new IO.

.. _list_of_io:

List of implemented formats
===========================

.. automodule:: neo.io

Logging
=======

:mod:`neo` uses the standard Python :mod:`logging` module for logging.
All :mod:`neo.io` classes have logging set up by default, although not all
classes produce log messages. The logger name is the same as the fully
qualified class name, e.g. :class:`neo.io.nixio.NixIO`.

By default, only log messages that are critically important for users are
displayed, so users should not disable log messages unless they are sure they
know what they are doing. However, if you wish to disable the messages, you
can do so::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo')
    >>> logger.setLevel(100)

Some IO classes provide additional information that might be interesting to
advanced users. To enable these messages, do the following::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo')
    >>> logger.setLevel(logging.INFO)

It is also possible to log to a file in addition to the terminal::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo')
    >>> handler = logging.FileHandler('filename.log')
    >>> logger.addHandler(handler)

To log only to the file, and not to the terminal::

    >>> import logging
    >>> from neo import logging_handler
    >>>
    >>> logger = logging.getLogger('neo')
    >>> handler = logging.FileHandler('filename.log')
    >>> logger.addHandler(handler)
    >>>
    >>> logging_handler.setLevel(100)

This can also be done for individual IO classes::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo.io.nixio.NixIO')
    >>> handler = logging.FileHandler('filename.log')
    >>> logger.addHandler(handler)

Individual IO classes can have their loggers disabled as well::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo.io.nixio.NixIO')
    >>> logger.setLevel(100)

And more detailed logging messages can be enabled for individual IO classes::

    >>> import logging
    >>>
    >>> logger = logging.getLogger('neo.io.nixio.NixIO')
    >>> logger.setLevel(logging.INFO)

The default handler, which is used to print logs to the command line, is
stored in :attr:`neo.logging_handler`. This example changes how the log text
is displayed::

    >>> import logging
    >>> from neo import logging_handler
    >>>
    >>> formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    >>> logging_handler.setFormatter(formatter)

For more complex logging, please see the documentation for the logging_
module.

.. note:: If you wish to implement more advanced logging as described in the
          documentation for the logging_ module or elsewhere on the internet,
          please do so before calling any :mod:`neo` functions or
          initializing any :mod:`neo` classes. This is because the default
          handler is created when :mod:`neo` is imported, but it is not
          attached to the :mod:`neo` logger until a class that uses logging
          is initialized or a function that uses logging is called. Further,
          the handler is only attached if there are no handlers already
          attached to the root logger or the :mod:`neo` logger, so adding
          your own logger will override the default one. Additional functions
          and/or classes may get logging during bugfix releases, so code
          relying on particular modules not having logging may break at any
          time without warning.

.. _`logging`: https://docs.python.org/3/library/logging.html

neo-0.10.0/doc/source/io_developers_guide.rst

.. _io_dev_guide:

********************
IO developers' guide
********************

.. _io_guiline:

Guidelines for IO implementation
================================

There are two ways to add a new IO module:

* By directly adding a new IO class in a module within :mod:`neo.io`: the
  reader/writer will deal directly with Neo objects.
* By adding a RawIO class in a module within :mod:`neo.rawio`: the reader
  should work with raw buffers from the file and provide some internal
  headers for the scale/units/name/...
  You can then generate an IO module simply by inheriting from your RawIO
  class and from :class:`neo.io.BaseFromRaw`.

For read-only classes, we encourage you to write a :class:`RawIO` class
because it allows slice reading, and is generally much quicker and easier
(although only for reading) than implementing a full IO class. For read/write
classes you can mix the two levels: neo.rawio for reading and neo.io for
writing.

Recipe to develop an IO module for a new data format:

1. Fully understand the object model. See :doc:`core`. If in doubt ask the
   `mailing list`_.
2. Fully understand :mod:`neo.rawio.examplerawio`. It is a fake IO to explain
   the API. If in doubt ask the list.
3. Copy/paste ``examplerawio.py`` and choose clear file and class names for
   your IO.
4. Implement all methods that **raise(NotImplementedError)** in
   :mod:`neo.rawio.baserawio`. Return None when the object is not supported
   (spike/waveform).
5. Write good docstrings. List dependencies, including minimum version
   numbers.
6. Add your class to :mod:`neo.rawio.__init__`. Keep imports inside
   ``try/except`` for dependency reasons.
7. Create a class in :file:`neo/io/`.
8. Add your class to :mod:`neo.io.__init__`. Keep imports inside
   ``try/except`` for dependency reasons.
9. Create an account at https://gin.g-node.org and deposit files in
   :file:`NeuralEnsemble/ephy_testing_data`.
10. Write tests in :file:`neo/rawio/test_xxxxxrawio.py`. You must at least
    pass the standard tests (inherited from :class:`BaseTestRawIO`). See
    :file:`test_examplerawio.py`
11. Write a similar test in :file:`neo.tests/iotests/test_xxxxxio.py`. See
    :file:`test_exampleio.py`
12. Make a pull request when all tests pass.

Miscellaneous
=============

* If your IO supports several versions of a format (like ABF1, ABF2), upload
  to the gin.g-node.org test file repository all file versions possible (for
  test coverage).
* :py:func:`neo.core.Block.create_many_to_one_relationship` offers a utility to complete the hierachy when all one-to-many relationships have been created. * In the docstring, explain where you obtained the file format specification if it is a closed one. * If your IO is based on a database mapper, keep in mind that the returned object MUST be detached, because this object can be written to another url for copying. Tests ===== :py:class:`neo.rawio.tests.common_rawio_test.BaseTestRawIO` and :py:class:`neo.test.io.commun_io_test.BaseTestIO` provide standard tests. To use these you need to upload some sample data files at `gin-gnode`_. They will be publicly accessible for testing Neo. These tests: * check the compliance with the schema: hierachy, attribute types, ... * For IO modules able to both write and read data, it compares a generated dataset with the same data after a write/read cycle. The test scripts download all files from `gin-gnode`_ and stores them locally in ``/tmp/files_for_tests/``. Subsequent test runs use the previously downloaded files, rather than trying to download them each time. Each test must have at least one class that inherits ``BaseTestRawIO`` and that has 3 attributes: * ``rawioclass``: the class * ``entities_to_test``: a list of files (or directories) to be tested one by one * ``files_to_download``: a list of files to download (sometimes bigger than ``entities_to_test``) Here is an example test script taken from the distribution: :file:`test_axonrawio.py`: .. literalinclude:: ../../neo/test/rawiotest/test_axonrawio.py Logging ======= All IO classes by default have logging using the standard :mod:`logging` module: already set up. The logger name is the same as the fully qualified class name, e.g. :class:`neo.io.nixio.NixIO`. The :attr:`class.logger` attribute holds the logger for easy access. 
There are generally 3 types of situations in which an IO class should use a logger:

* Recoverable errors with the file that the users need to be notified about. In this case, please use :meth:`logger.warning` or :meth:`logger.error`. If there is an exception associated with the issue, you can use :meth:`logger.exception` in the exception handler to automatically include a backtrace with the log. By default, all users will see messages at this level, so please restrict it to problems the user absolutely needs to know about.
* Informational messages that advanced users might want to see in order to get some insight into the file. In this case, please use :meth:`logger.info`.
* Messages useful to developers for fixing problems with the IO class. In this case, please use :meth:`logger.debug`.

A log handler is automatically added to :mod:`neo`, so please do not add your own handler. Please use the :attr:`class.logger` attribute for accessing the logger inside the class rather than :meth:`logging.getLogger`. Please do not log directly to the root logger (e.g. :meth:`logging.warning`); use the class's logger instead (:meth:`class.logger.warning`).

In the tests for the IO class, if you intentionally test broken files, please disable logging by setting the logging level to `100`.

ExampleIO
=========

.. autoclass:: neo.rawio.ExampleRawIO

.. autoclass:: neo.io.ExampleIO

Here are the entire files:

.. literalinclude:: ../../neo/rawio/examplerawio.py

.. literalinclude:: ../../neo/io/exampleio.py


.. _`mailing list`: https://groups.google.com/forum/#!forum/neuralensemble
.. _gin-gnode: https://gin.g-node.org/NeuralEnsemble/ephy_testing_data
neo-0.10.0/doc/source/rawio.rst0000644000076700000240000002071714066374330016713 0ustar  andrewstaff00000000000000*********
Neo RawIO
*********

.. currentmodule:: neo.rawio

.. _neo_rawio_API:

Neo RawIO API
=============

For performance and memory consumption reasons a new layer has been added to Neo.
In brief:

* **neo.io** is the user-oriented read/write layer. Reading consists of getting a tree of Neo objects from a data source (file, url, or directory). When reading, all Neo objects are correctly scaled to the correct units. Writing consists of making a set of Neo objects persistent in a file format.
* **neo.rawio** is a low-level layer for reading data only. Reading consists of getting NumPy buffers (often int16/int64) of signals/spikes/events. Scaling to real values (microV, times, ...) is done in a second step. Here the underlying objects must be consistent across Blocks and Segments for a given data source.

The neo.rawio API has been added for developers. neo.rawio is close to what a C API for reading data would look like, but in Python/NumPy. Not all IOs are implemented in :mod:`neo.rawio`, but all classes implemented in :mod:`neo.rawio` are also available in :mod:`neo.io`.

Possible uses of the :mod:`neo.rawio` API are:

* fast reading of signal chunks as int16, with the scaling to physical units (uV) done on a GPU while zooming. This should improve bandwidth from disk to RAM and from RAM to GPU memory.
* loading only small chunks of data for heavy computations. For instance the spike sorting module tridesclous_ does this.

The :mod:`neo.rawio` API is less flexible than :mod:`neo.io` and has some limitations:

* it is read-only
* AnalogSignals must have the same characteristics across all Blocks and Segments: ``sampling_rate``, ``shape[1]``, ``dtype``
* AnalogSignals should all have the same value of ``sampling_rate``, otherwise they won't be read at the same time.
* each Unit must have a SpikeTrain in every Block and Segment, even if empty
* Epoch and Event are processed the same way (with ``durations=None`` for Event).
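The second use above — loading only small chunks of data for heavy computations — follows a pattern that can be sketched without neo at all: iterate over index windows and process each raw chunk independently. Here ``iter_chunks`` and ``get_chunk`` are hypothetical helpers (``get_chunk`` stands in for a reader's ``get_analogsignal_chunk``), shown only to illustrate the pattern:

```python
def iter_chunks(n_samples, chunk_size):
    """Yield (i_start, i_stop) index windows covering n_samples."""
    for i_start in range(0, n_samples, chunk_size):
        yield i_start, min(i_start + chunk_size, n_samples)


def get_chunk(i_start, i_stop):
    """Hypothetical stand-in for reader.get_analogsignal_chunk();
    returns fake raw samples for the window."""
    return list(range(i_start, i_stop))


# process a long recording chunk by chunk, never holding it all in memory
peak = max(max(get_chunk(i0, i1)) for i0, i1 in iter_chunks(10000, 1024))
```

A real loop would pass ``i_start``/``i_stop`` to the reader and rescale each raw chunk before processing it.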
For an intuitive comparison of :mod:`neo.io` and :mod:`neo.rawio` see:

* :file:`example/read_file_neo_io.py`
* :file:`example/read_file_neo_rawio.py`

One anticipated benefit of the :mod:`neo.rawio` API is that a developer should be able to code a new RawIO class with little knowledge of the Neo tree of objects or of the :mod:`quantities` package.

Basic usage
===========

First create a reader from a class::

    >>> from neo.rawio import PlexonRawIO
    >>> reader = PlexonRawIO(filename='File_plexon_3.plx')

Then browse the internal header and display information::

    >>> reader.parse_header()
    >>> print(reader)
    PlexonRawIO: File_plexon_3.plx
    nb_block: 1
    nb_segment: [1]
    signal_channels: [V1]
    spike_channels: [Wspk1u, Wspk2u, Wspk4u, Wspk5u ... Wspk29u Wspk30u Wspk31u Wspk32u]
    event_channels: []

You get the number of blocks and segments per block. You have information about channels: **signal_channels**, **spike_channels**, **event_channels**.

All this information is internally available in the *header* dict::

    >>> for k, v in reader.header.items():
    ...    print(k, v)
    signal_channels [('V1', 0, 1000., 'int16', '', 2.44140625, 0., 0)]
    event_channels []
    nb_segment [1]
    nb_block 1
    spike_channels [('Wspk1u', 'ch1#0', '', 0.00146484, 0., 0, 30000.)
     ('Wspk2u', 'ch2#0', '', 0.00146484, 0., 0, 30000.)
    ...
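Each row printed for ``signal_channels`` above is an entry of a NumPy structured array bundling per-channel metadata. As a plain-Python sketch of what one row carries (the field names here are assumed, not taken from the output above, so treat them as illustrative):

```python
# Assumed field names for one signal_channels entry (illustrative only)
fields = ('name', 'id', 'sampling_rate', 'dtype', 'units', 'gain', 'offset', 'stream_id')
row = ('V1', 0, 1000., 'int16', '', 2.44140625, 0., 0)  # the channel printed above
channel = dict(zip(fields, row))

# raw integer samples are converted to physical values with gain and offset
raw = [-2, 0, 2]
physical = [r * channel['gain'] + channel['offset'] for r in raw]
```

This gain/offset conversion is, in essence, what ``rescale_signal_raw_to_float`` performs on whole chunks of raw data.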
Read signal chunks of data and scale them::

    >>> channel_indexes = None  # could be channel_indexes = [0]
    >>> raw_sigs = reader.get_analogsignal_chunk(block_index=0, seg_index=0,
                        i_start=1024, i_stop=2048, channel_indexes=channel_indexes)
    >>> float_sigs = reader.rescale_signal_raw_to_float(raw_sigs, dtype='float64')
    >>> sampling_rate = reader.get_signal_sampling_rate()
    >>> t_start = reader.get_signal_t_start(block_index=0, seg_index=0)
    >>> units = reader.header['signal_channels'][0]['units']
    >>> print(raw_sigs.shape, raw_sigs.dtype)
    (1024, 1) int16
    >>> print(float_sigs.shape, float_sigs.dtype)
    (1024, 1) float64
    >>> print(sampling_rate, t_start, units)
    1000.0 0.0 V

There are 3 ways to select a subset of channels: by index (0-based), by id or by name. Selection by index is unambiguous (0 to n-1, inclusive), whereas for some IOs channel names (and sometimes channel ids) are not guaranteed to be unique. In such cases, selecting by name or id may raise an error.

A subset of channels passed to ``get_analogsignal_chunk``, ``get_signal_size``, or ``get_signal_t_start`` has the additional restriction that all the selected channels must have the same ``t_start`` and signal size. Such subsets of channels may be available in specific RawIOs by using the ``get_group_signal_channel_indexes`` method, if the RawIO has defined separate ``group_id``\s for each group with those common characteristics.

Example with BlackrockRawIO for the file FileSpec2.3001::

    >>> raw_sigs = reader.get_analogsignal_chunk(channel_indexes=None)  # take all channels
    >>> raw_sigs1 = reader.get_analogsignal_chunk(channel_indexes=[0, 2, 4])  # take channels 0, 2 and 4
    >>> raw_sigs2 = reader.get_analogsignal_chunk(channel_ids=[1, 3, 5])  # the same channels, by their ids (1-based)
    >>> raw_sigs3 = reader.get_analogsignal_chunk(channel_names=['chan1', 'chan3', 'chan5'])  # the same channels, by their names
    >>> print(raw_sigs1.shape[1], raw_sigs2.shape[1], raw_sigs3.shape[1])
    3, 3, 3

Inspect the unit channels. Each unit channel gives one SpikeTrain for each Segment.
Note that for many formats a physical channel can have several units after spike sorting, so the number of units can be larger than the number of physical or signal channels::

    >>> nb_unit = reader.spike_channels_count()
    >>> print('nb_unit', nb_unit)
    nb_unit 30
    >>> for unit_index in range(nb_unit):
    ...     nb_spike = reader.spike_count(block_index=0, seg_index=0, unit_index=unit_index)
    ...     print('unit_index', unit_index, 'nb_spike', nb_spike)
    unit_index 0 nb_spike 701
    unit_index 1 nb_spike 716
    unit_index 2 nb_spike 69
    unit_index 3 nb_spike 12
    unit_index 4 nb_spike 95
    unit_index 5 nb_spike 37
    unit_index 6 nb_spike 25
    unit_index 7 nb_spike 15
    unit_index 8 nb_spike 33
    ...

Get spike timestamps only between 0 and 10 seconds and convert them to spike times::

    >>> spike_timestamps = reader.spike_timestamps(block_index=0, seg_index=0, unit_index=0,
                        t_start=0., t_stop=10.)
    >>> print(spike_timestamps.shape, spike_timestamps.dtype, spike_timestamps[:5])
    (424,) int64 [  90  420  708 1020 1310]
    >>> spike_times = reader.rescale_spike_timestamp(spike_timestamps, dtype='float64')
    >>> print(spike_times.shape, spike_times.dtype, spike_times[:5])
    (424,) float64 [ 0.003       0.014       0.0236      0.034       0.04366667]

Get spike waveforms between 0 and 10 s::

    >>> raw_waveforms = reader.spike_raw_waveforms(block_index=0, seg_index=0, unit_index=0,
                        t_start=0., t_stop=10.)
    >>> print(raw_waveforms.shape, raw_waveforms.dtype, raw_waveforms[0, 0, :4])
    (424, 1, 64) int16 [-449 -206   34   40]
    >>> float_waveforms = reader.rescale_waveforms_to_float(raw_waveforms, dtype='float32', unit_index=0)
    >>> print(float_waveforms.shape, float_waveforms.dtype, float_waveforms[0, 0, :4])
    (424, 1, 64) float32 [-0.65771484 -0.30175781  0.04980469  0.05859375]

Count events per channel::

    >>> reader = PlexonRawIO(filename='File_plexon_2.plx')
    >>> reader.parse_header()
    >>> nb_event_channel = reader.event_channels_count()
    >>> print('nb_event_channel', nb_event_channel)
    nb_event_channel 28
    >>> for chan_index in range(nb_event_channel):
    ...     nb_event = reader.event_count(block_index=0, seg_index=0, event_channel_index=chan_index)
    ...     print('chan_index', chan_index, 'nb_event', nb_event)
    chan_index 0 nb_event 1
    chan_index 1 nb_event 0
    chan_index 2 nb_event 0
    chan_index 3 nb_event 0
    ...

Read event timestamps and times for event_channel_index=0, with no time limits (t_start=None, t_stop=None)::

    >>> ev_timestamps, ev_durations, ev_labels = reader.event_timestamps(block_index=0, seg_index=0,
                        event_channel_index=0, t_start=None, t_stop=None)
    >>> print(ev_timestamps, ev_durations, ev_labels)
    [1268] None ['0']
    >>> ev_times = reader.rescale_event_timestamp(ev_timestamps, dtype='float64')
    >>> print(ev_times)
    [ 0.0317]

List of implemented formats
===========================

.. automodule:: neo.rawio


.. _tridesclous: https://github.com/tridesclous/tridesclous
neo-0.10.0/doc/source/releases/0000755000076700000240000000000014077757142016644 5ustar  andrewstaff00000000000000neo-0.10.0/doc/source/releases/0.10.0.rst0000644000076700000240000000562214077743075020117 0ustar  andrewstaff00000000000000========================
Neo 0.10.0 release notes
========================

27th July 2021

New IO modules
--------------

.. currentmodule:: neo.io

* :class:`CedIO` - an alternative to :class:`Spike2IO`
* :class:`AxonaIO`
* :class:`OpenEphysIO` - handles the binary format
* :class:`PhyIO`
* :class:`SpikeGLXIO`
* :class:`NWBIO` - support for a subset of the `NWB:N`_ format
* :class:`MaxwellIO`

Bug fixes and improvements in IO modules
----------------------------------------

* :class:`NeuralynxIO` was refactored and now supports new file versions (neuraview) and single file loading.
* Legacy versions of old IOs were removed for NeuralynxIO (neuralynxio_v1), BlackrockIO, NeoHdf5IO.
* :class:`NixIOfr` now supports array annotations of :class:`AnalogSignal` objects.
* :class:`NSDFIO` was removed because we can no longer maintain it.
* All IOs now accept :class:`pathlib.Path` objects.
* The IO modules of this release have been tested with version 0.1.0 of the `ephy_testing_data`_. Removal of Unit and ChannelIndex -------------------------------- .. currentmodule:: neo.core In version 0.9.0 :class:`Group` and :class:`ChannelView` were introduced, replacing :class:`Unit` and :class:`ChannelIndex`, which were deprecated. In this version the deprecated :class:`Unit` and :class:`ChannelIndex` are removed and only the new :class:`Group` and :class:`ChannelView` objects are available. Supported Python and NumPy versions ----------------------------------- We no longer support Python 3.6, nor versions of NumPy older than 1.16. Other new or modified features ------------------------------ * Lists of :class:`SpikeTrain` objects can now also be created from two arrays: one containing spike times and the other unit identities of the times (:class:`SpikeTrainList`). * Object identity is now preserved when using utility :func:`time_slice()` methods. See all `pull requests`_ included in this release and the `list of closed issues`_. RawIO modules ------------- Internal refactoring of the neo.rawio module regarding channel grouping. Now the concept of a signal stream is used to handle channel groups for signals. This enhances the way the :attr:`annotation` and :attr:`array_annotation` attributes are rendered at neo.io level. Acknowledgements ---------------- Thanks to Samuel Garcia, Julia Sprenger, Peter N. Steinmetz, Andrew Davison, Steffen Bürgers, Regimantas Jurkus, Alessio Buccino, Shashwat Sridhar, Jeffrey Gill, Etienne Combrisson, Ben Dichter and Elodie Legouée for their contributions to this release. .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.10.0+is%3Aclosed .. _`pull requests`: https://github.com/NeuralEnsemble/python-neo/pulls?q=is%3Apr+is%3Aclosed+merged%3A%3E2020-11-10+milestone%3A0.10.0 .. _`ephy_testing_data`: https://gin.g-node.org/NeuralEnsemble/ephy_testing_data/src/v0.1.0 .. 
_`NWB:N`: https://www.nwb.org/nwb-neurophysiology/
neo-0.10.0/doc/source/releases/0.5.0.rst0000644000076700000240000001360713560316630020032 0ustar  andrewstaff00000000000000=======================
Neo 0.5.0 release notes
=======================

22nd March 2017

For Neo 0.5, we have taken the opportunity to simplify the Neo object model.
Although this will require an initial time investment for anyone who has written code with an earlier version of Neo, the benefits will be greater simplicity, both in your own code and within the Neo code base, which should allow us to move more quickly in fixing bugs, improving performance and adding new features. More detail on these changes follows: Merging of "single-value" and "array" versions of data classes ============================================================== In previous versions of Neo, we had :class:`AnalogSignal` for one-dimensional (single channel) signals, and :class:`AnalogSignalArray` for two-dimensional (multi-channel) signals. In Neo 0.5.0, these have been merged under the name :class:`AnalogSignal`. :class:`AnalogSignal` has the same behaviour as the old :class:`AnalogSignalArray`. It is still possible to create an :class:`AnalogSignal` from a one-dimensional array, but this will be converted to an array with shape `(n, 1)`, e.g.: .. code-block:: python >>> signal = neo.AnalogSignal([0.0, 0.1, 0.2, 0.5, 0.6, 0.5, 0.4, 0.3, 0.0], ... sampling_rate=10*kHz, ... units=nA) >>> signal.shape (9, 1) Multi-channel arrays are created as before, but using :class:`AnalogSignal` instead of :class:`AnalogSignalArray`: .. code-block:: python >>> signal = neo.AnalogSignal([[0.0, 0.1, 0.2, 0.5, 0.6, 0.5, 0.4, 0.3, 0.0], ... [0.0, 0.2, 0.4, 0.7, 0.9, 0.8, 0.7, 0.6, 0.3]], ... sampling_rate=10*kHz, ... units=nA) >>> signal.shape (9, 2) Similarly, the :class:`Epoch` and :class:`EpochArray` classes have been merged into an array-valued class :class:`Epoch`, ditto for :class:`Event` and :class:`EventArray`, and the :class:`Spike` class, whose main function was to contain the waveform data for an individual spike, has been suppressed; waveform data are now available as the :attr:`waveforms` attribute of the :class:`SpikeTrain` class. 
Recording channels
==================

As a consequence of the removal of "single-value" data classes, information on recording channels and the relationship between analog signals and spike trains is also stored differently. In Neo 0.5, we have introduced a new class, :class:`ChannelIndex`, which replaces both :class:`RecordingChannel` and :class:`RecordingChannelGroup`.

In older versions of Neo, a :class:`RecordingChannel` object held metadata about a logical recording channel (a name and/or integer index) together with references to one or more :class:`AnalogSignal`\s recorded on that channel at different points in time (different :class:`Segment`\s); redundantly, the :class:`AnalogSignal` also had a :attr:`channel_index` attribute, which could be used in addition to or instead of creating a :class:`RecordingChannel`. Metadata about :class:`AnalogSignalArray`\s could be contained in a :class:`RecordingChannelGroup` in a similar way, i.e. :class:`RecordingChannelGroup` functioned as an array-valued version of :class:`RecordingChannel`, but :class:`RecordingChannelGroup` could also be used to group together individual :class:`RecordingChannel` objects.

With Neo 0.5, information about the channel names and ids of an :class:`AnalogSignal` is contained in a :class:`ChannelIndex`, e.g.:

.. code-block:: python

    >>> signal = neo.AnalogSignal([[0.0, 0.1, 0.2, 0.5, 0.6, 0.5, 0.4, 0.3, 0.0],
    ...                            [0.0, 0.2, 0.4, 0.7, 0.9, 0.8, 0.7, 0.6, 0.3],
    ...                            [0.0, 0.1, 0.3, 0.6, 0.8, 0.7, 0.6, 0.5, 0.3]],
    ...                           sampling_rate=10*kHz,
    ...                           units=nA)
    >>> channels = neo.ChannelIndex(index=[0, 1, 2],
    ...                             channel_names=["chan1", "chan2", "chan3"])
    >>> signal.channel_index = channels

In this use, it replaces :class:`RecordingChannel`.

:class:`ChannelIndex` may also be used to group together a subset of the channels of a multi-channel signal, for example:

..
code-block:: python >>> channel_group = neo.ChannelIndex(index=[0, 2]) >>> channel_group.analogsignals.append(signal) >>> unit = neo.Unit() # will contain the spike train recorded from channels 0 and 2. >>> unit.channel_index = channel_group Checklist for updating code from 0.3/0.4 to 0.5 =============================================== To update your code from Neo 0.3/0.4 to 0.5, run through the following checklist: 1. Change all usages of :class:`AnalogSignalArray` to :class:`AnalogSignal`. 2. Change all usages of :class:`EpochArray` to :class:`Epoch`. 3. Change all usages of :class:`EventArray` to :class:`Event`. 4. Where you have a list of (single channel) :class:`AnalogSignal`\s all of the same length, consider converting them to a single, multi-channel :class:`AnalogSignal`. 5. Replace :class:`RecordingChannel` and :class:`RecordingChannelGroup` with :class:`ChannelIndex`. .. note:: in points 1-3, the data structure is still an array, it just has a shorter name. Other changes ============= * added :class:`NixIO` (`about the NIX format`_) * added :class:`IgorIO` * added :class:`NestIO` (for data files produced by the `NEST simulator`_) * :class:`NeoHdf5IO` is now read-only. It will read data files produced by earlier versions of Neo, but another HDF5-based IO, e.g. :class:`NixIO`, should be used for writing data. * many fixes/improvements to existing IO modules. All IO modules should now work with Python 3. .. https://github.com/NeuralEnsemble/python-neo/issues?utf8=✓&q=is%3Aissue%20is%3Aclosed%20created%3A%3E2014-02-01%20 .. _`about the NIX format`: https://github.com/G-Node/nix/wiki .. 
_`NEST simulator`: https://www.nest-simulator.org/ neo-0.10.0/doc/source/releases/0.5.1.rst0000644000076700000240000000225513507452453020035 0ustar andrewstaff00000000000000======================= Neo 0.5.1 release notes ======================= 4th May 2017 * Fixes to :class:`AxonIO` (thanks to @erikli and @cjfraz) and :class:`NeuroExplorerIO` (thanks to Mark Hollenbeck) * Fixes to pickling of :class:`Epoch` and :class:`Event` objects (thanks to Hélissande Fragnaud) * Added methods :meth:`as_array()` and :meth:`as_quantity()` to Neo data objects to simplify the common tasks of turning a Neo data object back into a plain Numpy array * Added :class:`NeuralynxIO`, which reads standard Neuralynx output files in ncs, nev, nse and ntt format (thanks to Julia Sprenger and Carlos Canova). * Added the :attr:`extras_require` field to setup.py, to clearly document the requirements for different io modules. For example, this allows you to run :command:`pip install neo[neomatlabio]` and have the extra dependency needed for the :mod:`neomatlabio` module (scipy in this case) be automatically installed. * Fixed a bug where slicing an :class:`AnalogSignal` did not modify the linked :class:`ChannelIndex`. (Full `list of closed issues`_) .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.5.1+is%3Aclosed neo-0.10.0/doc/source/releases/0.5.2.rst0000644000076700000240000000151313507452453020032 0ustar andrewstaff00000000000000======================= Neo 0.5.2 release notes ======================= 27th September 2017 * Removed support for Python 2.6 * Pickling :class:`AnalogSignal` and :class:`SpikeTrain` now preserves parent objects * Added NSDFIO, which reads and writes NSDF files * Fixes and improvements to PlexonIO, NixIO, BlackrockIO, NeuralynxIO, IgorIO, ElanIO, MicromedIO, TdtIO and others. 
Thanks to Michael Denker, Achilleas Koutsou, Mieszko Grodzicki, Samuel Garcia, Julia Sprenger, Andrew Davison, Rohan Shah, Richard C. Gerkin, Mikkel Elle Lepperød, Joffrey Gonin, Hélissande Fragnaud, Elodie Legouée and Matthieu Sénoville for their contributions to this release.

(Full `list of closed issues`_)

.. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.5.2+is%3Aclosed
neo-0.10.0/doc/source/releases/0.6.0.rst0000644000076700000240000000366513560316630020036 0ustar  andrewstaff00000000000000=======================
Neo 0.6.0 release notes
=======================

23rd March 2018

Major changes:

* Introduced :mod:`neo.rawio`: a low-level reader for various data formats
* Added continuous integration for all IOs using CircleCI (previously only :mod:`neo.core` was tested, using Travis CI)
* Moved the test file repository to https://gin.g-node.org/NeuralEnsemble/ephy_testing_data - this makes it easier for people to contribute new files for testing.

Other important changes:

* Added :func:`time_index()` and :func:`splice()` methods to :class:`AnalogSignal`
* IO fixes and improvements: Blackrock, TDT, Axon, Spike2, Brainvision, Neuralynx
* Implemented `__deepcopy__` for all data classes
* New IO: BCI2000
* Lots of PEP8 fixes!
* Implemented `__getitem__` for :class:`Epoch` * Removed "cascade" support from all IOs * Removed lazy loading except for IOs based on rawio * Marked lazy option as deprecated * Added :func:`time_slice` in read_segment() for IOs based on rawio * Made :attr:`SpikeTrain.times` return a :class:`Quantity` instead of a :class:`SpikeTrain` * Raise a :class:`ValueError` if ``t_stop`` is earlier than ``t_start`` when creating an empty :class:`SpikeTrain` * Changed filter behaviour to return all objects if no filter parameters are specified * Fix pickling/unpickling of :class:`Events` Deprecated IO classes: * :class:`KlustaKwikIO` (use :class:`KwikIO` instead) * :class:`PyNNTextIO`, :class:`PyNNNumpyIO` (Full `list of closed issues`_) Thanks to Björn Müller, Andrew Davison, Achilleas Koutsou, Chadwick Boulay, Julia Sprenger, Matthieu Senoville, Michael Denker and especially Samuel Garcia for their contributions to this release. .. note:: version 0.6.1 was released immediately following 0.6.0 to fix a minor problem with the documentation. .. 
_`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.6.0+is%3Aclosed neo-0.10.0/doc/source/releases/0.7.0.rst0000644000076700000240000000203214017455620020023 0ustar andrewstaff00000000000000======================= Neo 0.7.0 release notes ======================= 26th November 2018 Main added features: * array annotations Other features: * `Event.to_epoch()` * Change the behaviour of `SpikeTrain.__add__` and `SpikeTrain.__sub__` * bug fix for `Epoch.time_slice()` New IO classes: * RawMCSRawIO (raw multi channel system file format) * OpenEphys format * Intanrawio (both RHD and RHS) * AxographIO Many bug fixes and improvements in IO: * AxonIO * WinWCPIO * NixIO * ElphyIO * Spike2IO * NeoMatlab * NeuralynxIO * BlackrockIO (V2.3) * NixIO (rewritten) Removed: * PyNNIO (Full `list of closed issues`_) Thanks to Achilleas Koutsou, Andrew Davison, Björn Müller, Chadwick Boulay, erikli, Jeffrey Gill, Julia Sprenger, Lucas (lkoelman), Mark Histed, Michael Denker, Mike Sintsov, Samuel Garcia, Scott W Harden and William Hart for their contributions to this release. .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.7.0+is%3Aclosed neo-0.10.0/doc/source/releases/0.7.1.rst0000644000076700000240000000063313560316630020030 0ustar andrewstaff00000000000000======================= Neo 0.7.1 release notes ======================= 13th December 2019 * Add alias `duplicate_with_new_array` for `duplicate_with_new_data`, for backwards compatibility * Update `NeuroshareapiIO` and `NeurosharectypesIO` to Neo 0.6 * Create basic and compatibility test for `nixio_fr` Thanks to Chek Yin Choi, Andrew Davison and Julia Sprenger for their contributions to this release. 
neo-0.10.0/doc/source/releases/0.7.2.rst0000644000076700000240000000042113560316630020024 0ustar andrewstaff00000000000000======================= Neo 0.7.2 release notes ======================= 10th July 2019 New RawIO class: * AxographRawIO Bug fixes: * Various CI fixes Thanks to Andrew Davison, Samuel Garcia, Jeffrey Gill and Julia Sprenger for their contributions to this release. neo-0.10.0/doc/source/releases/0.8.0.rst0000644000076700000240000001077714077743075020055 0ustar andrewstaff00000000000000======================= Neo 0.8.0 release notes ======================= 30th September 2019 Lazy loading ------------ Neo 0.8 sees a major new feature, the ability to selectively load only parts of a data file (for supported file formats) into memory, for example only a subset of the signals in a segment, a subset of the channels in a signal, or even only a certain time slice of a given signal. This can lead to major savings in time and memory consumption, or can allow files that are too large to be loaded into memory in their entirety to be processed one section at a time. Here is an example, loading only certain sections of a signal:: lim0, lim1 = -500*pq.ms, +1500*pq.ms seg = reader.read_segment(lazy=True) # this loads only the segment structure and metadata # but all data objects are replaced by proxies triggers = seg.events[0].load() # this loads all triggers in memory sigproxy = seg.analogsignals[0] # this is a proxy object all_sig_chunks = [] for t in triggers.times: t0, t1 = (t + lim0), (t + lim1) sig_chunk = sigproxy.load(time_slice=(t0, t1)) # here the actual data are loaded all_sig_chunks.append(sig_chunk) Not all IO modules support lazy loading (but many do). To know whether a given IO class supports lazy mode, use ``SomeIO.support_lazy``. For more details, see :ref:`section-lazy`. 
Image sequence data ------------------- Another new feature, although one that is more experimental, is support for image sequence data, coming from calcium imaging of neuronal activity, voltage-sensitive dye imaging, etc. The new :class:`ImageSequence` object contains a sequence of image frames as a 3D array. As with other Neo data objects, the object also holds metadata such as the sampling rate/frame duration and the spatial scale (physical size represented by one pixel). Three new IO modules, :class:`TiffIO`, :class:`AsciiImageIO` and :class:`BlkIO`, allow reading such data from file, e.g.:: from quantities import Hz, mm, dimensionless from neo.io import TiffIO data = TiffIO(data_path).read(units=dimensionless, sampling_rate=25 * Hz, spatial_scale=0.05 * mm) images = data[0].segments[0].imagesequences[0] :class:`ImageSequence` is a subclass of the NumPy :class:`ndarray`, and so can be manipulated in the same ways, e.g.:: images /= images.max() background = np.mean(images, axis=0) preprocessed_images = images - background Since a common operation with image sequences is to extract time series from regions of interest, Neo also provides various region-of-interest classes which perform this operation, returning an :class:`AnalogSignal` object:: roi = CircularRegionOfInterest(x=50, y=50, radius=10) signal = preprocessed_images.signal_from_region(roi)[0] Other new features ------------------ * new neo.utils module * Numpy 1.16+ compatibility * :meth:`time_shift()` method for :class:`Epoch`/:class:`Event`/:class:`AnalogSignal` * :meth:`time_slice()` method is now more robust * dropped support for Python 3.4 See all `pull requests`_ included in this release and the `list of closed issues`_. 
Bug fixes and improvements in IO modules ---------------------------------------- * Blackrock * Neuroshare * NixIOFr * NixIO (array annotation + 1d coordinates) * AsciiSignal (fix + json metadata + IrregularlySampledSignals + write proxy) * Spike2 (group same sampling rate) * Brainvision * NeuralynxIO .. Warning:: Some IOs (based on rawio) when loading can choose to split each channel into its own 1-channel :class:`AnalogSignal` or to group them in a multi-channel :class:`AnalogSignal`. The default behavior (either ``signal_group_mode='split-all'`` or ``'group-same-units'``) is not the same for all IOs for backwards compatibility reasons. In the next release, all IOs will have the default ``signal_group_mode='group-same-units'`` Acknowledgements ---------------- Thanks to Achileas Koutsou, Chek Yin Choi, Richard C. Gerkin, Hugo van Kemenade, Alexander Kleinjohann, Björn Müller, Jeffrey Gill, Christian Kothe, Mike Sintsov, @rishidhingra, Michael Denker, Julia Sprenger, Corentin Fragnaud, Andrew Davison and Samuel Garcia for their contributions to this release. .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.8.0+is%3Aclosed .. _`pull requests`: https://github.com/NeuralEnsemble/python-neo/pulls?q=is%3Apr+is%3Aclosed+merged%3A%3E2018-11-27+milestone%3A0.8.0neo-0.10.0/doc/source/releases/0.8.0.rst.orig0000644000076700000240000001101013543130175020755 0ustar andrewstaff00000000000000======================= Neo 0.8.0 release notes ======================= 27th September 2019 Neo 0.8 sees a major new feature, the ability to selectively load only parts of a data file (for supported file formats) into memory, for example only a subset of the signals in a segment, a subset of the channels in a signal, or even only a certain time slice of a given signal. 
This can lead to major savings in time and memory consumption, or can allow files that are too large to be loaded into memory in their entirety to be processed one section at a time. Here is an example, loading only certain sections of a signal:: lim0, lim1 = -500*pq.ms, +1500*pq.ms seg = reader.read_segment(lazy=True) # this loads only the segment structure and metadata # but all data objects are replaced by proxies triggers = seg.events[0].load() # this loads all triggers in memory sigproxy = seg.analogsignals[0] # this is a proxy object all_sig_chunks = [] for t in triggers.times: t0, t1 = (t + lim0), (t + lim1) sig_chunk = sigproxy.load(time_slice=(t0, t1)) # here the actual data are loaded all_sig_chunks.append(sig_chunk) Not all IO modules support lazy loading (but many do). To know whether a given IO class supports lazy mode, use ``SomeIO.support_lazy``. For more details, see :ref:`section-lazy`. Other new features ------------------ * new neo.utils module * Numpy 1.16+ compatibility * :meth:`time_shift()` method for :class:`Epoch`/:class:`Event`/:class:`AnalogSignal` * :meth:`time_slice()` method is now more robust See all `pull requests`_ included in this release and the `list of closed issues`_. Bug fixes and improvements in IO modules ---------------------------------------- * Blackrock * Neuroshare * NixIOFr * NixIO (array annotation + 1d coordinates) * AsciiSignal (fix + json metadata + IrregularlySampledSignals + write proxy) * Spike2 (group same sampling rate) * Brainvision * NeuralynxIO .. Warning:: Some IOs (based on rawio) when loading can choose to split each channel into its own 1-channel :class:`AnalogSignal` or to group them in a multi-channel :class:`AnalogSignal`. The default behavior (either ``signal_group_mode='split-all'`` or ``'group-same-units'``) is not the same for all IOs for backwards compatibility reasons. In the next release, all IOs will have the default ``signal_group_mode='group-same-units'``. Acknowledgements ---------------- Thanks to Achileas Koutsou, Chek Yin Choi, Richard C. Gerkin, Alexander Kleinjohann, Björn Müller, Jeffrey Gill, Christian Kothe, Mike Sintsov, @rishidhingra, Michael Denker, Julia Sprenger, Corentin Fragnaud, Andrew Davison and Samuel Garcia for their contributions to this release. .. Warning:: This may be the last release supporting Python 2.7. In any case, Python 2.7 support will be dropped before Neo 1.0. .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.8.0+is%3Aclosed .. _`pull requests`: https://github.com/NeuralEnsemble/python-neo/pulls?q=is%3Apr+is%3Aclosed+merged%3A%3E2018-11-27+milestone%3A0.8.0 neo-0.10.0/doc/source/releases/0.9.0.rst0000644000076700000240000001037314077743075020046 0ustar andrewstaff00000000000000======================= Neo 0.9.0 release notes ======================= 10th November 2020 Group and ChannelView replace Unit and ChannelIndex --------------------------------------------------- Experience with :class:`ChannelIndex` and :class:`Unit` has shown that these classes are often confusing and difficult to understand. In particular, :class:`ChannelIndex` was trying to provide three different functionalities in a single object: - providing information about individual traces within :class:`AnalogSignals` like the channel id and the channel name (labelling) - grouping a subset of traces within an :class:`AnalogSignal` via the ``index`` attribute (masking) - linking between / grouping :class:`AnalogSignals` (grouping) while grouping :class:`SpikeTrains` required a different class, :class:`Unit`. For more pointers to the difficulties this created, and some of the limitations of this approach, see `this Github issue`_. With the aim of making the three functionalities of labelling, masking and grouping both easier to use and more flexible, we have replaced :class:`ChannelIndex` and :class:`Unit` with: - array annotations (*labelling*) - already available since Neo 0.8 - :class:`~neo.core.ChannelView` (*masking*) - defines subsets of channels within an :class:`AnalogSignal` using a mask - :class:`~neo.core.Group` (*grouping*) - allows any Neo object except :class:`Segment` and :class:`Block` to be grouped For some guidance on migrating from :class:`ChannelIndex`/:class:`Unit` to :class:`Group` and :class:`ChannelView` see :doc:`../grouping`. Python 3 only ------------- We have now dropped support for Python 2.7 and Python 3.5, and for versions of NumPy older than 1.13.
In future, we plan to follow NEP29_ + one year, i.e. we will support Python and NumPy versions for one year longer than recommended in NEP29. This was `discussed here`_. Change in default behaviour for grouping channels in IO modules --------------------------------------------------------------- Previously, when reading multiple related signals (same length, same units) from a file, some IO classes would by default create a separate, single-channel :class:`AnalogSignal` per signal, while others would combine all related signals into one multi-channel :class:`AnalogSignal`. From Neo 0.9.0, the default for all IO classes is to create one multi-channel :class:`AnalogSignal`. To get the "multiple single-channel signals" behaviour, use:: io.read(signal_group_mode="split-all") Other new or modified features ------------------------------ * added methods :func:`rectify()`, :func:`downsample()` and :func:`resample()` to :class:`AnalogSignal` * :func:`SpikeTrain.merge()` can now merge multiple spiketrains * the utility function :func:`cut_block_by_epochs()` now returns a new :class:`Block` rather than modifying the block in place * some missing properties such as ``t_start`` were added to :class:`ImageSequence`, and ``sampling_period`` was renamed to ``frame_duration`` * :func:`AnalogSignal.time_index()` now accepts arrays of times, not just a scalar. See all `pull requests`_ included in this release and the `list of closed issues`_.
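As an illustration of the vectorised :func:`time_index` behaviour mentioned above, the arithmetic such a method performs maps each requested time to the nearest sample index. The following is a NumPy-only sketch under assumed names, not Neo's actual implementation::

```python
import numpy as np

def time_index(times, t_start, sampling_rate):
    """Map time points (in seconds) to the nearest sample indices.

    Accepts either a scalar or an array of times, mirroring the 0.9.0 change.
    """
    times = np.asarray(times, dtype=float)
    # offset from the signal start, converted to samples, rounded to nearest
    return np.rint((times - t_start) * sampling_rate).astype(int)

# a signal starting at t = 0.5 s, sampled at 1 kHz
print(time_index(0.5, t_start=0.5, sampling_rate=1000.0))               # 0
print(time_index([0.5, 0.75, 1.0], t_start=0.5, sampling_rate=1000.0))  # [  0 250 500]
```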
Bug fixes and improvements in IO modules ---------------------------------------- * NeoMatlabIO (support for signal annotations) * NeuralynxIO (fix handling of empty .nev files) * AxonIO (support EDR3 header, fix channel events bug) * Spike2IO (fix rounding problem, fix for v9 SON files) * MicromedIO (fix label encoding) Acknowledgements ---------------- Thanks to Julia Sprenger, Samuel Garcia, Andrew Davison, Alexander Kleinjohann, Hugo van Kemenade, Achilleas Koutsou, Jeffrey Gill, Corentin Fragnaud, Aitor Morales-Gregorio, Rémi Proville, Robin Gutzen, Marin Manuel, Simon Danner, Michael Denker, Peter N. Steinmetz, Diziet Asahi and Lucien Krapp for their contributions to this release. .. _`list of closed issues`: https://github.com/NeuralEnsemble/python-neo/issues?q=is%3Aissue+milestone%3A0.9.0+is%3Aclosed .. _`pull requests`: https://github.com/NeuralEnsemble/python-neo/pulls?q=is%3Apr+is%3Aclosed+merged%3A%3E2019-09-30+milestone%3A0.9.0 .. _NEP29: https://numpy.org/neps/nep-0029-deprecation_policy.html .. _`discussed here`: https://github.com/NeuralEnsemble/python-neo/issues/788 .. 
_`this Github issue`: https://github.com/NeuralEnsemble/python-neo/issues/456 neo-0.10.0/doc/source/scripts/0000755000076700000240000000000014077757142016530 5ustar andrewstaff00000000000000neo-0.10.0/doc/source/scripts/multi_tetrode_example.py0000644000076700000240000000706114060430470023461 0ustar andrewstaff00000000000000""" Example for usecases.rst """ from itertools import cycle import numpy as np from quantities import ms, mV, kHz import matplotlib.pyplot as plt from neo import Block, Segment, ChannelView, Group, SpikeTrain, AnalogSignal store_signals = False block = Block(name="probe data", tetrode_ids=["Tetrode #1", "Tetrode #2"]) block.segments = [Segment(name="trial #1", index=0), Segment(name="trial #2", index=1), Segment(name="trial #3", index=2)] n_units = { "Tetrode #1": 2, "Tetrode #2": 5 } # Create a group for each neuron, annotate each group with the tetrode from which it was recorded groups = [] counter = 0 for tetrode_id, n in n_units.items(): groups.extend( [Group(name=f"neuron #{counter + i + 1}", tetrode_id=tetrode_id) for i in range(n)] ) counter += n block.groups.extend(groups) iter_group = cycle(groups) # Create dummy data, one segment at a time for segment in block.segments: segment.block = block # create two 4-channel AnalogSignals with dummy data signals = { "Tetrode #1": AnalogSignal(np.random.rand(1000, 4) * mV, sampling_rate=10 * kHz, tetrode_id="Tetrode #1"), "Tetrode #2": AnalogSignal(np.random.rand(1000, 4) * mV, sampling_rate=10 * kHz, tetrode_id="Tetrode #2") } if store_signals: segment.analogsignals.extend(signals.values()) for signal in signals.values(): signal.segment = segment # create spike trains with dummy data # we will pretend the spikes have been extracted from the dummy signal for tetrode_id in ("Tetrode #1", "Tetrode #2"): for i in range(n_units[tetrode_id]): spiketrain = SpikeTrain(np.random.uniform(0, 100, size=30) * ms, t_stop=100 * ms) # assign each spiketrain to the appropriate segment
segment.spiketrains.append(spiketrain) spiketrain.segment = segment # assign each spiketrain to a given neuron current_group = next(iter_group) current_group.add(spiketrain) if store_signals: # add to the group a reference to the signal from which the spikes were obtained # this does not give a 1:1 correspondence between spike trains and signals, # for that we could use additional groups (and have groups of groups) current_group.add(signals[tetrode_id]) # Now plot the data # .. by trial plt.figure() for seg in block.segments: print(f"Analyzing segment {seg.index}") stlist = [st - st.t_start for st in seg.spiketrains] plt.subplot(len(block.segments), 1, seg.index + 1) count, bins = np.histogram(stlist) plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title(f"PSTH in segment {seg.index}") plt.show() # ..by neuron plt.figure() for i, group in enumerate(block.groups): stlist = [st - st.t_start for st in group.spiketrains] plt.subplot(len(block.groups), 1, i + 1) count, bins = np.histogram(stlist) plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title(f"PSTH of unit {group.name}") plt.show() # ..by tetrode plt.figure() for i, tetrode_id in enumerate(block.annotations["tetrode_ids"]): stlist = [] for unit in block.filter(objects=Group, tetrode_id=tetrode_id): stlist.extend([st - st.t_start for st in unit.spiketrains]) plt.subplot(2, 1, i + 1) count, bins = np.histogram(stlist) plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title(f"PSTH blend of tetrode {tetrode_id}") plt.show() neo-0.10.0/doc/source/scripts/spike_sorting_example.py0000644000076700000240000000236514060430470023463 0ustar andrewstaff00000000000000""" Example for usecases.rst """ import numpy as np from neo import Segment, AnalogSignal, SpikeTrain, Group, ChannelView from quantities import Hz # generate some fake data seg = Segment() seg.analogsignals.append( AnalogSignal([[0.1, 0.1, 0.1, 0.1], [-2.0, -2.0, -2.0, -2.0], [0.1, 0.1, 0.1, 0.1], [-0.1, -0.1, -0.1, -0.1], [-0.1, -0.1, -0.1,
-0.1], [-3.0, -3.0, -3.0, -3.0], [0.1, 0.1, 0.1, 0.1], [0.1, 0.1, 0.1, 0.1]], sampling_rate=1000 * Hz, units='V')) # extract spike trains from all channels st_list = [] for signal in seg.analogsignals: # use a simple threshold detector spike_mask = np.where(np.min(signal.magnitude, axis=1) < -1.0)[0] # create a spike train spike_times = signal.times[spike_mask] st = SpikeTrain(spike_times, t_start=signal.t_start, t_stop=signal.t_stop) # remember the spike waveforms; spike_mask already holds the sample indices wf_list = [] for spike_idx in spike_mask: wf_list.append(signal[spike_idx - 1:spike_idx + 2, :]) st.waveforms = np.array(wf_list) st_list.append(st) unit = Group() unit.spiketrains = st_list unit.analogsignals.extend(seg.analogsignals) neo-0.10.0/doc/source/usecases.rst0000644000076700000240000001723514060430470017376 0ustar andrewstaff00000000000000***************** Typical use cases ***************** Recording multiple trials from multiple channels ================================================ In this example we suppose that we have recorded from an 8-channel probe, and that we have recorded three trials/episodes. We therefore have a total of 8 x 3 = 24 signals, grouped into three :class:`AnalogSignal` objects, one per trial. Our entire dataset is contained in a :class:`Block`, which in turn contains: * 3 :class:`Segment` objects, each representing data from a single trial, * 1 :class:`Group`. .. image:: images/multi_segment_diagram.png :width: 75% :align: center :class:`Segment` and :class:`Group` objects provide two different ways to access the data, corresponding respectively, in this scenario, to access by **time** and by **space**. .. note:: Segments do not always represent trials, they can be used for many purposes: segments could represent parallel recordings for different subjects, or different steps in a current clamp protocol.
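Before introducing the Neo objects, the two access patterns can be previewed with plain NumPy stand-ins for the three 8-channel signals. The random data and variable names here are illustrative assumptions, chosen only to match the 3-trial, 8-channel scenario described above::

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_samples, n_channels = 3, 1000, 8

# stand-ins for three AnalogSignals of shape (n_samples, n_channels),
# one per Segment
trials = [rng.standard_normal((n_samples, n_channels)) for _ in range(n_trials)]

# access by time (one Segment per trial): average over channels in each trial
per_trial = [trial.mean(axis=1) for trial in trials]   # three traces of length 1000

# access by space (all trials together): average over trials for each channel
per_channel = np.mean(trials, axis=0)                  # shape (1000, 8)

print(per_trial[0].shape, per_channel.shape)
```

The Neo examples below follow exactly these two traversals, but with units, sampling metadata and annotations attached to the arrays.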
**Temporal (by segment)** In this case you want to go through your data in order, perhaps because you want to correlate the neural response with the stimulus that was delivered in each segment. In this example, we're averaging over the channels. .. doctest:: import numpy as np from matplotlib import pyplot as plt for seg in block.segments: print("Analyzing segment %d" % seg.index) avg = np.mean(seg.analogsignals[0], axis=1) plt.figure() plt.plot(avg) plt.title("Peak response in segment %d: %f" % (seg.index, avg.max())) **Spatial (by channel)** In this case you want to go through your data by channel location and average over time. Perhaps you want to see which physical location produces the strongest response, and every stimulus was the same: .. doctest:: # We assume that our block has only 1 Group group = block.groups[0] avg = np.mean(group.analogsignals, axis=0) plt.figure() for index, name in enumerate(group.annotations["channel_names"]): plt.plot(avg[:, index]) plt.title("Average response on channel %s: %s" % (index, name)) **Mixed example** Combining the two approaches of descending the hierarchy temporally and spatially can be tricky. Here's an example. Let's say you saw something interesting on the 6th channel (index 5) on even-numbered trials during the experiment and you want to follow up. What was the average response? .. doctest:: index = 5 avg = np.mean([seg.analogsignals[0][:, index] for seg in block.segments[::2]], axis=0) plt.plot(avg) Recording spikes from multiple tetrodes ======================================= Here is a similar example in which we have recorded with two tetrodes and extracted spikes from the extra-cellular signals. The spike times are contained in :class:`SpikeTrain` objects. * 3 :class:`Segments` (one per trial).
* 7 :class:`Groups` (one per neuron), which each contain: * 3 :class:`SpikeTrain` objects * an annotation showing which tetrode the spiketrains were recorded from In total we have 3 x 7 = 21 :class:`SpikeTrains` in this :class:`Block`. .. image:: images/multi_segment_diagram_spiketrain.png :width: 75% :align: center .. note:: In this scenario we have discarded the original signals, perhaps to save space, therefore we use annotations to link the spiketrains to the tetrode they were recorded from. If we wished to include the original extracellular signals, we would add a reference to the three :class:`AnalogSignal` objects for the appropriate tetrode to the :class:`Group` for each neuron. There are three ways to access the :class:`SpikeTrain` data: * by trial (:class:`Segment`) * by neuron (:class:`Group`) * by tetrode **By trial** In this example, each :class:`Segment` represents data from one trial, and we want a PSTH for each trial from all units combined: .. doctest:: plt.figure() for seg in block.segments: print(f"Analyzing segment {seg.index}") stlist = [st - st.t_start for st in seg.spiketrains] plt.subplot(len(block.segments), 1, seg.index + 1) count, bins = np.histogram(stlist) plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title(f"PSTH in segment {seg.index}") plt.show() **By neuron** Now we can calculate the PSTH averaged over trials for each unit, using the :attr:`block.groups` property: .. doctest:: plt.figure() for i, group in enumerate(block.groups): stlist = [st - st.t_start for st in group.spiketrains] plt.subplot(len(block.groups), 1, i + 1) count, bins = np.histogram(stlist) plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title(f"PSTH of unit {group.name}") plt.show() **By tetrode** Here we calculate a PSTH averaged over trials by channel location, blending all units: .. 
doctest:: plt.figure() for i, tetrode_id in enumerate(block.annotations["tetrode_ids"]): stlist = [] for unit in block.filter(objects=Group, tetrode_id=tetrode_id): stlist.extend([st - st.t_start for st in unit.spiketrains]) plt.subplot(2, 1, i + 1) count, bins = np.histogram(stlist) plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title(f"PSTH blend of tetrode {tetrode_id}") plt.show() Spike sorting ============= Spike sorting is the process of detecting and classifying high-frequency deflections ("spikes") on a group of physically nearby recording channels. For example, let's say you have recordings from a tetrode containing 4 separate channels. Here is an example showing (with fake data) how you could iterate over the contained signals and extract spike times. (Of course in reality you would use a more sophisticated algorithm.) .. doctest:: # generate some fake data seg = Segment() seg.analogsignals.append( AnalogSignal([[0.1, 0.1, 0.1, 0.1], [-2.0, -2.0, -2.0, -2.0], [0.1, 0.1, 0.1, 0.1], [-0.1, -0.1, -0.1, -0.1], [-0.1, -0.1, -0.1, -0.1], [-3.0, -3.0, -3.0, -3.0], [0.1, 0.1, 0.1, 0.1], [0.1, 0.1, 0.1, 0.1]], sampling_rate=1000*Hz, units='V')) # extract spike trains from all channels st_list = [] for signal in seg.analogsignals: # use a simple threshold detector spike_mask = np.where(np.min(signal.magnitude, axis=1) < -1.0)[0] # create a spike train spike_times = signal.times[spike_mask] st = SpikeTrain(spike_times, t_start=signal.t_start, t_stop=signal.t_stop) # remember the spike waveforms; spike_mask already holds the sample indices wf_list = [] for spike_idx in spike_mask: wf_list.append(signal[spike_idx-1:spike_idx+2, :]) st.waveforms = np.array(wf_list) st_list.append(st) At this point, we have a list of spiketrain objects. We could simply create a single :class:`Group` object, assign all spiketrains to it, and then also assign the :class:`AnalogSignal` on which we detected them. ..
doctest:: unit = Group() unit.spiketrains = st_list unit.analogsignals.extend(seg.analogsignals) Further processing could assign each of the detected spikes to an independent source, a putative single neuron. (This processing is outside the scope of Neo. There are many open-source toolboxes to do it, for instance our sister project OpenElectrophy.) In that case we would create a separate :class:`Group` for each cluster, assign its spiketrains to it, and still store in each group a reference to the original recording. .. EEG .. Network simulations neo-0.10.0/doc/source/whatisnew.rst0000644000076700000240000000506214077743075017607 0ustar andrewstaff00000000000000============= Release notes ============= .. toctree:: :maxdepth: 1 releases/0.10.0.rst releases/0.9.0.rst releases/0.8.0.rst releases/0.7.2.rst releases/0.7.1.rst releases/0.7.0.rst releases/0.6.0.rst releases/0.5.2.rst releases/0.5.1.rst releases/0.5.0.rst .. releases/0.2.0.rst .. releases/0.2.1.rst .. releases/0.3.0.rst .. releases/0.3.1.rst .. releases/0.3.2.rst .. releases/0.3.3.rst Version 0.4.0 ------------- * added StimfitIO * added KwikIO * significant improvements to AxonIO, BlackrockIO, BrainwareSrcIO, NeuroshareIO, PlexonIO, Spike2IO, TdtIO, * many test suite improvements * Container base class Version 0.3.3 ------------- * fix a bug in PlexonIO where some EventArrays only load 1 element. * fix a bug in BrainwareSrcIo for segments with no spikes. 
Version 0.3.2 ------------- * cleanup of io test code, with additional helper functions and methods * added BrainwareDamIo * added BrainwareF32Io * added BrainwareSrcIo Version 0.3.1 ------------- * lazy/cascading improvement * load_lazy_object() in neo.io added * added NeuroscopeIO Version 0.3.0 ------------- * various bug fixes in neo.io * added ElphyIO * SpikeTrain performance improved * An IO class can now return a list of Block (see read_all_blocks in IOs) * python3 compatibility improved Version 0.2.1 ------------- * assorted bug fixes * added :func:`time_slice()` method to the :class:`SpikeTrain` and :class:`AnalogSignalArray` classes. * improvements to annotation data type handling * added PickleIO, allowing saving Neo objects in the Python pickle format. * added ElphyIO (see http://neuro-psi.cnrs.fr/spip.php?article943) * added BrainVisionIO (see https://brainvision.com/) * improvements to PlexonIO * added :func:`merge()` method to the :class:`Block` and :class:`Segment` classes * development was mostly moved to GitHub, although the issue tracker is still at neuralensemble.org/neo Version 0.2.0 ------------- New features compared to neo 0.1: * new schema, more consistent. * new objects: RecordingChannelGroup, EventArray, AnalogSignalArray, EpochArray * Neuron is now Unit * use the quantities_ module for everything that can have units. * Some objects directly inherit from Quantity: SpikeTrain, AnalogSignal, AnalogSignalArray, instead of having an attribute for data. * Attributes are classified in 3 categories: necessary, recommended, free. * lazy and cascade keywords are added to all IOs * Python 3 support * better tests ..
_quantities: https://pypi.org/project/quantities/ neo-0.10.0/examples/0000755000076700000240000000000014077757142014612 5ustar andrewstaff00000000000000neo-0.10.0/examples/generated_data.py0000644000076700000240000001131314060430470020072 0ustar andrewstaff00000000000000""" This is an example for creating simple plots from various Neo structures. It includes a function that generates toy data. """ import numpy as np import quantities as pq from matplotlib import pyplot as plt import neo def generate_block(n_segments=3, n_channels=4, n_units=3, data_samples=1000, feature_samples=100): """ Generate a block with a single recording channel group and a number of segments, recording channels and units with associated analog signals and spike trains. """ feature_len = feature_samples / data_samples # Create Block to contain all generated data block = neo.Block() # Create multiple Segments block.segments = [neo.Segment(index=i) for i in range(n_segments)] # Create multiple ChannelIndexes block.channel_indexes = [neo.ChannelIndex(name='C%d' % i, index=i) for i in range(n_channels)] # Attach multiple Units to each ChannelIndex for channel_idx in block.channel_indexes: channel_idx.units = [neo.Unit('U%d' % i) for i in range(n_units)] # Create synthetic data for seg in block.segments: feature_pos = np.random.randint(0, data_samples - feature_samples) # Analog signals: Noise with a single sinewave feature wave = 3 * np.sin(np.linspace(0, 2 * np.pi, feature_samples)) for channel_idx in block.channel_indexes: sig = np.random.randn(data_samples) sig[feature_pos:feature_pos + feature_samples] += wave signal = neo.AnalogSignal(sig * pq.mV, sampling_rate=1 * pq.kHz) seg.analogsignals.append(signal) channel_idx.analogsignals.append(signal) # Spike trains: Random spike times with elevated rate in short period feature_time = feature_pos / data_samples for u in channel_idx.units: random_spikes = np.random.rand(20) feature_spikes = np.random.rand(5) * feature_len + feature_time spikes = 
np.hstack([random_spikes, feature_spikes]) train = neo.SpikeTrain(spikes * pq.s, 1 * pq.s) seg.spiketrains.append(train) u.spiketrains.append(train) block.create_many_to_one_relationship() return block block = generate_block() # In this example, we treat each segment in turn, averaging over the channels # in each: for seg in block.segments: print("Analysing segment %d" % seg.index) siglist = seg.analogsignals time_points = siglist[0].times avg = np.mean(siglist, axis=0) # Average over signals of Segment plt.figure() plt.plot(time_points, avg) plt.title("Peak response in segment %d: %f" % (seg.index, avg.max())) # The second alternative is spatial traversal of the data (by channel), with # averaging over trials. For example, perhaps you wish to see which physical # location produces the strongest response, and each stimulus was the same: # There are multiple ChannelIndex objects connected to the block, each # corresponding to a physical electrode for channel_idx in block.channel_indexes: print("Analysing channel %d: %s" % (channel_idx.index, channel_idx.name)) siglist = channel_idx.analogsignals time_points = siglist[0].times avg = np.mean(siglist, axis=0) # Average over signals of RecordingChannel plt.figure() plt.plot(time_points, avg) plt.title("Average response on channel %d" % channel_idx.index) # There are three ways to access the spike train data: by Segment, # by ChannelIndex or by Unit. # By Segment. In this example, each Segment represents data from one trial, # and we want a peristimulus time histogram (PSTH) for each trial from all # Units combined: for seg in block.segments: print("Analysing segment %d" % seg.index) stlist = [st - st.t_start for st in seg.spiketrains] count, bins = np.histogram(np.hstack(stlist)) plt.figure() plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title("PSTH in segment %d" % seg.index) # By Unit.
Now we can calculate the PSTH averaged over trials for each Unit: for unit in block.list_units: stlist = [st - st.t_start for st in unit.spiketrains] count, bins = np.histogram(np.hstack(stlist)) plt.figure() plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title("PSTH of unit %s" % unit.name) # By ChannelIndex. Here we calculate a PSTH averaged over trials by # channel location, blending all Units: for chx in block.channel_indexes: stlist = [] for unit in chx.units: stlist.extend([st - st.t_start for st in unit.spiketrains]) count, bins = np.histogram(np.hstack(stlist)) plt.figure() plt.bar(bins[:-1], count, width=bins[1] - bins[0]) plt.title("PSTH blend of recording channel group %s" % chx.name) plt.show() neo-0.10.0/examples/hbp_d571_example.py0000644000076700000240000000300313763631416020200 0ustar andrewstaff00000000000000import os import matplotlib.pyplot as plt import numpy as np from quantities import Hz, mm, dimensionless from neo.core import CircularRegionOfInterest from neo.io import TiffIO import elephant as el data_path = os.path.expanduser("~/Data/WaveScalES/LENS/170110_mouse2_deep/t1") # loading data data = TiffIO( data_path, units=dimensionless, sampling_rate=25 * Hz, spatial_scale=0.05 * mm ).read() images = data[0].segments[0].imagesequences[0] images /= images.max() plt.subplot(2, 2, 1) plt.imshow(images[100], cmap="gray") plt.title("Original image (frame 100)") # preprocessing background = np.mean(images, axis=0) preprocessed_images = images - background plt.subplot(2, 2, 2) plt.imshow(preprocessed_images[100], cmap="gray") plt.title("Subtracted background (frame 100)") # defining ROI and extracting signal roi = CircularRegionOfInterest(x=50, y=50, radius=10) circle = plt.Circle(roi.centre, roi.radius, color="b", fill=False) ax = plt.gca() ax.add_artist(circle) central_signal = preprocessed_images.signal_from_region(roi)[0] plt.subplot(2, 2, 3) plt.plot(central_signal.times, central_signal, lw=0.8) plt.title("Mean signal from ROI") 
plt.xlabel("Time [s]") # calculating power spectrum freqs, psd = el.spectral.welch_psd( central_signal, fs=central_signal.sampling_rate, freq_res=0.1 * Hz, overlap=0.8 ) plt.subplot(2, 2, 4) plt.plot(freqs, np.mean(psd, axis=0), lw=0.8) plt.title("Average power spectrum") plt.xlabel("frequency [Hz]") plt.ylabel("Fourier signal") plt.tight_layout() plt.show() # see Figure 2 neo-0.10.0/examples/hbp_d571_example2.py0000644000076700000240000000370613763631416020274 0ustar andrewstaff00000000000000import os import matplotlib.pyplot as plt import numpy as np from quantities import Hz, mm, dimensionless from neo.core import CircularRegionOfInterest from neo.io import TiffIO import elephant as el data_path = os.path.expanduser("~/Data/WaveScalES/LENS/170110_mouse2_deep/t1") # loading data data = TiffIO( data_path, units=dimensionless, sampling_rate=25 * Hz, spatial_scale=0.05 * mm ).read() # data = TiffIO(data_path).read(units=dimensionless, sampling_rate=25 * Hz, spatial_scale=0.05 * mm) images = data[0].segments[0].imagesequences[0] images /= images.max() plt.subplot(2, 2, 1) plt.imshow(images[100], cmap="gray") plt.title("Original image (frame 100)") print(images.min(), images.max(), images.mean()) ### # preprocessing background = np.mean(images, axis=0) print(background.min(), background.max(), background.mean()) preprocessed_images = images - background print( preprocessed_images.min(), preprocessed_images.max(), preprocessed_images.mean(), preprocessed_images.shape, preprocessed_images.dtype, ) np.save("preprocessed_images_orig.npy", preprocessed_images.magnitude) plt.subplot(2, 2, 2) plt.imshow(preprocessed_images[100], cmap="gray") plt.title("Subtracted background (frame 100)") # defining ROI and extracting signal roi = CircularRegionOfInterest(x=50, y=50, radius=10) circle = plt.Circle(roi.centre, roi.radius, color="b", fill=False) ax = plt.gca() ax.add_artist(circle) central_signal = preprocessed_images.signal_from_region(roi)[0] plt.subplot(2, 2, 3) 
plt.plot(central_signal.times, central_signal, lw=0.8)
plt.title("Mean signal from ROI")
plt.xlabel("Time [s]")

# calculating power spectrum
freqs, psd = el.spectral.welch_psd(
    central_signal, fs=central_signal.sampling_rate, freq_res=0.1 * Hz, overlap=0.8
)
plt.subplot(2, 2, 4)
plt.plot(freqs, np.mean(psd, axis=0), lw=0.8)
plt.title("Average power spectrum")
plt.xlabel("frequency [Hz]")
plt.ylabel("Fourier signal")

plt.tight_layout()
plt.show()  # see Figure 2

neo-0.10.0/examples/hbp_d571_example_orig.py

import os

import matplotlib.pyplot as plt
import numpy as np
from quantities import Hz, mm, dimensionless

from neo.core import CircularRegionOfInterest
from neo.io import TiffIO
import elephant as el

data_path = os.path.expanduser("~/Data/WaveScalES/LENS/170110_mouse2_deep/t1")

# loading data
data = TiffIO(data_path).read(units=dimensionless, sampling_rate=25 * Hz, spatial_scale=0.05 * mm)
images = data[0].segments[0].imagesequences[0]
images /= images.max()

plt.subplot(2, 2, 1)
plt.imshow(images[100], cmap="gray")
plt.title("Original image (frame 100)")

# preprocessing
background = np.mean(images, axis=0)
preprocessed_images = images - background

plt.subplot(2, 2, 2)
plt.imshow(preprocessed_images[100], cmap="gray")
plt.title("Subtracted background (frame 100)")

# defining ROI and extracting signal
roi = CircularRegionOfInterest(x=50, y=50, radius=10)
circle = plt.Circle(roi.centre, roi.radius, color="b", fill=False)
ax = plt.gca()
ax.add_artist(circle)
central_signal = preprocessed_images.signal_from_region(roi)[0]

plt.subplot(2, 2, 3)
plt.plot(central_signal.times, central_signal, lw=0.8)
plt.title("Mean signal from ROI")
plt.xlabel("Time [s]")

# calculating power spectrum
freqs, psd = el.spectral.welch_psd(
    central_signal, fs=central_signal.sampling_rate, freq_res=0.1 * Hz, overlap=0.8
)
plt.subplot(2, 2, 4)
plt.plot(freqs, np.mean(psd, axis=0), lw=0.8)
plt.title("Average power spectrum")
plt.xlabel("frequency [Hz]")
plt.ylabel("Fourier signal")

plt.tight_layout()
plt.show()  # see Figure 2

neo-0.10.0/examples/imageseq.py

from neo.core import ImageSequence
from neo.core import RectangularRegionOfInterest, CircularRegionOfInterest, PolygonRegionOfInterest
import matplotlib.pyplot as plt
import quantities as pq
import random

# generate data
l = []
for frame in range(50):
    l.append([])
    for y in range(100):
        l[frame].append([])
        for x in range(100):
            l[frame][y].append(random.randint(0, 50))

image_seq = ImageSequence(l, sampling_rate=500 * pq.Hz, spatial_scale='m', units='V')

result = image_seq.signal_from_region(CircularRegionOfInterest(50, 50, 25),
                                      CircularRegionOfInterest(10, 10, 5),
                                      PolygonRegionOfInterest((50, 25), (50, 45),
                                                              (14, 65), (90, 80)))

for i in range(len(result)):
    plt.figure()
    plt.plot(result[i].times, result[i])
    plt.xlabel("seconds")
    plt.ylabel("value")
plt.show()

neo-0.10.0/examples/read_files_neo_io.py

"""
This is an example for reading files with neo.io
"""

import urllib.request

import neo

url_repo = 'https://web.gin.g-node.org/NeuralEnsemble/ephy_testing_data/raw/master/'

# Plexon files
distantfile = url_repo + 'plexon/File_plexon_3.plx'
localfile = './File_plexon_3.plx'
urllib.request.urlretrieve(distantfile, localfile)

# create a reader
reader = neo.io.PlexonIO(filename='File_plexon_3.plx')
# read the blocks
blks = reader.read(lazy=False)
print(blks)
# access to segments
for blk in blks:
    for seg in blk.segments:
        print(seg)
        for asig in seg.analogsignals:
            print(asig)
        for st in seg.spiketrains:
            print(st)

# CED Spike2 files
distantfile = url_repo + 'spike2/File_spike2_1.smr'
localfile = './File_spike2_1.smr'
urllib.request.urlretrieve(distantfile, localfile)

# create a reader
reader = neo.io.Spike2IO(filename='File_spike2_1.smr')
# read the block
bl = reader.read(lazy=False)[0]
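The reader scripts above fetch their test files with `urllib.request.urlretrieve` on every run, even when the file is already on disk. As a sketch (this `fetch` helper is hypothetical, not part of neo), a stdlib-only wrapper can skip the download when a cached copy exists:

```python
import os
import urllib.request


def fetch(url, local_path):
    """Download url to local_path, skipping the download if a copy already exists."""
    if not os.path.exists(local_path):
        urllib.request.urlretrieve(url, local_path)
    return local_path
```

With this, `localfile = fetch(distantfile, './File_plexon_3.plx')` reuses the cached copy on subsequent runs.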
print(bl) # access to segments for seg in bl.segments: print(seg) for asig in seg.analogsignals: print(asig) for st in seg.spiketrains: print(st) neo-0.10.0/examples/read_files_neo_rawio.py0000644000076700000240000000624014066374330021315 0ustar andrewstaff00000000000000""" This is an example for reading files with neo.rawio compare with read_files_neo_io.py """ import urllib from neo.rawio import PlexonRawIO url_repo = 'https://web.gin.g-node.org/NeuralEnsemble/ephy_testing_data/raw/master/' # Get Plexon files distantfile = url_repo + 'plexon/File_plexon_3.plx' localfile = './File_plexon_3.plx' urllib.request.urlretrieve(distantfile, localfile) # create a reader reader = PlexonRawIO(filename='File_plexon_3.plx') reader.parse_header() print(reader) print(reader.header) # Read signal chunks channel_indexes = None # could be channel_indexes = [0] raw_sigs = reader.get_analogsignal_chunk(block_index=0, seg_index=0, i_start=1024, i_stop=2048, channel_indexes=channel_indexes) float_sigs = reader.rescale_signal_raw_to_float(raw_sigs, dtype='float64') sampling_rate = reader.get_signal_sampling_rate() t_start = reader.get_signal_t_start(block_index=0, seg_index=0) units = reader.header['signal_channels'][0]['units'] print(raw_sigs.shape, raw_sigs.dtype) print(float_sigs.shape, float_sigs.dtype) print(sampling_rate, t_start, units) # Count units and spikes per unit nb_unit = reader.spike_channels_count() print('nb_unit', nb_unit) for unit_index in range(nb_unit): nb_spike = reader.spike_count(block_index=0, seg_index=0, unit_index=unit_index) print('unit_index', unit_index, 'nb_spike', nb_spike) # Read spike times spike_timestamps = reader.get_spike_timestamps(block_index=0, seg_index=0, unit_index=0, t_start=0., t_stop=10.) 
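The `rescale_*` helpers used throughout this example share one idea: raw integer values read from the file are converted to physical units with per-channel calibration taken from the parsed header. A rough numpy-only sketch (the gain and offset values below are made up for illustration; real ones live in `reader.header['signal_channels']`):

```python
import numpy as np

# hypothetical per-channel calibration, standing in for header values
gain = np.array([0.001, 0.002])   # physical units per ADC count, one per channel
offset = np.array([0.0, -0.5])    # additive offset per channel

demo_raw = np.array([[100, 200],
                     [300, 400]], dtype=np.int16)  # shape (samples, channels)

# broadcast the per-channel calibration over the time axis,
# conceptually what rescale_signal_raw_to_float does
demo_scaled = demo_raw.astype('float64') * gain + offset
```

The same pattern applies to spike timestamps and waveforms, with timestamps scaled by the sampling period instead of a voltage gain.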
print(spike_timestamps.shape, spike_timestamps.dtype, spike_timestamps[:5])
spike_times = reader.rescale_spike_timestamp(spike_timestamps, dtype='float64')
print(spike_times.shape, spike_times.dtype, spike_times[:5])

# Read spike waveforms
raw_waveforms = reader.get_spike_raw_waveforms(block_index=0, seg_index=0, unit_index=0,
                                               t_start=0., t_stop=10.)
print(raw_waveforms.shape, raw_waveforms.dtype, raw_waveforms[0, 0, :4])
float_waveforms = reader.rescale_waveforms_to_float(raw_waveforms, dtype='float32', unit_index=0)
print(float_waveforms.shape, float_waveforms.dtype, float_waveforms[0, 0, :4])

# Read event timestamps and times (using another file)
distantfile = url_repo + 'plexon/File_plexon_2.plx'
localfile = './File_plexon_2.plx'
urllib.request.urlretrieve(distantfile, localfile)

# Count events per channel
reader = PlexonRawIO(filename='File_plexon_2.plx')
reader.parse_header()
nb_event_channel = reader.event_channels_count()
print('nb_event_channel', nb_event_channel)
for chan_index in range(nb_event_channel):
    nb_event = reader.event_count(block_index=0, seg_index=0, event_channel_index=chan_index)
    print('chan_index', chan_index, 'nb_event', nb_event)

ev_timestamps, ev_durations, ev_labels = reader.get_event_timestamps(block_index=0, seg_index=0,
                                                                     event_channel_index=0,
                                                                     t_start=None, t_stop=None)
print(ev_timestamps, ev_durations, ev_labels)
ev_times = reader.rescale_event_timestamp(ev_timestamps, dtype='float64')
print(ev_times)

neo-0.10.0/examples/read_proxy_with_lazy_load.py

"""
This example demonstrates lazy loading and proxy objects.
"""

import urllib.request

import neo
import quantities as pq
import numpy as np

url_repo = 'https://web.gin.g-node.org/NeuralEnsemble/ephy_testing_data/raw/master/'

# Get the Micromed test file
distantfile = url_repo + 'micromed/File_micromed_1.TRC'
localfile = './File_micromed_1.TRC'
urllib.request.urlretrieve(distantfile, localfile)

# create a reader
reader = neo.MicromedIO(filename='File_micromed_1.TRC')
reader.parse_header()

lim0, lim1 = -20 * pq.ms, +20 * pq.ms


def apply_my_fancy_average(sig_list):
    """basic average along triggers and then channels
    here we go back to numpy with magnitude
    to be able to use np.stack
    """
    sig_list = [s.magnitude for s in sig_list]
    sigs = np.stack(sig_list, axis=0)
    return np.mean(np.mean(sigs, axis=0), axis=1)


seg = reader.read_segment(lazy=False)
triggers = seg.events[0]
anasig = seg.analogsignals[0]  # here anasig contains the whole recording in memory
all_sig_chunks = []
for t in triggers.times:
    t0, t1 = (t + lim0), (t + lim1)
    anasig_chunk = anasig.time_slice(t0, t1)
    all_sig_chunks.append(anasig_chunk)
m1 = apply_my_fancy_average(all_sig_chunks)

seg = reader.read_segment(lazy=True)
triggers = seg.events[0].load(time_slice=None)  # this loads all triggers into memory
anasigproxy = seg.analogsignals[0]  # this is a proxy
all_sig_chunks = []
for t in triggers.times:
    t0, t1 = (t + lim0), (t + lim1)
    anasig_chunk = anasigproxy.load(time_slice=(t0, t1))  # here the real data are loaded
    all_sig_chunks.append(anasig_chunk)
m2 = apply_my_fancy_average(all_sig_chunks)

print(m1)
print(m2)

neo-0.10.0/examples/roi_demo.py

"""
Example of working with RegionOfInterest objects
"""

import matplotlib.pyplot as plt
import numpy as np
from neo.core import CircularRegionOfInterest, RectangularRegionOfInterest, PolygonRegionOfInterest
from numpy.random import rand


def plot_roi(roi, shape):
    img = rand(120, 100)
    pir = np.array(roi.pixels_in_region()).T
    img[pir[1], pir[0]] = 5
    plt.imshow(img,
cmap='gray_r') plt.clim(0, 5) ax = plt.gca() ax.add_artist(shape) roi = CircularRegionOfInterest(x=50.3, y=50.8, radius=30.2) shape = plt.Circle(roi.centre, roi.radius, color='r', fill=False) plt.subplot(1, 3, 1) plot_roi(roi, shape) roi = RectangularRegionOfInterest(x=50.3, y=40.2, width=40.1, height=50.3) shape = plt.Rectangle((roi.x - roi.width/2.0, roi.y - roi.height/2.0), roi.width, roi.height, color='r', fill=False) plt.subplot(1, 3, 2) plot_roi(roi, shape) roi = PolygonRegionOfInterest( (20.3, 30.2), (80.7, 30.1), (55.2, 59.4) ) shape = plt.Polygon(np.array(roi.vertices), closed=True, color='r', fill=False) plt.subplot(1, 3, 3) plot_roi(roi, shape) plt.show() neo-0.10.0/examples/simple_plot_with_matplotlib.py0000644000076700000240000000221114060430470022751 0ustar andrewstaff00000000000000""" This is an example for plotting a Neo object with matplotlib. """ import urllib import numpy as np import quantities as pq from matplotlib import pyplot import neo distantfile = 'https://web.gin.g-node.org/NeuralEnsemble/ephy_testing_data/raw/master/plexon/File_plexon_3.plx' localfile = './File_plexon_3.plx' urllib.request.urlretrieve(distantfile, localfile) # reader = neo.io.NeuroExplorerIO(filename='File_neuroexplorer_2.nex') reader = neo.io.PlexonIO(filename='File_plexon_3.plx') bl = reader.read(lazy=False)[0] for seg in bl.segments: print("SEG: " + str(seg.file_origin)) fig = pyplot.figure() ax1 = fig.add_subplot(2, 1, 1) ax2 = fig.add_subplot(2, 1, 2) ax1.set_title(seg.file_origin) ax1.set_ylabel('arbitrary units') mint = 0 * pq.s maxt = np.inf * pq.s for i, asig in enumerate(seg.analogsignals): times = asig.times.rescale('s').magnitude asig = asig.magnitude ax1.plot(times, asig) trains = [st.rescale('s').magnitude for st in seg.spiketrains] colors = pyplot.cm.jet(np.linspace(0, 1, len(seg.spiketrains))) ax2.eventplot(trains, colors=colors) pyplot.show() neo-0.10.0/neo/0000755000076700000240000000000014077757142013555 5ustar 
neo-0.10.0/neo/__init__.py

'''
Neo is a package for representing electrophysiology data in
Python, together with support for reading a wide range of
neurophysiology file formats
'''

import logging

logging_handler = logging.StreamHandler()

from neo.core import *
from neo.io import *
from neo.version import version as __version__

neo-0.10.0/neo/core/__init__.py

"""
:mod:`neo.core` provides classes for storing common electrophysiological data
types.  Some of these classes contain raw data, such as spike trains or
analog signals, while others are containers to organize other classes
(including both data classes and other container classes).

Classes from :mod:`neo.io` return nested data structures containing one
or more class from this module.

Classes:

.. autoclass:: Block
.. autoclass:: Segment
.. autoclass:: Group

.. autoclass:: AnalogSignal
.. autoclass:: IrregularlySampledSignal

.. autoclass:: ChannelView

.. autoclass:: Event
.. autoclass:: Epoch

.. autoclass:: SpikeTrain
.. autoclass:: ImageSequence

.. autoclass:: RectangularRegionOfInterest
.. autoclass:: CircularRegionOfInterest
.. autoclass:: PolygonRegionOfInterest
"""

from neo.core.block import Block
from neo.core.segment import Segment

from neo.core.analogsignal import AnalogSignal
from neo.core.irregularlysampledsignal import IrregularlySampledSignal

from neo.core.event import Event
from neo.core.epoch import Epoch
from neo.core.spiketrain import SpikeTrain

from neo.core.imagesequence import ImageSequence
from neo.core.regionofinterest import RectangularRegionOfInterest, CircularRegionOfInterest, PolygonRegionOfInterest

from neo.core.view import ChannelView
from neo.core.group import Group

# Block should always be first in this list
objectlist = [Block, Segment, AnalogSignal,
              IrregularlySampledSignal, Event, Epoch, SpikeTrain,
              ImageSequence, RectangularRegionOfInterest, CircularRegionOfInterest,
              PolygonRegionOfInterest, ChannelView, Group]

objectnames = [ob.__name__ for ob in objectlist]
class_by_name = dict(zip(objectnames, objectlist))

neo-0.10.0/neo/core/analogsignal.py

'''
This module implements :class:`AnalogSignal`, an array of analog signals.

:class:`AnalogSignal` inherits from :class:`basesignal.BaseSignal` which
derives from :class:`BaseNeo`, and from :class:`quantities.Quantity`, which
in turn inherits from :class:`numpy.array`.

Inheritance from :class:`numpy.array` is explained here:
http://docs.scipy.org/doc/numpy/user/basics.subclassing.html

In brief:
* Initialization of a new object from constructor happens in :meth:`__new__`.
  This is where user-specified attributes are set.

* :meth:`__array_finalize__` is called for all new objects, including those
  created by slicing. This is where attributes are copied over from
  the old object.
''' import logging try: import scipy.signal except ImportError as err: HAVE_SCIPY = False else: HAVE_SCIPY = True import numpy as np import quantities as pq from neo.core.baseneo import BaseNeo, MergeError, merge_annotations, intersect_annotations from neo.core.dataobject import DataObject from copy import copy, deepcopy from neo.core.basesignal import BaseSignal logger = logging.getLogger("Neo") def _get_sampling_rate(sampling_rate, sampling_period): ''' Gets the sampling_rate from either the sampling_period or the sampling_rate, or makes sure they match if both are specified ''' if sampling_period is None: if sampling_rate is None: raise ValueError("You must provide either the sampling rate or " + "sampling period") elif sampling_rate is None: sampling_rate = 1.0 / sampling_period elif sampling_period != 1.0 / sampling_rate: raise ValueError('The sampling_rate has to be 1/sampling_period') if not hasattr(sampling_rate, 'units'): raise TypeError("Sampling rate/sampling period must have units") return sampling_rate def _new_AnalogSignalArray(cls, signal, units=None, dtype=None, copy=True, t_start=0 * pq.s, sampling_rate=None, sampling_period=None, name=None, file_origin=None, description=None, array_annotations=None, annotations=None, segment=None): ''' A function to map AnalogSignal.__new__ to function that does not do the unit checking. This is needed for pickle to work. ''' obj = cls(signal=signal, units=units, dtype=dtype, copy=copy, t_start=t_start, sampling_rate=sampling_rate, sampling_period=sampling_period, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) obj.segment = segment return obj class AnalogSignal(BaseSignal): ''' Array of one or more continuous analog signals. A representation of several continuous, analog signals that have the same duration, sampling rate and start time. 
    Basically, it is a 2D array: dim 0 is time, dim 1 is channel index

    Inherits from :class:`quantities.Quantity`, which in turn inherits from
    :class:`numpy.ndarray`.

    *Usage*::

        >>> from neo.core import AnalogSignal
        >>> import quantities as pq
        >>>
        >>> sigarr = AnalogSignal([[1, 2, 3], [4, 5, 6]], units='V',
        ...                       sampling_rate=1 * pq.Hz)
        >>>
        >>> sigarr
        >>> sigarr[:, 1]
        >>> sigarr[1, 1]
        array(5) * V

    *Required attributes/properties*:
        :signal: (quantity array 2D, numpy array 2D, or list (data, channel))
            The data itself.
        :units: (quantity units) Required if the signal is a list or NumPy
            array, not if it is a :class:`Quantity`
        :t_start: (quantity scalar) Time when signal begins
        :sampling_rate: *or* **sampling_period** (quantity scalar) Number of
            samples per unit time or interval between two samples.
            If both are specified, they are checked for consistency.

    *Recommended attributes/properties*:
        :name: (str) A label for the dataset.
        :description: (str) Text description.
        :file_origin: (str) Filesystem path or URL of the original data file.

    *Optional attributes/properties*:
        :dtype: (numpy dtype or str) Override the dtype of the signal array.
        :copy: (bool) True by default.
        :array_annotations: (dict) Dict mapping strings to numpy arrays
            containing annotations for all data points

    Note: Any other additional arguments are assumed to be user-specific
    metadata and stored in :attr:`annotations`.

    *Properties available on this object*:
        :sampling_rate: (quantity scalar) Number of samples per unit time.
            (1/:attr:`sampling_period`)
        :sampling_period: (quantity scalar) Interval between two samples.
            (1/:attr:`sampling_rate`)
        :duration: (Quantity) Signal duration, read-only.
            (size * :attr:`sampling_period`)
        :t_stop: (quantity scalar) Time when signal ends, read-only.
            (:attr:`t_start` + :attr:`duration`)
        :times: (quantity 1D) The time points of each sample of the signal,
            read-only.
(:attr:`t_start` + arange(:attr:`shape`[0])/:attr:`sampling_rate`) *Slicing*: :class:`AnalogSignal` objects can be sliced. When taking a single column (dimension 0, e.g. [0, :]) or a single element, a :class:`~quantities.Quantity` is returned. Otherwise an :class:`AnalogSignal` (actually a view) is returned, with the same metadata, except that :attr:`t_start` is changed if the start index along dimension 1 is greater than 1. Note that slicing an :class:`AnalogSignal` may give a different result to slicing the underlying NumPy array since signals are always two-dimensional. *Operations available on this object*: == != + * / ''' _parent_objects = ('Segment',) _parent_attrs = ('segment',) _quantity_attr = 'signal' _necessary_attrs = (('signal', pq.Quantity, 2), ('sampling_rate', pq.Quantity, 0), ('t_start', pq.Quantity, 0)) _recommended_attrs = BaseNeo._recommended_attrs def __new__(cls, signal, units=None, dtype=None, copy=True, t_start=0 * pq.s, sampling_rate=None, sampling_period=None, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Constructs new :class:`AnalogSignal` from data. This is called whenever a new class:`AnalogSignal` is created from the constructor, but not when slicing. __array_finalize__ is called on the new object. ''' signal = cls._rescale(signal, units=units) obj = pq.Quantity(signal, units=units, dtype=dtype, copy=copy).view(cls) if obj.ndim == 1: obj.shape = (-1, 1) if t_start is None: raise ValueError('t_start cannot be None') obj._t_start = t_start obj._sampling_rate = _get_sampling_rate(sampling_rate, sampling_period) obj.segment = None return obj def __init__(self, signal, units=None, dtype=None, copy=True, t_start=0 * pq.s, sampling_rate=None, sampling_period=None, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Initializes a newly constructed :class:`AnalogSignal` instance. 
''' # This method is only called when constructing a new AnalogSignal, # not when slicing or viewing. We use the same call signature # as __new__ for documentation purposes. Anything not in the call # signature is stored in annotations. # Calls parent __init__, which grabs universally recommended # attributes and sets up self.annotations DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def __reduce__(self): ''' Map the __new__ function onto _new_AnalogSignalArray, so that pickle works ''' return _new_AnalogSignalArray, (self.__class__, np.array(self), self.units, self.dtype, True, self.t_start, self.sampling_rate, self.sampling_period, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.segment) def _array_finalize_spec(self, obj): ''' Set default values for attributes specific to :class:`AnalogSignal`. Common attributes are defined in :meth:`__array_finalize__` in :class:`basesignal.BaseSignal`), which is called every time a new signal is created and calls this method. ''' self._t_start = getattr(obj, '_t_start', 0 * pq.s) self._sampling_rate = getattr(obj, '_sampling_rate', None) return obj def __repr__(self): ''' Returns a string representing the :class:`AnalogSignal`. ''' return ('<%s(%s, [%s, %s], sampling rate: %s)>' % (self.__class__.__name__, super().__repr__(), self.t_start, self.t_stop, self.sampling_rate)) def __getitem__(self, i): ''' Get the item or slice :attr:`i`. 
''' if isinstance(i, (int, np.integer)): # a single point in time across all channels obj = super().__getitem__(i) obj = pq.Quantity(obj.magnitude, units=obj.units) elif isinstance(i, tuple): obj = super().__getitem__(i) j, k = i if isinstance(j, (int, np.integer)): # extract a quantity array obj = pq.Quantity(obj.magnitude, units=obj.units) else: if isinstance(j, slice): if j.start: obj.t_start = (self.t_start + j.start * self.sampling_period) if j.step: obj.sampling_period *= j.step elif isinstance(j, np.ndarray): raise NotImplementedError( "Arrays not yet supported") # in the general case, would need to return # IrregularlySampledSignal(Array) else: raise TypeError("%s not supported" % type(j)) if isinstance(k, (int, np.integer)): obj = obj.reshape(-1, 1) obj.array_annotate(**deepcopy(self.array_annotations_at_index(k))) elif isinstance(i, slice): obj = super().__getitem__(i) if i.start: obj.t_start = self.t_start + i.start * self.sampling_period obj.array_annotations = deepcopy(self.array_annotations) elif isinstance(i, np.ndarray): # Indexing of an AnalogSignal is only consistent if the resulting number of # samples is the same for each trace. The time axis for these samples is not # guaranteed to be continuous, so returning a Quantity instead of an AnalogSignal here. new_time_dims = np.sum(i, axis=0) if len(new_time_dims) and all(new_time_dims == new_time_dims[0]): obj = np.asarray(self).T.__getitem__(i.T) obj = obj.T.reshape(self.shape[1], -1).T obj = pq.Quantity(obj, units=self.units) else: raise IndexError("indexing of an AnalogSignals needs to keep the same number of " "sample for each trace contained") else: raise IndexError("index should be an integer, tuple, slice or boolean numpy array") return obj def __setitem__(self, i, value): """ Set an item or slice defined by :attr:`i` to `value`. 
""" # because AnalogSignals are always at least two-dimensional, # we need to handle the case where `i` is an integer if isinstance(i, int): i = slice(i, i + 1) elif isinstance(i, tuple): j, k = i if isinstance(k, int): i = (j, slice(k, k + 1)) return super().__setitem__(i, value) # sampling_rate attribute is handled as a property so type checking can # be done @property def sampling_rate(self): ''' Number of samples per unit time. (1/:attr:`sampling_period`) ''' return self._sampling_rate @sampling_rate.setter def sampling_rate(self, rate): ''' Setter for :attr:`sampling_rate` ''' if rate is None: raise ValueError('sampling_rate cannot be None') elif not hasattr(rate, 'units'): raise ValueError('sampling_rate must have units') self._sampling_rate = rate # sampling_period attribute is handled as a property on underlying rate @property def sampling_period(self): ''' Interval between two samples. (1/:attr:`sampling_rate`) ''' return 1. / self.sampling_rate @sampling_period.setter def sampling_period(self, period): ''' Setter for :attr:`sampling_period` ''' if period is None: raise ValueError('sampling_period cannot be None') elif not hasattr(period, 'units'): raise ValueError('sampling_period must have units') self.sampling_rate = 1. / period # t_start attribute is handled as a property so type checking can be done @property def t_start(self): ''' Time when signal begins. ''' return self._t_start @t_start.setter def t_start(self, start): ''' Setter for :attr:`t_start` ''' if start is None: raise ValueError('t_start cannot be None') self._t_start = start @property def duration(self): ''' Signal duration (:attr:`size` * :attr:`sampling_period`) ''' return self.shape[0] / self.sampling_rate @property def t_stop(self): ''' Time when signal ends. 
(:attr:`t_start` + :attr:`duration`) ''' return self.t_start + self.duration @property def times(self): ''' The time points of each sample of the signal (:attr:`t_start` + arange(:attr:`shape`)/:attr:`sampling_rate`) ''' return self.t_start + np.arange(self.shape[0]) / self.sampling_rate def __eq__(self, other): ''' Equality test (==) ''' if (isinstance(other, AnalogSignal) and ( self.t_start != other.t_start or self.sampling_rate != other.sampling_rate)): return False return super().__eq__(other) def _check_consistency(self, other): ''' Check if the attributes of another :class:`AnalogSignal` are compatible with this one. ''' if isinstance(other, AnalogSignal): for attr in "t_start", "sampling_rate": if getattr(self, attr) != getattr(other, attr): raise ValueError( "Inconsistent values of %s" % attr) # how to handle name and annotations? def _repr_pretty_(self, pp, cycle): ''' Handle pretty-printing the :class:`AnalogSignal`. ''' pp.text("{cls} with {channels} channels of length {length}; " "units {units}; datatype {dtype} ".format(cls=self.__class__.__name__, channels=self.shape[1], length=self.shape[0], units=self.units.dimensionality.string, dtype=self.dtype)) if self._has_repr_pretty_attrs_(): pp.breakable() self._repr_pretty_attrs_(pp, cycle) def _pp(line): pp.breakable() with pp.group(indent=1): pp.text(line) _pp("sampling rate: {}".format(self.sampling_rate)) _pp("time: {} to {}".format(self.t_start, self.t_stop)) def time_index(self, t): """Return the array index (or indices) corresponding to the time (or times) `t`""" i = (t - self.t_start) * self.sampling_rate i = np.rint(i.simplified.magnitude).astype(np.int64) return i def time_slice(self, t_start, t_stop): ''' Creates a new AnalogSignal corresponding to the time slice of the original AnalogSignal between times t_start, t_stop. Note, that for numerical stability reasons if t_start does not fall exactly on the time bins defined by the sampling_period it will be rounded to the nearest sampling bin. 
The time bin for t_stop will be chosen to make the duration of the resultant signal as close as possible to t_stop - t_start. This means that for a given duration, the size of the slice will always be the same. ''' # checking start time and transforming to start index if t_start is None: i = 0 t_start = 0 * pq.s else: i = self.time_index(t_start) # checking stop time and transforming to stop index if t_stop is None: j = len(self) else: delta = (t_stop - t_start) * self.sampling_rate j = i + int(np.rint(delta.simplified.magnitude)) if (i < 0) or (j > len(self)): raise ValueError('t_start, t_stop have to be within the analog \ signal duration') # Time slicing should create a deep copy of the object obj = deepcopy(self[i:j]) obj.t_start = self.t_start + i * self.sampling_period return obj def time_shift(self, t_shift): """ Shifts a :class:`AnalogSignal` to start at a new time. Parameters: ----------- t_shift: Quantity (time) Amount of time by which to shift the :class:`AnalogSignal`. Returns: -------- new_sig: :class:`AnalogSignal` New instance of a :class:`AnalogSignal` object starting at t_shift later than the original :class:`AnalogSignal` (the original :class:`AnalogSignal` is not modified). """ new_sig = deepcopy(self) new_sig.t_start = new_sig.t_start + t_shift return new_sig def splice(self, signal, copy=False): """ Replace part of the current signal by a new piece of signal. The new piece of signal will overwrite part of the current signal starting at the time given by the new piece's `t_start` attribute. The signal to be spliced in must have the same physical dimensions, sampling rate, and number of channels as the current signal and fit within it. If `copy` is False (the default), modify the current signal in place. If `copy` is True, return a new signal and leave the current one untouched. In this case, the new signal will not be linked to any parent objects. 
""" if signal.t_start < self.t_start: raise ValueError("Cannot splice earlier than the start of the signal") if signal.t_stop > self.t_stop: raise ValueError("Splice extends beyond signal") if signal.sampling_rate != self.sampling_rate: raise ValueError("Sampling rates do not match") i = self.time_index(signal.t_start) j = i + signal.shape[0] if copy: new_signal = deepcopy(self) new_signal.segment = None new_signal[i:j, :] = signal return new_signal else: self[i:j, :] = signal return self def downsample(self, downsampling_factor, **kwargs): """ Downsample the data of a signal. This method reduces the number of samples of the AnalogSignal to a fraction of the original number of samples, defined by `downsampling_factor`. This method is a wrapper of scipy.signal.decimate and accepts the same set of keyword arguments, except for specifying the axis of resampling, which is fixed to the first axis here. Parameters: ----------- downsampling_factor: integer Factor used for decimation of samples. Scipy recommends to call decimate multiple times for downsampling factors higher than 13 when using IIR downsampling (default). Returns: -------- downsampled_signal: :class:`AnalogSignal` New instance of a :class:`AnalogSignal` object containing the resampled data points. The original :class:`AnalogSignal` is not modified. Note: ----- For resampling the signal with a fixed number of samples, see `resample` method. 
""" if not HAVE_SCIPY: raise ImportError('Decimating requires availability of scipy.signal') # Resampling is only permitted along the time axis (axis=0) if 'axis' in kwargs: kwargs.pop('axis') downsampled_data = scipy.signal.decimate(self.magnitude, downsampling_factor, axis=0, **kwargs) downsampled_signal = self.duplicate_with_new_data(downsampled_data) # since the number of channels stays the same, we can also copy array annotations here downsampled_signal.array_annotations = self.array_annotations.copy() downsampled_signal.sampling_rate = self.sampling_rate / downsampling_factor return downsampled_signal def resample(self, sample_count, **kwargs): """ Resample the data points of the signal. This method interpolates the signal and returns a new signal with a fixed number of samples defined by `sample_count`. This method is a wrapper of scipy.signal.resample and accepts the same set of keyword arguments, except for specifying the axis of resampling which is fixed to the first axis here, and the sample positions. . Parameters: ----------- sample_count: integer Number of desired samples. The resulting signal starts at the same sample as the original and is sampled regularly. Returns: -------- resampled_signal: :class:`AnalogSignal` New instance of a :class:`AnalogSignal` object containing the resampled data points. The original :class:`AnalogSignal` is not modified. 
        Note:
        -----
        For reducing the number of samples to a fraction of the original,
        see the `downsample` method.
        """
        if not HAVE_SCIPY:
            raise ImportError('Resampling requires availability of scipy.signal')

        # Resampling is only permitted along the time axis (axis=0)
        if 'axis' in kwargs:
            kwargs.pop('axis')
        if 't' in kwargs:
            kwargs.pop('t')

        resampled_data, resampled_times = scipy.signal.resample(self.magnitude, sample_count,
                                                                t=self.times, axis=0, **kwargs)

        resampled_signal = self.duplicate_with_new_data(resampled_data)
        resampled_signal.sampling_rate = (sample_count / self.shape[0]) * self.sampling_rate

        # since the number of channels stays the same, we can also copy array annotations here
        resampled_signal.array_annotations = self.array_annotations.copy()
        return resampled_signal

    def rectify(self, **kwargs):
        """
        Rectify the signal.

        This method rectifies the signal by taking the absolute value.
        This method is a wrapper of numpy.absolute() and accepts the same
        set of keyword arguments.

        Returns:
        --------
        rectified_signal: :class:`AnalogSignal`
            New instance of a :class:`AnalogSignal` object containing the
            rectified data points.
            The original :class:`AnalogSignal` is not modified.
        """
        # Use numpy to get the absolute value of the signal
        rectified_data = np.absolute(self.magnitude, **kwargs)

        rectified_signal = self.duplicate_with_new_data(rectified_data)

        # the sampling rate stays constant
        rectified_signal.sampling_rate = self.sampling_rate

        # since the number of channels stays the same, we can also copy array annotations here
        rectified_signal.array_annotations = self.array_annotations.copy()
        return rectified_signal

    def concatenate(self, *signals, overwrite=False, padding=False):
        """
        Concatenate multiple neo.AnalogSignal objects across time.

        Units, sampling_rate and number of signal traces must be the same
        for all signals. Otherwise a ValueError is raised.
        Note that timestamps of concatenated signals might shift in order to
        align the sampling times of all signals.
Parameters ---------- signals: neo.AnalogSignal objects AnalogSignals that will be concatenated overwrite : bool If True, samples of the earlier (lower index in `signals`) signals are overwritten by that of later (higher index in `signals`) signals. If False, samples of the later are overwritten by earlier signal. Default: False padding : bool, scalar quantity Sampling values to use as padding in case signals do not overlap. If False, do not apply padding. Signals have to align or overlap. If True, signals will be padded using np.NaN as pad values. If a scalar quantity is provided, this will be used for padding. The other signal is moved forward in time by maximum one sampling period to align the sampling times of both signals. Default: False Returns ------- signal: neo.AnalogSignal concatenated output signal """ # Sanity of inputs if not hasattr(signals, '__iter__'): raise TypeError('signals must be iterable') if not all([isinstance(a, AnalogSignal) for a in signals]): raise TypeError('Entries of anasiglist have to be of type neo.AnalogSignal') if len(signals) == 0: return self signals = [self] + list(signals) # Check required common attributes: units, sampling_rate and shape[-1] shared_attributes = ['units', 'sampling_rate'] attribute_values = [tuple((getattr(anasig, attr) for attr in shared_attributes)) for anasig in signals] # add shape dimensions that do not relate to time attribute_values = [(attribute_values[i] + (signals[i].shape[1:],)) for i in range(len(signals))] if not all([attrs == attribute_values[0] for attrs in attribute_values]): raise MergeError( f'AnalogSignals have to share {shared_attributes} attributes to be concatenated.') units, sr, shape = attribute_values[0] # find gaps between Analogsignals combined_time_ranges = self._concatenate_time_ranges( [(s.t_start, s.t_stop) for s in signals]) missing_time_ranges = self._invert_time_ranges(combined_time_ranges) if len(missing_time_ranges): diffs = np.diff(np.asarray(missing_time_ranges), axis=1) 
else: diffs = [] if padding is False and any(diffs > signals[0].sampling_period): raise MergeError(f'Signals are not continuous. Cannot concatenate signals with gaps. ' f'Please provide a padding value.') if padding is not False: logger.warning('Signals will be padded using {}.'.format(padding)) if padding is True: padding = np.NaN * units if isinstance(padding, pq.Quantity): padding = padding.rescale(units).magnitude else: raise MergeError('Invalid type of padding value. Please provide a bool value ' 'or a quantities object.') t_start = min([a.t_start for a in signals]) t_stop = max([a.t_stop for a in signals]) n_samples = int(np.rint(((t_stop - t_start) * sr).rescale('dimensionless').magnitude)) shape = (n_samples,) + shape # Collect attributes and annotations across all concatenated signals kwargs = {} common_annotations = signals[0].annotations common_array_annotations = signals[0].array_annotations for anasig in signals[1:]: common_annotations = intersect_annotations(common_annotations, anasig.annotations) common_array_annotations = intersect_annotations(common_array_annotations, anasig.array_annotations) kwargs['annotations'] = common_annotations kwargs['array_annotations'] = common_array_annotations for name in ("name", "description", "file_origin"): attr = [getattr(s, name) for s in signals] if all([a == attr[0] for a in attr]): kwargs[name] = attr[0] else: kwargs[name] = f'concatenation ({attr})' conc_signal = AnalogSignal(np.full(shape=shape, fill_value=padding, dtype=signals[0].dtype), sampling_rate=sr, t_start=t_start, units=units, **kwargs) if not overwrite: signals = signals[::-1] while len(signals) > 0: conc_signal.splice(signals.pop(0), copy=False) return conc_signal def _concatenate_time_ranges(self, time_ranges): time_ranges = sorted(time_ranges) new_ranges = time_ranges[:1] for t_start, t_stop in time_ranges[1:]: # time ranges are not continuous -> define a new range if t_start > new_ranges[-1][1]: new_ranges.append((t_start, t_stop)) # time range
is continuous -> extend time range elif t_stop > new_ranges[-1][1]: new_ranges[-1] = (new_ranges[-1][0], t_stop) return new_ranges def _invert_time_ranges(self, time_ranges): i = 0 new_ranges = [] while i < len(time_ranges) - 1: new_ranges.append((time_ranges[i][1], time_ranges[i + 1][0])) i += 1 return new_ranges neo-0.10.0/neo/core/baseneo.py0000644000076700000240000003332614066375716016503 0ustar andrewstaff00000000000000""" This module defines :class:`BaseNeo`, the abstract base class used by all :module:`neo.core` classes. """ from copy import deepcopy from datetime import datetime, date, time, timedelta from decimal import Decimal import logging from numbers import Number import numpy as np ALLOWED_ANNOTATION_TYPES = (int, float, complex, str, bytes, type(None), datetime, date, time, timedelta, Number, Decimal, np.number, np.bool_) logger = logging.getLogger("Neo") class MergeError(Exception): pass def _check_annotations(value): """ Recursively check that value is either of a "simple" type (number, string, date/time) or is a (possibly nested) dict, list or numpy array containing only simple types. """ if isinstance(value, np.ndarray): if not issubclass(value.dtype.type, ALLOWED_ANNOTATION_TYPES): raise ValueError("Invalid annotation. NumPy arrays with dtype %s" "are not allowed" % value.dtype.type) elif isinstance(value, dict): for element in value.values(): _check_annotations(element) elif isinstance(value, (list, tuple)): for element in value: _check_annotations(element) elif not isinstance(value, ALLOWED_ANNOTATION_TYPES): raise ValueError("Invalid annotation. Annotations of type %s are not" "allowed" % type(value)) def merge_annotation(a, b): """ First attempt at a policy for merging annotations (intended for use with parallel computations using MPI). This policy needs to be discussed further, or we could allow the user to specify a policy. 
Current policy: For arrays or lists: concatenate For dicts: merge recursively For strings: concatenate with ';' Otherwise: fail if the annotations are not equal """ assert type(a) == type(b), 'type({}) {} != type({}) {}'.format(a, type(a), b, type(b)) if isinstance(a, dict): return merge_annotations(a, b) elif isinstance(a, np.ndarray): # concatenate b to a return np.append(a, b) elif isinstance(a, list): # concatenate b to a return a + b elif isinstance(a, str): if a == b: return a else: return a + ";" + b else: assert a == b, '{} != {}'.format(a, b) return a def merge_annotations(A, *Bs): """ Merge two sets of annotations. Merging follows these rules: All keys that are in A or B, but not both, are kept. For keys that are present in both: For arrays or lists: concatenate For dicts: merge recursively For strings: concatenate with ';' Otherwise: warn if the annotations are not equal """ merged = A.copy() for B in Bs: for name in B: if name not in merged: merged[name] = B[name] else: try: merged[name] = merge_annotation(merged[name], B[name]) except BaseException as exc: # exc.args += ('key %s' % name,) # raise merged[name] = "MERGE CONFLICT" # temporary hack logger.debug("Merging annotations: A=%s Bs=%s merged=%s", A, Bs, merged) return merged def intersect_annotations(A, B): """ Identify common entries in dictionaries A and B and return these in a separate dictionary. Entries have to share key as well as value to be considered common. Parameters ---------- A, B : dict Dictionaries to merge. 
""" result = {} for key in set(A.keys()) & set(B.keys()): v1, v2 = A[key], B[key] assert type(v1) == type(v2), 'type({}) {} != type({}) {}'.format(v1, type(v1), v2, type(v2)) if isinstance(v1, dict) and v1 == v2: result[key] = deepcopy(v1) elif isinstance(v1, str) and v1 == v2: result[key] = A[key] elif isinstance(v1, list) and v1 == v2: result[key] = deepcopy(v1) elif isinstance(v1, np.ndarray) and all(v1 == v2): result[key] = deepcopy(v1) return result def _reference_name(class_name): """ Given the name of a class, return an attribute name to be used for references to instances of that class. For example, a Segment object has a parent Block object, referenced by `segment.block`. The attribute name `block` is obtained by calling `_container_name("Block")`. """ return class_name.lower() def _container_name(class_name): """ Given the name of a class, return an attribute name to be used for lists (or other containers) containing instances of that class. For example, a Block object contains a list of Segment objects, referenced by `block.segments`. The attribute name `segments` is obtained by calling `_container_name_plural("Segment")`. """ return _reference_name(class_name) + 's' class BaseNeo: """ This is the base class from which all Neo objects inherit. This class implements support for universally recommended arguments, and also sets up the :attr:`annotations` dict for additional arguments. Each class can define one or more of the following class attributes: :_parent_objects: Neo objects that can be parents of this object. Note that no Neo object can have more than one parent. An instance attribute named class.__name__.lower() will be automatically defined to hold this parent and will be initialized to None. :_necessary_attrs: A list of tuples containing the attributes that the class must have. The tuple can have 2-4 elements. The first element is the attribute name. The second element is the attribute type. 
The third element is the number of dimensions (only for numpy arrays and quantities). The fourth element is the dtype of array (only for numpy arrays and quantities). This does NOT include the attributes holding the parents or children of the object. :_recommended_attrs: A list of tuples containing the attributes that the class may optionally have. It uses the same structure as :_necessary_attrs: :_repr_pretty_attrs_keys_: The names of attributes printed when pretty-printing using IPython. The following helper properties are available: :_parent_containers: The names of the container attributes used to store :_parent_objects: :parents: All objects that are parents of the current object. :_all_attrs: All required and optional attributes. :_necessary_attrs: + :_recommended_attrs: The following "universal" methods are available: :__init__: Grabs the universally recommended arguments :attr:`name`, :attr:`file_origin`, and :attr:`description` and stores them as attributes. Also takes every additional argument (that is, every argument that is not handled by :class:`BaseNeo` or the child class), and puts it in the dict :attr:`annotations`. :annotate(**args): Updates :attr:`annotations` with keyword/value pairs. :merge(**args): Merge the contents of another object into this one. The merge method implemented here only merges annotations (see :merge_annotations:). Subclasses should implement their own merge rules. :merge_annotations(**args): Merge the :attr:`annotations` of another object into this one. Each child class should: 0) describe its parents (if any) and attributes in the relevant class attributes. :_recommended_attrs: should append BaseNeo._recommended_attrs to the end.
1) call BaseNeo.__init__(self, name=name, description=description, file_origin=file_origin, **annotations) with the universal recommended arguments, plus optional annotations 2) process its required arguments in its __new__ or __init__ method 3) process its non-universal recommended arguments (in its __new__ or __init__ method) Non-keyword arguments should only be used for required arguments. The required and recommended arguments for each child class (Neo object) are specified in the _necessary_attrs and _recommended_attrs attributes and documentation for the child object. """ # these attributes control relationships, they need to be # specified in each child class # Parent objects whose children can have a single parent _parent_objects = () # Attribute names corresponding to _parent_objects _parent_attrs = () # Attributes that an instance is required to have defined _necessary_attrs = () # Attributes that an instance may or may not have defined _recommended_attrs = (('name', str), ('description', str), ('file_origin', str)) # Attributes that are used for pretty-printing _repr_pretty_attrs_keys_ = ("name", "description", "annotations") def __init__(self, name=None, description=None, file_origin=None, **annotations): """ This is the base constructor for all Neo objects. Stores universally recommended attributes and creates :attr:`annotations` from additional arguments not processed by :class:`BaseNeo` or the child class. """ # create `annotations` for additional arguments _check_annotations(annotations) self.annotations = annotations # these attributes are recommended for all objects. self.name = name self.description = description self.file_origin = file_origin # initialize parent containers for parent in self._parent_containers: setattr(self, parent, None) def annotate(self, **annotations): """ Add annotations (non-standardized metadata) to a Neo object.
Example: >>> obj.annotate(key1=value0, key2=value1) >>> obj.annotations['key2'] value1 """ _check_annotations(annotations) self.annotations.update(annotations) def _has_repr_pretty_attrs_(self): return any(getattr(self, k) for k in self._repr_pretty_attrs_keys_) def _repr_pretty_attrs_(self, pp, cycle): first = True for key in self._repr_pretty_attrs_keys_: value = getattr(self, key) if value: if first: first = False else: pp.breakable() with pp.group(indent=1): pp.text("{}: ".format(key)) pp.pretty(value) def _repr_pretty_(self, pp, cycle): """ Handle pretty-printing the :class:`BaseNeo`. """ pp.text(self.__class__.__name__) if self._has_repr_pretty_attrs_(): pp.breakable() self._repr_pretty_attrs_(pp, cycle) @property def _parent_containers(self): """ Containers for parent objects. """ return tuple([_reference_name(parent) for parent in self._parent_objects]) @property def parents(self): """ All parent objects storing the current object. """ return tuple([getattr(self, attr) for attr in self._parent_containers]) @property def _all_attrs(self): """ Returns a combination of all required and recommended attributes. """ return self._necessary_attrs + self._recommended_attrs def merge_annotations(self, *others): """ Merge annotations from the other objects into this one. Merging follows these rules: All keys that are in either object, but not both, are kept. For keys that are present in both objects: For arrays or lists: concatenate the two arrays For dicts: merge recursively For strings: concatenate with ';' Otherwise: fail if the annotations are not equal """ other_annotations = [other.annotations for other in others] merged_annotations = merge_annotations(self.annotations, *other_annotations) self.annotations.update(merged_annotations) def merge(self, *others): """ Merge the contents of another object into this one. See :meth:`merge_annotations` for details of the merge operation.
""" self.merge_annotations(*others) def set_parent(self, obj): """ Set the appropriate "parent" attribute of this object according to the type of "obj" """ if obj.__class__.__name__ not in self._parent_objects: raise TypeError("{} can only have parents of type {}, not {}".format( self.__class__.__name__, self._parent_objects, obj.__class__.__name__)) loc = self._parent_objects.index(obj.__class__.__name__) parent_attr = self._parent_attrs[loc] setattr(self, parent_attr, obj) neo-0.10.0/neo/core/basesignal.py0000644000076700000240000002653014066375716017176 0ustar andrewstaff00000000000000''' This module implements :class:`BaseSignal`, an array of signals. This is a parent class from which all signal objects inherit: :class:`AnalogSignal` and :class:`IrregularlySampledSignal` :class:`BaseSignal` inherits from :class:`quantities.Quantity`, which inherits from :class:`numpy.array`. Inheritance from :class:`numpy.array` is explained here: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html In brief: * Constructor :meth:`__new__` for :class:`BaseSignal` doesn't exist. Only child objects :class:`AnalogSignal` and :class:`IrregularlySampledSignal` can be created. ''' import copy import logging from copy import deepcopy import numpy as np import quantities as pq from neo.core.baseneo import MergeError, merge_annotations from neo.core.dataobject import DataObject, ArrayDict logger = logging.getLogger("Neo") class BaseSignal(DataObject): ''' This is the base class from which all signal objects inherit: :class:`AnalogSignal` and :class:`IrregularlySampledSignal`. This class contains all common methods of both child classes. It uses the following child class attributes: :_necessary_attrs: a list of the attributes that the class must have. :_recommended_attrs: a list of the attributes that the class may optionally have. ''' def _array_finalize_spec(self, obj): ''' Called by :meth:`__array_finalize__`, used to customize behaviour of sub-classes. 
''' return obj def __array_finalize__(self, obj): ''' This is called every time a new signal is created. It is the appropriate place to set default values for attributes for a signal constructed by slicing or viewing. User-specified values are only relevant for construction from constructor, and these are set in __new__ in the child object. Then they are just copied over here. Default values for the specific attributes for subclasses (:class:`AnalogSignal` and :class:`IrregularlySampledSignal`) are set in :meth:`_array_finalize_spec` ''' super().__array_finalize__(obj) self._array_finalize_spec(obj) # The additional arguments self.annotations = getattr(obj, 'annotations', {}) # Add empty array annotations, because they cannot always be copied, # but do not overwrite existing ones from slicing etc. # This ensures the attribute exists if not hasattr(self, 'array_annotations'): self.array_annotations = ArrayDict(self._get_arr_ann_length()) # Globally recommended attributes self.name = getattr(obj, 'name', None) self.file_origin = getattr(obj, 'file_origin', None) self.description = getattr(obj, 'description', None) # Parent objects self.segment = getattr(obj, 'segment', None) @classmethod def _rescale(self, signal, units=None): ''' Check that units are present, and rescale the signal if necessary. This is called whenever a new signal is created from the constructor. See :meth:`__new__' in :class:`AnalogSignal` and :class:`IrregularlySampledSignal` ''' if units is None: if not hasattr(signal, "units"): raise ValueError("Units must be specified") elif isinstance(signal, pq.Quantity): # This test always returns True, i.e. rescaling is always executed if one of the units # is a pq.CompoundUnit. This is fine because rescaling is correct anyway. 
if pq.quantity.validate_dimensionality(units) != signal.dimensionality: signal = signal.rescale(units) return signal def __getslice__(self, i, j): ''' Get a slice from :attr:`i` to :attr:`j`. Doesn't get called in Python 3, :meth:`__getitem__` is called instead ''' return self.__getitem__(slice(i, j)) def __ne__(self, other): ''' Non-equality test (!=) ''' return not self.__eq__(other) def _apply_operator(self, other, op, *args): ''' Handle copying metadata to the new signal after a mathematical operation. ''' self._check_consistency(other) f = getattr(super(), op) new_signal = f(other, *args) new_signal._copy_data_complement(self) # _copy_data_complement can't always copy array annotations, # so this needs to be done locally new_signal.array_annotations = copy.deepcopy(self.array_annotations) return new_signal def _get_required_attributes(self, signal, units): ''' Return the required attributes for a signal as a dictionary ''' required_attributes = {} for attr in self._necessary_attrs: if attr[0] == "signal": required_attributes["signal"] = signal elif attr[0] == "image_data": required_attributes["image_data"] = signal elif attr[0] == "t_start": required_attributes["t_start"] = getattr(self, "t_start", 0.0 * pq.ms) else: required_attributes[str(attr[0])] = getattr(self, attr[0], None) required_attributes['units'] = units return required_attributes def duplicate_with_new_data(self, signal, units=None): ''' Create a new signal with the same metadata but different data. Required attributes of the signal are used.
Note: Array annotations can not be copied here because length of data can change ''' if units is None: units = self.units # else: # units = pq.quantity.validate_dimensionality(units) # signal is the new signal required_attributes = self._get_required_attributes(signal, units) new = self.__class__(**required_attributes) new._copy_data_complement(self) new.annotations.update(self.annotations) # Note: Array annotations are not copied here, because it is not ensured # that the same number of signals is used and they would possibly make no sense # when combined with another signal return new def _copy_data_complement(self, other): ''' Copy the metadata from another signal. Required and recommended attributes of the signal are used. Note: Array annotations can not be copied here because length of data can change ''' all_attr = {self._recommended_attrs, self._necessary_attrs} for sub_at in all_attr: for attr in sub_at: if attr[0] == "t_start": setattr(self, attr[0], deepcopy(getattr(other, attr[0], 0.0 * pq.ms))) elif attr[0] != 'signal': setattr(self, attr[0], deepcopy(getattr(other, attr[0], None))) setattr(self, 'annotations', deepcopy(getattr(other, 'annotations', None))) # Note: Array annotations cannot be copied because length of data can be changed # here # which would cause inconsistencies def __rsub__(self, other, *args): ''' Backwards subtraction (other-self) ''' return self.__mul__(-1, *args) + other def __add__(self, other, *args): ''' Addition (+) ''' return self._apply_operator(other, "__add__", *args) def __sub__(self, other, *args): ''' Subtraction (-) ''' return self._apply_operator(other, "__sub__", *args) def __mul__(self, other, *args): ''' Multiplication (*) ''' return self._apply_operator(other, "__mul__", *args) def __truediv__(self, other, *args): ''' Float division (/) ''' return self._apply_operator(other, "__truediv__", *args) def __div__(self, other, *args): ''' Integer division (//) ''' return self._apply_operator(other, "__div__", *args) 
__radd__ = __add__ __rmul__ = __mul__ def merge(self, other): ''' Merge another signal into this one. The signal objects are concatenated horizontally (column-wise, :func:`np.hstack`). If the attributes of the two signals are not compatible, an Exception is raised. Required attributes of the signal are used. ''' for attr in self._necessary_attrs: if 'signal' != attr[0]: if getattr(self, attr[0], None) != getattr(other, attr[0], None): raise MergeError("Cannot merge these two signals as the %s differ." % attr[0]) if self.segment != other.segment: raise MergeError( "Cannot merge these two signals as they belong to different segments.") if hasattr(self, "lazy_shape"): if hasattr(other, "lazy_shape"): if self.lazy_shape[0] != other.lazy_shape[0]: raise MergeError("Cannot merge signals of different length.") merged_lazy_shape = (self.lazy_shape[0], self.lazy_shape[1] + other.lazy_shape[1]) else: raise MergeError("Cannot merge a lazy object with a real object.") if other.units != self.units: other = other.rescale(self.units) stack = np.hstack((self.magnitude, other.magnitude)) kwargs = {} for name in ("name", "description", "file_origin"): attr_self = getattr(self, name) attr_other = getattr(other, name) if attr_self == attr_other: kwargs[name] = attr_self else: kwargs[name] = "merge({}, {})".format(attr_self, attr_other) merged_annotations = merge_annotations(self.annotations, other.annotations) kwargs.update(merged_annotations) kwargs['array_annotations'] = self._merge_array_annotations(other) signal = self.__class__(stack, units=self.units, dtype=self.dtype, copy=False, t_start=self.t_start, sampling_rate=self.sampling_rate, **kwargs) signal.segment = self.segment if hasattr(self, "lazy_shape"): signal.lazy_shape = merged_lazy_shape return signal def time_slice(self, t_start, t_stop): ''' Creates a new AnalogSignal corresponding to the time slice of the original Signal between times t_start, t_stop.
''' raise NotImplementedError('Needs to be implemented in subclasses.') def concatenate(self, *signals): ''' Concatenate multiple signals across time. The signal objects are concatenated vertically (row-wise, :func:`np.vstack`). Concatenation can be used to combine signals across segments. Note: Only (array) annotations common to both signals are attached to the concatenated signal. If the attributes of the signals are not compatible, an Exception is raised. Parameters ---------- signals : multiple neo.BaseSignal objects The objects that are concatenated with this one. Returns ------- signal : neo.BaseSignal Signal containing all non-overlapping samples of the source signals. Raises ------ MergeError If `other` object has incompatible attributes. ''' raise NotImplementedError('Concatenation needs to be implemented in subclasses.') neo-0.10.0/neo/core/block.py0000644000076700000240000001047314066375716016157 0ustar andrewstaff00000000000000''' This module defines :class:`Block`, the main container gathering all the data, whether discrete or continuous, for a given recording session. :class:`Block` derives from :class:`Container`, from :module:`neo.core.container`. ''' from datetime import datetime from neo.core.container import Container, unique_objs class Block(Container): ''' Main container gathering all the data, whether discrete or continuous, for a given recording session. A block is not necessarily temporally homogeneous, in contrast to :class:`Segment`. *Usage*:: >>> from neo.core import Block, Segment, Group, AnalogSignal >>> from quantities import nA, kHz >>> import numpy as np >>> >>> # create a Block with 3 Segment and 2 Group objects ... blk = Block() >>> for ind in range(3): ... seg = Segment(name='segment %d' % ind, index=ind) ... blk.segments.append(seg) ... >>> for ind in range(2): ... group = Group(name='Array probe %d' % ind) ... blk.groups.append(group) ... >>> # Populate the Block with AnalogSignal objects ...
for seg in blk.segments: ... for group in blk.groups: ... a = AnalogSignal(np.random.randn(10000, 64)*nA, ... sampling_rate=10*kHz) ... group.analogsignals.append(a) ... seg.analogsignals.append(a) *Required attributes/properties*: None *Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. :file_datetime: (datetime) The creation date and time of the original data file. :rec_datetime: (datetime) The date and time of the original recording. Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Container of*: :class:`Segment` :class:`Group` ''' _container_child_objects = ('Segment', 'Group') _child_properties = () _recommended_attrs = ((('file_datetime', datetime), ('rec_datetime', datetime), ('index', int)) + Container._recommended_attrs) _repr_pretty_attrs_keys_ = (Container._repr_pretty_attrs_keys_ + ('file_origin', 'file_datetime', 'rec_datetime', 'index')) _repr_pretty_containers = ('segments',) def __init__(self, name=None, description=None, file_origin=None, file_datetime=None, rec_datetime=None, index=None, **annotations): ''' Initialize a new :class:`Block` instance. ''' super().__init__(name=name, description=description, file_origin=file_origin, **annotations) self.file_datetime = file_datetime self.rec_datetime = rec_datetime self.index = index self.regionsofinterest = [] # temporary workaround. # the goal is to store all sub-classes of RegionOfInterest in a single list # but this will need substantial changes to container handling @property def data_children_recur(self): ''' All data child objects stored in the current object, obtained recursively. ''' # subclassing this to remove duplicate objects such as SpikeTrain # objects in both Segment and Group # Only Block can have duplicate items right now, so implement # this here for performance reasons.
return tuple(unique_objs(super().data_children_recur)) def list_children_by_class(self, cls): ''' List all children of a particular class recursively. You can either provide a class object, a class name, or the name of the container storing the class. ''' # subclassing this to remove duplicate objects such as SpikeTrain # objects in both Segment and Group # Only Block can have duplicate items right now, so implement # this here for performance reasons. return unique_objs(super().list_children_by_class(cls)) neo-0.10.0/neo/core/container.py0000644000076700000240000006355714066375716017062 0ustar andrewstaff00000000000000""" This module implements generic container base class that all neo container object inherit from. It provides shared methods for all container types. :class:`Container` is derived from :class:`BaseNeo` """ from copy import deepcopy from neo.core.baseneo import BaseNeo, _reference_name, _container_name from neo.core.spiketrain import SpikeTrain from neo.core.spiketrainlist import SpikeTrainList def unique_objs(objs): """ Return a list of objects in the list objs where all objects are unique using the "is" test. """ seen = set() return [obj for obj in objs if id(obj) not in seen and not seen.add(id(obj))] def filterdata(data, targdict=None, objects=None, **kwargs): """ Return a list of the objects in data matching *any* of the search terms in either their attributes or annotations. Search terms can be provided as keyword arguments or a dictionary, either as a positional argument after data or to the argument targdict. targdict can also be a list of dictionaries, in which case the filters are applied sequentially. If targdict and kwargs are both supplied, the targdict filters are applied first, followed by the kwarg filters. A targdict of None or {} and objects = None corresponds to no filters applied, therefore returning all child objects. Default targdict and objects is None. 
objects (optional) should be the name of a Neo object type, a neo object class, or a list of one or both of these. If specified, only these objects will be returned. """ # if objects are specified, get the classes if objects: if hasattr(objects, 'lower') or isinstance(objects, type): objects = [objects] elif objects is not None: return [] # handle cases with targdict if targdict is None: targdict = kwargs elif not kwargs: pass elif hasattr(targdict, 'keys'): targdict = [targdict, kwargs] else: targdict += [kwargs] if not targdict: results = data # if multiple dicts are provided, apply each filter sequentially elif not hasattr(targdict, 'keys'): # for performance reasons, only do the object filtering on the first # iteration results = filterdata(data, targdict=targdict[0], objects=objects) for targ in targdict[1:]: results = filterdata(results, targdict=targ) return results else: # do the actual filtering results = [] for key, value in sorted(targdict.items()): for obj in data: if (hasattr(obj, key) and getattr(obj, key) == value and all([obj is not res for res in results])): results.append(obj) elif (key in obj.annotations and obj.annotations[key] == value and all([obj is not res for res in results])): results.append(obj) # keep only objects of the correct classes if objects: results = [result for result in results if result.__class__ in objects or result.__class__.__name__ in objects] if results and all(isinstance(obj, SpikeTrain) for obj in results): return SpikeTrainList(results) else: return results class Container(BaseNeo): """ This is the base class from which Neo container objects inherit. It derives from :class:`BaseNeo`. In addition to the setup :class:`BaseNeo` does, this class also automatically sets up the lists to hold the children of the object. Each class can define one or more of the following class attributes (in addition to those of BaseNeo): :_container_child_objects: Neo container objects that can be children of this object. 
This attribute is used in cases where the child can only have one parent of this type. An instance attribute named class.__name__.lower()+'s' will be automatically defined to hold this child and will be initialized to an empty list. :_data_child_objects: Neo data objects that can be children of this object. An instance attribute named class.__name__.lower()+'s' will be automatically defined to hold this child and will be initialized to an empty list. :_multi_child_objects: Neo container objects that can be children of this object. This attribute is used in cases where the child can have multiple parents of this type. An instance attribute named class.__name__.lower()+'s' will be automatically defined to hold this child and will be initialized to an empty list. :_child_properties: Properties that return sub-children of a particular type. These properties must still be defined. This is mostly used for generate_diagram. :_repr_pretty_containers: The names of containers attributes printed when pretty-printing using iPython. The following helper properties are available (in addition to those of BaseNeo): :_single_child_objects: All neo container objects that can be children of this object and where the child can only have one parent of this type. :_container_child_objects: + :_data_child_objects: :_child_objects: All child objects. :_single_child_objects: + :_multi_child_objects: :_container_child_containers: The names of the container attributes used to store :_container_child_objects: :_data_child_containers: The names of the container attributes used to store :_data_child_objects: :_single_child_containers: The names of the container attributes used to store :_single_child_objects: :_multi_child_containers: The names of the container attributes used to store :_multi_child_objects: :_child_containers: All child container attributes. 
:_single_child_containers: + :_multi_child_containers: :_single_children: All objects that are children of the current object where the child can only have one parent of this type. :_multi_children: All objects that are children of the current object where the child can have multiple parents of this type. :data_children: All data objects that are children of the current object. :container_children: All container objects that are children of the current object. :children: All Neo objects that are children of the current object. :data_children_recur: All data objects that are children of the current object or any of its children, any of its children's children, etc. :container_children_recur: All container objects that are children of the current object or any of its children, any of its children's children, etc. :children_recur: All Neo objects that are children of the current object or any of its children, any of its children's children, etc. The following "universal" methods are available (in addition to those of BaseNeo): :size: A dictionary where each key is an attribute storing child objects and the value is the number of objects stored in that attribute. :filter(**args): Retrieves children of the current object that have particular properties. :list_children_by_class(**args): Retrieves all children of the current object recursively that are of a particular class. :create_many_to_one_relationship(**args): For each child of the current object that can only have a single parent, set its parent to be the current object. :create_many_to_many_relationship(**args): For children of the current object that can have more than one parent of this type, put the current object in the parent list. :create_relationship(**args): Combines :create_many_to_one_relationship: and :create_many_to_many_relationship: :merge(**args): Annotations are merged based on the rules of :merge_annotations:. Child objects with the same name and a :merge: method are merged using that method. 
Other child objects are appended to the relevant container attribute. Parent attributes are NOT changed in this operation. Unlike :BaseNeo.merge:, this method implements all necessary merge rules for a container class. Each child class should: 0) call Container.__init__(self, name=name, description=description, file_origin=file_origin, **annotations) with the universal recommended arguments, plus optional annotations 1) process its required arguments in its __new__ or __init__ method 2) process its non-universal recommended arguments (in its __new__ or __init__ method) """ # Child objects that are a container and have a single parent _container_child_objects = () # Child objects that have data and have a single parent _data_child_objects = () # Child objects that can have multiple parents _multi_child_objects = () # Properties returning children of children [of children...] _child_properties = () # Containers that are listed when pretty-printing _repr_pretty_containers = () def __init__(self, name=None, description=None, file_origin=None, **annotations): """ Initialize a new :class:`Container` instance. """ super().__init__(name=name, description=description, file_origin=file_origin, **annotations) # initialize containers for container in self._child_containers: setattr(self, container, []) @property def _single_child_objects(self): """ Child objects that have a single parent. """ return self._container_child_objects + self._data_child_objects @property def _container_child_containers(self): """ Containers for child objects that are a container and have a single parent. """ return tuple([_container_name(child) for child in self._container_child_objects]) @property def _data_child_containers(self): """ Containers for child objects that have data and have a single parent. """ return tuple([_container_name(child) for child in self._data_child_objects]) @property def _single_child_containers(self): """ Containers for child objects with a single parent.
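The container attributes returned here follow the naming rule stated in the class docstring: the lower-cased class name plus ``'s'``. A minimal sketch of that rule (an illustrative helper, not neo's actual ``_container_name`` function, which also handles special cases):

```python
# Illustrative sketch of the naming convention: each child class is stored
# in an attribute named after the lower-cased class name plus 's'.
# (Hypothetical helper; neo's real _container_name also special-cases
# some class names.)
def container_name(class_name):
    return class_name.lower() + 's'

segment_container = container_name('Segment')      # 'segments'
signal_container = container_name('AnalogSignal')  # 'analogsignals'
```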
""" return tuple([_container_name(child) for child in self._single_child_objects]) @property def _multi_child_containers(self): """ Containers for child objects that can have multiple parents. """ return tuple([_container_name(child) for child in self._multi_child_objects]) @property def _child_objects(self): """ All types for child objects. """ return self._single_child_objects + self._multi_child_objects @property def _child_containers(self): """ All containers for child objects. """ return self._single_child_containers + self._multi_child_containers @property def _single_children(self): """ All child objects that can only have single parents. """ childs = [list(getattr(self, attr)) for attr in self._single_child_containers] return tuple(sum(childs, [])) @property def _multi_children(self): """ All child objects that can have multiple parents. """ childs = [list(getattr(self, attr)) for attr in self._multi_child_containers] return tuple(sum(childs, [])) @property def data_children(self): """ All data child objects stored in the current object. Not recursive. """ childs = [list(getattr(self, attr)) for attr in self._data_child_containers] return tuple(sum(childs, [])) @property def container_children(self): """ All container child objects stored in the current object. Not recursive. """ childs = [list(getattr(self, attr)) for attr in self._container_child_containers + self._multi_child_containers] return tuple(sum(childs, [])) @property def children(self): """ All child objects stored in the current object. Not recursive. """ return self.data_children + self.container_children @property def data_children_recur(self): """ All data child objects stored in the current object, obtained recursively. """ childs = [list(child.data_children_recur) for child in self.container_children] return self.data_children + tuple(sum(childs, [])) @property def container_children_recur(self): """ All container child objects stored in the current object, obtained recursively. 
""" childs = [list(child.container_children_recur) for child in self.container_children] return self.container_children + tuple(sum(childs, [])) @property def children_recur(self): """ All child objects stored in the current object, obtained recursively. """ return self.data_children_recur + self.container_children_recur @property def size(self): """ Get dictionary containing the names of child containers in the current object as keys and the number of children of that type as values. """ return {name: len(getattr(self, name)) for name in self._child_containers} def filter(self, targdict=None, data=True, container=False, recursive=True, objects=None, **kwargs): """ Return a list of child objects matching *any* of the search terms in either their attributes or annotations. Search terms can be provided as keyword arguments or a dictionary, either as a positional argument after data or to the argument targdict. targdict can also be a list of dictionaries, in which case the filters are applied sequentially. If targdict and kwargs are both supplied, the targdict filters are applied first, followed by the kwarg filters. A targdict of None or {} corresponds to no filters applied, therefore returning all child objects. Default targdict is None. If data is True (default), include data objects. If container is True (default False), include container objects. If recursive is True (default), descend into child containers for objects. objects (optional) should be the name of a Neo object type, a neo object class, or a list of one or both of these. If specified, only these objects will be returned. If not specified any type of object is returned. Default is None. Note that if recursive is True, containers not in objects will still be descended into. This overrides data and container. 
Examples:: >>> obj.filter(name="Vm") >>> obj.filter(objects=neo.SpikeTrain) >>> obj.filter(targdict={'myannotation':3}) """ if isinstance(targdict, str): raise TypeError("filtering is based on key-value pairs." " Only a single string was provided.") # if objects are specified, get the classes if objects: data = True container = True if objects == SpikeTrain: children = SpikeTrainList() else: children = [] # get the objects we want if data: if recursive: children.extend(self.data_children_recur) else: children.extend(self.data_children) if container: if recursive: children.extend(self.container_children_recur) else: children.extend(self.container_children) return filterdata(children, objects=objects, targdict=targdict, **kwargs) def list_children_by_class(self, cls): """ List all children of a particular class recursively. You can either provide a class object, a class name, or the name of the container storing the class. """ if not hasattr(cls, 'lower'): cls = cls.__name__ container_name = _container_name(cls) objs = list(getattr(self, container_name, [])) for child in self.container_children_recur: objs.extend(getattr(child, container_name, [])) return objs def create_many_to_one_relationship(self, force=False, recursive=True): """ For each child of the current object that can only have a single parent, set its parent to be the current object. Usage: >>> a_block.create_many_to_one_relationship() >>> a_block.create_many_to_one_relationship(force=True) If the current object is a :class:`Block`, you want to run populate_RecordingChannel first, because this will create new objects that this method will link up. 
If force is True overwrite any existing relationships If recursive is True descend into child objects and create relationships there """ parent_name = _reference_name(self.__class__.__name__) for child in self._single_children: if (hasattr(child, parent_name) and getattr(child, parent_name) is None or force): setattr(child, parent_name, self) if recursive: for child in self.container_children: child.create_many_to_one_relationship(force=force, recursive=True) def create_many_to_many_relationship(self, append=True, recursive=True): """ For children of the current object that can have more than one parent of this type, put the current object in the parent list. If append is True add it to the list, otherwise overwrite the list. If recursive is True descend into child objects and create relationships there """ parent_name = _container_name(self.__class__.__name__) for child in self._multi_children: if not hasattr(child, parent_name): continue if append: target = getattr(child, parent_name) if self not in target: target.append(self) continue setattr(child, parent_name, [self]) if recursive: for child in self.container_children: child.create_many_to_many_relationship(append=append, recursive=True) def create_relationship(self, force=False, append=True, recursive=True): """ For each child of the current object that can only have a single parent, set its parent to be the current object. For children of the current object that can have more than one parent of this type, put the current object in the parent list. If the current object is a :class:`Block`, you want to run populate_RecordingChannel first, because this will create new objects that this method will link up. If force is True overwrite any existing relationships If append is True add it to the list, otherwise overwrite the list.
If recursive is True descend into child objects and create relationships there """ self.create_many_to_one_relationship(force=force, recursive=False) self.create_many_to_many_relationship(append=append, recursive=False) if recursive: for child in self.container_children: child.create_relationship(force=force, append=append, recursive=True) def __deepcopy__(self, memo): """ Creates a deep copy of the container. All contained objects will also be deep copied and relationships between all objects will be identical to the original relationships. Attributes and annotations of the container are deep copied as well. :param memo: (dict) Objects that have been deep copied already :return: (Container) Deep copy of input Container """ cls = self.__class__ necessary_attrs = {} for k in self._necessary_attrs: necessary_attrs[k[0]] = getattr(self, k[0], None) new_container = cls(**necessary_attrs) new_container.__dict__.update(self.__dict__) memo[id(self)] = new_container for k, v in self.__dict__.items(): try: setattr(new_container, k, deepcopy(v, memo)) except TypeError: setattr(new_container, k, v) new_container.create_relationship() return new_container def merge(self, other): """ Merge the contents of another object into this one. Container children of the current object with the same name will be merged. All other objects will be appended to the list of objects in this one. Duplicate copies of the same object will be skipped. Annotations are merged such that only items not present in the current annotations are added. Note that the other object will be linked inconsistently to other Neo objects after the merge operation and should not be used further.
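The name-based merge rule described above can be sketched in plain Python (illustrative only, not the actual neo implementation; ``Child`` is a hypothetical stand-in for a neo container child with a ``name`` and a ``merge`` method):

```python
# Sketch of the merge rule: exact duplicates are skipped, same-named
# children are merged in place, children with new names are appended.
class Child:
    def __init__(self, name, values):
        self.name = name
        self.values = list(values)

    def merge(self, other):
        # merging concatenates the data of same-named children
        self.values.extend(other.values)

def merge_containers(target, source):
    lookup = {obj.name: obj for obj in target}
    ids = {id(obj) for obj in target}
    for obj in source:
        if id(obj) in ids:
            continue                       # exact duplicate: skip
        if obj.name in lookup:
            lookup[obj.name].merge(obj)    # same name: merge contents
        else:
            lookup[obj.name] = obj         # new name: append
            ids.add(id(obj))
            target.append(obj)
    return target

a = [Child("seg0", [1, 2])]
b = [Child("seg0", [3]), Child("seg1", [4])]
merged = merge_containers(a, b)
```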
""" # merge containers with the same name for container in (self._container_child_containers + self._multi_child_containers): lookup = {obj.name: obj for obj in getattr(self, container)} ids = [id(obj) for obj in getattr(self, container)] for obj in getattr(other, container): if id(obj) in ids: continue if obj.name in lookup: lookup[obj.name].merge(obj) else: lookup[obj.name] = obj ids.append(id(obj)) getattr(self, container).append(obj) # for data objects, ignore the name and just add them for container in self._data_child_containers: objs = getattr(self, container) lookup = {obj.name: i for i, obj in enumerate(objs)} ids = [id(obj) for obj in objs] for obj in getattr(other, container): if id(obj) in ids: pass elif hasattr(obj, 'merge') and obj.name is not None and obj.name in lookup: ind = lookup[obj.name] try: newobj = getattr(self, container)[ind].merge(obj) getattr(self, container)[ind] = newobj except NotImplementedError: getattr(self, container).append(obj) ids.append(id(obj)) else: lookup[obj.name] = obj ids.append(id(obj)) getattr(self, container).append(obj) obj.set_parent(self) # use the BaseNeo merge as well super().merge(other) def _repr_pretty_(self, pp, cycle): """ Handle pretty-printing. 
""" pp.text(self.__class__.__name__) pp.text(" with ") vals = [] for container in self._child_containers: objs = getattr(self, container) if objs: vals.append('{} {}'.format(len(objs), container)) pp.text(', '.join(vals)) if self._has_repr_pretty_attrs_(): pp.breakable() self._repr_pretty_attrs_(pp, cycle) for container in self._repr_pretty_containers: pp.breakable() objs = getattr(self, container) pp.text("# {} (N={})".format(container, len(objs))) for (i, obj) in enumerate(objs): pp.breakable() pp.text("%s: " % i) with pp.indent(3): pp.pretty(obj) neo-0.10.0/neo/core/dataobject.py0000644000076700000240000004147114066374330017156 0ustar andrewstaff00000000000000""" This module defines :class:`DataObject`, the abstract base class used by all :module:`neo.core` classes that can contain data (i.e. are not container classes). It contains basic functionality that is shared among all those data objects. """ from copy import deepcopy import warnings import quantities as pq import numpy as np from neo.core.baseneo import BaseNeo, _check_annotations def _normalize_array_annotations(value, length): """Check consistency of array annotations Recursively check that value is either an array or list containing only "simple" types (number, string, date/time) or is a dict of those. 
Args: :value: (np.ndarray, list or dict) value to be checked for consistency :length: (int) required length of the array annotation Returns: np.ndarray The array_annotations from value in correct form Raises: ValueError: In case value is not accepted as array_annotation(s) """ # First stage, resolve dict of annotations into single annotations if isinstance(value, dict): for key in value.keys(): if isinstance(value[key], dict): raise ValueError("Nested dicts are not allowed as array annotations") value[key] = _normalize_array_annotations(value[key], length) elif value is None: raise ValueError("Array annotations must not be None") # If not array annotation, pass on to regular check and make it a list, that is checked again # This covers array annotations with length 1 elif not isinstance(value, (list, np.ndarray)) or ( isinstance(value, pq.Quantity) and value.shape == ()): _check_annotations(value) value = _normalize_array_annotations(np.array([value]), length) # If array annotation, check for correct length, only single dimension and allowed data else: # Get length that is required for array annotations, which is equal to the length # of the object's data own_length = length # Escape check if empty array or list and just annotate an empty array (length 0) # This enables the user to easily create dummy array annotations that will be filled # with data later on if len(value) == 0: if not isinstance(value, np.ndarray): value = np.ndarray((0,)) val_length = own_length else: # Note: len(o) also works for np.ndarray, it then uses the first dimension, # which is exactly the desired behaviour here val_length = len(value) if not own_length == val_length: raise ValueError( "Incorrect length of array annotation: {} != {}".format(val_length, own_length)) # Local function used to check single elements of a list or an array # They must not be lists or arrays and fit the usual annotation data types def _check_single_elem(element): # Nested array annotations not allowed currently 
# If element is a list or a np.ndarray, it's not conform except if it's a quantity of # length 1 if isinstance(element, list) or (isinstance(element, np.ndarray) and not ( isinstance(element, pq.Quantity) and ( element.shape == () or element.shape == (1,)))): raise ValueError("Array annotations should only be 1-dimensional") if isinstance(element, dict): raise ValueError("Dictionaries are not supported as array annotations") # Perform regular check for elements of array or list _check_annotations(element) # Arrays only need testing of single element to make sure the others are the same if isinstance(value, np.ndarray): # Type of first element is representative for all others # Thus just performing a check on the first element is enough # Even if it's a pq.Quantity, which can be scalar or array, this is still true # Because a np.ndarray cannot contain scalars and sequences simultaneously # If length of data is 0, then nothing needs to be checked if len(value): # Perform check on first element _check_single_elem(value[0]) return value # In case of list, it needs to be ensured that all data are of the same type else: # Conversion to numpy array makes all elements same type # Converts elements to most general type try: value = np.array(value) # Except when scalar and non-scalar values are mixed, this causes conversion to fail except ValueError as e: msg = str(e) if "setting an array element with a sequence." 
in msg: raise ValueError("Scalar values and arrays/lists cannot be " "combined into a single array annotation") else: raise e # If most specialized data type that possibly fits all elements is object, # raise an Error with a telling error message, because this means the elements # are not compatible if value.dtype == object: raise ValueError("Cannot convert list of incompatible types into a single" " array annotation") # Check the first element for correctness # If its type is correct for annotations, all others are correct as well # Note: Empty lists cannot reach this point _check_single_elem(value[0]) return value class DataObject(BaseNeo, pq.Quantity): ''' This is the base class from which all objects containing data inherit It contains common functionality for all those objects and handles array_annotations. Common functionality that is not included in BaseNeo includes: - duplicating with new data - rescaling the object - copying the object - returning it as pq.Quantity or np.ndarray - handling of array_annotations Array_annotations are a kind of annotation that contains metadata for every data point, i.e. per timestamp (in SpikeTrain, Event and Epoch) or signal channel (in AnalogSignal and IrregularlySampledSignal). They can contain the same data types as regular annotations, but are always represented as numpy arrays of the same length as the number of data points of the annotated neo object.
kwargs: regular annotations stored in a separate annotation dictionary ''' def __init__(self, name=None, description=None, file_origin=None, array_annotations=None, **annotations): """ This method is called by each data object and initializes the newly created object by adding array annotations and calling __init__ of the super class, where more annotations and attributes are processed. """ if not hasattr(self, 'array_annotations') or not self.array_annotations: self.array_annotations = ArrayDict(self._get_arr_ann_length()) if array_annotations is not None: self.array_annotate(**array_annotations) BaseNeo.__init__(self, name=name, description=description, file_origin=file_origin, **annotations) def array_annotate(self, **array_annotations): """ Add array annotations (annotations for individual data points) as arrays to a Neo data object. Example: >>> obj.array_annotate(code=['a', 'b', 'a'], category=[2, 1, 1]) >>> obj.array_annotations['code'][1] 'b' """ self.array_annotations.update(array_annotations) def array_annotations_at_index(self, index): """ Return dictionary of array annotations at a given index or list of indices :param index: int, list, numpy array: The index (indices) from which the annotations are extracted :return: dictionary of values or numpy arrays containing all array annotations for given index/indices Example: >>> obj.array_annotate(code=['a', 'b', 'a'], category=[2, 1, 1]) >>> obj.array_annotations_at_index(1) {code='b', category=1} """ # Taking only a part of the array annotations # Thus not using ArrayDict here, because checks for length are not needed index_annotations = {} # Use what is given as an index to determine the corresponding annotations, # if not possible, numpy raises an Error for ann in self.array_annotations.keys(): # NO deepcopy, because someone might want to alter the actual object using this try: index_annotations[ann] = self.array_annotations[ann][index] except IndexError as e: # IndexError caused by 'dummy' array 
annotations should not result in failure # Taking a slice from nothing results in nothing if len(self.array_annotations[ann]) == 0 and not self._get_arr_ann_length() == 0: index_annotations[ann] = self.array_annotations[ann] else: raise e return index_annotations def _merge_array_annotations(self, other): ''' Merges array annotations of 2 different objects. The merge happens in such a way that the result fits the merged data In general this means concatenating the arrays from the 2 objects. If an annotation is only present in one of the objects, it will be omitted :return Merged array_annotations ''' merged_array_annotations = {} omitted_keys_self = [] # Concatenating arrays for each key for key in self.array_annotations: try: value = deepcopy(self.array_annotations[key]) other_value = deepcopy(other.array_annotations[key]) # Quantities need to be rescaled to common unit if isinstance(value, pq.Quantity): try: other_value = other_value.rescale(value.units) except ValueError: raise ValueError("Could not merge array annotations " "due to different units") merged_array_annotations[key] = np.append(value, other_value) * value.units else: merged_array_annotations[key] = np.append(value, other_value) except KeyError: # Save the omitted keys to be able to print them omitted_keys_self.append(key) continue # Also save omitted keys from 'other' omitted_keys_other = [key for key in other.array_annotations if key not in self.array_annotations] # Warn if keys were omitted if omitted_keys_other or omitted_keys_self: warnings.warn("The following array annotations were omitted, because they were only " "present in one of the merged objects: {} from the one that was merged " "into and {} from the one that was merged into the other" "".format(omitted_keys_self, omitted_keys_other), UserWarning) # Return the merged array_annotations return merged_array_annotations def rescale(self, units): ''' Return a copy of the object converted to the specified units :return: Copy of self with 
specified units ''' # Use simpler functionality, if nothing will be changed dim = pq.quantity.validate_dimensionality(units) if self.dimensionality == dim: return self.copy() # Rescale the object into a new object obj = self.duplicate_with_new_data(signal=self.view(pq.Quantity).rescale(dim), units=units) # Expected behavior is deepcopy, so deepcopying array_annotations obj.array_annotations = deepcopy(self.array_annotations) obj.segment = self.segment return obj # Needed to implement this so array annotations are copied as well, ONLY WHEN copying 1:1 def copy(self, **kwargs): ''' Returns a shallow copy of the object :return: Copy of self ''' obj = super().copy(**kwargs) obj.array_annotations = self.array_annotations return obj def as_array(self, units=None): """ Return the object's data as a plain NumPy array. If `units` is specified, first rescale to those units. """ if units: return self.rescale(units).magnitude else: return self.magnitude def as_quantity(self): """ Return the object's data as a quantities array. """ return self.view(pq.Quantity) def _get_arr_ann_length(self): """ Return the length of the object's data as required for array annotations This is the last dimension of every object. :return Required length of array annotations for this object """ # Number of items is the last dimension of the data object # This method should be overridden in case this changes try: length = self.shape[-1] # Note: This is because __getitem__[int] returns a scalar Epoch/Event/SpikeTrain # To be removed if __getitem__[int] is changed except IndexError: length = 1 return length def __deepcopy__(self, memo): """ Create a deep copy of the data object. All attributes and annotations are also deep copied. References to parent objects are not kept, they are set to None.
:param memo: (dict) Objects that have been deep copied already :return: (DataObject) Deep copy of the input DataObject """ cls = self.__class__ necessary_attrs = {} # Units need to be specified explicitly for analogsignals/irregularlysampledsignals for k in self._necessary_attrs + (('units',),): necessary_attrs[k[0]] = getattr(self, k[0], self) # Create object using constructor with necessary attributes new_obj = cls(**necessary_attrs) # Add all attributes new_obj.__dict__.update(self.__dict__) memo[id(self)] = new_obj for k, v in self.__dict__.items(): # Single parent objects should not be deepcopied, because this is not expected behavior # and leads to a lot of stuff being copied (e.g. all other children of the parent as well), # thus creating a lot of overhead # But keeping the reference to the same parent is not desired either, because this would be unidirectional # When deepcopying top-down, e.g. a whole block, the links will be handled by the parent if k in self._parent_attrs: setattr(new_obj, k, None) continue try: setattr(new_obj, k, deepcopy(v, memo)) except TypeError: setattr(new_obj, k, v) return new_obj class ArrayDict(dict): """Dictionary subclass to handle array annotations When setting `obj.array_annotations[key]=value`, checks for consistency should not be bypassed. This class overrides __setitem__ from dict to perform these checks every time. The method used for these checks is given as an argument for __init__. 
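A minimal sketch of this pattern (illustrative only; the real ArrayDict delegates the actual validation to _normalize_array_annotations):

```python
# A dict subclass that validates every entry on assignment, with update()
# routed through __setitem__ so the checks cannot be bypassed.
class CheckedDict(dict):
    def __init__(self, length, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.length = length

    def __setitem__(self, key, value):
        if len(value) != self.length:  # reject annotations of the wrong length
            raise ValueError("Incorrect length: {} != {}".format(len(value), self.length))
        super().__setitem__(key, value)

    def update(self, *args, **kwargs):  # reroute update() through __setitem__
        for key, val in dict(*args, **kwargs).items():
            self[key] = val

d = CheckedDict(3)
d.update(code=['a', 'b', 'a'])  # accepted: length matches
```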
""" def __init__(self, length, check_function=_normalize_array_annotations, *args, **kwargs): super().__init__(*args, **kwargs) self.check_function = check_function self.length = length def __setitem__(self, key, value): # Directly call the defined function # Need to wrap key and value in a dict in order to make sure # that nested dicts are detected value = self.check_function({key: value}, self.length)[key] super().__setitem__(key, value) # Updating the dict also needs to perform checks, so rerouting this to __setitem__ def update(self, *args, **kwargs): if args: if len(args) > 1: raise TypeError("update expected at most 1 arguments, " "got %d" % len(args)) other = dict(args[0]) for key in other: self[key] = other[key] for key in kwargs: self[key] = kwargs[key] def __reduce__(self): return super().__reduce__() neo-0.10.0/neo/core/epoch.py0000644000076700000240000003332514066374330016153 0ustar andrewstaff00000000000000''' This module defines :class:`Epoch`, an array of epochs. :class:`Epoch` derives from :class:`BaseNeo`, from :module:`neo.core.baseneo`. ''' from copy import deepcopy, copy import numpy as np import quantities as pq from neo.core.baseneo import BaseNeo, merge_annotations from neo.core.dataobject import DataObject, ArrayDict def _new_epoch(cls, times=None, durations=None, labels=None, units=None, name=None, description=None, file_origin=None, array_annotations=None, annotations=None, segment=None): ''' A function to map epoch.__new__ to function that does not do the unit checking. This is needed for pickle to work. ''' e = Epoch(times=times, durations=durations, labels=labels, units=units, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) e.segment = segment return e class Epoch(DataObject): ''' Array of epochs. *Usage*:: >>> from neo.core import Epoch >>> from quantities import s, ms >>> import numpy as np >>> >>> epc = Epoch(times=np.arange(0, 30, 10)*s, ... 
durations=[10, 5, 7]*ms, ... labels=np.array(['btn0', 'btn1', 'btn2'], dtype='U')) >>> >>> epc.times array([ 0., 10., 20.]) * s >>> epc.durations array([ 10., 5., 7.]) * ms >>> epc.labels array(['btn0', 'btn1', 'btn2'], dtype=' 1: raise ValueError("Times array has more than 1 dimension") if isinstance(durations, (list, tuple)): durations = np.array(durations) if durations is None: durations = np.array([]) * pq.s elif durations.size != times.size: if durations.size == 1: durations = durations * np.ones_like(times.magnitude) else: raise ValueError("Durations array has different length to times") if labels is None: labels = np.array([], dtype='U') else: labels = np.array(labels) if labels.size != times.size and labels.size: raise ValueError("Labels array has different length to times") if units is None: # No keyword units, so get from `times` try: units = times.units dim = units.dimensionality except AttributeError: raise ValueError('you must specify units') else: if hasattr(units, 'dimensionality'): dim = units.dimensionality else: dim = pq.quantity.validate_dimensionality(units) if not hasattr(durations, "dimensionality"): durations = pq.Quantity(durations, dim) # check to make sure the units are time # this approach is much faster than comparing the # reference dimensionality if (len(dim) != 1 or list(dim.values())[0] != 1 or not isinstance(list(dim.keys())[0], pq.UnitTime)): raise ValueError("Unit %s has dimensions %s, not [time]" % (units, dim.simplified)) obj = pq.Quantity.__new__(cls, times, units=dim) obj._labels = labels obj._durations = durations obj.segment = None return obj def __init__(self, times=None, durations=None, labels=None, units=None, name=None, description=None, file_origin=None, array_annotations=None, **annotations): ''' Initialize a new :class:`Epoch` instance.
''' DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def __reduce__(self): ''' Map the __new__ function onto _new_epoch, so that pickle works ''' return _new_epoch, (self.__class__, self.times, self.durations, self.labels, self.units, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.segment) def __array_finalize__(self, obj): super().__array_finalize__(obj) self._durations = getattr(obj, 'durations', None) self._labels = getattr(obj, 'labels', None) self.annotations = getattr(obj, 'annotations', None) self.name = getattr(obj, 'name', None) self.file_origin = getattr(obj, 'file_origin', None) self.description = getattr(obj, 'description', None) self.segment = getattr(obj, 'segment', None) # Add empty array annotations, because they cannot always be copied, # but do not overwrite existing ones from slicing etc. # This ensures the attribute exists if not hasattr(self, 'array_annotations'): self.array_annotations = ArrayDict(self._get_arr_ann_length()) def __repr__(self): ''' Returns a string representing the :class:`Epoch`. 
''' objs = ['%s@%s for %s' % (label, str(time), str(dur)) for label, time, dur in zip(self.labels, self.times, self.durations)] return '<Epoch: %s>' % ', '.join(objs) def _repr_pretty_(self, pp, cycle): super()._repr_pretty_(pp, cycle) def rescale(self, units): ''' Return a copy of the :class:`Epoch` converted to the specified units :return: Copy of self with specified units ''' # Use simpler functionality, if nothing will be changed dim = pq.quantity.validate_dimensionality(units) if self.dimensionality == dim: return self.copy() # Rescale the object into a new object obj = self.duplicate_with_new_data( times=self.view(pq.Quantity).rescale(dim), durations=self.durations.rescale(dim), labels=self.labels, units=units) # Expected behavior is deepcopy, so deepcopying array_annotations obj.array_annotations = deepcopy(self.array_annotations) obj.segment = self.segment return obj def __getitem__(self, i): ''' Get the item or slice :attr:`i`. ''' obj = super().__getitem__(i) obj._durations = self.durations[i] if self._labels is not None and self._labels.size > 0: obj._labels = self.labels[i] else: obj._labels = self.labels try: # Array annotations need to be sliced accordingly obj.array_annotate(**deepcopy(self.array_annotations_at_index(i))) obj._copy_data_complement(self) except AttributeError: # If Quantity was returned, not Epoch obj.times = obj obj.durations = obj._durations obj.labels = obj._labels return obj def __getslice__(self, i, j): ''' Get a slice from :attr:`i` to :attr:`j`. Doesn't get called in Python 3, :meth:`__getitem__` is called instead ''' return self.__getitem__(slice(i, j)) @property def times(self): return pq.Quantity(self) def merge(self, other): ''' Merge another :class:`Epoch` into this one. The :class:`Epoch` objects are concatenated horizontally (column-wise, using :func:`np.hstack`). If the attributes of the two :class:`Epoch` are not compatible, an Exception is raised.
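The concatenation step can be sketched with plain NumPy (units handling and annotation merging are omitted for brevity):

```python
import numpy as np

# Two epochs' times and durations, already rescaled to common units
times_a, durations_a = np.array([0., 10.]), np.array([1., 1.])
times_b, durations_b = np.array([20.]), np.array([2.])

# merge() stacks the arrays horizontally
merged_times = np.hstack([times_a, times_b])
merged_durations = np.hstack([durations_a, durations_b])
```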
''' othertimes = other.times.rescale(self.times.units) otherdurations = other.durations.rescale(self.durations.units) times = np.hstack([self.times, othertimes]) * self.times.units durations = np.hstack([self.durations, otherdurations]) * self.durations.units labels = np.hstack([self.labels, other.labels]) kwargs = {} for name in ("name", "description", "file_origin"): attr_self = getattr(self, name) attr_other = getattr(other, name) if attr_self == attr_other: kwargs[name] = attr_self else: kwargs[name] = "merge({}, {})".format(attr_self, attr_other) merged_annotations = merge_annotations(self.annotations, other.annotations) kwargs.update(merged_annotations) kwargs['array_annotations'] = self._merge_array_annotations(other) return Epoch(times=times, durations=durations, labels=labels, **kwargs) def _copy_data_complement(self, other): ''' Copy the metadata from another :class:`Epoch`. Note: Array annotations can not be copied here because length of data can change ''' # Note: Array annotations cannot be copied because length of data could be changed # here which would cause inconsistencies. This is instead done locally. 
for attr in ("name", "file_origin", "description"): setattr(self, attr, deepcopy(getattr(other, attr, None))) self._copy_annotations(other) def _copy_annotations(self, other): self.annotations = deepcopy(other.annotations) def duplicate_with_new_data(self, times, durations, labels, units=None): ''' Create a new :class:`Epoch` with the same metadata but different data (times, durations) Note: Array annotations can not be copied here because length of data can change ''' if units is None: units = self.units else: units = pq.quantity.validate_dimensionality(units) new = self.__class__(times=times, durations=durations, labels=labels, units=units) new._copy_data_complement(self) new._labels = labels new._durations = durations # Note: Array annotations can not be copied here because length of data can change return new def time_slice(self, t_start, t_stop): ''' Creates a new :class:`Epoch` corresponding to the time slice of the original :class:`Epoch` between (and including) times :attr:`t_start` and :attr:`t_stop`. Either parameter can also be None to use infinite endpoints for the time interval. ''' _t_start = t_start _t_stop = t_stop if t_start is None: _t_start = -np.inf if t_stop is None: _t_stop = np.inf indices = (self >= _t_start) & (self <= _t_stop) # Time slicing should create a deep copy of the object new_epc = deepcopy(self[indices]) return new_epc def time_shift(self, t_shift): """ Shifts an :class:`Epoch` by an amount of time. Parameters: ----------- t_shift: Quantity (time) Amount of time by which to shift the :class:`Epoch`. Returns: -------- epoch: :class:`Epoch` New instance of an :class:`Epoch` object starting at t_shift later than the original :class:`Epoch` (the original :class:`Epoch` is not modified). """ new_epc = self.duplicate_with_new_data(times=self.times + t_shift, durations=self.durations, labels=self.labels) # Here we can safely copy the array annotations since we know that # the length of the Epoch does not change. 
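The boolean masking used by `time_slice` above (`(self >= _t_start) & (self <= _t_stop)`, with `None` endpoints widened to infinity) can be sketched without any neo or quantities dependency; the helper name `time_slice_indices` is ours, not neo's:

```python
import math

def time_slice_indices(times, t_start=None, t_stop=None):
    """Return indices of `times` inside [t_start, t_stop], both endpoints inclusive."""
    lo = -math.inf if t_start is None else t_start
    hi = math.inf if t_stop is None else t_stop
    return [i for i, t in enumerate(times) if lo <= t <= hi]

# 1.2 and 3.4 fall inside [1.0, 5.0]
print(time_slice_indices([0.5, 1.2, 3.4, 7.0], 1.0, 5.0))  # → [1, 2]
```

As in the real method, both endpoints are inclusive, which is why an event exactly at `t_start` or `t_stop` survives the slice.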
new_epc.array_annotate(**self.array_annotations) return new_epc def set_labels(self, labels): if self.labels is not None and self.labels.size > 0 and len(labels) != self.size: raise ValueError("Labels array has different length to times ({} != {})" .format(len(labels), self.size)) self._labels = np.array(labels) def get_labels(self): return self._labels labels = property(get_labels, set_labels) def set_durations(self, durations): if self.durations is not None and self.durations.size > 0 and len(durations) != self.size: raise ValueError("Durations array has different length to times ({} != {})" .format(len(durations), self.size)) self._durations = durations def get_durations(self): return self._durations durations = property(get_durations, set_durations) neo-0.10.0/neo/core/event.py0000644000076700000240000003317214066374330016176 0ustar andrewstaff00000000000000''' This module defines :class:`Event`, an array of events. :class:`Event` derives from :class:`BaseNeo`, from :module:`neo.core.baseneo`. ''' from copy import deepcopy, copy import numpy as np import quantities as pq from neo.core.baseneo import merge_annotations from neo.core.dataobject import DataObject, ArrayDict from neo.core.epoch import Epoch def _new_event(cls, times=None, labels=None, units=None, name=None, file_origin=None, description=None, array_annotations=None, annotations=None, segment=None): ''' A function to map Event.__new__ to function that does not do the unit checking. This is needed for pickle to work. ''' e = Event(times=times, labels=labels, units=units, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) e.segment = segment return e class Event(DataObject): ''' Array of events. *Usage*:: >>> from neo.core import Event >>> from quantities import s >>> import numpy as np >>> >>> evt = Event(np.arange(0, 30, 10)*s, ... labels=np.array(['trig0', 'trig1', 'trig2'], ... 
dtype='U')) >>> >>> evt.times array([ 0., 10., 20.]) * s >>> evt.labels array(['trig0', 'trig1', 'trig2'], dtype='<U5') ''' _parent_objects = ('Segment',) _parent_attrs = ('segment',) _quantity_attr = 'times' _necessary_attrs = (('times', pq.Quantity, 1), ('labels', np.ndarray, 1, np.dtype('U'))) def __new__(cls, times=None, labels=None, units=None, name=None, description=None, file_origin=None, array_annotations=None, **annotations): if times is None: times = np.array([]) * pq.s if times.ndim > 1: raise ValueError("Times array has more than 1 dimension") if labels is None: labels = np.array([], dtype='U') else: labels = np.array(labels) if labels.size != times.size and labels.size: raise ValueError("Labels array has different length to times") if units is None: # No keyword units, so get from `times` try: units = times.units dim = units.dimensionality except AttributeError: raise ValueError('you must specify units') else: if hasattr(units, 'dimensionality'): dim = units.dimensionality else: dim = pq.quantity.validate_dimensionality(units) # check to make sure the units are time # this approach is much faster than comparing the # reference dimensionality if (len(dim) != 1 or list(dim.values())[0] != 1 or not isinstance(list(dim.keys())[0], pq.UnitTime)): raise ValueError("Unit {} has dimensions {}, not [time]".format(units, dim.simplified)) obj = pq.Quantity(times, units=dim).view(cls) obj._labels = labels obj.segment = None return obj def __init__(self, times=None, labels=None, units=None, name=None, description=None, file_origin=None, array_annotations=None, **annotations): ''' Initialize a new :class:`Event` instance.
''' DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def __reduce__(self): ''' Map the __new__ function onto _new_event, so that pickle works ''' return _new_event, (self.__class__, np.array(self), self.labels, self.units, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.segment) def __array_finalize__(self, obj): super().__array_finalize__(obj) self._labels = getattr(obj, 'labels', None) self.annotations = getattr(obj, 'annotations', None) self.name = getattr(obj, 'name', None) self.file_origin = getattr(obj, 'file_origin', None) self.description = getattr(obj, 'description', None) self.segment = getattr(obj, 'segment', None) # Add empty array annotations, because they cannot always be copied, # but do not overwrite existing ones from slicing etc. # This ensures the attribute exists if not hasattr(self, 'array_annotations'): self.array_annotations = ArrayDict(self._get_arr_ann_length()) def __repr__(self): ''' Returns a string representing the :class:`Event`. 
''' objs = ['%s@%s' % (label, str(time)) for label, time in zip(self.labels, self.times)] return '<Event: %s>' % ', '.join(objs) def _repr_pretty_(self, pp, cycle): super()._repr_pretty_(pp, cycle) def rescale(self, units): ''' Return a copy of the :class:`Event` converted to the specified units :return: Copy of self with specified units ''' # Use simpler functionality, if nothing will be changed dim = pq.quantity.validate_dimensionality(units) if self.dimensionality == dim: return self.copy() # Rescale the object into a new object obj = self.duplicate_with_new_data( times=self.view(pq.Quantity).rescale(dim), labels=self.labels, units=units) # Expected behavior is deepcopy, so deepcopying array_annotations obj.array_annotations = deepcopy(self.array_annotations) obj.segment = self.segment return obj @property def times(self): return pq.Quantity(self) def merge(self, other): ''' Merge another :class:`Event` into this one. The :class:`Event` objects are concatenated horizontally (column-wise, :func:`np.hstack`). If the attributes of the two :class:`Event` are not compatible, an Exception is raised. ''' othertimes = other.times.rescale(self.times.units) times = np.hstack([self.times, othertimes]) * self.times.units labels = np.hstack([self.labels, other.labels]) kwargs = {} for name in ("name", "description", "file_origin"): attr_self = getattr(self, name) attr_other = getattr(other, name) if attr_self == attr_other: kwargs[name] = attr_self else: kwargs[name] = "merge({}, {})".format(attr_self, attr_other) merged_annotations = merge_annotations(self.annotations, other.annotations) kwargs.update(merged_annotations) kwargs['array_annotations'] = self._merge_array_annotations(other) evt = Event(times=times, labels=labels, **kwargs) return evt def _copy_data_complement(self, other): ''' Copy the metadata from another :class:`Event`.
Note: Array annotations can not be copied here because length of data can change ''' # Note: Array annotations, including labels, cannot be copied # because they are linked to their respective timestamps and length of data can be changed # here which would cause inconsistencies for attr in ("name", "file_origin", "description", "annotations"): setattr(self, attr, deepcopy(getattr(other, attr, None))) def __getitem__(self, i): obj = super().__getitem__(i) if self._labels is not None and self._labels.size > 0: obj.labels = self._labels[i] else: obj.labels = self._labels try: obj.array_annotate(**deepcopy(self.array_annotations_at_index(i))) obj._copy_data_complement(self) except AttributeError: # If Quantity was returned, not Event obj.times = obj return obj def set_labels(self, labels): if self.labels is not None and self.labels.size > 0 and len(labels) != self.size: raise ValueError("Labels array has different length to times ({} != {})" .format(len(labels), self.size)) self._labels = np.array(labels) def get_labels(self): return self._labels labels = property(get_labels, set_labels) def duplicate_with_new_data(self, times, labels, units=None): ''' Create a new :class:`Event` with the same metadata but different data Note: Array annotations can not be copied here because length of data can change ''' if units is None: units = self.units else: units = pq.quantity.validate_dimensionality(units) new = self.__class__(times=times, units=units) new._copy_data_complement(self) new.labels = labels # Note: Array annotations cannot be copied here, because length of data can be changed return new def time_slice(self, t_start, t_stop): ''' Creates a new :class:`Event` corresponding to the time slice of the original :class:`Event` between (and including) times :attr:`t_start` and :attr:`t_stop`. Either parameter can also be None to use infinite endpoints for the time interval. 
''' _t_start = t_start _t_stop = t_stop if t_start is None: _t_start = -np.inf if t_stop is None: _t_stop = np.inf indices = (self >= _t_start) & (self <= _t_stop) # Time slicing should create a deep copy of the object new_evt = deepcopy(self[indices]) return new_evt def time_shift(self, t_shift): """ Shifts an :class:`Event` by an amount of time. Parameters: ----------- t_shift: Quantity (time) Amount of time by which to shift the :class:`Event`. Returns: -------- event: :class:`Event` New instance of an :class:`Event` object starting at t_shift later than the original :class:`Event` (the original :class:`Event` is not modified). """ new_evt = self.duplicate_with_new_data(times=self.times + t_shift, labels=self.labels) # Here we can safely copy the array annotations since we know that # the length of the Event does not change. new_evt.array_annotate(**self.array_annotations) return new_evt def to_epoch(self, pairwise=False, durations=None): """ Returns a new Epoch object based on the times and labels in the Event object. This method has three modes of action. 1. By default, an array of `n` event times will be transformed into `n-1` epochs, where the end of one epoch is the beginning of the next. This assumes that the events are ordered in time; it is the responsibility of the caller to check this is the case. 2. If `pairwise` is True, then the event times will be taken as pairs representing the start and end time of an epoch. The number of events must be even, otherwise a ValueError is raised. 3. If `durations` is given, it should be a scalar Quantity or a Quantity array of the same size as the Event. Each event time is then taken as the start of an epoch of duration given by `durations`. `pairwise=True` and `durations` are mutually exclusive. A ValueError will be raised if both are given.
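The three `to_epoch` modes just described can be sketched in pure Python, mirroring the slicing and `zip` logic the method itself uses; `to_epoch_parts` is a hypothetical helper name and works on plain lists rather than neo objects:

```python
def to_epoch_parts(times, labels, pairwise=False, durations=None):
    """Return (epoch_times, epoch_durations, epoch_labels) as plain lists."""
    if pairwise:  # Mode 2: events are (start, stop) pairs
        if durations is not None:
            raise ValueError("Cannot give both `pairwise` and `durations`")
        if len(times) % 2 != 0:
            raise ValueError("Pairwise conversion requires an even number of events")
        starts = list(times[::2])
        durs = [b - a for a, b in zip(times[::2], times[1::2])]
        labs = ["{}-{}".format(a, b) for a, b in zip(labels[::2], labels[1::2])]
    elif durations is None:  # Mode 1: n events -> n-1 back-to-back epochs
        starts = list(times[:-1])
        durs = [b - a for a, b in zip(times[:-1], times[1:])]
        labs = ["{}-{}".format(a, b) for a, b in zip(labels[:-1], labels[1:])]
    else:  # Mode 3: every event starts an epoch of fixed duration
        starts, durs, labs = list(times), [durations] * len(times), list(labels)
    return starts, durs, labs
```

For example, `to_epoch_parts([0, 1, 3, 6], ["a", "b", "c", "d"])` gives back-to-back epochs labelled `'a-b'`, `'b-c'`, `'c-d'`, while `pairwise=True` on the same input gives two epochs `'a-b'` and `'c-d'`.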
If `durations` is given, epoch labels are set to the corresponding labels of the events that indicate the epoch start. If `durations` is not given, then the event labels A and B bounding the epoch are used to set the labels of the epochs in the form 'A-B'. """ if pairwise: # Mode 2 if durations is not None: raise ValueError("Inconsistent arguments. " "Cannot give both `pairwise` and `durations`") if self.size % 2 != 0: raise ValueError("Pairwise conversion of events to epochs" " requires an even number of events") times = self.times[::2] durations = self.times[1::2] - times labels = np.array( ["{}-{}".format(a, b) for a, b in zip(self.labels[::2], self.labels[1::2])]) elif durations is None: # Mode 1 times = self.times[:-1] durations = np.diff(self.times) labels = np.array( ["{}-{}".format(a, b) for a, b in zip(self.labels[:-1], self.labels[1:])]) else: # Mode 3 times = self.times labels = self.labels return Epoch(times=times, durations=durations, labels=labels) neo-0.10.0/neo/core/group.py0000644000076700000240000000614214066375716016217 0ustar andrewstaff00000000000000""" This module implements :class:`Group`, which represents an arbitrary grouping of Neo objects outside the Block/Segment container hierarchy. It replaces and extends the grouping function of the former :class:`ChannelIndex` and :class:`Unit`. """ from neo.core.container import Container class Group(Container): """ Can contain any of the data objects, views, or other groups, outside the hierarchy of the segment and block containers. A common use is to link the :class:`SpikeTrain` objects within a :class:`Block`, possibly across multiple Segments, that were emitted by the same neuron. *Required attributes/properties*: None *Recommended attributes/properties*: :objects: (Neo object) Objects with which to pre-populate the :class:`Group` :name: (str) A label for the group. :description: (str) Text description.
:file_origin: (str) Filesystem path or URL of the original data file. *Optional arguments*: :allowed_types: (list or tuple) Types of Neo object that are allowed to be added to the Group. If not specified, any Neo object can be added. Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Container of*: :class:`AnalogSignal`, :class:`IrregularlySampledSignal`, :class:`SpikeTrain`, :class:`Event`, :class:`Epoch`, :class:`ChannelView`, :class:`Group` """ _data_child_objects = ( 'AnalogSignal', 'IrregularlySampledSignal', 'SpikeTrain', 'Event', 'Epoch', 'ChannelView', 'ImageSequence' ) _container_child_objects = ('Segment', 'Group') _parent_objects = ('Block',) def __init__(self, objects=None, name=None, description=None, file_origin=None, allowed_types=None, **annotations): super().__init__(name=name, description=description, file_origin=file_origin, **annotations) if allowed_types is None: self.allowed_types = None else: self.allowed_types = tuple(allowed_types) if objects: self.add(*objects) @property def _container_lookup(self): return { cls_name: getattr(self, container_name) for cls_name, container_name in zip(self._child_objects, self._child_containers) } def _get_container(self, cls): if hasattr(cls, "proxy_for"): cls = cls.proxy_for return self._container_lookup[cls.__name__] def add(self, *objects): """Add a new Neo object to the Group""" for obj in objects: if self.allowed_types and not isinstance(obj, self.allowed_types): raise TypeError("This Group can only contain {}, but not {}" "".format(self.allowed_types, type(obj))) container = self._get_container(obj.__class__) container.append(obj) def walk(self): """ Walk the tree of subgroups """ yield self for grp in self.groups: yield from grp.walk() neo-0.10.0/neo/core/imagesequence.py0000644000076700000240000002373714066374330017676 0ustar andrewstaff00000000000000""" This module implements :class:`ImageSequence`, a 3D array. 
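The `Group.walk` method above is a classic depth-first recursive generator: it yields the group itself, then recurses into each subgroup. The traversal logic can be shown standalone with a minimal stand-in class (`Node` is ours; only the `groups` attribute that `walk` relies on is modelled):

```python
class Node:
    """Minimal stand-in for neo.core.Group: just a name and nested subgroups."""
    def __init__(self, name, groups=()):
        self.name = name
        self.groups = list(groups)

    def walk(self):
        # Yield self first, then every descendant, depth-first
        yield self
        for grp in self.groups:
            yield from grp.walk()

root = Node("root", [Node("a", [Node("a1")]), Node("b")])
print([n.name for n in root.walk()])  # → ['root', 'a', 'a1', 'b']
```

Because `walk` is a generator, the traversal is lazy: iterating can stop early (e.g. searching for one group) without visiting the rest of the tree.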
:class:`ImageSequence` inherits from :class:`basesignal.BaseSignal` which derives from :class:`BaseNeo`, and from :class:`quantities.Quantity`, which in turn inherits from :class:`numpy.array`. Inheritance from :class:`numpy.array` is explained here: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html In brief: * Initialization of a new object from constructor happens in :meth:`__new__`. This is where user-specified attributes are set. * :meth:`__array_finalize__` is called for all new objects, including those created by slicing. This is where attributes are copied over from the old object. """ from neo.core.analogsignal import AnalogSignal, _get_sampling_rate import quantities as pq import numpy as np from neo.core.baseneo import BaseNeo from neo.core.basesignal import BaseSignal from neo.core.dataobject import DataObject class ImageSequence(BaseSignal): """ Representation of a sequence of images, as an array of three dimensions organized as [frame][row][column]. Inherits from :class:`quantities.Quantity`, which in turn inherits from :class:`numpy.ndarray`. *Usage*:: >>> from neo.core import ImageSequence >>> import quantities as pq >>> >>> img_sequence_array = [[[column for column in range(20)] for row in range(20)] ... for frame in range(10)] >>> image_sequence = ImageSequence(img_sequence_array, units='V', ... sampling_rate=1 * pq.Hz, ... spatial_scale=1 * pq.micrometer) >>> image_sequence ImageSequence 10 frames with width 20 px and height 20 px; units V; datatype int64 sampling rate: 1.0 spatial_scale: 1.0 >>> image_sequence.spatial_scale array(1.) * um *Required attributes/properties*: :image_data: (3D NumPy array, or a list of 2D arrays) The data itself :units: (quantity units) :sampling_rate: *or* **frame_duration** (quantity scalar) Number of samples per unit time or duration of a single image frame. If both are specified, they are checked for consistency. :spatial_scale: (quantity scalar) Size of a pixel.
:t_start: (quantity scalar) Time when sequence begins. Default 0. *Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. *Optional attributes/properties*: :dtype: (numpy dtype or str) Override the dtype of the signal array. :copy: (bool) True by default. Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Properties available on this object*: :sampling_rate: (quantity scalar) Number of samples per unit time. (1/:attr:`frame_duration`) :frame_duration: (quantity scalar) Duration of each image frame. (1/:attr:`sampling_rate`) :spatial_scale: Size of a pixel :duration: (Quantity) Sequence duration, read-only. (size * :attr:`frame_duration`) :t_stop: (quantity scalar) Time when sequence ends, read-only. (:attr:`t_start` + :attr:`duration`) """ _parent_objects = ("Segment",) _parent_attrs = ("segment",) _quantity_attr = "image_data" _necessary_attrs = ( ("image_data", pq.Quantity, 3), ("sampling_rate", pq.Quantity, 0), ("spatial_scale", pq.Quantity, 0), ("t_start", pq.Quantity, 0), ) _recommended_attrs = BaseNeo._recommended_attrs def __new__(cls, image_data, units=None, dtype=None, copy=True, t_start=0 * pq.s, spatial_scale=None, frame_duration=None, sampling_rate=None, name=None, description=None, file_origin=None, **annotations): """ Constructs new :class:`ImageSequence` from data. This is called whenever a new class:`ImageSequence` is created from the constructor, but not when slicing. __array_finalize__ is called on the new object. 
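`__new__` below delegates the sampling_rate/frame_duration reconciliation to `_get_sampling_rate`, imported from analogsignal.py. A dependency-free sketch of that rule (plain floats stand in for quantities; the function name and error messages here are our paraphrase, not neo's exact ones):

```python
def get_sampling_rate(sampling_rate=None, frame_duration=None):
    """Resolve a rate from (rate, duration): exactly one required, or both if reciprocal."""
    if sampling_rate is None:
        if frame_duration is None:
            raise ValueError("You must provide either sampling_rate or frame_duration")
        return 1.0 / frame_duration          # derive the rate from the frame duration
    if frame_duration is not None and abs(1.0 / frame_duration - sampling_rate) > 1e-12:
        # Both given: they must agree, i.e. frame_duration == 1 / sampling_rate
        raise ValueError("sampling_rate and frame_duration are inconsistent")
    return sampling_rate

print(get_sampling_rate(frame_duration=0.5))  # → 2.0
```

This is why the docstring lists the two attributes with "*or*": either one determines the other, and supplying contradictory values is rejected rather than silently preferring one.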
""" if spatial_scale is None: raise ValueError("spatial_scale is required") image_data = np.stack(image_data) if len(image_data.shape) != 3: raise ValueError("list doesn't have the correct number of dimensions") obj = pq.Quantity(image_data, units=units, dtype=dtype, copy=copy).view(cls) obj.segment = None # function from analogsignal.py in neo/core directory obj.sampling_rate = _get_sampling_rate(sampling_rate, frame_duration) obj.spatial_scale = spatial_scale if t_start is None: raise ValueError("t_start cannot be None") obj._t_start = t_start return obj def __init__(self, image_data, units=None, dtype=None, copy=True, t_start=0 * pq.s, spatial_scale=None, frame_duration=None, sampling_rate=None, name=None, description=None, file_origin=None, **annotations): """ Initializes a newly constructed :class:`ImageSequence` instance. """ DataObject.__init__( self, name=name, file_origin=file_origin, description=description, **annotations ) def _array_finalize_spec(self, obj): self.sampling_rate = getattr(obj, "sampling_rate", None) self.spatial_scale = getattr(obj, "spatial_scale", None) self.units = getattr(obj, "units", None) self._t_start = getattr(obj, "_t_start", 0 * pq.s) return obj def signal_from_region(self, *region): """ Takes one or more regions of interest and, for each one, uses its :meth:`pixels_in_region` method to get the list of pixels to average.
Return a list of :class:`AnalogSignal`, one for each region of interest """ if len(region) == 0: raise ValueError("no regions of interest have been given") region_pixel = [] for i, roi in enumerate(region): r = roi.pixels_in_region() if not r: raise ValueError("region " + str(i) + " is empty") else: region_pixel.append(r) analogsignal_list = [] for i in region_pixel: data = [] for frame in range(len(self)): picture_data = [] for v in i: picture_data.append(self.view(pq.Quantity)[frame][v[0]][v[1]]) average = picture_data[0] for b in range(1, len(picture_data)): average += picture_data[b] data.append((average * 1.0) / len(i)) analogsignal_list.append( AnalogSignal( data, units=self.units, t_start=self.t_start, sampling_rate=self.sampling_rate ) ) return analogsignal_list def _repr_pretty_(self, pp, cycle): """ Handle pretty-printing the :class:`ImageSequence`. """ pp.text( "{cls} {nframe} frames with width {width} px and height {height} px; " "units {units}; datatype {dtype} ".format( cls=self.__class__.__name__, nframe=self.shape[0], height=self.shape[1], width=self.shape[2], units=self.units.dimensionality.string, dtype=self.dtype, ) ) def _pp(line): pp.breakable() with pp.group(indent=1): pp.text(line) for line in [ "sampling rate: {!s}".format(self.sampling_rate), "spatial_scale: {!s}".format(self.spatial_scale), ]: _pp(line) def _check_consistency(self, other): """ Check if the attributes of another :class:`ImageSequence` are compatible with this one. """ if isinstance(other, ImageSequence): for attr in ("sampling_rate", "spatial_scale", "t_start"): if getattr(self, attr) != getattr(other, attr): raise ValueError("Inconsistent values of %s" % attr) # t_start attribute is handled as a property so type checking can be done @property def t_start(self): """ Time when sequence begins.
""" return self._t_start @t_start.setter def t_start(self, start): """ Setter for :attr:`t_start` """ if start is None: raise ValueError("t_start cannot be None") self._t_start = start @property def duration(self): """ Sequence duration (:attr:`size` * :attr:`frame_duration`) """ return self.shape[0] / self.sampling_rate @property def t_stop(self): """ Time when Sequence ends. (:attr:`t_start` + :attr:`duration`) """ return self.t_start + self.duration @property def times(self): """ The time points of each frame in the sequence (:attr:`t_start` + arange(:attr:`shape`)/:attr:`sampling_rate`) """ return self.t_start + np.arange(self.shape[0]) / self.sampling_rate @property def frame_duration(self): """ Duration of a single image frame in the sequence. (1/:attr:`sampling_rate`) """ return 1.0 / self.sampling_rate @frame_duration.setter def frame_duration(self, duration): """ Setter for :attr:`frame_duration` """ if duration is None: raise ValueError("frame_duration cannot be None") elif not hasattr(duration, "units"): raise ValueError("frame_duration must have units") self.sampling_rate = 1.0 / duration neo-0.10.0/neo/core/irregularlysampledsignal.py0000644000076700000240000005604214066375716022174 0ustar andrewstaff00000000000000''' This module implements :class:`IrregularlySampledSignal`, an array of analog signals with samples taken at arbitrary time points. :class:`IrregularlySampledSignal` inherits from :class:`basesignal.BaseSignal` which derives from :class:`BaseNeo`, from :module:`neo.core.baseneo`, and from :class:`quantities.Quantity`, which in turn inherits from :class:`numpy.ndarray`. Inheritance from :class:`numpy.array` is explained here: http://docs.scipy.org/doc/numpy/user/basics.subclassing.html In brief: * Initialization of a new object from constructor happens in :meth:`__new__`. This is where user-specified attributes are set. * :meth:`__array_finalize__` is called for all new objects, including those created by slicing. 
This is where attributes are copied over from the old object. ''' from copy import deepcopy, copy try: import scipy.signal except ImportError as err: HAVE_SCIPY = False else: HAVE_SCIPY = True import numpy as np import quantities as pq from neo.core.baseneo import MergeError, merge_annotations, intersect_annotations from neo.core.basesignal import BaseSignal from neo.core.analogsignal import AnalogSignal from neo.core.dataobject import DataObject def _new_IrregularlySampledSignal(cls, times, signal, units=None, time_units=None, dtype=None, copy=True, name=None, file_origin=None, description=None, array_annotations=None, annotations=None, segment=None): ''' A function to map IrregularlySampledSignal.__new__ to a function that does not do the unit checking. This is needed for pickle to work. ''' iss = cls(times=times, signal=signal, units=units, time_units=time_units, dtype=dtype, copy=copy, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) iss.segment = segment return iss class IrregularlySampledSignal(BaseSignal): ''' An array of one or more analog signals with samples taken at arbitrary time points. A representation of one or more continuous, analog signals acquired at time :attr:`t_start` with a varying sampling interval. Each channel is sampled at the same time points. Inherits from :class:`quantities.Quantity`, which in turn inherits from :class:`numpy.ndarray`. *Usage*:: >>> from neo.core import IrregularlySampledSignal >>> from quantities import s, nA >>> >>> irsig0 = IrregularlySampledSignal([0.0, 1.23, 6.78], [1, 2, 3], ... units='mV', time_units='ms') >>> irsig1 = IrregularlySampledSignal([0.01, 0.03, 0.12]*s, ... [[4, 5], [5, 4], [6, 3]]*nA) *Required attributes/properties*: :times: (quantity array 1D, numpy array 1D, or list) The time of each data point. Must have the same size as :attr:`signal`. :signal: (quantity array 2D, numpy array 2D, or list (data, channel)) The data itself. 
:units: (quantity units) Required if the signal is a list or NumPy array, not if it is a :class:`Quantity`. :time_units: (quantity units) Required if :attr:`times` is a list or NumPy array, not if it is a :class:`Quantity`. *Recommended attributes/properties*:. :name: (str) A label for the dataset :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. *Optional attributes/properties*: :dtype: (numpy dtype or str) Override the dtype of the signal array. (times are always floats). :copy: (bool) True by default. :array_annotations: (dict) Dict mapping strings to numpy arrays containing annotations \ for all data points Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Properties available on this object*: :sampling_intervals: (quantity array 1D) Interval between each adjacent pair of samples. (``times[1:] - times[:-1]``) :duration: (quantity scalar) Signal duration, read-only. (``times[-1] - times[0]``) :t_start: (quantity scalar) Time when signal begins, read-only. (``times[0]``) :t_stop: (quantity scalar) Time when signal ends, read-only. (``times[-1]``) *Slicing*: :class:`IrregularlySampledSignal` objects can be sliced. When this occurs, a new :class:`IrregularlySampledSignal` (actually a view) is returned, with the same metadata, except that :attr:`times` is also sliced in the same way. *Operations available on this object*: == != + * / ''' _parent_objects = ('Segment',) _parent_attrs = ('segment',) _quantity_attr = 'signal' _necessary_attrs = (('times', pq.Quantity, 1), ('signal', pq.Quantity, 2)) def __new__(cls, times, signal, units=None, time_units=None, dtype=None, copy=True, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Construct a new :class:`IrregularlySampledSignal` instance. This is called whenever a new :class:`IrregularlySampledSignal` is created from the constructor, but not when slicing. 
''' signal = cls._rescale(signal, units=units) if time_units is None: if hasattr(times, "units"): time_units = times.units else: raise ValueError("Time units must be specified") elif isinstance(times, pq.Quantity): # could improve this test, what if units is a string? if time_units != times.units: times = times.rescale(time_units) # should check time units have correct dimensions obj = pq.Quantity.__new__(cls, signal, units=units, dtype=dtype, copy=copy) if obj.ndim == 1: obj = obj.reshape(-1, 1) if len(times) != obj.shape[0]: raise ValueError("times array and signal array must " "have same length") obj.times = pq.Quantity(times, units=time_units, dtype=float, copy=copy) obj.segment = None return obj def __init__(self, times, signal, units=None, time_units=None, dtype=None, copy=True, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Initializes a newly constructed :class:`IrregularlySampledSignal` instance. ''' DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def __reduce__(self): ''' Map the __new__ function onto _new_IrregularlySampledSignal, so that pickle works ''' return _new_IrregularlySampledSignal, (self.__class__, self.times, np.array(self), self.units, self.times.units, self.dtype, True, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.segment) def _array_finalize_spec(self, obj): ''' Set default values for attributes specific to :class:`IrregularlySampledSignal`. Common attributes are defined in :meth:`__array_finalize__` in :class:`basesignal.BaseSignal`), which is called every time a new signal is created and calls this method. ''' self.times = getattr(obj, 'times', None) return obj def __repr__(self): ''' Returns a string representing the :class:`IrregularlySampledSignal`. 
''' return '<{}({} at times {})>'.format( self.__class__.__name__, super().__repr__(), self.times) def __getitem__(self, i): ''' Get the item or slice :attr:`i`. ''' if isinstance(i, (int, np.integer)): # a single point in time across all channels obj = super().__getitem__(i) obj = pq.Quantity(obj.magnitude, units=obj.units) elif isinstance(i, tuple): obj = super().__getitem__(i) j, k = i if isinstance(j, (int, np.integer)): # a single point in time across some channels obj = pq.Quantity(obj.magnitude, units=obj.units) else: if isinstance(j, slice): obj.times = self.times.__getitem__(j) elif isinstance(j, np.ndarray): raise NotImplementedError("Arrays not yet supported") else: raise TypeError("%s not supported" % type(j)) if isinstance(k, (int, np.integer)): obj = obj.reshape(-1, 1) obj.array_annotations = deepcopy(self.array_annotations_at_index(k)) elif isinstance(i, slice): obj = super().__getitem__(i) obj.times = self.times.__getitem__(i) obj.array_annotations = deepcopy(self.array_annotations) elif isinstance(i, np.ndarray): # Indexing of an IrregularlySampledSignal is only consistent if the resulting # number of samples is the same for each trace. The time axis for these samples is not # guaranteed to be continuous, so returning a Quantity instead of an # IrregularlySampledSignal here. new_time_dims = np.sum(i, axis=0) if len(new_time_dims) and all(new_time_dims == new_time_dims[0]): obj = np.asarray(self).T.__getitem__(i.T) obj = obj.T.reshape(self.shape[1], -1).T obj = pq.Quantity(obj, units=self.units) else: raise IndexError("indexing of an IrregularlySampledSignal needs to keep the same " "number of sample for each trace contained") else: raise IndexError("index should be an integer, tuple, slice or boolean numpy array") return obj @property def duration(self): ''' Signal duration. (:attr:`times`[-1] - :attr:`times`[0]) ''' return self.times[-1] - self.times[0] @property def t_start(self): ''' Time when signal begins. 
        (:attr:`times`[0])
        '''
        return self.times[0]

    @property
    def t_stop(self):
        '''
        Time when signal ends.

        (:attr:`times`[-1])
        '''
        return self.times[-1]

    def __eq__(self, other):
        '''
        Equality test (==)
        '''
        if (isinstance(other, IrregularlySampledSignal)
                and not (self.times == other.times).all()):
            return False
        return super().__eq__(other)

    def _check_consistency(self, other):
        '''
        Check if the attributes of another :class:`IrregularlySampledSignal`
        are compatible with this one.
        '''
        # if not an array, then allow the calculation
        if not hasattr(other, 'ndim'):
            return
        # if a scalar array, then allow the calculation
        if not other.ndim:
            return
        # dimensionality should match
        if self.ndim != other.ndim:
            raise ValueError('Dimensionality does not match: {} vs {}'.format(
                self.ndim, other.ndim))
        # if the other array does not have a times property,
        # then it should be okay to add it directly
        if not hasattr(other, 'times'):
            return
        # if there is a times property, the times need to be the same
        if not (self.times == other.times).all():
            raise ValueError('Times do not match: {} vs {}'.format(self.times, other.times))

    def __rsub__(self, other, *args):
        '''
        Backwards subtraction (other - self)
        '''
        return self.__mul__(-1) + other

    def _repr_pretty_(self, pp, cycle):
        '''
        Handle pretty-printing the :class:`IrregularlySampledSignal`.
        '''
        pp.text("{cls} with {channels} channels of length {length}; "
                "units {units}; datatype {dtype} ".format(cls=self.__class__.__name__,
                                                          channels=self.shape[1],
                                                          length=self.shape[0],
                                                          units=self.units.dimensionality.string,
                                                          dtype=self.dtype))
        if self._has_repr_pretty_attrs_():
            pp.breakable()
            self._repr_pretty_attrs_(pp, cycle)

        def _pp(line):
            pp.breakable()
            with pp.group(indent=1):
                pp.text(line)

        for line in ["sample times: {}".format(self.times)]:
            _pp(line)

    @property
    def sampling_intervals(self):
        '''
        Interval between each adjacent pair of samples.
        (:attr:`times`[1:] - :attr:`times`[:-1])
        '''
        return self.times[1:] - self.times[:-1]

    def mean(self, interpolation=None):
        '''
        Calculates the mean, optionally using interpolation between sampling times.

        If :attr:`interpolation` is None, we assume that values change
        stepwise at sampling times.
        '''
        if interpolation is None:
            return (self[:-1] * self.sampling_intervals.reshape(-1, 1)).sum() / self.duration
        else:
            raise NotImplementedError

    def resample(self, sample_count, **kwargs):
        """
        Resample the data points of the signal.

        This method interpolates the signal and returns a new signal with a
        fixed number of samples defined by `sample_count`.
        This function is a wrapper of scipy.signal.resample and accepts the
        same set of keyword arguments, except for specifying the axis of
        resampling, which is fixed to the first axis here, and the sample
        positions.

        Parameters:
        -----------
        sample_count: integer
            Number of desired samples. The resulting signal starts at the same
            sample as the original and is sampled regularly.

        Returns:
        --------
        resampled_signal: :class:`AnalogSignal`
            New instance of a :class:`AnalogSignal` object containing the
            resampled data points. The original :class:`AnalogSignal` is
            not modified.
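The stepwise mean in `mean()` above weights each sample by the interval to the next sample and divides by the total duration. A standalone sketch of that computation with plain NumPy and hypothetical values (not the neo API itself):

```python
import numpy as np

# Irregular sample times (s) and values; each value is assumed to hold
# stepwise until the next sample, as in mean(interpolation=None) above.
times = np.array([0.0, 1.0, 4.0, 5.0])
values = np.array([2.0, 6.0, 2.0, 0.0])

intervals = np.diff(times)           # sampling_intervals: [1.0, 3.0, 1.0]
duration = times[-1] - times[0]      # 5.0
stepwise_mean = (values[:-1] * intervals).sum() / duration

# (2*1 + 6*3 + 2*1) / 5 = 22/5
assert np.isclose(stepwise_mean, 4.4)
```

Note that the last value never contributes: it has no following interval, which matches the `self[:-1]` slice in the method above.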
""" if not HAVE_SCIPY: raise ImportError('Resampling requires availability of scipy.signal') # Resampling is only permitted along the time axis (axis=0) if 'axis' in kwargs: kwargs.pop('axis') if 't' in kwargs: kwargs.pop('t') resampled_data, resampled_times = scipy.signal.resample(self.magnitude, sample_count, t=self.times.magnitude, axis=0, **kwargs) new_sampling_rate = (sample_count - 1) / self.duration resampled_signal = AnalogSignal(resampled_data, units=self.units, dtype=self.dtype, t_start=self.t_start, sampling_rate=new_sampling_rate, array_annotations=self.array_annotations.copy(), **self.annotations.copy()) # since the number of channels stays the same, we can also copy array annotations here resampled_signal.array_annotations = self.array_annotations.copy() return resampled_signal def time_slice(self, t_start, t_stop): ''' Creates a new :class:`IrregularlySampledSignal` corresponding to the time slice of the original :class:`IrregularlySampledSignal` between times `t_start` and `t_stop`. Either parameter can also be None to use infinite endpoints for the time interval. ''' _t_start = t_start _t_stop = t_stop if t_start is None: _t_start = -np.inf if t_stop is None: _t_stop = np.inf indices = (self.times >= _t_start) & (self.times <= _t_stop) count = 0 id_start = None id_stop = None for i in indices: if id_start is None: if i: id_start = count else: if not i: id_stop = count break count += 1 # Time slicing should create a deep copy of the object new_st = deepcopy(self[id_start:id_stop]) return new_st def time_shift(self, t_shift): """ Shifts a :class:`IrregularlySampledSignal` to start at a new time. Parameters: ----------- t_shift: Quantity (time) Amount of time by which to shift the :class:`IrregularlySampledSignal`. 
        Returns:
        --------
        new_sig: :class:`IrregularlySampledSignal`
            New instance of an :class:`IrregularlySampledSignal` object
            starting at t_shift later than the original
            :class:`IrregularlySampledSignal` (the original
            :class:`IrregularlySampledSignal` is not modified).
        """
        new_sig = deepcopy(self)
        new_sig.times += t_shift

        return new_sig

    def merge(self, other):
        '''
        Merge another signal into this one.

        The signal objects are concatenated horizontally
        (column-wise, :func:`np.hstack`).

        If the attributes of the two signals are not
        compatible, an Exception is raised.

        Required attributes of the signal are used.
        '''
        if not np.array_equal(self.times, other.times):
            raise MergeError("Cannot merge these two signals as the sample times differ.")
        if self.segment != other.segment:
            raise MergeError(
                "Cannot merge these two signals as they belong to different segments.")
        if hasattr(self, "lazy_shape"):
            if hasattr(other, "lazy_shape"):
                if self.lazy_shape[0] != other.lazy_shape[0]:
                    raise MergeError("Cannot merge signals of different length.")
                merged_lazy_shape = (self.lazy_shape[0],
                                     self.lazy_shape[1] + other.lazy_shape[1])
            else:
                raise MergeError("Cannot merge a lazy object with a real object.")
        if other.units != self.units:
            other = other.rescale(self.units)
        stack = np.hstack((self.magnitude, other.magnitude))
        kwargs = {}
        for name in ("name", "description", "file_origin"):
            attr_self = getattr(self, name)
            attr_other = getattr(other, name)
            if attr_self == attr_other:
                kwargs[name] = attr_self
            else:
                kwargs[name] = "merge({}, {})".format(attr_self, attr_other)
        merged_annotations = merge_annotations(self.annotations, other.annotations)
        kwargs.update(merged_annotations)
        signal = self.__class__(self.times, stack, units=self.units, dtype=self.dtype,
                                copy=False, **kwargs)
        signal.segment = self.segment
        signal.array_annotate(**self._merge_array_annotations(other))

        if hasattr(self, "lazy_shape"):
            signal.lazy_shape = merged_lazy_shape
        return signal

    def concatenate(self, other, allow_overlap=False):
        '''
        Combine this
        and another signal along the time axis.

        The signal objects are concatenated vertically
        (row-wise, :func:`np.vstack`). Patching can be
        used to combine signals across segments.
        Note: Only array annotations common to
        both signals are attached to the concatenated signal.

        If the attributes of the two signals are not
        compatible, an Exception is raised.

        Required attributes of the signal are used.

        Parameters
        ----------
        other : neo.BaseSignal
            The object that is merged into this one.
        allow_overlap : bool
            If false, overlapping samples between the two
            signals are not permitted and a ValueError is raised.
            If true, no check for overlapping samples is
            performed and all samples are combined.

        Returns
        -------
        signal : neo.IrregularlySampledSignal
            Signal containing all non-overlapping samples of
            both source signals.

        Raises
        ------
        MergeError
            If `other` object has incompatible attributes.
        '''

        for attr in self._necessary_attrs:
            if not (attr[0] in ['signal', 'times', 't_start', 't_stop']):
                if getattr(self, attr[0], None) != getattr(other, attr[0], None):
                    raise MergeError(
                        "Cannot concatenate these two signals as the %s differ."
% attr[0]) if hasattr(self, "lazy_shape"): if hasattr(other, "lazy_shape"): if self.lazy_shape[-1] != other.lazy_shape[-1]: raise MergeError("Cannot concatenate signals as they contain" " different numbers of traces.") merged_lazy_shape = (self.lazy_shape[0] + other.lazy_shape[0], self.lazy_shape[-1]) else: raise MergeError("Cannot concatenate a lazy object with a real object.") if other.units != self.units: other = other.rescale(self.units) new_times = np.hstack((self.times, other.times)) sorting = np.argsort(new_times) new_samples = np.vstack((self.magnitude, other.magnitude)) kwargs = {} for name in ("name", "description", "file_origin"): attr_self = getattr(self, name) attr_other = getattr(other, name) if attr_self == attr_other: kwargs[name] = attr_self else: kwargs[name] = "merge({}, {})".format(attr_self, attr_other) merged_annotations = merge_annotations(self.annotations, other.annotations) kwargs.update(merged_annotations) kwargs['array_annotations'] = intersect_annotations(self.array_annotations, other.array_annotations) if not allow_overlap: if max(self.t_start, other.t_start) <= min(self.t_stop, other.t_stop): raise ValueError('Can not combine signals that overlap in time. 
Allow for ' 'overlapping samples using the "allow_overlap" parameter.')

        t_start = min(self.t_start, other.t_start)
        t_stop = max(self.t_stop, other.t_stop)

        signal = IrregularlySampledSignal(signal=new_samples[sorting], times=new_times[sorting],
                                          units=self.units, dtype=self.dtype, copy=False,
                                          t_start=t_start, t_stop=t_stop, **kwargs)
        signal.segment = None

        if hasattr(self, "lazy_shape"):
            signal.lazy_shape = merged_lazy_shape

        return signal

neo-0.10.0/neo/core/regionofinterest.py

from math import floor, ceil


class RegionOfInterest:
    """Abstract base class"""
    pass


class CircularRegionOfInterest(RegionOfInterest):
    """Representation of a circular ROI

    *Usage:*

    >>> roi = CircularRegionOfInterest(20.0, 20.0, radius=5.0)
    >>> signal = image_sequence.signal_from_region(roi)

    *Required attributes/properties*:
        :x, y: (integers or floats)
            Pixel coordinates of the centre of the ROI
        :radius: (integer or float)
            Radius of the ROI in pixels
    """

    def __init__(self, x, y, radius):
        self.y = y
        self.x = x
        self.radius = radius

    @property
    def centre(self):
        return (self.x, self.y)

    @property
    def center(self):
        return self.centre

    def is_inside(self, x, y):
        if ((x - self.x) * (x - self.x)
                + (y - self.y) * (y - self.y) <= self.radius * self.radius):
            return True
        else:
            return False

    def pixels_in_region(self):
        """Returns a list of pixels whose *centres* are within the circle"""
        pixel_in_list = []
        for y in range(int(floor(self.y - self.radius)), int(ceil(self.y + self.radius))):
            for x in range(int(floor(self.x - self.radius)), int(ceil(self.x + self.radius))):
                if self.is_inside(x, y):
                    pixel_in_list.append([x, y])
        return pixel_in_list


class RectangularRegionOfInterest(RegionOfInterest):
    """Representation of a rectangular ROI

    *Usage:*

    >>> roi = RectangularRegionOfInterest(20.0, 20.0, width=5.0, height=5.0)
    >>> signal = image_sequence.signal_from_region(roi)

    *Required attributes/properties*:
        :x, y: (integers or floats)
            Pixel
coordinates of the centre of the ROI :width: (integer or float) Width (x-direction) of the ROI in pixels :height: (integer or float) Height (y-direction) of the ROI in pixels """ def __init__(self, x, y, width, height): self.x = x self.y = y self.width = width self.height = height def is_inside(self, x, y): if (self.x - self.width/2.0 <= x < self.x + self.width/2.0 and self.y - self.height/2.0 <= y < self.y + self.height/2.0): return True else: return False def pixels_in_region(self): """Returns a list of pixels whose *centres* are within the rectangle""" pixel_list = [] h = self.height w = self.width for y in range(int(floor(self.y - h / 2.0)), int(ceil(self.y + h / 2.0))): for x in range(int(floor(self.x - w / 2.0)), int(ceil(self.x + w / 2.0))): if self.is_inside(x, y): pixel_list.append([x, y]) return pixel_list class PolygonRegionOfInterest(RegionOfInterest): """Representation of a polygonal ROI *Usage:* >>> roi = PolygonRegionOfInterest( ... (20.0, 20.0), ... (30.0, 20.0), ... (25.0, 25.0) ... ) >>> signal = image_sequence.signal_from_region(roi) *Required attributes/properties*: :vertices: tuples containing the (x, y) coordinates, as integers or floats, of the vertices of the polygon """ def __init__(self, *vertices): self.vertices = vertices def polygon_ray_casting(self, bounding_points, bounding_box_positions): # from https://stackoverflow.com/questions/217578/how-can-i-determine-whether-a-2d-point-is-within-a-polygon # user Noresourses # Arrays containing the x- and y-coordinates of the polygon's vertices. 
        vertx = [point[0] for point in bounding_points]
        verty = [point[1] for point in bounding_points]
        # Number of vertices in the polygon
        nvert = len(bounding_points)
        # Points that are inside
        points_inside = []
        # For every candidate position within the bounding box
        for idx, pos in enumerate(bounding_box_positions):
            testx, testy = (pos[0], pos[1])
            c = 0
            for i in range(0, nvert):
                j = i - 1 if i != 0 else nvert - 1
                if (((verty[i] * 1.0 > testy * 1.0) != (verty[j] * 1.0 > testy * 1.0))
                        and (testx * 1.0 < (vertx[j] * 1.0 - vertx[i] * 1.0)
                             * (testy * 1.0 - verty[i] * 1.0)
                             / (verty[j] * 1.0 - verty[i] * 1.0) + vertx[i] * 1.0)):
                    c += 1
            # If odd, that means that we are inside the polygon
            if c % 2 == 1:
                points_inside.append(pos)
        return points_inside

    def pixels_in_region(self):
        min_x, max_x, min_y, max_y = (self.vertices[0][0], self.vertices[0][0],
                                      self.vertices[0][1], self.vertices[0][1])
        for i in self.vertices:
            if i[0] < min_x:
                min_x = i[0]
            if i[0] > max_x:
                max_x = i[0]
            if i[1] < min_y:
                min_y = i[1]
            if i[1] > max_y:
                max_y = i[1]
        list_coord = []
        for y in range(int(floor(min_y)), int(ceil(max_y))):
            for x in range(int(floor(min_x)), int(ceil(max_x))):
                list_coord.append((x, y))

        pixel_list = self.polygon_ray_casting(self.vertices, list_coord)

        return pixel_list

neo-0.10.0/neo/core/segment.py

'''
This module defines :class:`Segment`, a container for data sharing a common
time basis.

:class:`Segment` derives from :class:`Container`,
from :module:`neo.core.container`.
'''

from datetime import datetime

import numpy as np
from copy import deepcopy

from neo.core.container import Container
from neo.core.spiketrainlist import SpikeTrainList


class Segment(Container):
    '''
    A container for data sharing a common time basis.

    A :class:`Segment` is a heterogeneous container for discrete or continuous
    data sharing a common clock (time basis) but not necessarily the same
    sampling rate, start or end time.
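The `polygon_ray_casting` test in regionofinterest.py above counts how many polygon edges a horizontal ray from the test point crosses; an odd count means the point is inside. A minimal standalone sketch of the same technique, using a hypothetical square (not the neo API):

```python
def point_in_polygon(vertices, x, y):
    # Ray casting: walk each edge (i, j); toggle membership whenever the
    # horizontal ray from (x, y) crosses that edge, as in polygon_ray_casting.
    n = len(vertices)
    inside = False
    for i in range(n):
        j = i - 1 if i != 0 else n - 1
        (xi, yi), (xj, yj) = vertices[i], vertices[j]
        if (yi > y) != (yj > y):                       # edge straddles the ray
            if x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                inside = not inside                    # odd crossings = inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
assert point_in_polygon(square, 5, 5) is True
assert point_in_polygon(square, 15, 5) is False
```

The boolean toggle here is equivalent to the crossing counter `c` with the `c % 2 == 1` check in the method above.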
*Usage*:: >>> from neo.core import Segment, SpikeTrain, AnalogSignal >>> from quantities import Hz, s >>> >>> seg = Segment(index=5) >>> >>> train0 = SpikeTrain(times=[.01, 3.3, 9.3], units='sec', t_stop=10) >>> seg.spiketrains.append(train0) >>> >>> train1 = SpikeTrain(times=[100.01, 103.3, 109.3], units='sec', ... t_stop=110) >>> seg.spiketrains.append(train1) >>> >>> sig0 = AnalogSignal(signal=[.01, 3.3, 9.3], units='uV', ... sampling_rate=1*Hz) >>> seg.analogsignals.append(sig0) >>> >>> sig1 = AnalogSignal(signal=[100.01, 103.3, 109.3], units='nA', ... sampling_period=.1*s) >>> seg.analogsignals.append(sig1) *Required attributes/properties*: None *Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. :file_datetime: (datetime) The creation date and time of the original data file. :rec_datetime: (datetime) The date and time of the original recording :index: (int) You can use this to define a temporal ordering of your Segment. For instance you could use this for trial numbers. Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Properties available on this object*: :all_data: (list) A list of all child objects in the :class:`Segment`. *Container of*: :class:`Epoch` :class:`Event` :class:`AnalogSignal` :class:`IrregularlySampledSignal` :class:`SpikeTrain` ''' _data_child_objects = ('AnalogSignal', 'Epoch', 'Event', 'IrregularlySampledSignal', 'SpikeTrain', 'ImageSequence') _parent_objects = ('Block',) _recommended_attrs = ((('file_datetime', datetime), ('rec_datetime', datetime), ('index', int)) + Container._recommended_attrs) _repr_pretty_containers = ('analogsignals',) def __init__(self, name=None, description=None, file_origin=None, file_datetime=None, rec_datetime=None, index=None, **annotations): ''' Initialize a new :class:`Segment` instance. 
''' super().__init__(name=name, description=description, file_origin=file_origin, **annotations) self.spiketrains = SpikeTrainList(segment=self) self.file_datetime = file_datetime self.rec_datetime = rec_datetime self.index = index # t_start attribute is handled as a property so type checking can be done @property def t_start(self): ''' Time when first signal begins. ''' t_starts = [sig.t_start for sig in self.analogsignals + self.spiketrains + self.irregularlysampledsignals] for e in self.epochs + self.events: if hasattr(e, 't_start'): # in case of proxy objects t_starts += [e.t_start] elif len(e) > 0: t_starts += [e.times[0]] # t_start is not defined if no children are present if len(t_starts) == 0: return None t_start = min(t_starts) return t_start # t_stop attribute is handled as a property so type checking can be done @property def t_stop(self): ''' Time when last signal ends. ''' t_stops = [sig.t_stop for sig in self.analogsignals + self.spiketrains + self.irregularlysampledsignals] for e in self.epochs + self.events: if hasattr(e, 't_stop'): # in case of proxy objects t_stops += [e.t_stop] elif len(e) > 0: t_stops += [e.times[-1]] # t_stop is not defined if no children are present if len(t_stops) == 0: return None t_stop = max(t_stops) return t_stop def time_slice(self, t_start=None, t_stop=None, reset_time=False, **kwargs): """ Creates a time slice of a Segment containing slices of all child objects. Parameters: ----------- t_start: Quantity Starting time of the sliced time window. t_stop: Quantity Stop time of the sliced time window. reset_time: bool If True the time stamps of all sliced objects are set to fall in the range from t_start to t_stop. If False, original time stamps are retained. Default is False. Keyword Arguments: ------------------ Additional keyword arguments used for initialization of the sliced Segment object. Returns: -------- subseg: Segment Temporal slice of the original Segment from t_start to t_stop. 
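The `t_start` / `t_stop` properties above reduce over the start (or stop) times of all child objects. A minimal standalone sketch of that reduction, using hypothetical plain-float times rather than neo objects:

```python
# Sketch of Segment.t_start / t_stop above: take the min (or max) over all
# children's times; the result is undefined (None) for an empty segment.
def segment_t_start(child_t_starts):
    return min(child_t_starts) if child_t_starts else None

def segment_t_stop(child_t_stops):
    return max(child_t_stops) if child_t_stops else None

assert segment_t_start([0.0, 2.5, -1.0]) == -1.0
assert segment_t_stop([10.0, 2.5]) == 10.0
assert segment_t_start([]) is None
```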
""" subseg = Segment(**kwargs) for attr in ['file_datetime', 'rec_datetime', 'index', 'name', 'description', 'file_origin']: setattr(subseg, attr, getattr(self, attr)) subseg.annotations = deepcopy(self.annotations) if t_start is None: t_start = self.t_start if t_stop is None: t_stop = self.t_stop t_shift = - t_start # cut analogsignals and analogsignalarrays for ana_id in range(len(self.analogsignals)): if hasattr(self.analogsignals[ana_id], '_rawio'): ana_time_slice = self.analogsignals[ana_id].load(time_slice=(t_start, t_stop)) else: ana_time_slice = self.analogsignals[ana_id].time_slice(t_start, t_stop) if reset_time: ana_time_slice = ana_time_slice.time_shift(t_shift) subseg.analogsignals.append(ana_time_slice) # cut irregularly sampled signals for irr_id in range(len(self.irregularlysampledsignals)): if hasattr(self.irregularlysampledsignals[irr_id], '_rawio'): ana_time_slice = self.irregularlysampledsignals[irr_id].load( time_slice=(t_start, t_stop)) else: ana_time_slice = self.irregularlysampledsignals[irr_id].time_slice(t_start, t_stop) if reset_time: ana_time_slice = ana_time_slice.time_shift(t_shift) subseg.irregularlysampledsignals.append(ana_time_slice) # cut spiketrains for st_id in range(len(self.spiketrains)): if hasattr(self.spiketrains[st_id], '_rawio'): st_time_slice = self.spiketrains[st_id].load(time_slice=(t_start, t_stop)) else: st_time_slice = self.spiketrains[st_id].time_slice(t_start, t_stop) if reset_time: st_time_slice = st_time_slice.time_shift(t_shift) subseg.spiketrains.append(st_time_slice) # cut events for ev_id in range(len(self.events)): if hasattr(self.events[ev_id], '_rawio'): ev_time_slice = self.events[ev_id].load(time_slice=(t_start, t_stop)) else: ev_time_slice = self.events[ev_id].time_slice(t_start, t_stop) if reset_time: ev_time_slice = ev_time_slice.time_shift(t_shift) # appending only non-empty events if len(ev_time_slice): subseg.events.append(ev_time_slice) # cut epochs for ep_id in range(len(self.epochs)): if 
hasattr(self.epochs[ep_id], '_rawio'):
                ep_time_slice = self.epochs[ep_id].load(time_slice=(t_start, t_stop))
            else:
                ep_time_slice = self.epochs[ep_id].time_slice(t_start, t_stop)
            if reset_time:
                ep_time_slice = ep_time_slice.time_shift(t_shift)
            # appending only non-empty epochs
            if len(ep_time_slice):
                subseg.epochs.append(ep_time_slice)

        subseg.create_relationship()

        return subseg

neo-0.10.0/neo/core/spiketrain.py

'''
This module implements :class:`SpikeTrain`, an array of spike times.

:class:`SpikeTrain` derives from :class:`BaseNeo`, from
:module:`neo.core.baseneo`, and from :class:`quantities.Quantity`, which
inherits from :class:`numpy.array`.

Inheritance from :class:`numpy.array` is explained here:
http://docs.scipy.org/doc/numpy/user/basics.subclassing.html

In brief:
* Initialization of a new object from constructor happens in :meth:`__new__`.
  This is where user-specified attributes are set.

* :meth:`__array_finalize__` is called for all new objects, including those
  created by slicing. This is where attributes are copied over from
  the old object.
'''

import neo
import sys
from copy import deepcopy, copy
import warnings

import numpy as np
import quantities as pq

from neo.core.baseneo import BaseNeo, MergeError, merge_annotations
from neo.core.dataobject import DataObject, ArrayDict


def check_has_dimensions_time(*values):
    '''
    Verify that all arguments have a dimensionality that is compatible
    with time.
    '''
    errmsgs = []
    for value in values:
        dim = value.dimensionality.simplified
        if (len(dim) != 1 or list(dim.values())[0] != 1
                or not isinstance(list(dim.keys())[0], pq.UnitTime)):
            errmsgs.append(
                "value {} has dimensions {}, not [time]".format(
                    value, dim))
    if errmsgs:
        raise ValueError("\n".join(errmsgs))


def _check_time_in_range(value, t_start, t_stop, view=False):
    '''
    Verify that all times in :attr:`value` are between :attr:`t_start`
    and :attr:`t_stop` (inclusive).
    If :attr:`view` is True, views are used for the test.
    Using views drastically increases the speed, but is only safe if you are
    certain that the dtype and units are the same
    '''
    if t_start > t_stop:
        raise ValueError("t_stop ({}) is before t_start ({})".format(t_stop, t_start))

    if not value.size:
        return

    if view:
        value = value.view(np.ndarray)
        t_start = t_start.view(np.ndarray)
        t_stop = t_stop.view(np.ndarray)

    if value.min() < t_start:
        raise ValueError("The first spike ({}) is before t_start ({})".format(value, t_start))
    if value.max() > t_stop:
        raise ValueError("The last spike ({}) is after t_stop ({})".format(value, t_stop))


def _check_waveform_dimensions(spiketrain):
    '''
    Verify that waveform is compliant with the waveform definition as
    quantity array 3D (spike, channel, time)
    '''
    if not spiketrain.size:
        return

    waveforms = spiketrain.waveforms

    if (waveforms is None) or (not waveforms.size):
        return

    if waveforms.shape[0] != len(spiketrain):
        raise ValueError("Spiketrain length (%s) does not match to number of "
                         "waveforms present (%s)" % (len(spiketrain), waveforms.shape[0]))


def _new_spiketrain(cls, signal, t_stop, units=None, dtype=None, copy=True,
                    sampling_rate=1.0 * pq.Hz, t_start=0.0 * pq.s, waveforms=None,
                    left_sweep=None, name=None, file_origin=None, description=None,
                    array_annotations=None, annotations=None, segment=None, unit=None):
    '''
    A function to map :meth:`BaseAnalogSignal.__new__` to a function that
    does not do the unit checking. This is needed for :module:`pickle` to work.
    '''
    if annotations is None:
        annotations = {}
    obj = SpikeTrain(signal, t_stop, units, dtype, copy, sampling_rate, t_start, waveforms,
                     left_sweep, name, file_origin, description, array_annotations,
                     **annotations)
    obj.segment = segment
    obj.unit = unit
    return obj


def normalize_times_array(times, units=None, dtype=None, copy=True):
    """
    Return a quantity array with the correct units.
    There are four scenarios:

    A. times (NumPy array), units given as string or Quantities units
    B.
    times (Quantity array), units=None
    C. times (Quantity), units given as string or Quantities units
    D. times (NumPy array), units=None

    In scenarios A-C we return a tuple (times as a Quantity array, dimensionality)
    In scenario C, we rescale the original array to match `units`
    In scenario D, we raise a ValueError
    """
    if dtype is None:
        if not hasattr(times, 'dtype'):
            dtype = float
    if units is None:
        # No keyword units, so get from `times`
        try:
            dim = times.units.dimensionality
        except AttributeError:
            raise ValueError('you must specify units')
    else:
        if hasattr(units, 'dimensionality'):
            dim = units.dimensionality
        else:
            dim = pq.quantity.validate_dimensionality(units)

        if hasattr(times, 'dimensionality'):
            if times.dimensionality.items() == dim.items():
                units = None  # units will be taken from times, avoids copying
            else:
                if not copy:
                    raise ValueError("cannot rescale and return view")
                else:
                    # this is needed because of a bug in python-quantities
                    # see issue # 65 in python-quantities github
                    # remove this if it is fixed
                    times = times.rescale(dim)

    # check to make sure the units are time
    # this approach is orders of magnitude faster than comparing the
    # reference dimensionality
    if (len(dim) != 1 or list(dim.values())[0] != 1
            or not isinstance(list(dim.keys())[0], pq.UnitTime)):
        raise ValueError("Units have dimensions %s, not [time]" % dim.simplified)

    return pq.Quantity(times, units=units, dtype=dtype, copy=copy), dim


class SpikeTrain(DataObject):
    '''
    :class:`SpikeTrain` is a :class:`Quantity` array of spike times.

    It is an ensemble of action potentials (spikes) emitted by the same unit
    in a period of time.

    *Usage*::

        >>> from neo.core import SpikeTrain
        >>> from quantities import s
        >>>
        >>> train = SpikeTrain([3, 4, 5]*s, t_stop=10.0)
        >>> train2 = train[1:3]
        >>>
        >>> train.t_start
        array(0.0) * s
        >>> train.t_stop
        array(10.0) * s
        >>> train
        <SpikeTrain(array([ 3.,  4.,  5.]) * s, [0.0 s, 10.0 s])>
        >>> train2
        <SpikeTrain(array([ 4.,  5.]) * s, [0.0 s, 10.0 s])>

    *Required attributes/properties*:
        :times: (quantity array 1D, numpy array 1D, or list)
            The times of each spike.
:units: (quantity units) Required if :attr:`times` is a list or :class:`~numpy.ndarray`, not if it is a :class:`~quantities.Quantity`. :t_stop: (quantity scalar, numpy scalar, or float) Time at which :class:`SpikeTrain` ended. This will be converted to the same units as :attr:`times`. This argument is required because it specifies the period of time over which spikes could have occurred. Note that :attr:`t_start` is highly recommended for the same reason. Note: If :attr:`times` contains values outside of the range [t_start, t_stop], an Exception is raised. *Recommended attributes/properties*: :name: (str) A label for the dataset. :description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. :t_start: (quantity scalar, numpy scalar, or float) Time at which :class:`SpikeTrain` began. This will be converted to the same units as :attr:`times`. Default: 0.0 seconds. :waveforms: (quantity array 3D (spike, channel, time)) The waveforms of each spike. :sampling_rate: (quantity scalar) Number of samples per unit time for the waveforms. :left_sweep: (quantity array 1D) Time from the beginning of the waveform to the trigger time of the spike. :sort: (bool) If True, the spike train will be sorted by time. *Optional attributes/properties*: :dtype: (numpy dtype or str) Override the dtype of the signal array. :copy: (bool) Whether to copy the times array. True by default. Must be True when you request a change of units or dtype. :array_annotations: (dict) Dict mapping strings to numpy arrays containing annotations \ for all data points Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. *Properties available on this object*: :sampling_period: (quantity scalar) Interval between two samples. (1/:attr:`sampling_rate`) :duration: (quantity scalar) Duration over which spikes can occur, read-only. 
        (:attr:`t_stop` - :attr:`t_start`)
        :spike_duration: (quantity scalar) Duration of a waveform, read-only.
            (:attr:`waveform`.shape[2] * :attr:`sampling_period`)
        :right_sweep: (quantity scalar) Time from the trigger times of the
            spikes to the end of the waveforms, read-only.
            (:attr:`left_sweep` + :attr:`spike_duration`)
        :times: (quantity array 1D) Returns the :class:`SpikeTrain` as a
            quantity array.

    *Slicing*:
        :class:`SpikeTrain` objects can be sliced. When this occurs, a new
        :class:`SpikeTrain` (actually a view) is returned, with the same
        metadata, except that :attr:`waveforms` is also sliced in the same way
        (along dimension 0). Note that t_start and t_stop are not changed
        automatically, although you can still manually change them.
    '''

    _parent_objects = ('Segment',)
    _parent_attrs = ('segment',)
    _quantity_attr = 'times'
    _necessary_attrs = (('times', pq.Quantity, 1), ('t_start', pq.Quantity, 0),
                        ('t_stop', pq.Quantity, 0))
    _recommended_attrs = ((('waveforms', pq.Quantity, 3), ('left_sweep', pq.Quantity, 0),
                           ('sampling_rate', pq.Quantity, 0)) + BaseNeo._recommended_attrs)

    def __new__(cls, times, t_stop, units=None, dtype=None, copy=True,
                sampling_rate=1.0 * pq.Hz, t_start=0.0 * pq.s, waveforms=None, left_sweep=None,
                name=None, file_origin=None, description=None, array_annotations=None,
                **annotations):
        '''
        Constructs a new :class:`SpikeTrain` instance from data.

        This is called whenever a new :class:`SpikeTrain` is created from the
        constructor, but not when slicing.
        '''
        if len(times) != 0 and waveforms is not None and len(times) != waveforms.shape[0]:
            # len(times) != 0 has been used to work around a bug occurring during neo import
            raise ValueError("the number of waveforms should be equal to the number of spikes")

        if dtype is not None and hasattr(times, 'dtype') and times.dtype != dtype:
            if not copy:
                raise ValueError("cannot change dtype and return view")

            # if t_start.dtype or t_stop.dtype != times.dtype != dtype,
            # _check_time_in_range can have problems, so we set the t_start
            # and t_stop dtypes to be the same as times before converting them
            # to dtype below
            # see ticket #38
            if hasattr(t_start, 'dtype') and t_start.dtype != times.dtype:
                t_start = t_start.astype(times.dtype)
            if hasattr(t_stop, 'dtype') and t_stop.dtype != times.dtype:
                t_stop = t_stop.astype(times.dtype)

        # Make sure units are consistent
        # also get the dimensionality now since it is much faster to feed
        # that to Quantity rather than a unit
        times, dim = normalize_times_array(times, units, dtype, copy)

        # Construct Quantity from data
        obj = times.view(cls)

        # spiketrain times always need to be 1-dimensional
        if len(obj.shape) > 1:
            raise ValueError("Spiketrain times array has more than 1 dimension")

        # if the dtype and units match, just copy the values here instead
        # of doing the much more expensive creation of a new Quantity
        # using items() is orders of magnitude faster
        if (hasattr(t_start, 'dtype') and t_start.dtype == obj.dtype
                and hasattr(t_start, 'dimensionality')
                and t_start.dimensionality.items() == dim.items()):
            obj.t_start = t_start.copy()
        else:
            obj.t_start = pq.Quantity(t_start, units=dim, dtype=obj.dtype)

        if (hasattr(t_stop, 'dtype') and t_stop.dtype == obj.dtype
                and hasattr(t_stop, 'dimensionality')
                and t_stop.dimensionality.items() == dim.items()):
            obj.t_stop = t_stop.copy()
        else:
            obj.t_stop = pq.Quantity(t_stop, units=dim, dtype=obj.dtype)

        # Store attributes
        obj.waveforms = waveforms
        obj.left_sweep = left_sweep
        obj.sampling_rate = sampling_rate

        # parents
obj.segment = None obj.unit = None # Error checking (do earlier?) _check_time_in_range(obj, obj.t_start, obj.t_stop, view=True) return obj def __init__(self, times, t_stop, units=None, dtype=None, copy=True, sampling_rate=1.0 * pq.Hz, t_start=0.0 * pq.s, waveforms=None, left_sweep=None, name=None, file_origin=None, description=None, array_annotations=None, **annotations): ''' Initializes a newly constructed :class:`SpikeTrain` instance. ''' # This method is only called when constructing a new SpikeTrain, # not when slicing or viewing. We use the same call signature # as __new__ for documentation purposes. Anything not in the call # signature is stored in annotations. # Calls parent __init__, which grabs universally recommended # attributes and sets up self.annotations DataObject.__init__(self, name=name, file_origin=file_origin, description=description, array_annotations=array_annotations, **annotations) def _repr_pretty_(self, pp, cycle): super()._repr_pretty_(pp, cycle) def rescale(self, units): ''' Return a copy of the :class:`SpikeTrain` converted to the specified units ''' obj = super().rescale(units) obj.t_start = self.t_start.rescale(units) obj.t_stop = self.t_stop.rescale(units) obj.unit = self.unit return obj def __reduce__(self): ''' Map the __new__ function onto _new_BaseAnalogSignal, so that pickle works ''' import numpy return _new_spiketrain, (self.__class__, numpy.array(self), self.t_stop, self.units, self.dtype, True, self.sampling_rate, self.t_start, self.waveforms, self.left_sweep, self.name, self.file_origin, self.description, self.array_annotations, self.annotations, self.segment, self.unit) def __array_finalize__(self, obj): ''' This is called every time a new :class:`SpikeTrain` is created. It is the appropriate place to set default values for attributes for :class:`SpikeTrain` constructed by slicing or viewing. User-specified values are only relevant for construction from constructor, and these are set in __new__. 
        Then they are just copied over here.

        Note that the :attr:`waveforms` attribute is not sliced here. Nor is
        :attr:`t_start` or :attr:`t_stop` modified.
        '''
        # This calls Quantity.__array_finalize__ which deals with
        # dimensionality
        super().__array_finalize__(obj)

        # Supposedly, during initialization from constructor, obj is supposed
        # to be None, but this never happens. It must be something to do
        # with inheritance from Quantity.
        if obj is None:
            return

        # Set all attributes of the new object `self` from the attributes
        # of `obj`. For instance, when slicing, we want to copy over the
        # attributes of the original object.
        self.t_start = getattr(obj, 't_start', None)
        self.t_stop = getattr(obj, 't_stop', None)
        self.waveforms = getattr(obj, 'waveforms', None)
        self.left_sweep = getattr(obj, 'left_sweep', None)
        self.sampling_rate = getattr(obj, 'sampling_rate', None)
        self.segment = getattr(obj, 'segment', None)
        self.unit = getattr(obj, 'unit', None)

        # The additional arguments
        self.annotations = getattr(obj, 'annotations', {})
        # Add empty array annotations, because they cannot always be copied,
        # but do not overwrite existing ones from slicing etc.
        # This ensures the attribute exists
        if not hasattr(self, 'array_annotations'):
            self.array_annotations = ArrayDict(self._get_arr_ann_length())
        # Note: Array annotations have to be changed when slicing or initializing an object,
        # copying them over in spite of changed data would result in unexpected behaviour

        # Globally recommended attributes
        self.name = getattr(obj, 'name', None)
        self.file_origin = getattr(obj, 'file_origin', None)
        self.description = getattr(obj, 'description', None)

        if hasattr(obj, 'lazy_shape'):
            self.lazy_shape = obj.lazy_shape

    def __repr__(self):
        '''
        Returns a string representing the :class:`SpikeTrain`.
        '''
        return '<SpikeTrain(%s, [%s, %s])>' % (
            super().__repr__(), self.t_start, self.t_stop)

    def sort(self):
        '''
        Sorts the :class:`SpikeTrain` and its :attr:`waveforms`, if any,
        by time.
        '''
        # sort the waveforms by the times
        sort_indices = np.argsort(self)
        if self.waveforms is not None and self.waveforms.any():
            self.waveforms = self.waveforms[sort_indices]
        self.array_annotate(**deepcopy(self.array_annotations_at_index(sort_indices)))

        # now sort the times
        # We have sorted twice, but `self = self[sort_indices]` introduces
        # a dependency on the slicing functionality of SpikeTrain.
        super().sort()

    def __getslice__(self, i, j):
        '''
        Get a slice from :attr:`i` to :attr:`j`.

        Doesn't get called in Python 3; :meth:`__getitem__` is called instead.
        '''
        return self.__getitem__(slice(i, j))

    def __add__(self, time):
        '''
        Shifts the time point of all spikes by adding the amount in
        :attr:`time` (:class:`Quantity`).

        If `time` is a scalar, this also shifts :attr:`t_start` and :attr:`t_stop`.
        If `time` is an array, :attr:`t_start` and :attr:`t_stop` are not changed
        unless some of the new spikes would be outside this range.
        In this case :attr:`t_start` and :attr:`t_stop` are modified if necessary
        to ensure they encompass all spikes.

        It is not possible to add two SpikeTrains (raises TypeError).
        '''
        spikes = self.view(pq.Quantity)
        check_has_dimensions_time(time)
        if isinstance(time, SpikeTrain):
            raise TypeError("Can't add two spike trains")
        new_times = spikes + time
        if time.size > 1:
            t_start = min(self.t_start, np.min(new_times))
            t_stop = max(self.t_stop, np.max(new_times))
        else:
            t_start = self.t_start + time
            t_stop = self.t_stop + time
        return SpikeTrain(times=new_times, t_stop=t_stop, units=self.units,
                          sampling_rate=self.sampling_rate, t_start=t_start,
                          waveforms=self.waveforms, left_sweep=self.left_sweep,
                          name=self.name, file_origin=self.file_origin,
                          description=self.description,
                          array_annotations=deepcopy(self.array_annotations),
                          **self.annotations)

    def __sub__(self, time):
        '''
        Shifts the time point of all spikes by subtracting the amount in
        :attr:`time` (:class:`Quantity`).

        If `time` is a scalar, this also shifts :attr:`t_start` and :attr:`t_stop`.
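The scalar-versus-array shifting rule described above can be sketched with plain NumPy, independently of neo. The function and variable names below (`shift_spikes`, `t_start`, `t_stop`) are illustrative only, not part of neo's API:

```python
import numpy as np

def shift_spikes(times, t_start, t_stop, shift):
    """Shift spike times, mirroring how SpikeTrain.__add__ handles bounds."""
    shift = np.asarray(shift, dtype=float)
    new_times = times + shift
    if shift.size > 1:
        # per-spike shifts: only widen the bounds if spikes escape them
        new_start = min(t_start, new_times.min())
        new_stop = max(t_stop, new_times.max())
    else:
        # scalar shift: move the whole window along with the spikes
        new_start = t_start + float(shift)
        new_stop = t_stop + float(shift)
    return new_times, new_start, new_stop

times = np.array([1.0, 5.0, 9.0])
t, start, stop = shift_spikes(times, 0.0, 10.0, 2.0)
# scalar shift: the window follows the spikes -> start 2.0, stop 12.0
```

With an array shift such as `[0.0, 0.0, 3.0]`, only the upper bound moves (to 12.0), because a spike would otherwise land outside the original window.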
        If `time` is an array, :attr:`t_start` and :attr:`t_stop` are not changed
        unless some of the new spikes would be outside this range.
        In this case :attr:`t_start` and :attr:`t_stop` are modified if necessary
        to ensure they encompass all spikes.

        In general, it is not possible to subtract two SpikeTrain objects
        (raises TypeError). However, if `time` is itself a SpikeTrain of the
        same size as the SpikeTrain, returns a Quantities array (since this is
        often used in checking whether two spike trains are the same or in
        calculating the inter-spike interval).
        '''
        spikes = self.view(pq.Quantity)
        check_has_dimensions_time(time)
        if isinstance(time, SpikeTrain):
            if self.size == time.size:
                return spikes - time
            else:
                raise TypeError("Can't subtract spike trains with different sizes")
        else:
            new_times = spikes - time
            if time.size > 1:
                t_start = min(self.t_start, np.min(new_times))
                t_stop = max(self.t_stop, np.max(new_times))
            else:
                t_start = self.t_start - time
                t_stop = self.t_stop - time
            return SpikeTrain(times=spikes - time, t_stop=t_stop, units=self.units,
                              sampling_rate=self.sampling_rate, t_start=t_start,
                              waveforms=self.waveforms, left_sweep=self.left_sweep,
                              name=self.name, file_origin=self.file_origin,
                              description=self.description,
                              array_annotations=deepcopy(self.array_annotations),
                              **self.annotations)

    def __getitem__(self, i):
        '''
        Get the item or slice :attr:`i`.
        '''
        obj = super().__getitem__(i)
        if hasattr(obj, 'waveforms') and obj.waveforms is not None:
            obj.waveforms = obj.waveforms.__getitem__(i)
        try:
            obj.array_annotate(**deepcopy(self.array_annotations_at_index(i)))
        except AttributeError:  # If Quantity was returned, not SpikeTrain
            pass
        return obj

    def __setitem__(self, i, value):
        '''
        Set the value of the item or slice :attr:`i`.
        '''
        if not hasattr(value, "units"):
            value = pq.Quantity(value, units=self.units)
            # or should we be strict: raise ValueError("Setting a value
            # requires a quantity")?
# check for values outside t_start, t_stop _check_time_in_range(value, self.t_start, self.t_stop) super().__setitem__(i, value) def __setslice__(self, i, j, value): if not hasattr(value, "units"): value = pq.Quantity(value, units=self.units) _check_time_in_range(value, self.t_start, self.t_stop) super().__setslice__(i, j, value) def _copy_data_complement(self, other, deep_copy=False): ''' Copy the metadata from another :class:`SpikeTrain`. Note: Array annotations can not be copied here because length of data can change ''' # Note: Array annotations cannot be copied because length of data can be changed # here which would cause inconsistencies for attr in ("left_sweep", "sampling_rate", "name", "file_origin", "description", "annotations"): attr_value = getattr(other, attr, None) if deep_copy: attr_value = deepcopy(attr_value) setattr(self, attr, attr_value) def duplicate_with_new_data(self, signal, t_start=None, t_stop=None, waveforms=None, deep_copy=True, units=None): ''' Create a new :class:`SpikeTrain` with the same metadata but different data (times, t_start, t_stop) Note: Array annotations can not be copied here because length of data can change ''' # using previous t_start and t_stop if no values are provided if t_start is None: t_start = self.t_start if t_stop is None: t_stop = self.t_stop if waveforms is None: waveforms = self.waveforms if units is None: units = self.units else: units = pq.quantity.validate_dimensionality(units) new_st = self.__class__(signal, t_start=t_start, t_stop=t_stop, waveforms=waveforms, units=units) new_st._copy_data_complement(self, deep_copy=deep_copy) # Note: Array annotations are not copied here, because length of data could change # overwriting t_start and t_stop with new values new_st.t_start = t_start new_st.t_stop = t_stop # consistency check _check_time_in_range(new_st, new_st.t_start, new_st.t_stop, view=False) _check_waveform_dimensions(new_st) return new_st def time_slice(self, t_start, t_stop): ''' Creates a new 
:class:`SpikeTrain` corresponding to the time slice of the original :class:`SpikeTrain` between (and including) times :attr:`t_start` and :attr:`t_stop`. Either parameter can also be None to use infinite endpoints for the time interval. ''' _t_start = t_start _t_stop = t_stop if t_start is None: _t_start = -np.inf if t_stop is None: _t_stop = np.inf if _t_start > self.t_stop or _t_stop < self.t_start: # the alternative to raising an exception would be to return # a zero-duration spike train set at self.t_stop or self.t_start raise ValueError("A time slice completely outside the " "boundaries of the spike train is not defined.") indices = (self >= _t_start) & (self <= _t_stop) # Time slicing should create a deep copy of the object new_st = deepcopy(self[indices]) new_st.t_start = max(_t_start, self.t_start) new_st.t_stop = min(_t_stop, self.t_stop) if self.waveforms is not None: new_st.waveforms = self.waveforms[indices] return new_st def time_shift(self, t_shift): """ Shifts a :class:`SpikeTrain` to start at a new time. Parameters: ----------- t_shift: Quantity (time) Amount of time by which to shift the :class:`SpikeTrain`. Returns: -------- spiketrain: :class:`SpikeTrain` New instance of a :class:`SpikeTrain` object starting at t_shift later than the original :class:`SpikeTrain` (the original :class:`SpikeTrain` is not modified). """ new_st = self.duplicate_with_new_data( signal=self.times.view(pq.Quantity) + t_shift, t_start=self.t_start + t_shift, t_stop=self.t_stop + t_shift) # Here we can safely copy the array annotations since we know that # the length of the SpikeTrain does not change. new_st.array_annotate(**self.array_annotations) return new_st def merge(self, *others): ''' Merge other :class:`SpikeTrain` objects into this one. The times of the :class:`SpikeTrain` objects combined in one array and sorted. If the attributes of the :class:`SpikeTrain` objects are not compatible, an Exception is raised. 
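The slicing rule implemented by `time_slice` can be sketched with plain NumPy, independently of neo: keep spikes inside the (inclusive) window and clamp the bounds to it. The helper name `time_slice_arrays` and its arguments are illustrative, not neo's API:

```python
import numpy as np

def time_slice_arrays(times, t_start, t_stop, lo=None, hi=None):
    """Keep spikes in [lo, hi] and clamp the bounds, as SpikeTrain.time_slice does."""
    lo = -np.inf if lo is None else lo   # None means an open endpoint
    hi = np.inf if hi is None else hi
    if lo > t_stop or hi < t_start:
        raise ValueError("slice lies entirely outside the spike train")
    mask = (times >= lo) & (times <= hi)  # inclusive on both ends
    return times[mask], max(lo, t_start), min(hi, t_stop)

times = np.array([0.5, 2.0, 7.5, 9.0])
sliced, start, stop = time_slice_arrays(times, 0.0, 10.0, lo=1.0, hi=8.0)
# keeps 2.0 and 7.5; bounds become 1.0 and 8.0
```

Passing `lo=None, hi=None` returns all spikes with the original bounds, which is the "infinite endpoints" behaviour described in the docstring above.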
        '''
        for other in others:
            if isinstance(other, neo.io.proxyobjects.SpikeTrainProxy):
                raise MergeError("Cannot merge, SpikeTrainProxy objects cannot be merged "
                                 "into regular SpikeTrain objects, please load them first.")
            elif not isinstance(other, SpikeTrain):
                raise MergeError("Cannot merge, only SpikeTrain "
                                 "can be merged into a SpikeTrain.")
            if self.sampling_rate != other.sampling_rate:
                raise MergeError("Cannot merge, different sampling rates")
            if self.t_start != other.t_start:
                raise MergeError("Cannot merge, different t_start")
            if self.t_stop != other.t_stop:
                raise MergeError("Cannot merge, different t_stop")
            if self.left_sweep != other.left_sweep:
                raise MergeError("Cannot merge, different left_sweep")
            if self.segment != other.segment:
                raise MergeError("Cannot merge these signals as they belong to"
                                 " different segments.")

        all_spiketrains = [self]
        all_spiketrains.extend([st.rescale(self.units) for st in others])

        wfs = [st.waveforms is not None for st in all_spiketrains]
        if any(wfs) and not all(wfs):
            raise MergeError("Cannot merge signal with waveform and signal "
                             "without waveform.")

        stack = np.concatenate([np.asarray(st) for st in all_spiketrains])
        sorting = np.argsort(stack)
        stack = stack[sorting]

        kwargs = {}
        kwargs['array_annotations'] = self._merge_array_annotations(others, sorting=sorting)

        for name in ("name", "description", "file_origin"):
            attr = getattr(self, name)
            # check if self is already a merged spiketrain
            # if it is, get rid of the bracket at the end to append more attributes
            if attr is not None:
                if attr.startswith('merge(') and attr.endswith(')'):
                    attr = attr[:-1]
            for other in others:
                attr_other = getattr(other, name)
                # both attributes are None --> nothing to do
                if attr is None and attr_other is None:
                    continue
                # one of the attributes is None --> convert to string in order to merge them
                elif attr is None or attr_other is None:
                    attr = str(attr)
                    attr_other = str(attr_other)
                # check if the other spiketrain is already a merged spiketrain
                # if it is, append
all of its merged attributes that aren't already in attr if attr_other.startswith('merge(') and attr_other.endswith(')'): for subattr in attr_other[6:-1].split('; '): if subattr not in attr: attr += '; ' + subattr if not attr.startswith('merge('): attr = 'merge(' + attr # if the other attribute is not in the list --> append # if attr doesn't already start with merge add merge( in the beginning elif attr_other not in attr: attr += '; ' + attr_other if not attr.startswith('merge('): attr = 'merge(' + attr # close the bracket of merge(...) if necessary if attr is not None: if attr.startswith('merge('): attr += ')' # write attr into kwargs dict kwargs[name] = attr merged_annotations = merge_annotations(*(st.annotations for st in all_spiketrains)) kwargs.update(merged_annotations) train = SpikeTrain(stack, units=self.units, dtype=self.dtype, copy=False, t_start=self.t_start, t_stop=self.t_stop, sampling_rate=self.sampling_rate, left_sweep=self.left_sweep, **kwargs) if all(wfs): wfs_stack = np.vstack([st.waveforms.rescale(self.waveforms.units) for st in all_spiketrains]) wfs_stack = wfs_stack[sorting] * self.waveforms.units train.waveforms = wfs_stack train.segment = self.segment if train.segment is not None: self.segment.spiketrains.append(train) return train def _merge_array_annotations(self, others, sorting=None): ''' Merges array annotations of multiple different objects. The merge happens in such a way that the result fits the merged data In general this means concatenating the arrays from the objects. If an annotation is not present in one of the objects, it will be omitted. Apart from that the array_annotations need to be sorted according to the sorting of the spikes. 
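The merge-then-sort bookkeeping described above (concatenate spike times, `argsort` once, and reorder every per-spike annotation with the same permutation, omitting keys present on only one side) can be sketched as follows. This is a simplified stand-in with hypothetical names, not the neo implementation:

```python
import numpy as np

def merge_trains(times_a, times_b, ann_a, ann_b):
    """Merge two spike trains and keep per-spike annotations aligned."""
    stack = np.concatenate([times_a, times_b])
    order = np.argsort(stack)                # one permutation reused for everything
    merged_ann = {}
    for key in ann_a.keys() & ann_b.keys():  # keys missing on one side are omitted
        merged_ann[key] = np.concatenate([ann_a[key], ann_b[key]])[order]
    return stack[order], merged_ann

times, ann = merge_trains(
    np.array([1.0, 5.0]), np.array([3.0]),
    {"quality": np.array([0.9, 0.8]), "only_a": np.array([1, 2])},
    {"quality": np.array([0.5])})
# times -> [1.0, 3.0, 5.0]; ann["quality"] follows the same order; "only_a" is dropped
```

Applying the same `order` permutation to the annotation arrays is what keeps each annotation attached to its spike after sorting.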
:return Merged array_annotations ''' assert sorting is not None, "The order of the merged spikes must be known" merged_array_annotations = {} omitted_keys_self = [] keys = self.array_annotations.keys() for key in keys: try: self_ann = deepcopy(self.array_annotations[key]) other_ann = np.concatenate([deepcopy(other.array_annotations[key]) for other in others]) if isinstance(self_ann, pq.Quantity): other_ann.rescale(self_ann.units) arr_ann = np.concatenate([self_ann, other_ann]) * self_ann.units else: arr_ann = np.concatenate([self_ann, other_ann]) merged_array_annotations[key] = arr_ann[sorting] # Annotation only available in 'self', must be skipped # Ignore annotations present only in one of the SpikeTrains except KeyError: omitted_keys_self.append(key) continue omitted_keys_other = [key for key in np.unique([key for other in others for key in other.array_annotations]) if key not in self.array_annotations] if omitted_keys_self or omitted_keys_other: warnings.warn("The following array annotations were omitted, because they were only " "present in one of the merged objects: {} from the one that was merged " "into and {} from the ones that were merged into it." "".format(omitted_keys_self, omitted_keys_other), UserWarning) return merged_array_annotations @property def times(self): ''' Returns the :class:`SpikeTrain` as a quantity array. ''' return pq.Quantity(self) @property def duration(self): ''' Duration over which spikes can occur, (:attr:`t_stop` - :attr:`t_start`) ''' if self.t_stop is None or self.t_start is None: return None return self.t_stop - self.t_start @property def spike_duration(self): ''' Duration of a waveform. (:attr:`waveform`.shape[2] * :attr:`sampling_period`) ''' if self.waveforms is None or self.sampling_rate is None: return None return self.waveforms.shape[2] / self.sampling_rate @property def sampling_period(self): ''' Interval between two samples. 
(1/:attr:`sampling_rate`) ''' if self.sampling_rate is None: return None return 1.0 / self.sampling_rate @sampling_period.setter def sampling_period(self, period): ''' Setter for :attr:`sampling_period` ''' if period is None: self.sampling_rate = None else: self.sampling_rate = 1.0 / period @property def right_sweep(self): ''' Time from the trigger times of the spikes to the end of the waveforms. (:attr:`left_sweep` + :attr:`spike_duration`) ''' dur = self.spike_duration if self.left_sweep is None or dur is None: return None return self.left_sweep + dur neo-0.10.0/neo/core/spiketrainlist.py0000644000076700000240000003251414066375716020132 0ustar andrewstaff00000000000000# -*- coding: utf-8 -*- """ This module implements :class:`SpikeTrainList`, a pseudo-list which supports a multiplexed representation of spike trains (all times in a single array, with a second array indicating which neuron/channel the spike is from). """ import numpy as np import quantities as pq from .spiketrain import SpikeTrain, normalize_times_array def is_spiketrain_or_proxy(obj): return isinstance(obj, SpikeTrain) or getattr(obj, "proxy_for", None) == SpikeTrain class SpikeTrainList(object): """ This class contains multiple spike trains, and can represent them either as a list of SpikeTrain objects or as a pair of arrays (all spike times in a single array, with a second array indicating which neuron/channel the spike is from). A SpikeTrainList object should behave like a list of SpikeTrains for iteration and item access. It is not intended to be used directly by users, but is available as the attribute `spiketrains` of Segments. Examples: # Create from list of SpikeTrain objects >>> stl = SpikeTrainList(items=( ... SpikeTrain([0.5, 0.6, 23.6, 99.2], units="ms", t_start=0 * pq.ms, t_stop=100.0 * pq.ms), ... SpikeTrain([0.0007, 0.0112], units="s", t_start=0 * pq.ms, t_stop=100.0 * pq.ms), ... SpikeTrain([1100, 88500], units="us", t_start=0 * pq.ms, t_stop=100.0 * pq.ms), ... 
SpikeTrain([], units="ms", t_start=0 * pq.ms, t_stop=100.0 * pq.ms),
        ...     ))
        >>> stl.multiplexed
        (array([0, 0, 0, 0, 1, 1, 2, 2]),
         array([ 0.5,  0.6, 23.6, 99.2,  0.7, 11.2,  1.1, 88.5]) * ms)

        # Create from a pair of arrays
        >>> stl = SpikeTrainList.from_spike_time_array(
        ...     np.array([0.5, 0.6, 0.7, 1.1, 11.2, 23.6, 88.5, 99.2]),
        ...     np.array([0, 0, 1, 2, 1, 0, 2, 0]),
        ...     all_channel_ids=[0, 1, 2, 3],
        ...     units='ms',
        ...     t_start=0 * pq.ms,
        ...     t_stop=100.0 * pq.ms)
        >>> list(stl)
        [<SpikeTrain(array([ 0.5,  0.6, 23.6, 99.2]) * ms, [0.0 ms, 100.0 ms])>,
         <SpikeTrain(array([ 0.7, 11.2]) * ms, [0.0 ms, 100.0 ms])>,
         <SpikeTrain(array([ 1.1, 88.5]) * ms, [0.0 ms, 100.0 ms])>,
         <SpikeTrain(array([], dtype=float64) * ms, [0.0 ms, 100.0 ms])>]
    """

    def __init__(self, items=None, segment=None):
        """Initialize self"""
        if items is None:
            self._items = items
        else:
            for item in items:
                if not is_spiketrain_or_proxy(item):
                    raise ValueError(
                        "`items` can only contain SpikeTrain objects or proxy objects")
            self._items = list(items)
        self._spike_time_array = None
        self._channel_id_array = None
        self._all_channel_ids = None
        self._spiketrain_metadata = None
        self.segment = segment

    def __iter__(self):
        """Implement iter(self)"""
        if self._items is None:
            self._spiketrains_from_array()
        for item in self._items:
            yield item

    def __getitem__(self, i):
        """x.__getitem__(y) <==> x[y]"""
        if self._items is None:
            self._spiketrains_from_array()
        items = self._items[i]
        if is_spiketrain_or_proxy(items):
            return items
        else:
            return SpikeTrainList(items=items)

    def __str__(self):
        """Return str(self)"""
        if self._items is None:
            if self._spike_time_array is None:
                return str([])
            else:
                return "SpikeTrainList containing {} spikes from {} neurons".format(
                    self._spike_time_array.size, len(self._all_channel_ids))
        else:
            return str(self._items)

    def __len__(self):
        """Return len(self)"""
        if self._items is None:
            if self._all_channel_ids is not None:
                return len(self._all_channel_ids)
            elif self._channel_id_array is not None:
                return np.unique(self._channel_id_array).size
            else:
                return 0
        else:
            return len(self._items)

    def _add_spiketrainlists(self, other, in_place=False):
        if self._spike_time_array is None or other._spike_time_array is None:
            # if either self or other is not storing
multiplexed spike trains # we combine them using the list of SpikeTrains representation if self._items is None: self._spiketrains_from_array() if other._items is None: other._spiketrains_from_array() if in_place: self._items.extend(other._items) return self else: return self.__class__(items=self._items[:] + other._items) else: # both self and other are storing multiplexed spike trains # so we update the array representation if self._spiketrain_metadata['t_start'] != other._spiketrain_metadata['t_start']: raise ValueError("Incompatible t_start") # todo: adjust times and t_start of other to be compatible with self if self._spiketrain_metadata['t_stop'] != other._spiketrain_metadata['t_stop']: raise ValueError("Incompatible t_stop") # todo: adjust t_stop of self and other as necessary combined_spike_time_array = np.hstack( (self._spike_time_array, other._spike_time_array)) combined_channel_id_array = np.hstack( (self._channel_id_array, other._channel_id_array)) combined_channel_ids = set(list(self._all_channel_ids) + other._all_channel_ids) if len(combined_channel_ids) != ( len(self._all_channel_ids) + len(other._all_channel_ids) ): raise ValueError("Duplicate channel ids, please rename channels before adding") if in_place: self._spike_time_array = combined_spike_time_array self._channel_id_array = combined_channel_id_array self._all_channel_ids = combined_channel_ids self._items = None return self else: return self.__class__.from_spike_time_array( combined_spike_time_array, combined_channel_id_array, combined_channel_ids, t_start=self._spiketrain_metadata['t_start'], t_stop=self._spiketrain_metadata['t_stop']) def __add__(self, other): """Return self + other""" if isinstance(other, self.__class__): return self._add_spiketrainlists(other) elif other and is_spiketrain_or_proxy(other[0]): return self._add_spiketrainlists( self.__class__(items=other, segment=self.segment) ) else: if self._items is None: self._spiketrains_from_array() return self._items + other def 
__iadd__(self, other): """Return self""" if isinstance(other, self.__class__): return self._add_spiketrainlists(other, in_place=True) elif other and is_spiketrain_or_proxy(other[0]): for obj in other: obj.segment = self.segment if self._items is None: self._spiketrains_from_array() self._items.extend(other) return self else: raise TypeError("Can only add a SpikeTrainList or a list of SpikeTrains in place") def __radd__(self, other): """Return other + self""" if isinstance(other, self.__class__): return other._add_spiketrainlists(self) elif other and is_spiketrain_or_proxy(other[0]): for obj in other: obj.segment = self.segment if self._items is None: self._spiketrains_from_array() self._items.extend(other) return self elif len(other) == 0: return self else: if self._items is None: self._spiketrains_from_array() return other + self._items def append(self, obj): """L.append(object) -> None -- append object to end""" if not is_spiketrain_or_proxy(obj): raise ValueError("Can only append SpikeTrain objects") if self._items is None: self._spiketrains_from_array() obj.segment = self.segment self._items.append(obj) def extend(self, iterable): """L.extend(iterable) -> None -- extend list by appending elements from the iterable""" if self._items is None: self._spiketrains_from_array() for obj in iterable: obj.segment = self.segment self._items.extend(iterable) @classmethod def from_spike_time_array(cls, spike_time_array, channel_id_array, all_channel_ids, t_stop, units=None, t_start=0.0 * pq.s, **annotations): """Create a SpikeTrainList object from an array of spike times and an array of channel ids. *Required attributes/properties*: :spike_time_array: (quantity array 1D, numpy array 1D, or list) The times of all spikes. :channel_id_array: (numpy array 1D of dtype int) The id of the channel (e.g. the neuron) to which each spike belongs. 
This array should have the same length as :attr:`spike_time_array` :all_channel_ids: (list, tuple, or numpy array 1D containing integers) All channel ids. This is needed to represent channels in which there are no spikes. :units: (quantity units) Required if :attr:`spike_time_array` is not a :class:`~quantities.Quantity`. :t_stop: (quantity scalar, numpy scalar, or float) Time at which spike recording ended. This will be converted to the same units as :attr:`spike_time_array` or :attr:`units`. *Recommended attributes/properties*: :t_start: (quantity scalar, numpy scalar, or float) Time at which spike recording began. This will be converted to the same units as :attr:`spike_time_array` or :attr:`units`. Default: 0.0 seconds. *Optional attributes/properties*: """ spike_time_array, dim = normalize_times_array(spike_time_array, units) obj = cls() obj._spike_time_array = spike_time_array obj._channel_id_array = channel_id_array obj._all_channel_ids = all_channel_ids obj._spiketrain_metadata = { "t_start": t_start, "t_stop": t_stop } for name, ann_value in annotations.items(): if len(ann_value) != len(obj): raise ValueError(f"incorrect length for annotation '{name}'") obj._annotations = annotations return obj def _spiketrains_from_array(self): """Convert multiplexed spike time data into a list of SpikeTrain objects""" if self._spike_time_array is None: self._items = [] else: self._items = [] for i, channel_id in enumerate(self._all_channel_ids): mask = self._channel_id_array == channel_id times = self._spike_time_array[mask] spiketrain = SpikeTrain(times, **self._spiketrain_metadata) spiketrain.annotations = { name: value[i] for name, value in self._annotations.items() } spiketrain.annotate(channel_id=channel_id) spiketrain.segment = self.segment self._items.append(spiketrain) @property def multiplexed(self): """Return spike trains as a pair of arrays. 
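The round trip between the two representations (a list of per-channel spike-time arrays versus one multiplexed pair of arrays) can be sketched with plain NumPy. The names `to_multiplexed` and `from_multiplexed` are illustrative only, not neo's API:

```python
import numpy as np

def to_multiplexed(trains):
    """Flatten a list of per-channel spike-time arrays into (channel_ids, times)."""
    channel_ids = np.concatenate(
        [np.full(len(t), i, dtype=np.int64) for i, t in enumerate(trains)])
    times = np.concatenate(trains)
    return channel_ids, times

def from_multiplexed(channel_ids, times, all_channel_ids):
    """Split a multiplexed pair back into one array per channel."""
    return [times[channel_ids == ch] for ch in all_channel_ids]

ids, times = to_multiplexed([np.array([0.5, 23.6]), np.array([0.7])])
back = from_multiplexed(ids, times, all_channel_ids=[0, 1, 2])
# channel 2 produced no spikes, so back[2] is empty
```

Passing an `all_channel_ids` list that is longer than the number of spiking channels is what lets the multiplexed form represent silent channels, as the docstring above notes.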
The first (plain NumPy) array contains the ids of the channels/neurons that produced each spike, the second (Quantity) array contains the times of the spikes. """ if self._spike_time_array is None: # need to convert list of SpikeTrains into multiplexed spike times array if self._items is None: return np.array([]), np.array([]) else: channel_ids = [] spike_times = [] dim = self._items[0].units.dimensionality for i, spiketrain in enumerate(self._items): if hasattr(spiketrain, "load"): # proxy object spiketrain = spiketrain.load() if spiketrain.times.dimensionality.items() == dim.items(): # no need to rescale spike_times.append(spiketrain.times) else: spike_times.append(spiketrain.times.rescale(dim)) if ("channel_id" in spiketrain.annotations and isinstance(spiketrain.annotations["channel_id"], int) ): ch_id = spiketrain.annotations["channel_id"] else: ch_id = i channel_ids.append(ch_id * np.ones(spiketrain.shape, dtype=np.int64)) self._spike_time_array = np.hstack(spike_times) * self._items[0].units self._channel_id_array = np.hstack(channel_ids) return self._channel_id_array, self._spike_time_array neo-0.10.0/neo/core/view.py0000644000076700000240000000701514077527451016032 0ustar andrewstaff00000000000000""" This module implements :class:`ChannelView`, which represents a subset of the channels in an :class:`AnalogSignal` or :class:`IrregularlySampledSignal`. It replaces the indexing function of the former :class:`ChannelIndex`. """ import numpy as np from .baseneo import BaseNeo from .basesignal import BaseSignal from .dataobject import ArrayDict class ChannelView(BaseNeo): """ A tool for indexing a subset of the channels within an :class:`AnalogSignal` or :class:`IrregularlySampledSignal`; *Required attributes/properties*: :obj: (AnalogSignal or IrregularlySampledSignal) The signal being indexed. :index: (list/1D-array) boolean or integer mask to select the channels of interest. *Recommended attributes/properties*: :name: (str) A label for the view. 
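The index normalization that `ChannelView` performs (accept a boolean mask or integer indices, store a 1-D integer array) can be sketched as a standalone function; `normalize_index` is a hypothetical name for illustration, not the class itself:

```python
import numpy as np

def normalize_index(index, n_channels):
    """Convert a boolean mask or integer sequence into a 1-D integer index array."""
    index = np.asarray(index)
    if index.ndim != 1:
        raise ValueError("index must be a 1D array")
    if index.dtype == bool:
        if index.size != n_channels:
            raise ValueError("boolean mask length must match the channel count")
        index, = np.nonzero(index)   # positions of the True entries
    elif index.dtype.char not in np.typecodes['AllInteger']:
        raise ValueError("index must be boolean or integer")
    return index

idx = normalize_index([True, False, True, True], n_channels=4)
# -> integer positions [0, 2, 3]
```

Storing integers rather than the mask keeps the view cheap to apply later (`obj[:, index]`) regardless of how the caller specified the channels.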
:description: (str) Text description. :file_origin: (str) Filesystem path or URL of the original data file. :array_annotations: (dict) Dict mapping strings to numpy arrays containing annotations for all data points Note: Any other additional arguments are assumed to be user-specific metadata and stored in :attr:`annotations`. """ _parent_objects = ('Segment',) _parent_attrs = ('segment',) _necessary_attrs = ( ('index', np.ndarray, 1, np.dtype('i')), ('obj', ('AnalogSignal', 'IrregularlySampledSignal'), 1) ) # "mask" would be an alternative name, proposing "index" for # backwards-compatibility with ChannelIndex def __init__(self, obj, index, name=None, description=None, file_origin=None, array_annotations=None, **annotations): super().__init__(name=name, description=description, file_origin=file_origin, **annotations) if not (isinstance(obj, BaseSignal) or ( hasattr(obj, "proxy_for") and issubclass(obj.proxy_for, BaseSignal))): raise ValueError("Can only take a ChannelView of an AnalogSignal " "or an IrregularlySampledSignal") self.obj = obj # check type and dtype of index and convert index to a common form # (accept list or array of bool or int, convert to int array) self.index = np.array(index) if len(self.index.shape) != 1: raise ValueError("index must be a 1D array") if self.index.dtype == bool: # convert boolean mask to integer index if self.index.size != self.obj.shape[-1]: raise ValueError("index size does not match number of channels in signal") self.index, = np.nonzero(self.index) # allow any type of integer representation elif self.index.dtype.char not in np.typecodes['AllInteger']: raise ValueError("index must be of a list or array of data type boolean or integer") if not hasattr(self, 'array_annotations') or not self.array_annotations: self.array_annotations = ArrayDict(self._get_arr_ann_length()) if array_annotations is not None: self.array_annotate(**array_annotations) @property def shape(self): return (self.obj.shape[0], self.index.size) def 
_get_arr_ann_length(self): return self.shape[-1] def array_annotate(self, **array_annotations): self.array_annotations.update(array_annotations) def resolve(self): """ Return a copy of the underlying object containing just the subset of channels defined by the index. """ return self.obj[:, self.index] neo-0.10.0/neo/io/0000755000076700000240000000000014077757142014164 5ustar andrewstaff00000000000000neo-0.10.0/neo/io/__init__.py0000644000076700000240000002023514077554132016271 0ustar andrewstaff00000000000000""" :mod:`neo.io` provides classes for reading and/or writing electrophysiological data files. Note that if the package dependency is not satisfied for one io, it does not raise an error but a warning. :attr:`neo.io.iolist` provides a list of successfully imported io classes. Functions: .. autofunction:: neo.io.get_io Classes: * :attr:`AlphaOmegaIO` * :attr:`AsciiImageIO` * :attr:`AsciiSignalIO` * :attr:`AsciiSpikeTrainIO` * :attr:`AxographIO` * :attr:`AxonaIO` * :attr:`AxonIO` * :attr:`BCI2000IO` * :attr:`BlackrockIO` * :attr:`BlkIO` * :attr:`BrainVisionIO` * :attr:`BrainwareDamIO` * :attr:`BrainwareF32IO` * :attr:`BrainwareSrcIO` * :attr:`CedIO` * :attr:`ElanIO` * :attr:`IgorIO` * :attr:`IntanIO` * :attr:`MEArecIO` * :attr:`KlustaKwikIO` * :attr:`KwikIO` * :attr:`MaxwellIO` * :attr:`MicromedIO` * :attr:`NeoMatlabIO` * :attr:`NestIO` * :attr:`NeuralynxIO` * :attr:`NeuroExplorerIO` * :attr:`NeuroScopeIO` * :attr:`NeuroshareIO` * :attr:`NixIO` * :attr:`NWBIO` * :attr:`OpenEphysIO` * :attr:`OpenEphysBinaryIO` * :attr:`PhyIO` * :attr:`PickleIO` * :attr:`PlexonIO` * :attr:`RawBinarySignalIO` * :attr:`RawMCSIO` * :attr:`Spike2IO` * :attr:`SpikeGadgetsIO` * :attr:`SpikeGLXIO` * :attr:`StimfitIO` * :attr:`TdtIO` * :attr:`TiffIO` * :attr:`WinEdrIO` * :attr:`WinWcpIO` .. autoclass:: neo.io.AlphaOmegaIO .. autoattribute:: extensions .. autoclass:: neo.io.AsciiImageIO .. autoattribute:: extensions .. autoclass:: neo.io.AsciiSignalIO .. autoattribute:: extensions .. 
autoclass:: neo.io.AsciiSpikeTrainIO .. autoattribute:: extensions .. autoclass:: neo.io.AxographIO .. autoattribute:: extensions .. autoclass:: neo.io.AxonaIO .. autoattribute:: extensions .. autoclass:: neo.io.AxonIO .. autoattribute:: extensions .. autoclass:: neo.io.BCI2000IO .. autoattribute:: extensions .. autoclass:: neo.io.BlackrockIO .. autoattribute:: extensions .. autoclass:: neo.io.BlkIO .. autoattribute:: extensions .. autoclass:: neo.io.BrainVisionIO .. autoattribute:: extensions .. autoclass:: neo.io.BrainwareDamIO .. autoattribute:: extensions .. autoclass:: neo.io.BrainwareF32IO .. autoattribute:: extensions .. autoclass:: neo.io.BrainwareSrcIO .. autoattribute:: extensions .. autoclass:: neo.io.CedIO .. autoattribute:: extensions .. autoclass:: neo.io.ElanIO .. autoattribute:: extensions .. .. autoclass:: neo.io.ElphyIO .. autoattribute:: extensions .. autoclass:: neo.io.IgorIO .. autoattribute:: extensions .. autoclass:: neo.io.IntanIO .. autoattribute:: extensions .. autoclass:: neo.io.KlustaKwikIO .. autoattribute:: extensions .. autoclass:: neo.io.KwikIO .. autoattribute:: extensions .. autoclass:: neo.io.MEArecIO .. autoattribute:: extensions .. autoclass:: neo.io.MaxwellIO .. autoattribute:: extensions .. autoclass:: neo.io.MicromedIO .. autoattribute:: extensions .. autoclass:: neo.io.NeoMatlabIO .. autoattribute:: extensions .. autoclass:: neo.io.NestIO .. autoattribute:: extensions .. autoclass:: neo.io.NeuralynxIO .. autoattribute:: extensions .. autoclass:: neo.io.NeuroExplorerIO .. autoattribute:: extensions .. autoclass:: neo.io.NeuroScopeIO .. autoattribute:: extensions .. autoclass:: neo.io.NeuroshareIO .. autoattribute:: extensions .. autoclass:: neo.io.NixIO .. autoattribute:: extensions .. autoclass:: neo.io.NWBIO .. autoattribute:: extensions .. autoclass:: neo.io.OpenEphysIO .. autoattribute:: extensions .. autoclass:: neo.io.OpenEphysBinaryIO .. autoattribute:: extensions .. autoclass:: neo.io.PhyIO .. 
autoattribute:: extensions .. autoclass:: neo.io.PickleIO .. autoattribute:: extensions .. autoclass:: neo.io.PlexonIO .. autoattribute:: extensions .. autoclass:: neo.io.RawBinarySignalIO .. autoattribute:: extensions .. autoclass:: neo.io.RawMCSIO .. autoattribute:: extensions .. autoclass:: Spike2IO .. autoattribute:: extensions .. autoclass:: SpikeGadgetsIO .. autoattribute:: extensions .. autoclass:: SpikeGLXIO .. autoattribute:: extensions .. autoclass:: neo.io.StimfitIO .. autoattribute:: extensions .. autoclass:: neo.io.TdtIO .. autoattribute:: extensions .. autoclass:: neo.io.TiffIO .. autoattribute:: extensions .. autoclass:: neo.io.WinEdrIO .. autoattribute:: extensions .. autoclass:: neo.io.WinWcpIO .. autoattribute:: extensions """ import os.path # try to import the neuroshare library. # if it is present, use the neuroshareapiio to load neuroshare files # if it is not present, use the neurosharectypesio to load files try: import neuroshare as ns except ImportError as err: from neo.io.neurosharectypesio import NeurosharectypesIO as NeuroshareIO # print("\n neuroshare library not found, loading data with ctypes" ) # print("\n to use the API be sure to install the library found at:") # print("\n www.http://pythonhosted.org/neuroshare/") else: from neo.io.neuroshareapiio import NeuroshareapiIO as NeuroshareIO # print("neuroshare library successfully imported") # print("\n loading with API...") from neo.io.alphaomegaio import AlphaOmegaIO from neo.io.asciiimageio import AsciiImageIO from neo.io.asciisignalio import AsciiSignalIO from neo.io.asciispiketrainio import AsciiSpikeTrainIO from neo.io.axographio import AxographIO from neo.io.axonaio import AxonaIO from neo.io.axonio import AxonIO from neo.io.blackrockio import BlackrockIO from neo.io.blkio import BlkIO from neo.io.bci2000io import BCI2000IO from neo.io.brainvisionio import BrainVisionIO from neo.io.brainwaredamio import BrainwareDamIO from neo.io.brainwaref32io import BrainwareF32IO from 
neo.io.brainwaresrcio import BrainwareSrcIO from neo.io.cedio import CedIO from neo.io.elanio import ElanIO from neo.io.elphyio import ElphyIO from neo.io.exampleio import ExampleIO from neo.io.igorproio import IgorIO from neo.io.intanio import IntanIO from neo.io.klustakwikio import KlustaKwikIO from neo.io.kwikio import KwikIO from neo.io.mearecio import MEArecIO from neo.io.maxwellio import MaxwellIO from neo.io.micromedio import MicromedIO from neo.io.neomatlabio import NeoMatlabIO from neo.io.nestio import NestIO from neo.io.neuralynxio import NeuralynxIO from neo.io.neuroexplorerio import NeuroExplorerIO from neo.io.neuroscopeio import NeuroScopeIO from neo.io.nixio import NixIO from neo.io.nixio_fr import NixIO as NixIOFr from neo.io.nwbio import NWBIO from neo.io.openephysio import OpenEphysIO from neo.io.openephysbinaryio import OpenEphysBinaryIO from neo.io.phyio import PhyIO from neo.io.pickleio import PickleIO from neo.io.plexonio import PlexonIO from neo.io.rawbinarysignalio import RawBinarySignalIO from neo.io.rawmcsio import RawMCSIO from neo.io.spike2io import Spike2IO from neo.io.spikegadgetsio import SpikeGadgetsIO from neo.io.spikeglxio import SpikeGLXIO from neo.io.stimfitio import StimfitIO from neo.io.tdtio import TdtIO from neo.io.tiffio import TiffIO from neo.io.winedrio import WinEdrIO from neo.io.winwcpio import WinWcpIO iolist = [ AlphaOmegaIO, AsciiImageIO, AsciiSignalIO, AsciiSpikeTrainIO, AxographIO, AxonaIO, AxonIO, BCI2000IO, BlackrockIO, BlkIO, BrainVisionIO, BrainwareDamIO, BrainwareF32IO, BrainwareSrcIO, CedIO, ElanIO, # ElphyIO, ExampleIO, IgorIO, IntanIO, KlustaKwikIO, KwikIO, MEArecIO, MaxwellIO, MicromedIO, NixIO, # place NixIO before other IOs that use HDF5 to make it the default for .h5 files NeoMatlabIO, NestIO, NeuralynxIO, NeuroExplorerIO, NeuroScopeIO, NeuroshareIO, NWBIO, OpenEphysIO, OpenEphysBinaryIO, PhyIO, PickleIO, PlexonIO, RawBinarySignalIO, RawMCSIO, Spike2IO, SpikeGadgetsIO, SpikeGLXIO, StimfitIO, TdtIO, 
    TiffIO,
    WinEdrIO,
    WinWcpIO
]


def get_io(filename, *args, **kwargs):
    """
    Return a Neo IO instance, guessing the type based on the filename suffix.
    """
    extension = os.path.splitext(filename)[1][1:]
    for io in iolist:
        if extension in io.extensions:
            return io(filename, *args, **kwargs)

    raise IOError("File extension %s not registered" % extension)
neo-0.10.0/neo/io/alphaomegaio.py0000644000076700000240000005716514066375716017173 0ustar andrewstaff00000000000000"""
Class for reading data from Alpha Omega .map files.

This class is an experimental reader with important limitations.
See the source code for details of the limitations.
The code of this reader is of alpha quality and received very limited testing.

This code is written from the incomplete file specifications available in:

[1] AlphaMap Data Acquisition System User's Manual Version 10.1.1
    Section 5 APPENDIX B: ALPHAMAP FILE STRUCTURE, pages 120-140
    Edited by ALPHA OMEGA Home Office: P.O. Box 810, Nazareth Illit 17105, Israel
    http://www.alphaomega-eng.com/

and from the source code of a C software for conversion of .map files to
.eeg elan software files:

[2] alphamap2eeg 1.0, 12/03/03, Anne CHEYLUS - CNRS ISC UMR 5015

Supported : Read

@author : sgarcia, Florent Jaillet
"""

# NOTE: For some specific types of comments, the following convention is used:
# "TODO:" Desirable future evolution
# "WARNING:" Information about code that is based on broken or missing
# specifications and that might be wrong

# Main limitations of this reader:
# - The reader is only able to load data stored in data blocks of type 5
# (data block for one channel). In particular it means that it doesn't
# support signals stored in blocks of type 7 (data block for multiple
# channels).
# For more details on these data block types, see 5.4.1 and 5.4.2 p 127 in
# [1].
# - Rather than supporting all the neo object types that could be extracted
# from the file, all read data are returned in AnalogSignal objects, even for
# digital channels or channels containing spiking information.
# - Digital channels are not converted to events or event arrays as they
# should be.
# - Loading multichannel signals as AnalogSignalArrays is not supported.
# - Many data or metadata that are available in the file and that could be
# represented in some way in the neo model are not extracted. In particular,
# scaling of the data and extraction of the units of the signals are not
# supported.
# - It received very limited testing, exclusively using Python 2.6.6. In
# particular it has not been tested using Python 3.x.
#
# These limitations are mainly due to the following reasons:
# - Incomplete, unclear and in some places inaccurate specifications of the
# format in [1].
# - Lack of test files containing all the types of data blocks of interest
# (in particular no files with type 7 data blocks for multiple channels were
# available when writing this code).
# - Lack of knowledge of the Alphamap software and the associated data models.
# - Lack of time (especially as the specifications are incomplete, a lot of
# reverse engineering and testing is required, which makes the development of
# this IO very painful and long).

# specific imports
import datetime
import os
import struct

# note: neo.core needs only numpy and quantities
import numpy as np
import quantities as pq

from neo.io.baseio import BaseIO
from neo.core import Block, Segment, AnalogSignal


class AlphaOmegaIO(BaseIO):
    """
    Class for reading data from Alpha Omega .map files (experimental)

    This class is an experimental reader with important limitations.
    See the source code for details of the limitations.
    The code of this reader is of alpha quality and received very
    limited testing.
Usage: >>> from neo import io >>> r = io.AlphaOmegaIO( filename = 'File_AlphaOmega_1.map') >>> blck = r.read_block() >>> print blck.segments[0].analogsignals """ is_readable = True # This is a reading only class is_writable = False # writing is not supported # This class is able to directly or indirectly read the following kind of # objects supported_objects = [Block, Segment, AnalogSignal] # TODO: Add support for other objects that should be extractable from .map # files (Event, Epoch?, Epoch Array?, SpikeTrain?) # This class can only return a Block readable_objects = [Block] # TODO : create readers for different type of objects (Segment, # AnalogSignal,...) # This class is not able to write objects writeable_objects = [] # This is for GUI stuff : a definition for parameters when reading. read_params = {Block: []} # Writing is not supported, so no GUI stuff write_params = None name = 'AlphaOmega' extensions = ['map'] mode = 'file' def __init__(self, filename=None): """ Arguments: filename : the .map Alpha Omega file name """ BaseIO.__init__(self) self.filename = filename # write is not supported so I do not overload write method from BaseIO def read_block(self, lazy=False): """ Return a Block. 
""" assert not lazy, 'Do not support lazy' def count_samples(m_length): """ Count the number of signal samples available in a type 5 data block of length m_length """ # for information about type 5 data block, see [1] count = int((m_length - 6) / 2 - 2) # -6 corresponds to the header of block 5, and the -2 take into # account the fact that last 2 values are not available as the 4 # corresponding bytes are coding the time stamp of the beginning # of the block return count # create the neo Block that will be returned at the end blck = Block(file_origin=os.path.basename(self.filename)) blck.file_origin = os.path.basename(self.filename) fid = open(self.filename, 'rb') # NOTE: in the following, the word "block" is used in the sense used in # the alpha-omega specifications (ie a data chunk in the file), rather # than in the sense of the usual Block object in neo # step 1: read the headers of all the data blocks to load the file # structure pos_block = 0 # position of the current block in the file file_blocks = [] # list of data blocks available in the file seg = Segment(file_origin=os.path.basename(self.filename)) seg.file_origin = os.path.basename(self.filename) blck.segments.append(seg) while True: first_4_bytes = fid.read(4) if len(first_4_bytes) < 4: # we have reached the end of the file break else: m_length, m_TypeBlock = struct.unpack('Hcx', first_4_bytes) block = HeaderReader(fid, dict_header_type.get(m_TypeBlock, Type_Unknown)).read_f() block.update({'m_length': m_length, 'm_TypeBlock': m_TypeBlock, 'pos': pos_block}) if m_TypeBlock == '2': # The beginning of the block of type '2' is identical for # all types of channels, but the following part depends on # the type of channel. So we need a special case here. # WARNING: How to check the type of channel is not # described in the documentation. So here I use what is # proposed in the C code [2]. 
# According to this C code, it seems that the 'm_isAnalog' # is used to distinguished analog and digital channels, and # 'm_Mode' encodes the type of analog channel: # 0 for continuous, 1 for level, 2 for external trigger. # But in some files, I found channels that seemed to be # continuous channels with 'm_Modes' = 128 or 192. So I # decided to consider every channel with 'm_Modes' # different from 1 or 2 as continuous. I also couldn't # check that values of 1 and 2 are really for level and # external trigger as I had no test files containing data # of this types. type_subblock = 'unknown_channel_type(m_Mode=' \ + str(block['m_Mode']) + ')' description = Type2_SubBlockUnknownChannels block.update({'m_Name': 'unknown_name'}) if block['m_isAnalog'] == 0: # digital channel type_subblock = 'digital' description = Type2_SubBlockDigitalChannels elif block['m_isAnalog'] == 1: # analog channel if block['m_Mode'] == 1: # level channel type_subblock = 'level' description = Type2_SubBlockLevelChannels elif block['m_Mode'] == 2: # external trigger channel type_subblock = 'external_trigger' description = Type2_SubBlockExtTriggerChannels else: # continuous channel type_subblock = 'continuous(Mode' \ + str(block['m_Mode']) + ')' description = Type2_SubBlockContinuousChannels subblock = HeaderReader(fid, description).read_f() block.update(subblock) block.update({'type_subblock': type_subblock}) file_blocks.append(block) pos_block += m_length fid.seek(pos_block) # step 2: find the available channels list_chan = [] # list containing indexes of channel blocks for ind_block, block in enumerate(file_blocks): if block['m_TypeBlock'] == '2': list_chan.append(ind_block) # step 3: find blocks containing data for the available channels list_data = [] # list of lists of indexes of data blocks # corresponding to each channel for ind_chan, chan in enumerate(list_chan): list_data.append([]) num_chan = file_blocks[chan]['m_numChannel'] for ind_block, block in enumerate(file_blocks): if 
block['m_TypeBlock'] == '5': if block['m_numChannel'] == num_chan: list_data[ind_chan].append(ind_block) # step 4: compute the length (number of samples) of the channels chan_len = np.zeros(len(list_data), dtype=np.int64) for ind_chan, list_blocks in enumerate(list_data): for ind_block in list_blocks: chan_len[ind_chan] += count_samples( file_blocks[ind_block]['m_length']) # step 5: find channels for which data are available ind_valid_chan = np.nonzero(chan_len)[0] # step 6: load the data # TODO give the possibility to load data as AnalogSignalArrays for ind_chan in ind_valid_chan: list_blocks = list_data[ind_chan] ind = 0 # index in the data vector # read time stamp for the beginning of the signal form = '>> from neo import io >>> import quantities as pq >>> r = io.AsciiImageIO(file_name='File_asciiimage_1.txt',nb_frame=511, nb_row=100, ... nb_column=100,units='mm', sampling_rate=1.0*pq.Hz, ... spatial_scale=1.0*pq.mm) >>> block = r.read_block() read block creating segment returning block >>> block Block with 1 segments file_origin: 'File_asciiimage_1.txt # segments (N=1) 0: Segment with 1 imagesequences # analogsignals (N=0) """ name = 'AsciiImage IO' description = "Neo IO module for optical imaging data stored as a folder of TIFF images." 
_prefered_signal_group_mode = 'group-by-same-units' is_readable = True is_writable = False supported_objects = [Block, Segment, ImageSequence] readable_objects = supported_objects writeable_object = [] support_lazy = False read_params = {} write_params = {} extensions = [] mode = 'file' def __init__(self, file_name=None, nb_frame=None, nb_row=None, nb_column=None, units=None, sampling_rate=None, spatial_scale=None, **kwargs): BaseIO.__init__(self, file_name, **kwargs) self.nb_frame = nb_frame self.nb_row = nb_row self.nb_column = nb_column self.units = units self.sampling_rate = sampling_rate self.spatial_scale = spatial_scale def read_block(self, lazy=False, **kwargs): file = open(self.filename, 'r') data = file.read() print("read block") liste_value = [] record = [] for i in range(len(data)): if data[i] == "\n" or data[i] == "\t": t = "".join(str(e) for e in record) liste_value.append(t) record = [] else: record.append(data[i]) data = [] nb = 0 for i in range(self.nb_frame): data.append([]) for y in range(self.nb_row): data[i].append([]) for x in range(self.nb_column): data[i][y].append(liste_value[nb]) nb += 1 image_sequence = ImageSequence(np.array(data, dtype='float'), units=self.units, sampling_rate=self.sampling_rate, spatial_scale=self.spatial_scale) file.close() print("creating segment") segment = Segment(file_origin=self.filename) segment.imagesequences = [image_sequence] block = Block(file_origin=self.filename) segment.block = block block.segments.append(segment) print("returning block") return block neo-0.10.0/neo/io/asciisignalio.py0000644000076700000240000004007614077554132017355 0ustar andrewstaff00000000000000""" Class for reading/writing analog signals in a text file. Each column represents an AnalogSignal. All AnalogSignals have the same sampling rate. Covers many cases when parts of a file can be viewed as a CSV format. 
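`AsciiImageIO.read_block` above splits the file contents on tabs and newlines, then nests the flat value list into frames, rows and columns before wrapping it in an `ImageSequence`. A rough dependency-free sketch of that reshaping — the name `parse_ascii_image` is hypothetical, and it converts values to `float` eagerly where the IO defers that to `np.array(..., dtype='float')`:

```python
def parse_ascii_image(text, nb_frame, nb_row, nb_column):
    # Values are delimited by tabs or newlines, as in the IO's reader loop.
    values = [float(v) for v in text.replace("\n", "\t").split("\t") if v]
    assert len(values) >= nb_frame * nb_row * nb_column, "file too short"
    it = iter(values)
    # Nest the flat list: data[frame][row][column].
    return [[[next(it) for _ in range(nb_column)]
             for _ in range(nb_row)]
            for _ in range(nb_frame)]

data = parse_ascii_image("1\t2\n3\t4\n", nb_frame=1, nb_row=2, nb_column=2)
print(data)  # -> [[[1.0, 2.0], [3.0, 4.0]]]
```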
Supported : Read/Write Author: sgarcia """ import csv import os import json import numpy as np import quantities as pq from neo.io.baseio import BaseIO from neo.core import AnalogSignal, IrregularlySampledSignal, Segment, Block class AsciiSignalIO(BaseIO): """ Class for reading signals in generic ascii format. Columns represent signals. They all share the same sampling rate. The sampling rate is externally known or the first column could hold the time vector. Usage: >>> from neo import io >>> r = io.AsciiSignalIO(filename='File_asciisignal_2.txt') >>> seg = r.read_segment() >>> print seg.analogsignals [ AnalogSignal sampling_rate = 1.0 / np.mean(np.diff(sig[:, self.timecolumn])) / self.time_units else: # not equally spaced --> IrregularlySampledSignal sampling_rate = None t_start = sig[0, self.timecolumn] * self.time_units if self.signal_group_mode == 'all-in-one': if self.timecolumn is not None: mask = list(range(sig.shape[1])) if self.timecolumn >= 0: mask.remove(self.timecolumn) else: # allow negative column index mask.remove(sig.shape[1] + self.timecolumn) signal = sig[:, mask] else: signal = sig if sampling_rate is None: irr_sig = IrregularlySampledSignal(signal[:, self.timecolumn] * self.time_units, signal * self.units, name='multichannel') seg.irregularlysampledsignals.append(irr_sig) else: ana_sig = AnalogSignal(signal * self.units, sampling_rate=sampling_rate, t_start=t_start, channel_index=self.usecols or np.arange(signal.shape[1]), name='multichannel') seg.analogsignals.append(ana_sig) else: if self.timecolumn is not None and self.timecolumn < 0: time_col = sig.shape[1] + self.timecolumn else: time_col = self.timecolumn for i in range(sig.shape[1]): if time_col == i: continue signal = sig[:, i] * self.units if sampling_rate is None: irr_sig = IrregularlySampledSignal(sig[:, time_col] * self.time_units, signal, t_start=t_start, channel_index=i, name='Column %d' % i) seg.irregularlysampledsignals.append(irr_sig) else: ana_sig = AnalogSignal(signal, 
sampling_rate=sampling_rate, t_start=t_start, channel_index=i, name='Column %d' % i) seg.analogsignals.append(ana_sig) seg.create_many_to_one_relationship() return seg def read_metadata(self): """ Read IO parameters from an associated JSON file """ # todo: also read annotations if self.metadata_filename is None: candidate = os.path.splitext(self.filename)[0] + "_about.json" if os.path.exists(candidate): self.metadata_filename = candidate else: return {} if os.path.exists(self.metadata_filename): with open(self.metadata_filename) as fp: metadata = json.load(fp) for key in "sampling_rate", "t_start": if key in metadata: metadata[key] = pq.Quantity(metadata[key]["value"], metadata[key]["units"]) for key in "units", "time_units": if key in metadata: metadata[key] = pq.Quantity(1, metadata[key]) return metadata else: return {} def write_segment(self, segment): """ Write a segment and AnalogSignal in a text file. """ # todo: check all analog signals have the same length, physical dimensions # and sampling rates l = [] if self.timecolumn is not None: if self.timecolumn != 0: raise NotImplementedError("Only column 0 currently supported for writing times") l.append(segment.analogsignals[0].times[:, np.newaxis].rescale(self.time_units)) # check signals are compatible (size, sampling rate), otherwise we # can't/shouldn't concatenate them # also set sampling_rate, t_start, units, time_units from signal(s) signal0 = segment.analogsignals[0] for attr in ("sampling_rate", "units", "shape"): val0 = getattr(signal0, attr) for signal in segment.analogsignals[1:]: val1 = getattr(signal, attr) if val0 != val1: raise Exception("Signals being written have different " + attr) setattr(self, attr, val0) # todo t_start, time_units self.time_units = signal0.times.units self.t_start = min(sig.t_start for sig in segment.analogsignals) for anaSig in segment.analogsignals: l.append(anaSig.rescale(self.units).magnitude) sigs = np.concatenate(l, axis=1) # print sigs.shape np.savetxt(self.filename, 
sigs, delimiter=self.delimiter) if self.metadata_filename is not None: self.write_metadata() def write_block(self, block): """ Can only write blocks containing a single segment. """ # in future, maybe separate segments by a blank link, or a "magic" comment if len(block.segments) > 1: raise ValueError("Can only write blocks containing a single segment." " This block contains {} segments.".format(len(block.segments))) self.write_segment(block.segments[0]) def write_metadata(self, metadata_filename=None): """ Write IO parameters to an associated JSON file """ # todo: also write annotations metadata = { "filename": self.filename, "delimiter": self.delimiter, "usecols": self.usecols, "skiprows": self.skiprows, "timecolumn": self.timecolumn, "sampling_rate": { "value": float(self.sampling_rate.magnitude), "units": self.sampling_rate.dimensionality.string }, "t_start": { "value": float(self.t_start.magnitude), "units": self.t_start.dimensionality.string }, "units": self.units.dimensionality.string, "time_units": self.time_units.dimensionality.string, "method": self.method, "signal_group_mode": self.signal_group_mode } if metadata_filename is None: if self.metadata_filename is None: self.metadata_filename = os.path.splitext(self.filename) + "_about.json" else: self.metadata_filename = metadata_filename with open(self.metadata_filename, "w") as fp: json.dump(metadata, fp) return self.metadata_filename neo-0.10.0/neo/io/asciispiketrainio.py0000644000076700000240000000710214077527451020246 0ustar andrewstaff00000000000000""" Classe for reading/writing SpikeTrains in a text file. It is the simple case where different spiketrains are written line by line. Supported : Read/Write Author: sgarcia """ import os import numpy as np import quantities as pq from neo.io.baseio import BaseIO from neo.core import Segment, SpikeTrain class AsciiSpikeTrainIO(BaseIO): """ Class for reading/writing SpikeTrains in a text file. Each Spiketrain is a line. 
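`write_metadata` and `read_metadata` above persist the IO parameters in a JSON sidecar named `<stem>_about.json`, serializing quantity-valued fields as `{"value": ..., "units": ...}` pairs. A minimal sketch of the sidecar's shape and round trip — the field values here are made up for illustration:

```python
import json
import os
import tempfile

# Shape of the "_about.json" sidecar written by AsciiSignalIO.write_metadata.
metadata = {
    "filename": "signals.txt",
    "delimiter": "\t",
    "timecolumn": 0,
    "sampling_rate": {"value": 1000.0, "units": "Hz"},
    "t_start": {"value": 0.0, "units": "s"},
    "units": "mV",
    "time_units": "s",
}

path = os.path.join(tempfile.mkdtemp(), "signals_about.json")
with open(path, "w") as fp:
    json.dump(metadata, fp)

# read_metadata would now turn the value/units pairs back into pq.Quantity.
with open(path) as fp:
    loaded = json.load(fp)
print(loaded["sampling_rate"]["units"])  # -> Hz
```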
Usage: >>> from neo import io >>> r = io.AsciiSpikeTrainIO( filename = 'File_ascii_spiketrain_1.txt') >>> seg = r.read_segment() >>> print seg.spiketrains # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE [>> import neo >>> r = neo.io.AxographIO(filename=filename) >>> blk = r.read_block(signal_group_mode='split-all') >>> display(blk) >>> # get signals >>> seg_index = 0 # episode number >>> sigs = [sig for sig in blk.segments[seg_index].analogsignals ... if sig.name in channel_names] >>> display(sigs) >>> # get event markers (same for all segments/episodes) >>> ev = blk.segments[0].events[0] >>> print([ev for ev in zip(ev.times, ev.labels)]) >>> # get interval bars (same for all segments/episodes) >>> ep = blk.segments[0].epochs[0] >>> print([ep for ep in zip(ep.times, ep.durations, ep.labels)]) >>> # get notes >>> print(blk.annotations['notes']) """ name = 'AxographIO' description = 'This IO reads .axgd/.axgx files created with AxoGraph' _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename='', force_single_segment=False): AxographRawIO.__init__(self, filename, force_single_segment) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/axonaio.py0000644000076700000240000000054114066374330016164 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.axonarawio import AxonaRawIO class AxonaIO(AxonaRawIO, BaseFromRaw): name = 'Axona IO' description = "Read raw continuous data (.bin and .set files)" def __init__(self, filename): AxonaRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/axonio.py0000644000076700000240000000743314066375716016043 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.axonrawio import AxonRawIO from neo.core import Block, Segment, AnalogSignal, Event import quantities as pq class AxonIO(AxonRawIO, BaseFromRaw): """ Class for reading data from pCLAMP and AxoScope files (.abf version 1 and 2), developed by 
Molecular device/Axon technologies. - abf = Axon binary file - atf is a text file based format from axon that could be read by AsciiIO (but this file is less efficient.) Here an important note from erikli@github for user who want to get the : With Axon ABF2 files, the information that you need to recapitulate the original stimulus waveform (both digital and analog) is contained in multiple places. - `AxonIO._axon_info['protocol']` -- things like number of samples in episode - `AxonIO.axon_info['section']['ADCSection']` | `AxonIO.axon_info['section']['DACSection']` -- things about the number of channels and channel properties - `AxonIO._axon_info['protocol']['nActiveDACChannel']` -- bitmask specifying which DACs are actually active - `AxonIO._axon_info['protocol']['nDigitalEnable']` -- bitmask specifying which set of Epoch timings should be used to specify the duration of digital outputs - `AxonIO._axon_info['dictEpochInfoPerDAC']` -- dict of dict. First index is DAC channel and second index is Epoch number (i.e. information about Epoch A in Channel 2 would be in `AxonIO._axon_info['dictEpochInfoPerDAC'][2][0]`) - `AxonIO._axon_info['EpochInfo']` -- list of dicts containing information about each Epoch's digital out pattern. Digital out is a bitmask with least significant bit corresponding to Digital Out 0 - `AxonIO._axon_info['listDACInfo']` -- information about DAC name, scale factor, holding level, etc - `AxonIO._t_starts` -- start time of each sweep in a unified time basis - `AxonIO._sampling_rate` The current AxonIO.read_protocol() method utilizes a subset of these. In particular I know it doesn't consider `nDigitalEnable`, `EpochInfo`, or `nActiveDACChannel` and it doesn't account for different types of Epochs offered by Clampex/pClamp other than discrete steps (such as ramp, pulse train, etc and encoded by `nEpochType` in the EpochInfoPerDAC section). 
I'm currently parsing a superset of the properties used by read_protocol() in my analysis scripts, but that code still doesn't parse the full information and isn't in a state where it could be committed and I can't currently prioritize putting together all the code that would parse the full set of data. The `AxonIO._axon_info['EpochInfo']` section doesn't currently exist. """ _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): AxonRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) def read_protocol(self): """ Read the protocol waveform of the file, if present; function works with ABF2 only. Protocols can be reconstructed from the ABF1 header. Returns: list of segments (one for every episode) with list of analog signls (one for every DAC). """ sigs_by_segments, sig_names, sig_units = self.read_raw_protocol() segments = [] for seg_index, sigs in enumerate(sigs_by_segments): seg = Segment(index=seg_index) t_start = self._t_starts[seg_index] * pq.s for c, sig in enumerate(sigs): ana_sig = AnalogSignal(sig, sampling_rate=self._sampling_rate * pq.Hz, t_start=t_start, name=sig_names[c], units=sig_units[c]) seg.analogsignals.append(ana_sig) segments.append(seg) return segments neo-0.10.0/neo/io/basefromrawio.py0000644000076700000240000003262414066375716017406 0ustar andrewstaff00000000000000""" BaseFromRaw ====== BaseFromRaw implement a bridge between the new neo.rawio API and the neo.io legacy that give neo.core object. The neo.rawio API is more restricted and limited and do not cover tricky cases with asymetrical tree of neo object. But if a format is done in neo.rawio the neo.io is done for free by inheritance of this class. Furthermore, IOs that inherits this BaseFromRaw also have the ability of the lazy load with proxy objects. 
""" import collections import warnings import numpy as np from neo import logging_handler from neo.core import (AnalogSignal, Block, Epoch, Event, IrregularlySampledSignal, Group, Segment, SpikeTrain) from neo.io.baseio import BaseIO from neo.io.proxyobjects import (AnalogSignalProxy, SpikeTrainProxy, EventProxy, EpochProxy, ensure_signal_units, check_annotations, ensure_second, proxyobjectlist) import quantities as pq class BaseFromRaw(BaseIO): """ This implement generic reader on top of RawIO reader. Arguments depend on `mode` (dir or file) File case:: reader = BlackRockIO(filename='FileSpec2.3001.nev') Dir case:: reader = NeuralynxIO(dirname='Cheetah_v5.7.4/original_data') Other arguments are IO specific. """ is_readable = True is_writable = False supported_objects = [Block, Segment, AnalogSignal, SpikeTrain, Group, Event, Epoch] readable_objects = [Block, Segment] writeable_objects = [] support_lazy = True name = 'BaseIO' description = '' extentions = [] mode = 'file' _prefered_signal_group_mode = 'group-by-same-units' # 'split-all' def __init__(self, *args, **kargs): BaseIO.__init__(self, *args, **kargs) self.parse_header() def read_block(self, block_index=0, lazy=False, create_group_across_segment=None, signal_group_mode=None, load_waveforms=False): """ :param block_index: int default 0. In case of several block block_index can be specified. :param lazy: False by default. :param create_group_across_segment: bool or dict If True : * Create a neo.Group to group AnalogSignal segments * Create a neo.Group to group SpikeTrain across segments * Create a neo.Group to group Event across segments * Create a neo.Group to group Epoch across segments With a dict the behavior can be controlled more finely create_group_across_segment = { 'AnalogSignal': True, 'SpikeTrain': False, ...} :param signal_group_mode: 'split-all' or 'group-by-same-units' (default depend IO): This control behavior for grouping channels in AnalogSignal. 
* 'split-all': each channel will give an AnalogSignal * 'group-by-same-units' all channel sharing the same quantity units ar grouped in a 2D AnalogSignal :param load_waveforms: False by default. Control SpikeTrains.waveforms is None or not. """ if signal_group_mode is None: signal_group_mode = self._prefered_signal_group_mode l = ['AnalogSignal', 'SpikeTrain', 'Event', 'Epoch'] if create_group_across_segment is None: # @andrew @ julia @michael ? # I think here the default None could give this create_group_across_segment = { 'AnalogSignal': True, #because mimic the old ChannelIndex for AnalogSignals 'SpikeTrain': False, # False by default because can create too many object for simulation 'Event': False, # not implemented yet 'Epoch': False, # not implemented yet } elif isinstance(create_group_across_segment, bool): # bool to dict v = create_group_across_segment create_group_across_segment = { k: v for k in l} elif isinstance(create_group_across_segment, dict): # put False to missing keys create_group_across_segment = {k: create_group_across_segment.get(k, False) for k in l} else: raise ValueError('create_group_across_segment must be bool or dict') # annotations bl_annotations = dict(self.raw_annotations['blocks'][block_index]) bl_annotations.pop('segments') bl_annotations = check_annotations(bl_annotations) bl = Block(**bl_annotations) # Group for AnalogSignals coming from signal_streams if create_group_across_segment['AnalogSignal']: signal_streams = self.header['signal_streams'] sub_streams = self.get_sub_signal_streams(signal_group_mode) sub_stream_groups = [] for sub_stream in sub_streams: stream_index, inner_stream_channels, name = sub_stream group = Group(name=name, stream_id=signal_streams[stream_index]['id']) bl.groups.append(group) sub_stream_groups.append(group) if create_group_across_segment['SpikeTrain']: spike_channels = self.header['spike_channels'] st_groups = [] for c in range(spike_channels.size): group = Group(name='SpikeTrain group {}'.format(c)) 
group.annotate(unit_name=spike_channels[c]['name']) group.annotate(unit_id=spike_channels[c]['id']) bl.groups.append(group) st_groups.append(group) if create_group_across_segment['Event']: # @andrew @ julia @michael : # Do we need this ? I guess yes raise NotImplementedError() if create_group_across_segment['Epoch']: # @andrew @ julia @michael : # Do we need this ? I guess yes raise NotImplementedError() # Read all segments for seg_index in range(self.segment_count(block_index)): seg = self.read_segment(block_index=block_index, seg_index=seg_index, lazy=lazy, signal_group_mode=signal_group_mode, load_waveforms=load_waveforms) bl.segments.append(seg) # create link between group (across segment) and data objects for seg in bl.segments: if create_group_across_segment['AnalogSignal']: for c, anasig in enumerate(seg.analogsignals): sub_stream_groups[c].add(anasig) if create_group_across_segment['SpikeTrain']: for c, sptr in enumerate(seg.spiketrains): st_groups[c].add(sptr) bl.create_many_to_one_relationship() return bl def read_segment(self, block_index=0, seg_index=0, lazy=False, signal_group_mode=None, load_waveforms=False, time_slice=None, strict_slicing=True): """ :param block_index: int default 0. In case of several blocks block_index can be specified. :param seg_index: int default 0. Index of segment. :param lazy: False by default. :param signal_group_mode: 'split-all' or 'group-by-same-units' (default depend IO): This control behavior for grouping channels in AnalogSignal. * 'split-all': each channel will give an AnalogSignal * 'group-by-same-units' all channel sharing the same quantity units ar grouped in a 2D AnalogSignal :param load_waveforms: False by default. Control SpikeTrains.waveforms is None or not. :param time_slice: None by default means no limit. A time slice is (t_start, t_stop) both are quantities. All object AnalogSignal, SpikeTrain, Event, Epoch will load only in the slice. :param strict_slicing: True by default. 
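`read_block` above normalizes its `create_group_across_segment` argument from `None`, a plain `bool`, or a partial `dict` into a complete per-type dict. That normalization can be sketched in isolation — `normalize_groups` is a hypothetical free function mirroring the in-method logic:

```python
def normalize_groups(create_group_across_segment):
    # None -> documented defaults; bool -> broadcast to every object kind;
    # dict -> missing keys default to False; anything else -> error.
    kinds = ['AnalogSignal', 'SpikeTrain', 'Event', 'Epoch']
    if create_group_across_segment is None:
        return {'AnalogSignal': True, 'SpikeTrain': False,
                'Event': False, 'Epoch': False}
    if isinstance(create_group_across_segment, bool):
        return {k: create_group_across_segment for k in kinds}
    if isinstance(create_group_across_segment, dict):
        return {k: create_group_across_segment.get(k, False) for k in kinds}
    raise ValueError('create_group_across_segment must be bool or dict')

print(normalize_groups(True))
print(normalize_groups({'SpikeTrain': True}))
```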
Controls whether an error is raised when t_start or t_stop falls outside the real time range of the segment. """ if lazy: assert time_slice is None,\ 'For lazy=True, time_slice must be None; pass it later to LazyObject.load(time_slice=...)' assert not load_waveforms,\ 'For lazy=True, load_waveforms must be False; pass it later to SpikeTrain.load(load_waveforms=...)' if signal_group_mode is None: signal_group_mode = self._prefered_signal_group_mode # annotations seg_annotations = self.raw_annotations['blocks'][block_index]['segments'][seg_index].copy() for k in ('signals', 'spikes', 'events'): seg_annotations.pop(k) seg_annotations = check_annotations(seg_annotations) seg = Segment(index=seg_index, **seg_annotations) # AnalogSignal signal_streams = self.header['signal_streams'] sub_streams = self.get_sub_signal_streams(signal_group_mode) for sub_stream in sub_streams: stream_index, inner_stream_channels, name = sub_stream anasig = AnalogSignalProxy(rawio=self, stream_index=stream_index, inner_stream_channels=inner_stream_channels, block_index=block_index, seg_index=seg_index) anasig.name = name if not lazy: # ... and get the real AnalogSignal if not lazy anasig = anasig.load(time_slice=time_slice, strict_slicing=strict_slicing) anasig.segment = seg seg.analogsignals.append(anasig) # SpikeTrain and waveforms (optional) spike_channels = self.header['spike_channels'] for spike_channel_index in range(len(spike_channels)): # make a proxy... sptr = SpikeTrainProxy(rawio=self, spike_channel_index=spike_channel_index, block_index=block_index, seg_index=seg_index) if not lazy: # ...
and get the real SpikeTrain if not lazy sptr = sptr.load(time_slice=time_slice, strict_slicing=strict_slicing, load_waveforms=load_waveforms) # TODO magnitude_mode='rescaled'/'raw' sptr.segment = seg seg.spiketrains.append(sptr) # Events/Epoch event_channels = self.header['event_channels'] for chan_ind in range(len(event_channels)): if event_channels['type'][chan_ind] == b'event': e = EventProxy(rawio=self, event_channel_index=chan_ind, block_index=block_index, seg_index=seg_index) if not lazy: e = e.load(time_slice=time_slice, strict_slicing=strict_slicing) e.segment = seg seg.events.append(e) elif event_channels['type'][chan_ind] == b'epoch': e = EpochProxy(rawio=self, event_channel_index=chan_ind, block_index=block_index, seg_index=seg_index) if not lazy: e = e.load(time_slice=time_slice, strict_slicing=strict_slicing) e.segment = seg seg.epochs.append(e) seg.create_many_to_one_relationship() return seg def get_sub_signal_streams(self, signal_group_mode='group-by-same-units'): """ When signal streams don't have homogeneous SI units across channels, they have to be split in sub streams to construct AnalogSignal objects with unique units. For backward compatibility (neo version <= 0.5) sub-streams can also be used to generate one AnalogSignal per channel. 
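The 'group-by-same-units' splitting described above can be sketched on a small hypothetical channel table (the channel names and units below are invented for illustration; the real table comes from `self.header['signal_channels']`):

```python
import numpy as np

# Hypothetical channel table for one stream: three channels, two distinct units.
channels = np.array(
    [('ch0', 'uV'), ('ch1', 'uV'), ('ch2', 'mV')],
    dtype=[('name', 'U8'), ('units', 'U8')])

# np.unique sorts the unit strings; reindexing with np.sort(idx)
# restores first-appearance order, as the method above does.
_, idx = np.unique(channels['units'], return_index=True)
all_units = channels['units'][np.sort(idx)]

sub_streams = []
for units in all_units:
    inner, = np.nonzero(channels['units'] == units)
    name = 'Channels: (' + ' '.join(channels[inner]['name']) + ')'
    sub_streams.append((inner.tolist(), name))

print(sub_streams)  # two sub-streams: the two uV channels together, then the mV channel
```

Each resulting sub-stream carries the channel indices within the stream plus a readable name, which is exactly the shape of the tuples returned by `get_sub_signal_streams`.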
""" signal_streams = self.header['signal_streams'] signal_channels = self.header['signal_channels'] sub_streams = [] for stream_index in range(len(signal_streams)): stream_id = signal_streams[stream_index]['id'] stream_name = signal_streams[stream_index]['name'] mask = signal_channels['stream_id'] == stream_id channels = signal_channels[mask] if signal_group_mode == 'group-by-same-units': # np.unique sorts the values; reindexing with np.sort(idx) restores first-appearance order _, idx = np.unique(channels['units'], return_index=True) all_units = channels['units'][np.sort(idx)] if len(all_units) == 1: # no sub-stream needed # None will be transformed into a slice later inner_stream_channels = None name = stream_name sub_stream = (stream_index, inner_stream_channels, name) sub_streams.append(sub_stream) else: for units in all_units: inner_stream_channels, = np.nonzero(channels['units'] == units) chan_names = channels[inner_stream_channels]['name'] name = 'Channels: (' + ' '.join(chan_names) + ')' sub_stream = (stream_index, inner_stream_channels, name) sub_streams.append(sub_stream) elif signal_group_mode == 'split-all': # mimic the neo <= 0.5 behaviour for i, channel in enumerate(channels): inner_stream_channels = [i] name = channels[i]['name'] sub_stream = (stream_index, inner_stream_channels, name) sub_streams.append(sub_stream) else: raise NotImplementedError('unknown signal_group_mode: {}'.format(signal_group_mode)) return sub_streams neo-0.10.0/neo/io/baseio.py0000644000076700000240000002122214066375716016000 0ustar andrewstaff00000000000000""" baseio ====== Classes ------- BaseIO - abstract class which should be overridden, managing how a file will load/write its data If you want a model for developing a new IO, start from ExampleIO.
""" try: from collections.abc import Sequence except ImportError: from collections import Sequence import logging from neo import logging_handler from neo.core import (AnalogSignal, Block, Epoch, Event, Group, IrregularlySampledSignal, ChannelView, Segment, SpikeTrain, ImageSequence, RectangularRegionOfInterest, CircularRegionOfInterest, PolygonRegionOfInterest) read_error = "This type is not supported by this file format for reading" write_error = "This type is not supported by this file format for writing" class BaseIO: """ Generic class to handle all the file read/write methods for the key objects of the core class. This template is file-reading/writing oriented but it can also handle data read from/written to a database such as TDT system tanks or SQLite files. This is an abstract class that will be subclassed for each format. The key methods of the class are: - ``read()`` - Read the whole object structure, return a list of Block objects - ``read_block(lazy=True, **params)`` - Read Block object from file with some parameters - ``read_segment(lazy=True, **params)`` - Read Segment object from file with some parameters - ``read_spiketrainlist(lazy=True, **params)`` - Read SpikeTrainList object from file with some parameters - ``write()`` - Write the whole object structure - ``write_block(**params)`` - Write Block object to file with some parameters - ``write_segment(**params)`` - Write Segment object to file with some parameters - ``write_spiketrainlist(**params)`` - Write SpikeTrainList object to file with some parameters The class can also implement these methods: - ``read_XXX(lazy=True, **params)`` - ``write_XXX(**params)`` where XXX could be one of the objects supported by the IO Each class is able to declare what can be accessed or written directly, as described by **readable_objects** and **writeable_objects**. The object types can be one of the classes defined in neo.core (Block, Segment, AnalogSignal, ...)
Each class does not necessarily support the whole neo hierarchy, but only part of it. This is described with **supported_objects**. All IOs must support at least Block with a read_block() ** start a new IO ** If you want to implement your own file format, you should create a class that inherits from this BaseIO class and implements the methods above. See ExampleIO in exampleio.py """ is_readable = False is_writable = False supported_objects = [] readable_objects = [] writeable_objects = [] support_lazy = False read_params = {} write_params = {} name = 'BaseIO' description = '' extensions = [] mode = 'file' # or 'fake' or 'dir' or 'database' def __init__(self, filename=None, **kargs): self.filename = str(filename) # create a logger for the IO class fullname = self.__class__.__module__ + '.' + self.__class__.__name__ self.logger = logging.getLogger(fullname) # create a logger for 'neo' and add a handler to it if it doesn't # have one already. # (it will also not add one if the root logger has a handler) corename = self.__class__.__module__.split('.')[0] corelogger = logging.getLogger(corename) rootlogger = logging.getLogger() if not corelogger.handlers and not rootlogger.handlers: corelogger.addHandler(logging_handler) ######## General read/write methods ####################### def read(self, lazy=False, **kargs): """ Return all data from the file as a list of Blocks """ if lazy and not self.support_lazy: raise ValueError("This IO module does not support lazy loading") if Block in self.readable_objects: if (hasattr(self, 'read_all_blocks') and callable(getattr(self, 'read_all_blocks'))): return self.read_all_blocks(lazy=lazy, **kargs) return [self.read_block(lazy=lazy, **kargs)] elif Segment in self.readable_objects: bl = Block(name='One segment only') seg = self.read_segment(lazy=lazy, **kargs) bl.segments.append(seg) bl.create_many_to_one_relationship() return [bl] else: raise NotImplementedError def write(self, bl, **kargs): if Block in self.writeable_objects:
if isinstance(bl, Sequence): assert hasattr(self, 'write_all_blocks'), \ '%s does not offer to store a sequence of blocks' % \ self.__class__.__name__ self.write_all_blocks(bl, **kargs) else: self.write_block(bl, **kargs) elif Segment in self.writeable_objects: assert len(bl.segments) == 1, \ ('%s is based on segment so if you try to write a block it ' 'must contain only one Segment') % self.__class__.__name__ self.write_segment(bl.segments[0], **kargs) else: raise NotImplementedError ######## All individual read methods ####################### def read_block(self, **kargs): assert (Block in self.readable_objects), read_error def read_segment(self, **kargs): assert (Segment in self.readable_objects), read_error def read_spiketrain(self, **kargs): assert (SpikeTrain in self.readable_objects), read_error def read_analogsignal(self, **kargs): assert (AnalogSignal in self.readable_objects), read_error def read_imagesequence(self, **kargs): assert (ImageSequence in self.readable_objects), read_error def read_rectangularregionofinterest(self, **kargs): assert (RectangularRegionOfInterest in self.readable_objects), read_error def read_circularregionofinterest(self, **kargs): assert (CircularRegionOfInterest in self.readable_objects), read_error def read_polygonregionofinterest(self, **kargs): assert (PolygonRegionOfInterest in self.readable_objects), read_error def read_irregularlysampledsignal(self, **kargs): assert (IrregularlySampledSignal in self.readable_objects), read_error def read_channelview(self, **kargs): assert (ChannelView in self.readable_objects), read_error def read_event(self, **kargs): assert (Event in self.readable_objects), read_error def read_epoch(self, **kargs): assert (Epoch in self.readable_objects), read_error def read_group(self, **kargs): assert (Group in self.readable_objects), read_error ######## All individual write methods ####################### def write_block(self, bl, **kargs): assert (Block in self.writeable_objects), write_error def
write_segment(self, seg, **kargs): assert (Segment in self.writeable_objects), write_error def write_spiketrain(self, sptr, **kargs): assert (SpikeTrain in self.writeable_objects), write_error def write_analogsignal(self, anasig, **kargs): assert (AnalogSignal in self.writeable_objects), write_error def write_imagesequence(self, imseq, **kargs): assert (ImageSequence in self.writeable_objects), write_error def write_rectangularregionofinterest(self, rectroi, **kargs): assert (RectangularRegionOfInterest in self.writeable_objects), read_error def write_circularregionofinterest(self, circroi, **kargs): assert (CircularRegionOfInterest in self.writeable_objects), read_error def write_polygonregionofinterest(self, polyroi, **kargs): assert (PolygonRegionOfInterest in self.writeable_objects), read_error def write_irregularlysampledsignal(self, irsig, **kargs): assert (IrregularlySampledSignal in self.writeable_objects), write_error def write_channelview(self, chv, **kargs): assert (ChannelView in self.writeable_objects), write_error def write_event(self, ev, **kargs): assert (Event in self.writeable_objects), write_error def write_epoch(self, ep, **kargs): assert (Epoch in self.writeable_objects), write_error def write_group(self, group, **kargs): assert (Group in self.writeable_objects), write_errorneo-0.10.0/neo/io/bci2000io.py0000644000076700000240000000063714066375716016134 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.bci2000rawio import BCI2000RawIO class BCI2000IO(BCI2000RawIO, BaseFromRaw): """Class for reading data from a BCI2000 .dat file, either version 1.0 or 1.1""" _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): BCI2000RawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/blackrockio.py0000644000076700000240000000120614060430470017000 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from 
neo.rawio.blackrockrawio import BlackrockRawIO class BlackrockIO(BlackrockRawIO, BaseFromRaw): """ Supplementary class for reading BlackRock data using only a single nsx file. """ name = 'Blackrock IO for single nsx' description = "This IO reads a pair of corresponding nev and nsX files of the Blackrock " \ "" + "(Cerebus) recording system." _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename, **kargs): BlackrockRawIO.__init__(self, filename=filename, **kargs) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/blkio.py0000644000076700000240000003232714060430470015625 0ustar andrewstaff00000000000000from .baseio import BaseIO from neo.core import ImageSequence, Segment, Block import numpy as np import struct import os import math import quantities as pq class BlkIO(BaseIO): """ Neo IO module for optical imaging data stored as BLK file *Usage*: >>> from neo import io >>> import quantities as pq >>> r = io.BlkIO("file_blk_1.BLK",units='V',sampling_rate=1.0*pq.Hz, ... spatial_scale=1.0*pq.Hz) >>> block = r.read_block() reading the header reading block returning block >>> block Block with 6 segments file_origin: 'file_blk_1.BLK' # segments (N=6) 0: Segment with 1 imagesequences description: 'stim nb:0' # analogsignals (N=0) 1: Segment with 1 imagesequences description: 'stim nb:1' # analogsignals (N=0) 2: Segment with 1 imagesequences description: 'stim nb:2' # analogsignals (N=0) 3: Segment with 1 imagesequences description: 'stim nb:3' # analogsignals (N=0) 4: Segment with 1 imagesequences description: 'stim nb:4' # analogsignals (N=0) 5: Segment with 1 imagesequences description: 'stim nb:5' # analogsignals (N=0) Many thanks to Thomas Deneux for the MATLAB code on which this was based. 
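The header parser in the `read_block` method below reads fixed-size typed fields one after another with `struct.unpack`. A minimal self-contained sketch of that pattern, using a hypothetical two-field header rather than the real BLK layout:

```python
import io
import struct

# Hypothetical two-field header: an int32 followed by a float32.
# Native byte order is assumed here; the real BLK layout is defined
# by the field list in read_header below.
raw = struct.pack('if', 42, 1.5)
f = io.BytesIO(raw)

header = {}
header['framewidth'] = struct.unpack('i', f.read(4))[0]
header['versionid'] = struct.unpack('f', f.read(4))[0]

print(header)
```

The real parser drives the same read-and-unpack step from a table of `[name, type, count]` entries, which keeps the field order in one place.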
""" name = 'BLK IO' description = "Neo IO module for optical imaging data stored as BLK file" _prefered_signal_group_mode = 'group-by-same-units' is_readable = True is_writable = False supported_objects = [Block, Segment, ImageSequence] readable_objects = supported_objects support_lazy = False read_params = {} write_params = {} extensions = [] mode = 'file' def __init__(self, file_name=None, units=None, sampling_rate=None, spatial_scale=None, **kwargs): BaseIO.__init__(self, file_name, **kwargs) self.units = units self.sampling_rate = sampling_rate self.spatial_scale = spatial_scale def read(self, lazy=False, **kwargs): """ Return all data from the file as a list of Blocks """ if lazy: raise ValueError('This IO module does not support lazy loading') return [self.read_block(lazy=lazy, units=self.units, sampling_rate=self.sampling_rate, spatial_scale=self.spatial_scale, **kwargs)] def read_block(self, lazy=False, **kargs): def read(name, type, nb, dictionary, file): if type == 'int32': # dictionary[name] = int.from_bytes(file.read(4), byteorder=sys.byteorder, signed=True) dictionary[name] = struct.unpack("i", file.read(4))[0] if type == 'float32': dictionary[name] = struct.unpack('f', file.read(4))[0] if type == 'uint8': l = [] for i in range(nb): l.append(chr(struct.unpack('B', file.read(1))[0])) dictionary[name] = l if type == 'uint16': l = [] for i in range(nb): l.append((struct.unpack('H', file.read(2)))[0]) dictionary[name] = l if type == 'short': dictionary[name] = struct.unpack('h', file.read(2))[0] return dictionary def read_header(file_name): file = open(file_name, "rb") i = [ ['file_size', 'int32', 1], ['checksum_header', 'int32', 1], ['check_data', 'int32', 1], ['lenheader', 'int32', 1], ['versionid', 'float32', 1], ['filetype', 'int32', 1], ['filesubtype', 'int32', 1], ['datatype', 'int32', 1], ['sizeof', 'int32', 1], ['framewidth', 'int32', 1], ['frameheight', 'int32', 1], ['nframesperstim', 'int32', 1], ['nstimuli', 'int32', 1], ['initialxbinfactor',
'int32', 1], ['initialybinfactor', 'int32', 1], ['xbinfactor', 'int32', 1], ['ybinfactor', 'int32', 1], ['username', 'uint8', 32], ['recordingdate', 'uint8', 16], ['x1roi', 'int32', 1], ['y1roi', 'int32', 1], ['x2roi', 'int32', 1], ['y2roi', 'int32', 1], ['stimoffs', 'int32', 1], ['stimsize', 'int32', 1], ['frameoffs', 'int32', 1], ['framesize', 'int32', 1], ['refoffs', 'int32', 1], ['refsize', 'int32', 1], ['refwidth', 'int32', 1], ['refheight', 'int32', 1], ['whichblocks', 'uint16', 16], ['whichframe', 'uint16', 16], ['loclip', 'int32', 1], ['hiclip', 'int32', 1], ['lopass', 'int32', 1], ['hipass', 'int32', 1], ['operationsperformed', 'uint8', 64], ['magnification', 'float32', 1], ['gain', 'uint16', 1], ['wavelength', 'uint16', 1], ['exposuretime', 'int32', 1], ['nrepetitions', 'int32', 1], ['acquisitiondelay', 'int32', 1], ['interstiminterval', 'int32', 1], ['creationdate', 'uint8', 16], ['datafilename', 'uint8', 64], ['orareserved', 'uint8', 256] ] dic = {} for x in i: dic = read(name=x[0], type=x[1], nb=x[2], dictionary=dic, file=file) if dic['filesubtype'] == 13: i = [ ["includesrefframe", "int32", 1], ["temp", "uint8", 128], ["ntrials", "int32", 1], ["scalefactors", "int32", 1], ["cameragain", "short", 1], ["ampgain", "short", 1], ["samplingrate", "short", 1], ["average", "short", 1], ["exposuretime", "short", 1], ["samplingaverage", "short", 1], ["presentaverage", "short", 1], ["framesperstim", "short", 1], ["trialsperblock", "short", 1], ["sizeofanalogbufferinframes", "short", 1], ["cameratrials", "short", 1], ["filler", "uint8", 106], ["dyedaqreserved", "uint8", 106] ] for x in i: dic = read(name=x[0], type=x[1], nb=x[2], dictionary=dic, file=file) # nottested # p.listofstimuli=temp(1:max(find(temp~=0)))'; % up to first non-zero stimulus dic["listofstimuli"] = dic["temp"][0:np.argwhere(x != 0).max(0)] else: i = [ ["includesrefframe", "int32", 1], ["listofstimuli", "uint8", 256], ["nvideoframesperdataframe", "int32", 1], ["ntrials", "int32", 1], 
["scalefactor", "int32", 1], ["meanampgain", "float32", 1], ["meanampdc", "float32", 1], ["vdaqreserved", "uint8", 256] ] for x in i: dic = read(name=x[0], type=x[1], nb=x[2], dictionary=dic, file=file) i = [["user", "uint8", 256], ["comment", "uint8", 256], ["refscalefactor", "int32", 1]] for x in i: dic = read(name=x[0], type=x[1], nb=x[2], dictionary=dic, file=file) dic["actuallength"] = os.stat(file_name).st_size file.close() return dic # start of the reading process nblocks = 1 print("reading the header") header = read_header(self.filename) nstim = header['nstimuli'] ni = header['framewidth'] nj = header['frameheight'] nfr = header['nframesperstim'] lenh = header['lenheader'] framesize = header['framesize'] filesize = header['file_size'] dtype = header['datatype'] gain = header['meanampgain'] dc = header['meanampdc'] scalefactor = header['scalefactor'] # [["dtype","nbytes","datatype","type_out"],[...]] l = [ [11, 1, "uchar", "uint8", "B"], [12, 2, "ushort", "uint16", "H"], [13, 4, "ulong", "uint32", "I"], [14, 4, "float", "single", "f"] ] for i in l: if dtype == i[0]: nbytes, datatype, type_out, struct_type = i[1], i[2], i[3], i[4] if framesize != ni * nj * nbytes: print("BAD HEADER!!! 
framesize does not match framewidth*frameheight*nbytes!") framesize = ni * nj * nbytes if (filesize - lenh) > (framesize * nfr * nstim): nfr2 = nfr + 1 includesrefframe = True else: nfr2 = nfr includesrefframe = False nbin = nblocks conds = [i for i in range(1, nstim + 1)] ncond = len(conds) data = [[[np.zeros((ni, nj, nfr), type_out)] for x in range(ncond)] for i in range(nbin)] for k in range(1, nbin + 1): print("reading block") bin = np.arange(math.floor((k - 1 / nbin * nblocks) + 1), math.floor((k / nbin * nblocks) + 1)) sbin = bin.size for j in range(1, sbin + 1): file = open(self.filename, 'rb') for i in range(1, ncond + 1): framestart = conds[i - 1] * nfr2 - nfr offset = framestart * ni * nj * nbytes + lenh file.seek(offset, 0) a = [(struct.unpack(struct_type, file.read(nbytes)))[0] for m in range(ni * nj * nfr)] a = np.reshape(np.array(a, dtype=type_out, order='F'), (ni * nj, nfr), order='F') a = np.reshape(a, (ni, nj, nfr), order='F') if includesrefframe: # not tested framestart = (conds[i] - 1) * nfr2 offset = framestart * ni * nj * nbytes + lenh file.seek(offset) ref = [(struct.unpack(struct_type, file.read(nbytes)))[0] for m in range(ni * nj)] ref = np.array(ref, dtype=type_out) for y in range(len(ref)): ref[y] *= scalefactor ref = np.reshape(ref, (ni, nj)) b = np.tile(ref, [1, 1, nfr]) for y in range(len(a)): b.append([]) for x in range(len(a[y])): b[y + 1].append([]) for frame in range(len(a[y][x])): b[y + 1][x][frame] = (a[y][x][frame] / gain) - \ (scalefactor * dc / gain) a = b if sbin == 1: data[k - 1][i - 1] = a else: # not tested for y in range(len(a)): for x in range(len(a[y])): a[y][x] /= sbin data[k - 1][i - 1] = data[k - 1][i - 1] + a / sbin file.close() # data format [block][stim][width][height][frame]] # data structure should be [block][stim][frame][width][height] in order to be easy to use with neo # each file is a block # each stim could be a segment # then an image sequence [frame][width][height] # image need to be rotated # changing 
order of data for compatibility # [block][stim][width][height][frame]] # to # [block][stim][frame][width][height] for block in range(len(data)): for stim in range(len(data[block])): a = [] for frame in range(header['nframesperstim']): a.append([]) for width in range(len(data[block][stim])): a[frame].append([]) for height in range(len(data[block][stim][width])): a[frame][width].append(data[block][stim][width][height][frame]) # rotation of data to be the same as thomas deneux screenshot a[frame] = np.rot90(np.fliplr(a[frame])) data[block][stim] = a block = Block(file_origin=self.filename) for stim in range(len(data[0])): image_sequence = ImageSequence(data[0][stim], units=self.units, sampling_rate=self.sampling_rate, spatial_scale=self.spatial_scale) segment = Segment(file_origin=self.filename, description=("stim nb:"+str(stim))) segment.imagesequences = [image_sequence] segment.block = block for key in header: block.annotations[key] = header[key] block.segments.append(segment) print("returning block") return block neo-0.10.0/neo/io/brainvisionio.py0000644000076700000240000000063514066375716017416 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.brainvisionrawio import BrainVisionRawIO class BrainVisionIO(BrainVisionRawIO, BaseFromRaw): """Class for reading data from the BrainVision product.""" _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): BrainVisionRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/brainwaredamio.py0000644000076700000240000001761014077527451017525 0ustar andrewstaff00000000000000''' Class for reading from Brainware DAM files DAM files are binary files for holding raw data. They are broken up into sequence of Segments, each containing a single raw trace and parameters. The DAM file does NOT contain a sampling rate, nor can it be reliably calculated from any of the parameters. 
You can calculate it from the "sweep length" attribute if it is present, but it isn't always present. It is more reliable to get it from the corresponding SRC file or F32 file if you have one. The DAM file also does not divide up data into Blocks, so only a single Block is returned. Brainware was developed by Dr. Jan Schnupp and is available from Tucker Davis Technologies, Inc. http://www.tdt.com/downloads.htm Neither Dr. Jan Schnupp nor Tucker Davis Technologies, Inc. had any part in the development of this code. The code is implemented with the permission of Dr. Jan Schnupp. Author: Todd Jennings ''' # import needed core python modules import os import os.path # numpy and quantities are already required by neo import numpy as np import quantities as pq # needed core neo modules from neo.core import (AnalogSignal, Block, Group, Segment) # need to subclass BaseIO from neo.io.baseio import BaseIO class BrainwareDamIO(BaseIO): """ Class for reading Brainware raw data files with the extension '.dam'. The read_block method returns the first Block of the file. It will automatically close the file after reading. The read method is the same as read_block. Note: The file format does not contain a sampling rate. The sampling rate is set to 1 Hz, but this is arbitrary. If you have a corresponding .src or .f32 file, you can get the sampling rate from that. It may also be possible to infer it from the attributes, such as "sweep length", if present.
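Each segment record in a DAM file begins with a float64 start time, an int16 stimulus index and an int16 parameter count, which this class reads field by field with `np.fromfile`. A minimal sketch of the same pattern against a synthetic in-memory buffer (the values below are invented):

```python
import numpy as np

# Synthetic bytes laid out like the start of one DAM segment record:
# float64 start time, int16 stimulus index, int16 parameter count.
raw = (np.float64(0.5).tobytes()
       + np.int16(3).tobytes()
       + np.int16(0).tobytes())
buf = memoryview(raw)

t_start = np.frombuffer(buf[0:8], dtype=np.float64)[0]
seg_index = np.frombuffer(buf[8:10], dtype=np.int16)[0]
numelements = np.frombuffer(buf[10:12], dtype=np.int16)[0]
print(t_start, seg_index, numelements)
```

The real reader does the same reads directly on the open file object, so the file position advances past each field automatically.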
Usage: >>> from neo.io.brainwaredamio import BrainwareDamIO >>> damfile = BrainwareDamIO(filename='multi_500ms_mulitrep_ch1.dam') >>> blk1 = damfile.read() >>> blk2 = damfile.read_block() >>> print blk1.segments >>> print blk1.segments[0].analogsignals >>> print blk1.units >>> print blk1.units[0].name >>> print blk2 >>> print blk2[0].segments """ is_readable = True # This class can only read data is_writable = False # write is not supported # This class is able to directly or indirectly handle the following objects # You can notice that this greatly simplifies the full Neo object hierarchy supported_objects = [Block, Group, Segment, AnalogSignal] readable_objects = [Block] writeable_objects = [] has_header = False is_streameable = False # This is for GUI stuff: a definition for parameters when reading. # This dict should be keyed by object (`Block`). Each entry is a list # of tuple. The first entry in each tuple is the parameter name. The # second entry is a dict with keys 'value' (for default value), # and 'label' (for a descriptive name). # Note that if the highest-level object requires parameters, # common_io_test will be skipped. read_params = {Block: []} # do not support write so no GUI stuff write_params = None name = 'Brainware DAM File' extensions = ['dam'] mode = 'file' def __init__(self, filename=None): ''' Arguments: filename: the filename ''' BaseIO.__init__(self) self._path = filename self._filename = os.path.basename(filename) self._fsrc = None def read_block(self, lazy=False, **kargs): ''' Reads a block from the raw data file "fname" generated with BrainWare ''' assert not lazy, 'Do not support lazy' # there are no keyargs implemented to so far. 
If someone tries to pass # them they are expecting them to do something or making a mistake, # neither of which should pass silently if kargs: raise NotImplementedError('This method does not have any ' 'arguments implemented yet') self._fsrc = None block = Block(file_origin=self._filename) # create the objects to store other objects gr = Group(file_origin=self._filename) # load objects into their containers block.groups.append(gr) # open the file with open(self._path, 'rb') as fobject: # while the file is not done keep reading segments while True: seg = self._read_segment(fobject) # if there are no more Segments, stop if not seg: break # store the segment and signals block.segments.append(seg) gr.analogsignals.append(seg.analogsignals[0]) # remove the file object self._fsrc = None block.create_many_to_one_relationship() return block # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # IMPORTANT!!! # These are private methods implementing the internal reading mechanism. # Due to the way BrainWare DAM files are structured, they CANNOT be used # on their own. Calling these manually will almost certainly alter your # position in the file in an unrecoverable manner, whether they throw # an exception or not. 
# ------------------------------------------------------------------------- # ------------------------------------------------------------------------- def _read_segment(self, fobject): ''' Read a single segment with a single analogsignal Returns the segment or None if there are no more segments ''' try: # float64 -- start time of the AnalogSignal t_start = np.fromfile(fobject, dtype=np.float64, count=1)[0] except IndexError: # if there are no more Segments, return return False # int16 -- index of the stimulus parameters seg_index = np.fromfile(fobject, dtype=np.int16, count=1)[0].tolist() # int16 -- number of stimulus parameters numelements = np.fromfile(fobject, dtype=np.int16, count=1)[0] # read the name strings for the stimulus parameters paramnames = [] for _ in range(numelements): # unit8 -- the number of characters in the string numchars = np.fromfile(fobject, dtype=np.uint8, count=1)[0] # char * numchars -- a single name string name = np.fromfile(fobject, dtype=np.uint8, count=numchars) # exclude invalid characters name = str(name[name >= 32].view('c').tobytes()) # add the name to the list of names paramnames.append(name) # float32 * numelements -- the values for the stimulus parameters paramvalues = np.fromfile(fobject, dtype=np.float32, count=numelements) # combine parameter names and the parameters as a dict params = dict(zip(paramnames, paramvalues)) # int32 -- the number elements in the AnalogSignal numpts = np.fromfile(fobject, dtype=np.int32, count=1)[0] # int16 * numpts -- the AnalogSignal itself signal = np.fromfile(fobject, dtype=np.int16, count=numpts) sig = AnalogSignal(signal.astype(np.float32) * pq.mV, t_start=t_start * pq.d, file_origin=self._filename, sampling_period=1. 
* pq.s, copy=False) # Note: setting the sampling_period to 1 s is arbitrary # load the AnalogSignal and parameters into a new Segment seg = Segment(file_origin=self._filename, index=seg_index, **params) seg.analogsignals = [sig] return seg neo-0.10.0/neo/io/brainwaref32io.py0000644000076700000240000002233014066375716017354 0ustar andrewstaff00000000000000''' Class for reading from Brainware F32 files F32 files are simplified binary files for holding spike data. Unlike SRC files, F32 files carry little metadata. This also means, however, that the file format does not change, unlike SRC files whose format changes periodically (although ideally SRC files are backwards-compatible). Each F32 file only holds a single Block. The only metadata stored in the file is the length of a single repetition of the stimulus and the values of the stimulus parameters (but not the names of the parameters). Brainware was developed by Dr. Jan Schnupp and is availabe from Tucker Davis Technologies, Inc. http://www.tdt.com/downloads.htm Neither Dr. Jan Schnupp nor Tucker Davis Technologies, Inc. had any part in the development of this code The code is implemented with the permission of Dr. Jan Schnupp Author: Todd Jennings ''' # import needed core python modules from os import path # numpy and quantities are already required by neo import numpy as np import quantities as pq # needed core neo modules from neo.core import Block, Group, Segment, SpikeTrain # need to subclass BaseIO from neo.io.baseio import BaseIO class BrainwareF32IO(BaseIO): ''' Class for reading Brainware Spike ReCord files with the extension '.f32' The read_block method returns the first Block of the file. It will automatically close the file after reading. The read method is the same as read_block. The read_all_blocks method automatically reads all Blocks. It will automatically close the file after reading. The read_next_block method will return one Block each time it is called. 
It will automatically close the file and reset to the first Block after reading the last block. Call the close method to close the file and reset this method back to the first Block. The isopen property tells whether the file is currently open and reading or closed. Note 1: There is always only one Group. Usage: >>> from neo.io.brainwaref32io import BrainwareF32IO >>> f32file = BrainwareF32IO(filename='multi_500ms_mulitrep_ch1.f32') >>> blk1 = f32file.read() >>> blk2 = f32file.read_block() >>> print blk1.segments >>> print blk1.segments[0].spiketrains >>> print blk1.units >>> print blk1.units[0].name >>> print blk2 >>> print blk2[0].segments ''' is_readable = True # This class can only read data is_writable = False # write is not supported # This class is able to directly or indirectly handle the following objects # You can notice that this greatly simplifies the full Neo object hierarchy supported_objects = [Block, Group, Segment, SpikeTrain] readable_objects = [Block] writeable_objects = [] has_header = False is_streameable = False # This is for GUI stuff: a definition for parameters when reading. # This dict should be keyed by object (`Block`). Each entry is a list # of tuple. The first entry in each tuple is the parameter name. The # second entry is a dict with keys 'value' (for default value), # and 'label' (for a descriptive name). # Note that if the highest-level object requires parameters, # common_io_test will be skipped. 
    read_params = {Block: []}

    # does not support write so no GUI stuff
    write_params = None
    name = 'Brainware F32 File'
    extensions = ['f32']

    mode = 'file'

    def __init__(self, filename=None):
        '''
        Arguments:
            filename: the filename
        '''
        BaseIO.__init__(self)
        self._path = filename
        self._filename = path.basename(filename)

        self._fsrc = None
        self._blk = None

        self.__unit_group = None
        self.__t_stop = None
        self.__params = None
        self.__seg = None
        self.__spiketimes = None

    def read_block(self, lazy=False, **kargs):
        '''
        Reads a block from the simple spike data file "fname" generated
        with BrainWare
        '''
        assert not lazy, 'Do not support lazy'

        # there are no keyword arguments implemented so far.  If someone
        # tries to pass them they are expecting them to do something or
        # making a mistake, neither of which should pass silently
        if kargs:
            raise NotImplementedError('This method does not have any '
                                      'arguments implemented yet')
        self._fsrc = None

        self._blk = Block(file_origin=self._filename)
        block = self._blk

        # create the objects to store other objects
        self.__unit_group = Group(file_origin=self._filename)
        block.groups.append(self.__unit_group)

        # initialize values
        self.__t_stop = None
        self.__params = None
        self.__seg = None
        self.__spiketimes = None

        # open the file
        with open(self._path, 'rb') as self._fsrc:
            res = True
            # while the file is not done keep reading segments
            while res:
                res = self.__read_id()

        block.create_many_to_one_relationship()

        # cleanup attributes
        self._fsrc = None
        self._blk = None

        self.__t_stop = None
        self.__params = None
        self.__seg = None
        self.__spiketimes = None

        return block

    # -------------------------------------------------------------------------
    # -------------------------------------------------------------------------
    #   IMPORTANT!!!
    #   These are private methods implementing the internal reading mechanism.
    #   Due to the way BrainWare F32 files are structured, they CANNOT be used
    #   on their own.
    #   Calling these manually will almost certainly alter your position in
    #   the file in an unrecoverable manner, whether they throw an exception
    #   or not.
    # -------------------------------------------------------------------------
    # -------------------------------------------------------------------------

    def __read_id(self):
        '''
        Read the next ID number and do the appropriate task with it.

        Returns nothing.
        '''
        try:
            # float32 -- ID of the first data sequence
            objid = np.fromfile(self._fsrc, dtype=np.float32, count=1)[0]
        except IndexError:
            # if we have a previous segment, save it
            self.__save_segment()

            # if there are no more Segments, return
            return False

        if objid == -2:
            self.__read_condition()
        elif objid == -1:
            self.__read_segment()
        else:
            self.__spiketimes.append(objid)

        return True

    def __read_condition(self):
        '''
        Read the parameter values for a single stimulus condition.

        Returns nothing.
        '''
        # float32 -- SpikeTrain length in ms
        self.__t_stop = np.fromfile(self._fsrc, dtype=np.float32, count=1)[0]

        # float32 -- number of stimulus parameters
        numelements = int(np.fromfile(self._fsrc, dtype=np.float32,
                                      count=1)[0])

        # [float32] * numelements -- stimulus parameter values
        paramvals = np.fromfile(self._fsrc, dtype=np.float32,
                                count=numelements).tolist()

        # organize the parameters into a dictionary with arbitrary names
        paramnames = ['Param%s' % i for i in range(len(paramvals))]
        self.__params = dict(zip(paramnames, paramvals))

    def __read_segment(self):
        '''
        Setup the next Segment.

        Returns nothing.
        '''
        # if we have a previous segment, save it
        self.__save_segment()

        # create the segment
        self.__seg = Segment(file_origin=self._filename,
                             **self.__params)

        # create an empty array to save the spike times
        # this needs to be converted to a SpikeTrain before it can be used
        self.__spiketimes = []

    def __save_segment(self):
        '''
        Write the segment to the Block if it exists
        '''
        # if this is the beginning of the first condition, then we don't want
        # to save, so exit
        # but set __seg from None to False so we know next time to create a
        # segment even if there are no spikes in the condition
        if self.__seg is None:
            self.__seg = False
            return

        if not self.__seg:
            # create dummy values if there are no SpikeTrains in this condition
            self.__seg = Segment(file_origin=self._filename,
                                 **self.__params)
            self.__spiketimes = []

        times = pq.Quantity(self.__spiketimes, dtype=np.float32,
                            units=pq.ms)
        train = SpikeTrain(times,
                           t_start=0 * pq.ms,
                           t_stop=self.__t_stop * pq.ms,
                           file_origin=self._filename)

        self.__seg.spiketrains = [train]
        self.__unit_group.spiketrains.append(train)
        self._blk.segments.append(self.__seg)

        # set an empty segment
        # from now on, we need to set __seg to False rather than None so
        # that if there is a condition with no SpikeTrains we know
        # to create an empty Segment
        self.__seg = False

neo-0.10.0/neo/io/brainwaresrcio.py

"""
Class for reading from Brainware SRC files

SRC files are binary files for holding spike data.  They are broken up into
nested data sequences of different types, with each type of sequence
identified by a unique ID number.  This allows new versions of sequences to
be included without breaking backwards compatibility, since new versions can
just be given a new ID number.

The ID numbers and the format of the data they contain were taken from the
Matlab-based reader function supplied with BrainWare.  The python code,
however, was implemented from scratch in Python using Python idioms.
There are some situations where BrainWare data can overflow the SRC file,
resulting in a corrupt file.  Neither BrainWare nor the Matlab-based reader
can read such files.  This software, however, will try to recover the data,
and in most cases can do so successfully.

Each SRC file can hold the equivalent of multiple Neo Blocks.

Brainware was developed by Dr. Jan Schnupp and is available from
Tucker Davis Technologies, Inc.
http://www.tdt.com/downloads.htm

Neither Dr. Jan Schnupp nor Tucker Davis Technologies, Inc. had any part
in the development of this code.

The code is implemented with the permission of Dr. Jan Schnupp.

Note on porting ChannelIndex/Unit to Group (Samuel Garcia):
the ChannelIndex was used as a group of units.  To avoid a
"group of groups", each unit is now directly a "Group".

Author: Todd Jennings
"""

# import needed core python modules
from datetime import datetime, timedelta
from itertools import chain
import logging
import os.path

# numpy and quantities are already required by neo
import numpy as np
import quantities as pq

# needed core neo modules
from neo.core import (Block, Event, Group, Segment, SpikeTrain)

# need to subclass BaseIO
from neo.io.baseio import BaseIO

LOGHANDLER = logging.StreamHandler()


class BrainwareSrcIO(BaseIO):
    """
    Class for reading Brainware Spike ReCord files with the extension '.src'

    The read_block method returns the first Block of the file.  It will
    automatically close the file after reading.
    The read method is the same as read_block.

    The read_all_blocks method automatically reads all Blocks.  It will
    automatically close the file after reading.

    The read_next_block method will return one Block each time it is called.
    It will automatically close the file and reset to the first Block after
    reading the last block.
    Call the close method to close the file and reset this method back to
    the first Block.

    The _isopen property tells whether the file is currently open and
    reading or closed.
    Note 1:
    The first Unit in each Group is always UnassignedSpikes, which has a
    SpikeTrain for each Segment containing all the spikes not assigned to
    any Unit in that Segment.

    Note 2:
    The first Segment in each Block is always Comments, which stores all
    comments as an Event object.

    Note 3:
    The parameters from the BrainWare table for each condition are stored
    in the Segment annotations.  If there are multiple repetitions of
    a condition, each repetition is stored as a separate Segment.

    Note 4:
    There is always only one Group.

    Usage:
      >>> from neo.io.brainwaresrcio import BrainwareSrcIO
      >>> srcfile = BrainwareSrcIO(filename='multi_500ms_mulitrep_ch1.src')
      >>> blk1 = srcfile.read()
      >>> blk2 = srcfile.read_block()
      >>> blks = srcfile.read_all_blocks()
      >>> print(blk1.segments)
      >>> print(blk1.segments[0].spiketrains)
      >>> print(blk1.groups)
      >>> print(blk1.groups[0].name)
      >>> print(blk2)
      >>> print(blk2[0].segments)
      >>> print(blks)
      >>> print(blks[0].segments)
    """

    is_readable = True  # This class can only read data
    is_writable = False  # write is not supported

    # This class is able to directly or indirectly handle the following objects
    supported_objects = [Block, Group, Segment, SpikeTrain, Event]

    readable_objects = [Block]
    writeable_objects = []

    has_header = False
    is_streameable = False

    # This is for GUI stuff: a definition for parameters when reading.
    # This dict should be keyed by object (`Block`).  Each entry is a list
    # of tuples.  The first entry in each tuple is the parameter name.  The
    # second entry is a dict with keys 'value' (for default value),
    # and 'label' (for a descriptive name).
    # Note that if the highest-level object requires parameters,
    # common_io_test will be skipped.
    read_params = {Block: []}

    # does not support write so no GUI stuff
    write_params = None
    name = 'Brainware SRC File'
    extensions = ['src']

    mode = 'file'

    def __init__(self, filename=None):
        """
        Arguments:
            filename: the filename
        """
        BaseIO.__init__(self)

        # log the __init__
        self.logger.info('__init__')

        # this stores the filename of the current object, exactly as it is
        # provided when the instance is initialized.
        self._filename = filename

        # this stores the filename without the path
        self._file_origin = filename

        # This stores the file object for the current file
        self._fsrc = None

        # This stores the current Block
        self._blk = None

        # This stores the current Segment for easy access
        # It is equivalent to self._blk.segments[-1]
        self._seg0 = None

        # this stores a dictionary of the Block's Group (Units) by name,
        # making it easier and faster to retrieve Units by name later
        # UnassignedSpikes and Units accessed by index are not stored here
        self._unitdict = {}

        # this stores the current Unit
        self._unit0 = None

        # if the file has a list with negative length, the rest of the file's
        # list lengths are unreliable, so we need to store this value for the
        # whole file
        self._damaged = False

        # this stores an empty SpikeTrain which is used in various places.
        self._default_spiketrain = None

    @property
    def _isopen(self):
        """
        This property tells whether the SRC file associated with the IO
        object is open.
        """
        return self._fsrc is not None

    def _opensrc(self):
        """
        Open the file if it isn't already open.
        """
        # if the file isn't already open, open it and clear the Blocks
        if not self._fsrc or self._fsrc.closed:
            self._fsrc = open(self._filename, 'rb')

            # figure out the filename of the current file
            self._file_origin = os.path.basename(self._filename)

    def close(self):
        """
        Close the currently-open file and reset the current reading point.
""" self.logger.info('close') if self._isopen and not self._fsrc.closed: self._fsrc.close() # we also need to reset all per-file attributes self._damaged = False self._fsrc = None self._seg0 = None self._file_origin = None self._lazy = False self._default_spiketrain = None def read_block(self, lazy=False, **kargs): """ Reads the first Block from the Spike ReCording file "filename" generated with BrainWare. If you wish to read more than one Block, please use read_all_blocks. """ assert not lazy, 'Do not support lazy' # there are no keyargs implemented to so far. If someone tries to pass # them they are expecting them to do something or making a mistake, # neither of which should pass silently if kargs: raise NotImplementedError('This method does not have any ' 'arguments implemented yet') blockobj = self.read_next_block() self.close() return blockobj def read_next_block(self, **kargs): """ Reads a single Block from the Spike ReCording file "filename" generated with BrainWare. Each call of read will return the next Block until all Blocks are loaded. After the last Block, the file will be automatically closed and the progress reset. Call the close method manually to reset back to the first Block. """ # there are no keyargs implemented to so far. 
        # If someone tries to pass them they are expecting them to do
        # something or making a mistake, neither of which should pass
        # silently
        if kargs:
            raise NotImplementedError('This method does not have any '
                                      'arguments implemented yet')

        self._opensrc()

        # create _default_spiketrain here for performance reasons
        self._default_spiketrain = self._init_default_spiketrain.copy()
        self._default_spiketrain.file_origin = self._file_origin

        # create the Block and the contents all Blocks from the IO share
        self._blk = Block(file_origin=self._file_origin)
        self._seg0 = Segment(name='Comments', file_origin=self._file_origin)
        self._unit0 = Group(name='UnassignedSpikes',
                            elliptic=[], boundaries=[],
                            timestamp=[], max_valid=[])

        self._blk.groups.append(self._unit0)
        self._blk.segments.append(self._seg0)

        # this actually reads the contents of the Block
        result = []
        while hasattr(result, '__iter__'):
            try:
                result = self._read_by_id()
            except:
                self.close()
                raise

        # since we read at a Block level we always do this
        self._blk.create_many_to_one_relationship()

        # put the Block in a local object so it can be garbage collected
        blockobj = self._blk

        # reset the per-Block attributes
        self._blk = None
        self._unitdict = {}

        # combine the comments into one big event
        self._combine_segment_events(self._seg0)

        # result is None iff the end of the file is reached, so we can
        # close the file
        # this notification is not helpful if using the read method with
        # cascading, since the user will know it is done when the method
        # returns a value
        if result is None:
            self.logger.info('Last Block read.  Closing file.')
            self.close()

        return blockobj

    def read_all_blocks(self, lazy=False, **kargs):
        """
        Reads all Blocks from the Spike ReCording file "filename" generated
        with BrainWare.

        The progress in the file is reset and the file closed then opened
        again prior to reading.

        The file is automatically closed after reading completes.
        """
        # there are no keyword arguments implemented so far.
        # If someone tries to pass them they are expecting them to do
        # something or making a mistake, neither of which should pass
        # silently
        assert not lazy, 'Do not support lazy'
        if kargs:
            raise NotImplementedError('This method does not have any '
                                      'arguments implemented yet')

        self.close()
        self._opensrc()

        # Read each Block.
        # After the last Block self._isopen is set to False, so this makes a
        # good way to determine when to stop
        blocks = []
        while self._isopen:
            try:
                blocks.append(self.read_next_block())
            except:
                self.close()
                raise

        return blocks

    def _convert_timestamp(self, timestamp, start_date=datetime(1899, 12, 30)):
        """
        _convert_timestamp(timestamp, start_date) - convert a timestamp in
        brainware src file units to a python datetime object.

        start_date defaults to 1899.12.30 (ISO format), which is the start
        date used by all BrainWare SRC data Blocks so far.  If manually
        specified it should be a datetime object or any other object that
        can be added to a timedelta object.
        """
        # datetime + timedelta = datetime again.
        try:
            timestamp = start_date + timedelta(days=timestamp)
        except OverflowError as err:
            timestamp = start_date
            self.logger.exception('_convert_timestamp overflow')

        return timestamp

    # -------------------------------------------------------------------------
    # -------------------------------------------------------------------------
    #   All methods from here on are private.  They are not intended to be
    #   used on their own, although methods that could theoretically be
    #   called on their own are marked as such.  All private methods could be
    #   renamed, combined, or split at any time.  All private methods prefixed
    #   by "__read" or "__skip" will alter the current place in the file.
    # -------------------------------------------------------------------------
    # -------------------------------------------------------------------------

    def _read_by_id(self):
        """
        Reader for generic data

        BrainWare SRC files are broken up into data sequences that are
        identified by an ID code.
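The day-count timestamps that `_convert_timestamp` handles -- a floating-point number of days since the 1899-12-30 epoch used by BrainWare SRC files -- reduce to plain datetime arithmetic. A minimal standalone sketch (the helper name `brainware_timestamp` is made up for illustration):

```python
from datetime import datetime, timedelta

def brainware_timestamp(days, start_date=datetime(1899, 12, 30)):
    # 'days' is a (possibly fractional) day count since the epoch;
    # out-of-range values fall back to the epoch itself, mirroring
    # the reader's OverflowError handling
    try:
        return start_date + timedelta(days=days)
    except OverflowError:
        return start_date

# 2.5 days after the epoch is noon on 1900-01-01
stamp = brainware_timestamp(2.5)
```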
        This method determines the ID code and calls the method to read the
        data sequence with that ID code.  See the _ID_DICT attribute for a
        dictionary of code/method pairs.

        IMPORTANT!!! This is the only private method that can be called
        directly.  The rest of the private methods can only safely be called
        by this method or by other private methods, since they depend on the
        current position in the file.
        """
        try:
            # uint16 -- the ID code of the next sequence
            seqid = np.fromfile(self._fsrc, dtype=np.uint16, count=1).item()
        except ValueError:
            # return a None if at EOF.  Other methods use None to recognize
            # an EOF
            return None

        # using the seqid, get the reader function from the reader dict
        readfunc = self._ID_DICT.get(seqid)
        if readfunc is None:
            if seqid <= 0:
                # return if end-of-sequence ID code.  This has to be 0.
                # just calling "return" will return a None which is used as
                # an EOF indicator
                return 0
            else:
                # return a warning if the key is invalid
                # (this is consistent with the official behavior,
                # even the official reference files have invalid keys
                # when using the official reference reader matlab scripts)
                self.logger.warning('unknown ID: %s', seqid)
                return []

        try:
            # run the function to get the data
            return readfunc(self)
        except (EOFError, UnicodeDecodeError) as err:
            # return a warning if the EOF is reached in the middle of a method
            self.logger.exception('Premature end of file')
            return None

    # -------------------------------------------------------------------------
    # -------------------------------------------------------------------------
    #   These are helper methods.  They don't read from the file, so it
    #   won't harm the reading process to call them, but they are only
    #   relevant when used in other private methods.
    #
    #   These are tuned to the particular needs of this IO class, they are
    #   unlikely to work properly if used with another file format.
    # -------------------------------------------------------------------------
    # -------------------------------------------------------------------------

    def _assign_sequence(self, data_obj):
        """
        _assign_sequence(data_obj) - Try to guess where an unknown sequence
        should go based on its class.  Warnings are issued if this method is
        used since manual reorganization may be needed.
        """
        if isinstance(data_obj, Group):
            self.logger.warning('Unknown Group found, adding to Group list')
            self._blk.groups.append(data_obj)
            if data_obj.name:
                self._unitdict[data_obj.name] = data_obj
        elif isinstance(data_obj, Segment):
            self.logger.warning('Unknown Segment found, '
                                'adding to Segments list')
            self._blk.segments.append(data_obj)
        elif isinstance(data_obj, Event):
            self.logger.warning('Unknown Event found, '
                                'adding to comment Events list')
            self._seg0.events.append(data_obj)
        elif isinstance(data_obj, SpikeTrain):
            self.logger.warning('Unknown SpikeTrain found, '
                                'adding to the UnassignedSpikes Unit')
            self._unit0.spiketrains.append(data_obj)
        elif hasattr(data_obj, '__iter__') and not isinstance(data_obj, str):
            for sub_obj in data_obj:
                self._assign_sequence(sub_obj)
        else:
            if self.logger.isEnabledFor(logging.WARNING):
                self.logger.warning('Unrecognized sequence of type %s found, '
                                    'skipping', type(data_obj))

    _default_datetime = datetime(1, 1, 1)
    _default_t_start = pq.Quantity(0., units=pq.ms, dtype=np.float32)
    _init_default_spiketrain = SpikeTrain(
        times=pq.Quantity([], units=pq.ms, dtype=np.float32),
        t_start=pq.Quantity(0, units=pq.ms, dtype=np.float32),
        t_stop=pq.Quantity(1, units=pq.ms, dtype=np.float32),
        waveforms=pq.Quantity([[[]]], dtype=np.int8, units=pq.mV),
        dtype=np.float32, copy=False,
        timestamp=_default_datetime,
        respwin=np.array([], dtype=np.int32),
        dama_index=-1,
        trig2=pq.Quantity([], units=pq.ms, dtype=np.uint8),
        side='')

    def _combine_events(self, events):
        """
        _combine_events(events) - combine a list of Events with single events
        into one long Event
        """
        if not events:
            event =
Event(times=pq.Quantity([], units=pq.s), labels=np.array([], dtype='U'), senders=np.array([], dtype='S'), t_start=0) return event times = [] labels = [] senders = [] for event in events: times.append(event.times.magnitude) if event.labels.shape == (1,): labels.append(event.labels[0]) else: raise AssertionError("This single event has multiple labels in an array with " "shape {} instead of a single label.". format(event.labels.shape)) senders.append(event.annotations['sender']) times = np.array(times, dtype=np.float32) t_start = times.min() times = pq.Quantity(times - t_start, units=pq.d).rescale(pq.s) labels = np.array(labels, dtype='U') senders = np.array(senders) event = Event(times=times, labels=labels, t_start=t_start.tolist(), senders=senders) return event def _combine_segment_events(self, segment): """ _combine_segment_events(segment) Combine all Events in a segment. """ event = self._combine_events(segment.events) event_t_start = event.annotations.pop('t_start') segment.rec_datetime = self._convert_timestamp(event_t_start) segment.events = [event] event.segment = segment def _combine_spiketrains(self, spiketrains): """ _combine_spiketrains(spiketrains) - combine a list of SpikeTrains with single spikes into one long SpikeTrain """ if not spiketrains: return self._default_spiketrain.copy() if hasattr(spiketrains[0], 'waveforms') and len(spiketrains) == 1: train = spiketrains[0] return train if hasattr(spiketrains[0], 't_stop'): # workaround for bug in some broken files istrain = [hasattr(utrain, 'waveforms') for utrain in spiketrains] if not all(istrain): goodtrains = [itrain for i, itrain in enumerate(spiketrains) if istrain[i]] badtrains = [itrain for i, itrain in enumerate(spiketrains) if not istrain[i]] spiketrains = (goodtrains + [self._combine_spiketrains(badtrains)]) spiketrains = [itrain for itrain in spiketrains if itrain.size > 0] if not spiketrains: return self._default_spiketrain.copy() # get the times of the spiketrains and combine them waveforms 
= [itrain.waveforms for itrain in spiketrains] rawtrains = np.array(np.concatenate(spiketrains, axis=1)) times = pq.Quantity(rawtrains, units=pq.ms, copy=False) lens1 = np.array([wave.shape[1] for wave in waveforms]) lens2 = np.array([wave.shape[2] for wave in waveforms]) if lens1.max() != lens1.min() or lens2.max() != lens2.min(): lens1 = lens1.max() - lens1 lens2 = lens2.max() - lens2 waveforms = [np.pad(waveform, ((0, 0), (0, len1), (0, len2)), 'constant') for waveform, len1, len2 in zip(waveforms, lens1, lens2)] waveforms = np.concatenate(waveforms, axis=0) # extract the trig2 annotation trig2 = np.array(np.concatenate([itrain.annotations['trig2'] for itrain in spiketrains], axis=1)) trig2 = pq.Quantity(trig2, units=pq.ms) elif hasattr(spiketrains[0], 'units'): return self._combine_spiketrains([spiketrains]) else: times, waveforms, trig2 = zip(*spiketrains) times = np.concatenate(times, axis=0) # get the times of the SpikeTrains and combine them times = pq.Quantity(times, units=pq.ms, copy=False) # get the waveforms of the SpikeTrains and combine them # these should be a 3D array with the first axis being the spike, # the second axis being the recording channel (there is only one), # and the third axis being the actual waveform waveforms = np.concatenate(waveforms, axis=0) # extract the trig2 annotation trig2 = pq.Quantity(np.hstack(trig2), units=pq.ms, copy=False) if not times.size: return self._default_spiketrain.copy() # get the maximum time t_stop = times[-1] * 2. waveforms = pq.Quantity(waveforms, units=pq.mV, copy=False) train = SpikeTrain(times=times, copy=False, t_start=self._default_t_start.copy(), t_stop=t_stop, file_origin=self._file_origin, waveforms=waveforms, timestamp=self._default_datetime, respwin=np.array([], dtype=np.int32), dama_index=-1, trig2=trig2, side='') return train # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # IMPORTANT!!! 
# These are private methods implementing the internal reading mechanism. # Due to the way BrainWare SRC files are structured, they CANNOT be used # on their own. Calling these manually will almost certainly alter your # position in the file in an unrecoverable manner, whether they throw # an exception or not. # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- def __read_str(self, numchars=1, utf=None): """ Read a string of a specific length. """ rawstr = np.fromfile(self._fsrc, dtype='S%s' % numchars, count=1).item() return rawstr.decode('utf-8') def __read_annotations(self): """ Read the stimulus grid properties. ------------------------------------------------------------------- Returns a dictionary containing the parameter names as keys and the parameter values as values. The returned object must be added to the Block. ID: 29109 """ # int16 -- number of stimulus parameters numelements = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0] if not numelements: return {} # [data sequence] * numelements -- parameter names names = [] for i in range(numelements): # {skip} = byte (char) -- skip one byte self._fsrc.seek(1, 1) # uint8 -- length of next string numchars = np.fromfile(self._fsrc, dtype=np.uint8, count=1).item() # if there is no name, make one up if not numchars: name = 'param%s' % i else: # char * numchars -- parameter name string name = self.__read_str(numchars) # if the name is already in there, add a unique number to it # so it isn't overwritten if name in names: name = name + str(i) names.append(name) # float32 * numelements -- an array of parameter values values = np.fromfile(self._fsrc, dtype=np.float32, count=numelements) # combine the names and values into a dict # the dict will be added to the annotations annotations = dict(zip(names, values)) return annotations def __read_annotations_old(self): """ Read the stimulus grid properties. 
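The fixed-width string reads that `__read_str` performs rely on NumPy's byte-string dtype: reading one `'S<n>'` element consumes exactly n bytes, and NumPy strips trailing NUL padding before decoding. The annotation records combine this with a length prefix (a uint8 length followed by that many characters). Both patterns can be shown on in-memory buffers, independent of the IO class (the field name `Freq` is an invented sample value):

```python
import io
import numpy as np

# fixed-width read: one 'S8' element consumes exactly 8 bytes and
# trailing NUL padding is stripped before decoding
buf = b'Freq\x00\x00\x00\x00'
name = np.frombuffer(buf, dtype='S8', count=1)[0].decode('utf-8')

# length-prefixed read, as used for parameter names: a uint8 length
# byte followed by that many characters
rec = io.BytesIO(b'\x04Freq')
numchars = np.frombuffer(rec.read(1), dtype=np.uint8)[0]
name2 = rec.read(int(numchars)).decode('utf-8')
```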
        Returns a dictionary containing the parameter names as keys and the
        parameter values as values.

        ------------------------------------------------
        The returned objects must be added to the Block.

        This reads an old version of the format that does not store parameter
        names, so placeholder names are created instead.

        ID: 29099
        """

        # int16 * 14 -- an array of parameter values
        values = np.fromfile(self._fsrc, dtype=np.int16, count=14)

        # create dummy names and combine them with the values in a dict
        # the dict will be added to the annotations
        params = ['param%s' % i for i in range(len(values))]
        annotations = dict(zip(params, values))

        return annotations

    def __read_comment(self):
        """
        Read a single comment.

        The comment is stored as an Event in Segment 0, which is
        specifically for comments.

        ----------------------
        Returns an empty list.

        The returned object is already added to the Block.

        No ID number: always called from another method
        """
        # float64 -- timestamp (number of days since dec 30th 1899)
        time = np.fromfile(self._fsrc, dtype=np.double, count=1)[0]

        # int16 -- length of next string
        numchars1 = np.fromfile(self._fsrc, dtype=np.int16, count=1).item()

        # char * numchars -- the one who sent the comment
        sender = self.__read_str(numchars1)

        # int16 -- length of next string
        numchars2 = np.fromfile(self._fsrc, dtype=np.int16, count=1).item()

        # char * numchars -- comment text
        text = self.__read_str(numchars2, utf=False)

        comment = Event(times=pq.Quantity(time, units=pq.d), labels=[text],
                        sender=sender, file_origin=self._file_origin)
        self._seg0.events.append(comment)

        return []

    def __read_list(self):
        """
        Read a list of arbitrary data sequences

        It only says how many data sequences should be read.  These sequences
        are then read by their ID number.

        Note that lists can be nested.

        If there are too many sequences (for instance if there are a large
        number of spikes in a Segment) then a negative number will be returned
        for the number of data sequences to read.
In this case the method tries to guess. This also means that all future list data sequences have unreliable lengths as well. ------------------------------------------- Returns a list of objects. Whether these objects need to be added to the Block depends on the object in question. There are several data sequences that have identical formats but are used in different situations. That means this data sequences has multiple ID numbers. ID: 29082 ID: 29083 ID: 29091 ID: 29093 """ # int16 -- number of sequences to read numelements = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0] # {skip} = bytes * 4 (int16 * 2) -- skip four bytes self._fsrc.seek(4, 1) if numelements == 0: return [] if not self._damaged and numelements < 0: self._damaged = True self.logger.error('Negative sequence count %s, file damaged', numelements) if not self._damaged: # read the sequences into a list seq_list = [self._read_by_id() for _ in range(numelements)] else: # read until we get some indication we should stop seq_list = [] # uint16 -- the ID of the next sequence seqidinit = np.fromfile(self._fsrc, dtype=np.uint16, count=1)[0] # {rewind} = byte * 2 (int16) -- move back 2 bytes, i.e. go back to # before the beginning of the seqid self._fsrc.seek(-2, 1) while 1: # uint16 -- the ID of the next sequence seqid = np.fromfile(self._fsrc, dtype=np.uint16, count=1)[0] # {rewind} = byte * 2 (int16) -- move back 2 bytes, i.e. go # back to before the beginning of the seqid self._fsrc.seek(-2, 1) # if we come across a new sequence, we are at the end of the # list so we should stop if seqidinit != seqid: break # otherwise read the next sequence seq_list.append(self._read_by_id()) return seq_list def __read_segment(self): """ Read an individual Segment. A Segment contains a dictionary of parameters, the length of the recording, a list of Units with their Spikes, and a list of Spikes not assigned to any Unit. 
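The damaged-file recovery path in `__read_list` hinges on a peek-and-rewind idiom: read the uint16 sequence ID, `seek(-2, 1)` back so the dispatcher can re-read it, and stop once the ID changes. A minimal sketch of that loop over an in-memory stream (the IDs 7 and 9 are invented values, not real BrainWare codes, and the real loop must also handle EOF):

```python
import io
import numpy as np

stream = io.BytesIO(np.array([7, 7, 7, 9], dtype=np.uint16).tobytes())

first = np.frombuffer(stream.read(2), dtype=np.uint16)[0]
stream.seek(-2, 1)                     # rewind: this was only a peek

same_id = []
while True:
    seqid = np.frombuffer(stream.read(2), dtype=np.uint16)[0]
    stream.seek(-2, 1)                 # step back before the peeked ID
    if seqid != first:                 # a new ID ends the run
        break
    stream.seek(2, 1)                  # consume the peeked ID...
    same_id.append(int(seqid))         # ...and "read" its sequence body
```

After the loop the stream is positioned at the start of the new ID, just as `__read_list` leaves the file positioned for the next `_read_by_id` call.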
        The unassigned spikes are always stored in Unit 0, which is
        exclusively for storing these spikes.

        -------------------------------------------------
        Returns the Segment object created by the method.

        The returned object is already added to the Block.

        ID: 29106
        """
        # (data_obj) -- the stimulus parameters for this segment
        annotations = self._read_by_id()
        annotations['feature_type'] = -1
        annotations['go_by_closest_unit_center'] = False
        annotations['include_unit_bounds'] = False

        # (data_obj) -- SpikeTrain list of unassigned spikes
        # these go in the first Unit since it is for unassigned spikes
        unassigned_spikes = self._read_by_id()
        self._unit0.spiketrains.extend(unassigned_spikes)

        # read a list of units and grab the second return value, which is the
        # SpikeTrains from this Segment (if we use the Unit we will get all
        # the SpikeTrains from that Unit, resulting in duplicates if we are
        # past the first Segment)
        trains = self._read_by_id()

        if not trains:
            if unassigned_spikes:
                # if there are no assigned spikes,
                # just use the unassigned spikes
                trains = zip(unassigned_spikes)
            else:
                # if there are no spiketrains at all,
                # create an empty spike train
                trains = [[self._default_spiketrain.copy()]]
        elif hasattr(trains[0], 'dtype'):
            # workaround for some broken files
            trains = [unassigned_spikes +
                      [self._combine_spiketrains([trains])]]
        else:
            # get the second element from each returned value,
            # which is the actual SpikeTrains
            trains = [unassigned_spikes] + [train[1] for train in trains]

        # re-organize by sweeps
        trains = zip(*trains)

        # int32 -- SpikeTrain length in ms
        spiketrainlen = pq.Quantity(np.fromfile(self._fsrc, dtype=np.int32,
                                                count=1)[0],
                                    units=pq.ms, copy=False)

        segments = []
        for train in trains:
            # create the Segment and add everything to it
            segment = Segment(file_origin=self._file_origin, **annotations)
            segment.spiketrains = train
            self._blk.segments.append(segment)
            segments.append(segment)
            for itrain in train:
                # use the SpikeTrain length to figure out the stop time
                # t_start is
always 0 so we can ignore it itrain.t_stop = spiketrainlen return segments def __read_segment_list(self): """ Read a list of Segments with comments. Since comments can occur at any point, whether a recording is happening or not, it is impossible to reliably assign them to a specific Segment. For this reason they are always assigned to Segment 0, which is exclusively used to store comments. -------------------------------------------------------- Returns a list of the Segments created with this method. The returned objects are already added to the Block. ID: 29112 """ # uint8 -- number of electrode channels in the Segment numchannels = np.fromfile(self._fsrc, dtype=np.uint8, count=1)[0] # [list of sequences] -- individual Segments segments = self.__read_list() while not hasattr(segments[0], 'spiketrains'): segments = list(chain(*segments)) # char -- "side of brain" info side = self.__read_str(1) # int16 -- number of comments numelements = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0] # comment_obj * numelements -- comments about the Segments # we don't know which Segment specifically, though for _ in range(numelements): self.__read_comment() # store what side of the head we are dealing with for segment in segments: for spiketrain in segment.spiketrains: spiketrain.annotations['side'] = side return segments def __read_segment_list_v8(self): """ Read a list of Segments with comments. This is version 8 of the data sequence. This is the same as __read_segment_list_var, but can also contain one or more arbitrary sequences. The class makes an attempt to assign the sequences when possible, and warns the user when this happens (see the _assign_sequence method) -------------------------------------------------------- Returns a list of the Segments created with this method. The returned objects are already added to the Block. 
        ID: 29117
        """

        # segment_collection_var -- this is based off a segment_collection_var
        segments = self.__read_segment_list_var()

        # uint16 -- the ID of the next sequence
        seqid = np.fromfile(self._fsrc, dtype=np.uint16, count=1)[0]

        # {rewind} = byte * 2 (int16) -- move back 2 bytes, i.e. go back to
        # before the beginning of the seqid
        self._fsrc.seek(-2, 1)

        if seqid in self._ID_DICT:
            # if it is a valid seqid, read it and try to figure out where
            # to put it
            self._assign_sequence(self._read_by_id())
        else:
            # otherwise it is a Unit list
            self.__read_unit_list()

            # {skip} = byte * 2 (int16) -- skip 2 bytes
            self._fsrc.seek(2, 1)

        return segments

    def __read_segment_list_v9(self):
        """
        Read a list of Segments with comments.

        This is version 9 of the data sequence.

        This is the same as __read_segment_list_v8, but contains some
        additional annotations.  These annotations are added to the Segment.

        --------------------------------------------------------
        Returns a list of the Segments created with this method.

        The returned objects are already added to the Block.

        ID: 29120
        """

        # segment_collection_v8 -- this is based off a segment_collection_v8
        segments = self.__read_segment_list_v8()

        # uint8
        feature_type = np.fromfile(self._fsrc, dtype=np.uint8, count=1)[0]

        # uint8
        go_by_closest_unit_center = np.fromfile(self._fsrc, dtype=np.bool8,
                                                count=1)[0]

        # uint8
        include_unit_bounds = np.fromfile(self._fsrc, dtype=np.bool8,
                                          count=1)[0]

        # create a dictionary of the annotations
        annotations = {'feature_type': feature_type,
                       'go_by_closest_unit_center': go_by_closest_unit_center,
                       'include_unit_bounds': include_unit_bounds}

        # add the annotations to each Segment
        for segment in segments:
            segment.annotations.update(annotations)

        return segments

    def __read_segment_list_var(self):
        """
        Read a list of Segments with comments.

        This is the same as __read_segment_list, but contains information
        regarding the sampling period.  This information is added to the
        SpikeTrains in the Segments.

        --------------------------------------------------------
        Returns a list of the Segments created with this method.

        The returned objects are already added to the Block.

        ID: 29114
        """

        # float32 -- DA conversion clock period in microsec
        sampling_period = pq.Quantity(np.fromfile(self._fsrc,
                                                  dtype=np.float32, count=1),
                                      units=pq.us, copy=False)[0]

        # segment_collection -- this is based off a segment_collection
        segments = self.__read_segment_list()

        # add the sampling period to each SpikeTrain
        for segment in segments:
            for spiketrain in segment.spiketrains:
                spiketrain.sampling_period = sampling_period

        return segments

    def __read_spike_fixed(self, numpts=40):
        """
        Read a spike with a fixed waveform length (40 time bins)

        -------------------------------------------
        Returns the time, waveform and trig2 value.

        The returned objects must be converted to a SpikeTrain then
        added to the Block.

        ID: 29079
        """

        # float32 -- spike time stamp in ms since start of SpikeTrain
        time = np.fromfile(self._fsrc, dtype=np.float32, count=1)

        # int8 * 40 -- spike shape -- use numpts for spike_var
        waveform = np.fromfile(self._fsrc, dtype=np.int8,
                               count=numpts).reshape(1, 1, numpts)

        # uint8 -- point of return to noise
        trig2 = np.fromfile(self._fsrc, dtype=np.uint8, count=1)

        return time, waveform, trig2

    def __read_spike_fixed_old(self):
        """
        Read a spike with a fixed waveform length (40 time bins)

        This is an old version of the format.  The time is stored as ints
        representing 1/25 ms time steps.  It has no trigger information.

        -------------------------------------------
        Returns the time, waveform and trig2 value.

        The returned objects must be converted to a SpikeTrain then
        added to the Block.

        ID: 29081
        """

        # int32 -- spike time stamp in ms since start of SpikeTrain
        time = np.fromfile(self._fsrc, dtype=np.int32, count=1) / 25.
        time = time.astype(np.float32)

        # int8 * 40 -- spike shape
        # This needs to be a 3D array, one for each channel.  BrainWare
        # only ever has a single channel per file.
        waveform = np.fromfile(self._fsrc, dtype=np.int8,
                               count=40).reshape(1, 1, 40)

        # create a dummy trig2 value
        trig2 = np.array([-1], dtype=np.uint8)

        return time, waveform, trig2

    def __read_spike_var(self):
        """
        Read a spike with a variable waveform length

        -------------------------------------------
        Returns the time, waveform and trig2 value.

        The returned objects must be converted to a SpikeTrain then
        added to the Block.

        ID: 29115
        """

        # uint8 -- number of points in spike shape
        numpts = np.fromfile(self._fsrc, dtype=np.uint8, count=1)[0]

        # spike_fixed is the same as spike_var if you don't read the numpts
        # byte and set numpts = 40
        return self.__read_spike_fixed(numpts)

    def __read_spiketrain_indexed(self):
        """
        Read a SpikeTrain

        This is the same as __read_spiketrain_timestamped except it also
        contains the index of the Segment in the dam file.

        The index is stored as an annotation in the SpikeTrain.

        -------------------------------------------------
        Returns a SpikeTrain object with multiple spikes.

        The returned object must be added to the Block.

        ID: 29121
        """

        # int32 -- index of the analogsignalarray in corresponding .dam file
        dama_index = np.fromfile(self._fsrc, dtype=np.int32, count=1)[0]

        # spiketrain_timestamped -- this is based off a spiketrain_timestamped
        spiketrain = self.__read_spiketrain_timestamped()

        # add the property to the dict
        spiketrain.annotations['dama_index'] = dama_index

        return spiketrain

    def __read_spiketrain_timestamped(self):
        """
        Read a SpikeTrain

        This SpikeTrain contains a time stamp for when it was recorded

        The timestamp is stored as an annotation in the SpikeTrain.

        -------------------------------------------------
        Returns a SpikeTrain object with multiple spikes.

        The returned object must be added to the Block.

        ID: 29110
        """

        # float64 -- timeStamp (number of days since dec 30th 1899)
        timestamp = np.fromfile(self._fsrc, dtype=np.double, count=1)[0]

        # convert to datetime object
        timestamp = self._convert_timestamp(timestamp)

        # seq_list -- spike list
        # combine the spikes into a single SpikeTrain
        spiketrain = self._combine_spiketrains(self.__read_list())

        # add the timestamp
        spiketrain.annotations['timestamp'] = timestamp

        return spiketrain

    def __read_unit(self):
        """
        Read all SpikeTrains from a single Segment and Unit

        This is the same as __read_unit_unsorted except it also contains
        information on the spike sorting boundaries.

        ------------------------------------------------------------------
        Returns a single Unit and a list of SpikeTrains from that Unit and
        current Segment, in that order.  The SpikeTrains must be returned
        since it is not possible to determine from the Unit which
        SpikeTrains are from the current Segment.

        The returned objects are already added to the Block.  The
        SpikeTrains must be added to the current Segment.

        ID: 29116
        """

        # same as unsorted Unit
        unit, trains = self.__read_unit_unsorted()

        # float32 * 18 -- Unit boundaries (IEEE 32-bit floats)
        unit.annotations['boundaries'] = [np.fromfile(self._fsrc,
                                                      dtype=np.float32,
                                                      count=18)]

        # uint8 * 9 -- boolean values indicating elliptic feature boundary
        # dimensions
        unit.annotations['elliptic'] = [np.fromfile(self._fsrc,
                                                    dtype=np.uint8,
                                                    count=9)]

        return unit, trains

    def __read_unit_list(self):
        """
        A list of a list of Units

        -----------------------------------------------
        Returns a list of Units modified in the method.

        The returned objects are already added to the Block.
        No ID number:  only called by other methods
        """

        # this is used to figure out which Units to return
        maxunit = 1

        # int16 -- number of time slices
        numelements = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0]

        # {sequence} * numelements -- the number of lists of Units to read
        for i in range(numelements):

            # {skip} = byte * 2 (int16) -- skip 2 bytes
            self._fsrc.seek(2, 1)

            # double
            max_valid = np.fromfile(self._fsrc, dtype=np.double, count=1)[0]

            # int16 -- the number of Units to read
            numunits = np.fromfile(self._fsrc, dtype=np.int16, count=1)[0]

            # update the maximum Unit so far
            maxunit = max(maxunit, numunits + 1)

            # if there aren't enough Units, create them
            # remember we need to skip the UnassignedSpikes Unit
            if numunits > len(self._blk.groups) + 1:
                for ind1 in range(len(self._blk.groups), numunits + 1):
                    unit = Group(name='unit%s' % ind1,
                                 file_origin=self._file_origin,
                                 elliptic=[], boundaries=[],
                                 timestamp=[], max_valid=[])
                    self._blk.groups.append(unit)

            # {Block} * numelements -- Units
            for ind1 in range(numunits):
                # get the Unit with the given index
                # remember we need to skip the UnassignedSpikes Unit
                unit = self._blk.groups[ind1 + 1]

                # {skip} = byte * 2 (int16) -- skip 2 bytes
                self._fsrc.seek(2, 1)

                # int16 -- a multiplier for the elliptic and boundaries
                # properties
                numelements3 = np.fromfile(self._fsrc, dtype=np.int16,
                                           count=1)[0]

                # uint8 * 10 * numelements3 -- boolean values indicating
                # elliptic feature boundary dimensions
                elliptic = np.fromfile(self._fsrc, dtype=np.uint8,
                                       count=10 * numelements3)

                # float32 * 20 * numelements3 -- feature boundaries
                boundaries = np.fromfile(self._fsrc, dtype=np.float32,
                                         count=20 * numelements3)

                unit.annotations['elliptic'].append(elliptic)
                unit.annotations['boundaries'].append(boundaries)
                unit.annotations['max_valid'].append(max_valid)

        return self._blk.groups[1:maxunit]

    def __read_unit_list_timestamped(self):
        """
        A list of a list of Units.

        This is the same as __read_unit_list, except that it also
        has a timestamp.  This is added as an annotation to all Units.

        -----------------------------------------------
        Returns a list of Units modified in the method.

        The returned objects are already added to the Block.

        ID: 29119
        """

        # double -- time zero (number of days since dec 30th 1899)
        timestamp = np.fromfile(self._fsrc, dtype=np.double, count=1)[0]

        # convert to a datetime object
        timestamp = self._convert_timestamp(timestamp)

        # sorter -- this is based off a sorter
        units = self.__read_unit_list()

        for unit in units:
            unit.annotations['timestamp'].append(timestamp)

        return units

    def __read_unit_old(self):
        """
        Read all SpikeTrains from a single Segment and Unit

        This is the same as __read_unit_unsorted except it also contains
        information on the spike sorting boundaries.

        This is an old version of the format that used 48-bit floating-point
        numbers for the boundaries.  These cannot easily be read and so are
        skipped.

        ------------------------------------------------------------------
        Returns a single Unit and a list of SpikeTrains from that Unit and
        current Segment, in that order.  The SpikeTrains must be returned
        since it is not possible to determine from the Unit which
        SpikeTrains are from the current Segment.

        The returned objects are already added to the Block.  The
        SpikeTrains must be added to the current Segment.

        ID: 29107
        """

        # same as Unit
        unit, trains = self.__read_unit_unsorted()

        # bytes * 108 (float48 * 18) -- Unit boundaries (48-bit floating
        # point numbers are not supported so we skip them)
        self._fsrc.seek(108, 1)

        # uint8 * 9 -- boolean values indicating elliptic feature boundary
        # dimensions
        unit.annotations['elliptic'] = np.fromfile(self._fsrc,
                                                   dtype=np.uint8,
                                                   count=9).tolist()

        return unit, trains

    def __read_unit_unsorted(self):
        """
        Read all SpikeTrains from a single Segment and Unit

        This does not contain Unit boundaries.
        ------------------------------------------------------------------
        Returns a single Unit and a list of SpikeTrains from that Unit and
        current Segment, in that order.  The SpikeTrains must be returned
        since it is not possible to determine from the Unit which
        SpikeTrains are from the current Segment.

        The returned objects are already added to the Block.  The
        SpikeTrains must be added to the current Segment.

        ID: 29084
        """

        # {skip} = bytes * 2 (uint16) -- skip two bytes
        self._fsrc.seek(2, 1)

        # uint16 -- number of characters in next string
        numchars = np.fromfile(self._fsrc, dtype=np.uint16, count=1).item()

        # char * numchars -- ID string of Unit
        name = self.__read_str(numchars)

        # int32 -- SpikeTrain length in ms
        # int32 * 4 -- response and spon period boundaries
        parts = np.fromfile(self._fsrc, dtype=np.int32, count=5)
        t_stop = pq.Quantity(parts[0].astype('float32'),
                             units=pq.ms, copy=False)
        respwin = parts[1:]

        # (data_obj) -- list of SpikeTrains
        spikeslists = self._read_by_id()

        # use the Unit if it already exists, otherwise create it
        if name in self._unitdict:
            unit = self._unitdict[name]
        else:
            unit = Group(name=name, file_origin=self._file_origin,
                         elliptic=[], boundaries=[],
                         timestamp=[], max_valid=[])
            self._blk.groups.append(unit)
            self._unitdict[name] = unit

        # convert the individual spikes to SpikeTrains and add them to the
        # Unit
        trains = [self._combine_spiketrains(spikes) for spikes in spikeslists]
        unit.spiketrains.extend(trains)
        for train in trains:
            train.t_stop = t_stop.copy()
            train.annotations['respwin'] = respwin.copy()

        return unit, trains

    def __skip_information(self):
        """
        Read an information sequence.

        This data sequence is skipped both here and in the Matlab
        reference implementation.

        ----------------------
        Returns an empty list.

        Nothing is created so nothing is added to the Block.

        ID: 29113
        """

        # {skip} char * 34 -- display information
        self._fsrc.seek(34, 1)

        return []

    def __skip_information_old(self):
        """
        Read an information sequence.

        This data sequence is skipped both here and in the Matlab
        reference implementation.

        This is an old version of the format.

        ----------------------
        Returns an empty list.

        Nothing is created so nothing is added to the Block.

        ID: 29100
        """

        # {skip} char * 4 -- display information
        self._fsrc.seek(4, 1)

        return []

    # This dictionary maps the numeric data sequence ID codes to the data
    # sequence reading functions.
    #
    # Since functions are first-class objects in Python, the functions
    # returned from this dictionary are directly callable.
    #
    # If new data sequence ID codes are added in the future please add the
    # code here in numeric order and the method above in alphabetical order
    #
    # The naming of any private method may change at any time
    _ID_DICT = {29079: __read_spike_fixed,
                29081: __read_spike_fixed_old,
                29082: __read_list,
                29083: __read_list,
                29084: __read_unit_unsorted,
                29091: __read_list,
                29093: __read_list,
                29099: __read_annotations_old,
                29100: __skip_information_old,
                29106: __read_segment,
                29107: __read_unit_old,
                29109: __read_annotations,
                29110: __read_spiketrain_timestamped,
                29112: __read_segment_list,
                29113: __skip_information,
                29114: __read_segment_list_var,
                29115: __read_spike_var,
                29116: __read_unit,
                29117: __read_segment_list_v8,
                29119: __read_unit_list_timestamped,
                29120: __read_segment_list_v9,
                29121: __read_spiketrain_indexed
                }


def convert_brainwaresrc_timestamp(timestamp,
                                   start_date=datetime(1899, 12, 30)):
    """
    convert_brainwaresrc_timestamp(timestamp, start_date) - convert a
    timestamp in brainware src file units to a python datetime object.

    start_date defaults to 1899.12.30 (ISO format), which is the start date
    used by all BrainWare SRC data Blocks so far.  If manually specified
    it should be a datetime object or any other object that can be added
    to a timedelta object.
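The day-count conversion described in this docstring can be sketched standalone.  The helper name `convert_delphi_timestamp` is hypothetical; it simply mirrors the `start_date + timedelta(days=...)` arithmetic the function performs, assuming the Delphi-style epoch of 1899-12-30:

```python
from datetime import datetime, timedelta


def convert_delphi_timestamp(timestamp, start_date=datetime(1899, 12, 30)):
    # BrainWare (like Delphi and Excel) stores timestamps as fractional
    # days since 1899-12-30; adding a timedelta of that many days
    # recovers the calendar date and time.
    return start_date + timedelta(days=timestamp)


# 2.5 days after 1899-12-30 is noon on 1900-01-01
print(convert_delphi_timestamp(2.5))  # → 1900-01-01 12:00:00
```

Fractional days carry the time of day, so no separate time field is needed in the format.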
""" # datetime + timedelta = datetime again. return start_date + timedelta(days=timestamp) if __name__ == '__main__': # run this when calling the file directly as a benchmark from neo.test.iotest.test_brainwaresrcio import FILES_TO_TEST from neo.test.iotest.common_io_test import url_for_tests from neo.test.iotest.tools import (create_local_temp_dir, download_test_file, get_test_file_full_path, make_all_directories) shortname = BrainwareSrcIO.__name__.lower().strip('io') local_test_dir = create_local_temp_dir(shortname) url = url_for_tests + shortname FILES_TO_TEST.remove('long_170s_1rep_1clust_ch2.src') make_all_directories(FILES_TO_TEST, local_test_dir) download_test_file(FILES_TO_TEST, local_test_dir, url) for path in get_test_file_full_path(ioclass=BrainwareSrcIO, filename=FILES_TO_TEST, directory=local_test_dir): ioobj = BrainwareSrcIO(path) ioobj.read_all_blocks(lazy=False) neo-0.10.0/neo/io/cedio.py0000644000076700000240000000047114066375716015624 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.cedrawio import CedRawIO class CedIO(CedRawIO, BaseFromRaw): __doc__ = CedRawIO.__doc__ def __init__(self, filename, entfile=None, posfile=None): CedRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/elanio.py0000644000076700000240000000211214066375716016002 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.elanrawio import ElanRawIO class ElanIO(ElanRawIO, BaseFromRaw): """ Class for reading data from Elan. Elan is software for studying time-frequency maps of EEG data. Elan is developed in Lyon, France, at INSERM U821 https://elan.lyon.inserm.fr Args: filename (string) : Full path to the .eeg file entfile (string) : Full path to the .ent file (optional). If None, the path to the ent file is inferred from the filename by adding the ".ent" extension to it posfile (string) : Full path to the .pos file (optional). 
If None, the path to the pos file is inferred from the filename by adding the ".pos" extension to it """ _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename, entfile=None, posfile=None): ElanRawIO.__init__(self, filename=filename, entfile=entfile, posfile=posfile) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/elphyio.py0000644000076700000240000046004614077527451016217 0ustar andrewstaff00000000000000""" README ===================================================================================== This is the implementation of the NEO IO for Elphy files. IO dependencies: - NEO - types - numpy - quantities Quick reference: ===================================================================================== Class ElphyIO() with methods read_block() and write_block() are implemented. This classes represent the way to access and produce Elphy files from NEO objects. As regards reading an existing Elphy file, start by initializing a IO class with it: >>> import neo >>> r = neo.io.ElphyIO( filename="Elphy.DAT" ) >>> r Read the file content into NEO object Block: >>> bl = r.read_block() >>> bl Now you can then read all Elphy data as NEO objects: >>> b1.segments [, , , ] >>> bl.segments[0].analogsignals[0] These functions return NEO objects, completely "detached" from the original Elphy file. Changes to the runtime objects will not cause any changes in the file. Having already existing NEO structures, it is possible to write them as an Elphy file. 
For example, given a segment: >>> s = neo.Segment() filled with other NEO structures: >>> import numpy as np >>> import quantities as pq >>> a = AnalogSignal( signal=np.random.rand(300), t_start=42*pq.ms) >>> s.analogsignals.append( a ) and added to a newly created NEO Block: >>> bl = neo.Block() >>> bl.segments.append( s ) Then, it's easy to create an Elphy file: >>> r = neo.io.ElphyIO( filename="ElphyNeoTest.DAT" ) >>> r.write_block( bl ) Author: Thierry Brizzi Domenico Guarino """ # python commons: from datetime import datetime from math import gcd from os import path import re import struct from time import time # note neo.core needs only numpy and quantities import numpy as np import quantities as pq # I need to subclass BaseIO from neo.io.baseio import BaseIO # to import from core from neo.core import (Block, Segment, AnalogSignal, Event, SpikeTrain) # -------------------------------------------------------- # OBJECTS class ElphyScaleFactor: """ Useful to retrieve real values from integer ones that are stored in an Elphy file : ``scale`` : compute the actual value of a sample with this following formula : ``delta`` * value + ``offset`` """ def __init__(self, delta, offset): self.delta = delta self.offset = offset def scale(self, value): return value * self.delta + self.offset class BaseSignal: """ A descriptor storing main signal properties : ``layout`` : the :class:``ElphyLayout` object that extracts data from a file. ``episode`` : the episode in which the signal has been acquired. ``sampling_frequency`` : the sampling frequency of the analog to digital converter. ``sampling_period`` : the sampling period of the analog to digital converter computed from sampling_frequency. ``t_start`` : the start time of the signal acquisition. ``t_stop`` : the end time of the signal acquisition. ``duration`` : the duration of the signal acquisition computed from t_start and t_stop. 
``n_samples`` : the number of sample acquired during the recording computed from the duration and the sampling period. ``name`` : a label to identify the signal. ``data`` : a property triggering data extraction. """ def __init__(self, layout, episode, sampling_frequency, start, stop, name=None): self.layout = layout self.episode = episode self.sampling_frequency = sampling_frequency self.sampling_period = 1 / sampling_frequency self.t_start = start self.t_stop = stop self.duration = self.t_stop - self.t_start self.n_samples = int(self.duration / self.sampling_period) self.name = name @property def data(self): raise NotImplementedError('must be overloaded in subclass') class ElphySignal(BaseSignal): """ Subclass of :class:`BaseSignal` corresponding to Elphy's analog channels : ``channel`` : the identifier of the analog channel providing the signal. ``units`` : an array containing x and y coordinates units. ``x_unit`` : a property to access the x-coordinates unit. ``y_unit`` : a property to access the y-coordinates unit. ``data`` : a property that delegate data extraction to the ``get_signal_data`` function of the ```layout`` object. """ def __init__(self, layout, episode, channel, x_unit, y_unit, sampling_frequency, start, stop, name=None): super().__init__(layout, episode, sampling_frequency, start, stop, name) self.channel = channel self.units = [x_unit, y_unit] def __str__(self): return "{} ep_{} ch_{} [{}, {}]".format( self.layout.file.name, self.episode, self.channel, self.x_unit, self.y_unit) def __repr__(self): return self.__str__() @property def x_unit(self): """ Return the x-coordinate of the signal. """ return self.units[0] @property def y_unit(self): """ Return the y-coordinate of the signal. """ return self.units[1] @property def data(self): return self.layout.get_signal_data(self.episode, self.channel) class ElphyTag(BaseSignal): """ Subclass of :class:`BaseSignal` corresponding to Elphy's tag channels : ``number`` : the identifier of the tag channel. 
``x_unit`` : the unit of the x-coordinate. """ def __init__(self, layout, episode, number, x_unit, sampling_frequency, start, stop, name=None): super().__init__(layout, episode, sampling_frequency, start, stop, name) self.number = number self.units = [x_unit, None] def __str__(self): return "{} : ep_{} tag_ch_{} [{}]".format( self.layout.file.name, self.episode, self.number, self.x_unit) def __repr__(self): return self.__str__() @property def x_unit(self): """ Return the x-coordinate of the signal. """ return self.units[0] @property def data(self): return self.layout.get_tag_data(self.episode, self.number) @property def channel(self): return self.number class ElphyEvent: """ A descriptor that store a set of events properties : ``layout`` : the :class:``ElphyLayout` object that extracts data from a file. ``episode`` : the episode in which the signal has been acquired. ``number`` : the identifier of the channel. ``x_unit`` : the unit of the x-coordinate. ``n_events`` : the number of events. ``name`` : a label to identify the event. ``times`` : a property triggering event times extraction. """ def __init__(self, layout, episode, number, x_unit, n_events, ch_number=None, name=None): self.layout = layout self.episode = episode self.number = number self.x_unit = x_unit self.n_events = n_events self.name = name self.ch_number = ch_number def __str__(self): return "{} : ep_{} evt_ch_{} [{}]".format( self.layout.file.name, self.episode, self.number, self.x_unit) def __repr__(self): return self.__str__() @property def channel(self): return self.number @property def times(self): return self.layout.get_event_data(self.episode, self.number) @property def data(self): return self.times class ElphySpikeTrain(ElphyEvent): """ A descriptor that store spiketrain properties : ``wf_samples`` : number of samples composing waveforms. ``wf_sampling_frequency`` : sampling frequency of waveforms. ``wf_sampling_period`` : sampling period of waveforms. 
``wf_units`` : the units of the x and y coordinates of waveforms. ``t_start`` : the time before the arrival of the spike which corresponds to the starting time of a waveform. ``name`` : a label to identify the event. ``times`` : a property triggering event times extraction. ``waveforms`` : a property triggering waveforms extraction. """ def __init__(self, layout, episode, number, x_unit, n_events, wf_sampling_frequency, wf_samples, unit_x_wf, unit_y_wf, t_start, name=None): super().__init__(layout, episode, number, x_unit, n_events, name) self.wf_samples = wf_samples self.wf_sampling_frequency = wf_sampling_frequency assert wf_sampling_frequency, "bad sampling frequency" self.wf_sampling_period = 1.0 / wf_sampling_frequency self.wf_units = [unit_x_wf, unit_y_wf] self.t_start = t_start @property def x_unit_wf(self): """ Return the x-coordinate of waveforms. """ return self.wf_units[0] @property def y_unit_wf(self): """ Return the y-coordinate of waveforms. """ return self.wf_units[1] @property def times(self): return self.layout.get_spiketrain_data(self.episode, self.number) @property def waveforms(self): return self.layout.get_waveform_data(self.episode, self.number) if self.wf_samples \ else None # -------------------------------------------------------- # BLOCKS class BaseBlock: """ Represent a chunk of file storing metadata or raw data. A convenient class to break down the structure of an Elphy file to several building blocks : ``layout`` : the layout containing the block. ``identifier`` : the label that identified the block. ``size`` : the size of the block. ``start`` : the file index corresponding to the starting byte of the block. ``end`` : the file index corresponding to the ending byte of the block NB : Subclassing this class is a convenient way to set the properties using polymorphism rather than a conditional structure. By this way each :class:`BaseBlock` type know how to iterate through the Elphy file and store interesting data. 
""" def __init__(self, layout, identifier, start, size): self.layout = layout self.identifier = identifier self.size = size self.start = start self.end = self.start + self.size - 1 class ElphyBlock(BaseBlock): """ A subclass of :class:`BaseBlock`. Useful to store the location and size of interesting data within a block : ``parent_block`` : the parent block containing the block. ``header_size`` : the size of the header permitting the identification of the type of the block. ``data_offset`` : the file index located after the block header. ``data_size`` : the size of data located after the header. ``sub_blocks`` : the sub-blocks contained by the block. """ def __init__(self, layout, identifier, start, size, fixed_length=None, size_format="i", parent_block=None): super().__init__(layout, identifier, start, size) # a block may be a sub-block of another block self.parent_block = parent_block # pascal language store strings in 2 different ways # ... first, if in the program the size of the string is # specified (fixed) then the file stores the length # of the string and allocate a number of bytes equal # to the specified size # ... if this size is not specified the length of the # string is also stored but the file allocate dynamically # a number of bytes equal to the actual size of the string l_ident = len(self.identifier) if fixed_length: l_ident += (fixed_length - l_ident) self.header_size = l_ident + 1 + type_dict[size_format] # starting point of data located in the block self.data_offset = self.start + self.header_size self.data_size = self.size - self.header_size # a block may have sub-blocks # it is to subclasses to initialize # this property self.sub_blocks = list() def __repr__(self): return "{} : size = {}, start = {}, end = {}".format( self.identifier, self.size, self.start, self.end) def add_sub_block(self, block): """ Append a block to the sub-block list. 
""" self.sub_blocks.append(block) class FileInfoBlock(ElphyBlock): """ Base class of all subclasses whose the purpose is to extract user file info stored into an Elphy file : ``header`` : the header block relative to the block. ``file`` : the file containing the block. NB : User defined metadata are not really practical. An Elphy script must know the order of metadata storage to know exactly how to retrieve these data. That's why it is necessary to subclass and reproduce elphy script commands to extract metadata relative to a protocol. Consequently managing a new protocol implies to refactor the file info extraction. """ def __init__(self, layout, identifier, start, size, fixed_length=None, size_format="i", parent_block=None): super().__init__(layout, identifier, start, size, fixed_length, size_format, parent_block=parent_block) self.header = None self.file = self.layout.file def get_protocol_and_version(self): """ Return a tuple useful to identify the kind of protocol that has generated a file during data acquisition. """ raise Exception("must be overloaded in a subclass") def get_user_file_info(self): """ Return a dictionary containing all user file info stored in the file. """ raise Exception("must be overloaded in a subclass") def get_sparsenoise_revcor(self): """ Return 'REVCOR' user file info. This method is common to :class:`ClassicFileInfo` and :class:`MultistimFileInfo` because the last one is able to store this kind of metadata. 
""" header = dict() header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['gray_levels'] = read_from_char(self.file, 'h') header['position_x'] = read_from_char(self.file, 'ext') header['position_y'] = read_from_char(self.file, 'ext') header['length'] = read_from_char(self.file, 'ext') header['width'] = read_from_char(self.file, 'ext') header['orientation'] = read_from_char(self.file, 'ext') header['expansion'] = read_from_char(self.file, 'h') header['scotoma'] = read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') # dt_on and dt_off may not exist in old revcor formats rollback = self.file.tell() header['dt_on'] = read_from_char(self.file, 'ext') if header['dt_on'] is None: self.file.seek(rollback) rollback = self.file.tell() header['dt_off'] = read_from_char(self.file, 'ext') if header['dt_off'] is None: self.file.seek(rollback) return header class ClassicFileInfo(FileInfoBlock): """ Extract user file info stored into an Elphy file corresponding to sparse noise (revcor), moving bar and flashbar protocols. 
""" def detect_protocol_from_name(self, path): pattern = r"\d{4}(\d+|\D)\D" codes = { 'r': 'sparsenoise', 'o': 'movingbar', 'f': 'flashbar', 'm': 'multistim' # here just for assertion } filename = path.split(path)[1] match = re.search(pattern, path) if hasattr(match, 'end'): code = codes.get(path[match.end() - 1].lower(), None) assert code != 'm', "multistim file detected" return code elif 'spt' in filename.lower(): return 'spontaneousactivity' else: return None def get_protocol_and_version(self): if self.layout and self.layout.info_block: self.file.seek(self.layout.info_block.data_offset) version = self.get_title() if version in ['REVCOR1', 'REVCOR2', 'REVCOR + PAIRING']: name = "sparsenoise" elif version in ['BARFLASH']: name = "flashbar" elif version in ['ORISTIM', 'ORISTM', 'ORISTM1', 'ORITUN']: name = "movingbar" else: name = self.detect_protocol_from_name(self.file.name) self.file.seek(0) return name, version return None, None def get_title(self): title_length, title = struct.unpack('= 2): name = None version = None else: if center == 2: name = "sparsenoise" elif center == 3: name = "densenoise" elif center == 4: name = "densenoise" elif center == 5: name = "grating" else: name = None version = None self.file.seek(0) return name, version return None, None def get_title(self): title_length = read_from_char(self.file, 'B') title, = struct.unpack('<%ss' % title_length, self.file.read(title_length)) if hasattr(title, 'decode'): title = title.decode() self.file.seek(self.file.tell() + 255 - title_length) return title def get_user_file_info(self): header = dict() if self.layout and self.layout.info_block: # go to the info_block sub_block = self.layout.info_block self.file.seek(sub_block.data_offset) # get the first four parameters acqLGN = read_from_char(self.file, 'i') center = read_from_char(self.file, 'i') surround = read_from_char(self.file, 'i') # store info in the header header['acqLGN'] = acqLGN header['center'] = center header['surround'] = surround if not 
(header['surround'] >= 2): header.update(self.get_center_header(center)) self.file.seek(0) return header def get_center_header(self, code): # get file info corresponding # to the executed protocol # for the center first ... if code == 0: return self.get_sparsenoise_revcor() elif code == 2: return self.get_sparsenoise_center() elif code == 3: return self.get_densenoise_center(True) elif code == 4: return self.get_densenoise_center(False) elif code == 5: return dict() # return self.get_grating_center() else: return dict() def get_surround_header(self, code): # then the surround if code == 2: return self.get_sparsenoise_surround() elif code == 3: return self.get_densenoise_surround(True) elif code == 4: return self.get_densenoise_surround(False) elif code == 5: raise NotImplementedError() return self.get_grating_center() else: return dict() def get_center_surround(self, center, surround): header = dict() header['stim_center'] = self.get_center_header(center) header['stim_surround'] = self.get_surround_header(surround) return header def get_sparsenoise_center(self): header = dict() header['title'] = self.get_title() header['number_of_sequences'] = read_from_char(self.file, 'i') header['pretrigger_duration'] = read_from_char(self.file, 'ext') header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['gray_levels'] = read_from_char(self.file, 'h') header['position_x'] = read_from_char(self.file, 'ext') header['position_y'] = read_from_char(self.file, 'ext') header['length'] = read_from_char(self.file, 'ext') header['width'] = read_from_char(self.file, 'ext') header['orientation'] = read_from_char(self.file, 'ext') header['expansion'] = read_from_char(self.file, 'h') header['scotoma'] = read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') header['luminance_1'] = read_from_char(self.file, 'ext') header['luminance_2'] = read_from_char(self.file, 'ext') header['dt_count'] = read_from_char(self.file, 
'i') dt_array = list() for _ in range(0, header['dt_count']): dt_array.append(read_from_char(self.file, 'ext')) header['dt_on'] = dt_array if dt_array else None header['dt_off'] = read_from_char(self.file, 'ext') return header def get_sparsenoise_surround(self): header = dict() header['title_surround'] = self.get_title() header['gap'] = read_from_char(self.file, 'ext') header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['gray_levels'] = read_from_char(self.file, 'h') header['expansion'] = read_from_char(self.file, 'h') header['scotoma'] = read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') header['luminance_1'] = read_from_char(self.file, 'ext') header['luminance_2'] = read_from_char(self.file, 'ext') header['dt_on'] = read_from_char(self.file, 'ext') header['dt_off'] = read_from_char(self.file, 'ext') return header def get_densenoise_center(self, is_binary): header = dict() header['stimulus_type'] = "B" if is_binary else "T" header['title'] = self.get_title() _tmp = read_from_char(self.file, 'i') header['number_of_sequences'] = _tmp if _tmp < 0 else None rollback = self.file.tell() header['stimulus_duration'] = read_from_char(self.file, 'ext') if header['stimulus_duration'] is None: self.file.seek(rollback) header['pretrigger_duration'] = read_from_char(self.file, 'ext') header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['position_x'] = read_from_char(self.file, 'ext') header['position_y'] = read_from_char(self.file, 'ext') header['length'] = read_from_char(self.file, 'ext') header['width'] = read_from_char(self.file, 'ext') header['orientation'] = read_from_char(self.file, 'ext') header['expansion'] = read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') header['luminance_1'] = read_from_char(self.file, 'ext') header['luminance_2'] = read_from_char(self.file, 'ext') header['dt_on'] = 
read_from_char(self.file, 'ext') header['dt_off'] = read_from_char(self.file, 'ext') return header def get_densenoise_surround(self, is_binary): header = dict() header['title_surround'] = self.get_title() header['gap'] = read_from_char(self.file, 'ext') header['n_div_x'] = read_from_char(self.file, 'h') header['n_div_y'] = read_from_char(self.file, 'h') header['expansion'] = read_from_char(self.file, 'h') header['seed'] = read_from_char(self.file, 'h') header['luminance_1'] = read_from_char(self.file, 'ext') header['luminance_2'] = read_from_char(self.file, 'ext') header['dt_on'] = read_from_char(self.file, 'ext') header['dt_off'] = read_from_char(self.file, 'ext') return header def get_grating_center(self): pass def get_grating_surround(self): pass class Header(ElphyBlock): """ A convenient subclass of :class:`Block` to store Elphy file header properties. NB : Subclassing this class is a convenient way to set the properties of the header using polymorphism rather than a conditional structure. """ def __init__(self, layout, identifier, size, fixed_length=None, size_format="i"): super().__init__(layout, identifier, 0, size, fixed_length, size_format) class Acquis1Header(Header): """ A subclass of :class:`Header` used to identify the 'ACQUIS1/GS/1991' format. Whereas more recent format, the header contains all data relative to episodes, channels and traces : ``n_channels`` : the number of acquisition channels. ``nbpt`` and ``nbptEx`` : parameters useful to compute the number of samples by episodes. ``tpData`` : the data format identifier used to compute sample size. ``x_unit`` : the x-coordinate unit for all channels in an episode. ``y_units`` : an array containing y-coordinate units for each channel in the episode. ``dX`` and ``X0`` : the scale factors necessary to retrieve the actual times relative to each sample in a channel. ``dY_ar`` and ``Y0_ar``: arrays of scale factors necessary to retrieve the actual values relative to samples. 
``continuous`` : a boolean telling if the file has been acquired in continuous mode. ``preSeqI`` : the size in bytes of the data preceding raw data. ``postSeqI`` : the size in bytes of the data preceding raw data. ``dat_length`` : the length in bytes of the data in the file. ``sample_size`` : the size in bytes of a sample. ``n_samples`` : the number of samples. ``ep_size`` : the size in bytes of an episode. ``n_episodes`` : the number of recording sequences store in the file. NB : The size is read from the file, the identifier is a string containing 15 characters and the size is encoded as small integer. See file 'FicDefAc1.pas' to identify the parsed parameters. """ def __init__(self, layout): fileobj = layout.file super().__init__(layout, "ACQUIS1/GS/1991", 1024, 15, "h") # parse the header to store interesting data about episodes and channels fileobj.seek(18) # extract episode properties n_channels = read_from_char(fileobj, 'B') assert not ((n_channels < 1) or (n_channels > 16)), "bad number of channels" nbpt = read_from_char(fileobj, 'h') l_xu, x_unit = struct.unpack('= self.end: tagShift = 0 else: tagShift = read_from_char(layout.file, 'B') # setup object properties self.n_channels = n_channels self.nbpt = nbpt self.tpData = tpData self.x_unit = xu[0:l_xu] self.dX = dX self.X0 = X0 self.y_units = y_units[0:n_channels] self.dY_ar = dY_ar[0:n_channels] self.Y0_ar = Y0_ar[0:n_channels] self.continuous = continuous if self.continuous: self.preSeqI = 0 self.postSeqI = 0 else: self.preSeqI = preSeqI self.postSeqI = postSeqI self.varEp = varEp self.withTags = withTags if not self.withTags: self.tagShift = 0 else: if tagShift == 0: self.tagShift = 4 else: self.tagShift = tagShift self.sample_size = type_dict[types[self.tpData]] self.dat_length = self.layout.file_size - self.layout.data_offset if self.continuous: if self.n_channels > 0: self.n_samples = self.dat_length / (self.n_channels * self.sample_size) else: self.n_samples = 0 else: self.n_samples = self.nbpt 
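        # Illustrative arithmetic (not from the original source, values are
        # made up): for an episode with n_channels = 4, sample_size = 2 bytes,
        # n_samples = 1000 and no pre/post data, the episode size computed
        # below is 0 + 0 + 1000 * 2 * 4 = 8000 bytes, so a 40000-byte data
        # section holds 40000 / 8000 = 5 episodes. The two attributes set
        # next encode exactly this computation.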
self.ep_size = (self.preSeqI + self.postSeqI + self.n_samples * self.sample_size * self.n_channels) self.n_episodes = self.dat_length / self.ep_size if (self.n_samples != 0) else 0 class DAC2GSEpisodeBlock(ElphyBlock): """ Subclass of :class:`Block` useful to store data corresponding to 'DAC2SEQ' blocks stored in the DAC2/GS/2000 format. ``n_channels`` : the number of acquisition channels. ``nbpt`` : the number of samples by episodes. ``tpData`` : the data format identifier used to compute the sample size. ``x_unit`` : the x-coordinate unit for all channels in an episode. ``y_units`` : an array containing y-coordinate units for each channel in the episode. ``dX`` and ``X0`` : the scale factors necessary to retrieve the actual times relative to each sample in a channel. ``dY_ar`` and ``Y0_ar``: arrays of scale factors necessary to retrieve the actual values relative to samples. ``postSeqI`` : the size in bytes of the data preceding raw data. NB : see file 'FdefDac2.pas' to identify the parsed parameters. """ def __init__(self, layout, identifier, start, size, fixed_length=None, size_format="i"): main = layout.main_block n_channels, nbpt, tpData, postSeqI = struct.unpack(' 0] for data_block in blocks: self.file.seek(data_block.start) raw = self.file.read(data_block.size)[0:expected_size] databytes = np.frombuffer(raw, dtype=dtype) chunks.append(databytes) # concatenate all chunks and return # the specified slice if len(chunks) > 0: databytes = np.concatenate(chunks) return databytes[start:end] else: return np.array([]) def reshape_bytes(self, databytes, reshape, datatypes, order='<'): """ Reshape a numpy array containing a set of databytes. 
""" assert datatypes and len(datatypes) == len(reshape), "datatypes are not well defined" l_bytes = len(databytes) # create the mask for each shape shape_mask = list() for shape in reshape: for _ in np.arange(1, shape + 1): shape_mask.append(shape) # create a set of masks to extract data bit_masks = list() for shape in reshape: bit_mask = list() for value in shape_mask: bit = 1 if (value == shape) else 0 bit_mask.append(bit) bit_masks.append(np.array(bit_mask)) # extract data n_samples = l_bytes / np.sum(reshape) data = np.empty([len(reshape), n_samples], dtype=(int, int)) for index, bit_mask in enumerate(bit_masks): tmp = self.filter_bytes(databytes, bit_mask) tp = '{}{}{}'.format(order, datatypes[index], reshape[index]) data[index] = np.frombuffer(tmp, dtype=tp) return data.T def filter_bytes(self, databytes, bit_mask): """ Detect from a bit mask which bits to keep to recompose the signal. """ n_bytes = len(databytes) mask = np.ones(n_bytes, dtype=int) np.putmask(mask, mask, bit_mask) to_keep = np.where(mask > 0)[0] return databytes.take(to_keep) def load_channel_data(self, ep, ch): """ Return a numpy array containing the list of bytes corresponding to the specified episode and channel. """ # memorise the sample size and symbol sample_size = self.sample_size(ep, ch) sample_symbol = self.sample_symbol(ep, ch) # create a bit mask to define which # sample to keep from the file bit_mask = self.create_bit_mask(ep, ch) # load all bytes contained in an episode data_blocks = self.get_data_blocks(ep) databytes = self.load_bytes(data_blocks) raw = self.filter_bytes(databytes, bit_mask) # reshape bytes from the sample size dt = np.dtype(numpy_map[sample_symbol]) dt.newbyteorder('<') return np.frombuffer(raw.reshape([int(len(raw) / sample_size), sample_size]), dt) def apply_op(self, np_array, value, op_type): """ A convenient function to apply an operator over all elements of a numpy array. 
""" if op_type == "shift_right": return np_array >> value elif op_type == "shift_left": return np_array << value elif op_type == "mask": return np_array & value else: return np_array def get_tag_mask(self, tag_ch, tag_mode): """ Return a mask useful to retrieve bits that encode a tag channel. """ if tag_mode == 1: tag_mask = 0b01 if (tag_ch == 1) else 0b10 elif tag_mode in [2, 3]: ar_mask = np.zeros(16, dtype=int) ar_mask[tag_ch - 1] = 1 st = "0b" + ''.join(np.array(np.flipud(ar_mask), dtype=str)) tag_mask = eval(st) return tag_mask def load_encoded_tags(self, ep, tag_ch): """ Return a numpy array containing bytes corresponding to the specified episode and channel. """ tag_mode = self.tag_mode(ep) tag_mask = self.get_tag_mask(tag_ch, tag_mode) if tag_mode in [1, 2]: # digidata or itc mode # available for all formats ch = self.get_channel_for_tags(ep) raw = self.load_channel_data(ep, ch) return self.apply_op(raw, tag_mask, "mask") elif tag_mode == 3: # cyber k mode # only available for DAC2 objects format # store bytes corresponding to the blocks # containing tags in a numpy array and reshape # it to have a set of tuples (time, value) ck_blocks = self.get_blocks_of_type(ep, 'RCyberTag') databytes = self.load_bytes(ck_blocks) raw = self.reshape_bytes(databytes, reshape=(4, 2), datatypes=('u', 'u'), order='<') # keep only items that are compatible # with the specified tag channel raw[:, 1] = self.apply_op(raw[:, 1], tag_mask, "mask") # computing numpy.diff is useful to know # how many times a value is maintained # and necessary to reconstruct the # compressed signal ... repeats = np.array(np.diff(raw[:, 0]), dtype=int) data = np.repeat(raw[:-1, 1], repeats, axis=0) # ... note that there is always # a transition at t=0 for synchronisation # purpose, consequently it is not necessary # to complete with zeros when the first # transition arrive ... return data def load_encoded_data(self, ep, ch): """ Get encoded value of raw data from the elphy file. 
""" tag_shift = self.tag_shift(ep) data = self.load_channel_data(ep, ch) if tag_shift: return self.apply_op(data, tag_shift, "shift_right") else: return data def get_signal_data(self, ep, ch): """ Return a numpy array containing all samples of a signal, acquired on an Elphy analog channel, formatted as a list of (time, value) tuples. """ # get data from the file y_data = self.load_encoded_data(ep, ch) x_data = np.arange(0, len(y_data)) # create a recarray data = np.recarray(len(y_data), dtype=[('x', b_float), ('y', b_float)]) # put in the recarray the scaled data x_factors = self.x_scale_factors(ep, ch) y_factors = self.y_scale_factors(ep, ch) data['x'] = x_factors.scale(x_data) data['y'] = y_factors.scale(y_data) return data def get_tag_data(self, ep, tag_ch): """ Return a numpy array containing all samples of a signal, acquired on an Elphy tag channel, formatted as a list of (time, value) tuples. """ # get data from the file y_data = self.load_encoded_tags(ep, tag_ch) x_data = np.arange(0, len(y_data)) # create a recarray data = np.recarray(len(y_data), dtype=[('x', b_float), ('y', b_int)]) # put in the recarray the scaled data factors = self.x_tag_scale_factors(ep) data['x'] = factors.scale(x_data) data['y'] = y_data return data class Acquis1Layout(ElphyLayout): """ A subclass of :class:`ElphyLayout` to know how the 'ACQUIS1/GS/1991' format is organised. Extends :class:`ElphyLayout` to store the offset used to retrieve directly raw data : ``data_offset`` : an offset to jump directly to the raw data. 
""" def __init__(self, fileobj, data_offset): super().__init__(fileobj) self.data_offset = data_offset self.data_blocks = None def get_blocks_end(self): return self.data_offset def is_continuous(self): return self.header.continuous def get_episode_blocks(self): raise NotImplementedError() def set_info_block(self): i_blks = self.get_blocks_of_type('USER INFO') assert len(i_blks) < 2, 'too many info blocks' if len(i_blks): self.info_block = i_blks[0] def set_data_blocks(self): data_blocks = list() size = self.header.n_samples * self.header.sample_size * self.header.n_channels for ep in range(0, self.header.n_episodes): start = self.data_offset + ep * self.header.ep_size + self.header.preSeqI data_blocks.append(DummyDataBlock(self, 'Acquis1Data', start, size)) self.data_blocks = data_blocks def get_data_blocks(self, ep): return [self.data_blocks[ep - 1]] @property def n_episodes(self): return self.header.n_episodes def n_channels(self, episode): return self.header.n_channels def n_tags(self, episode): return 0 def tag_mode(self, ep): return 0 def tag_shift(self, ep): return 0 def get_channel_for_tags(self, ep): return None @property def no_analog_data(self): return True if (self.n_episodes == 0) else self.header.no_analog_data def sample_type(self, ep, ch): return self.header.tpData def sampling_period(self, ep, ch): return self.header.dX def n_samples(self, ep, ch): return self.header.n_samples def x_tag_scale_factors(self, ep): return ElphyScaleFactor( self.header.dX, self.header.X0 ) def x_scale_factors(self, ep, ch): return ElphyScaleFactor( self.header.dX, self.header.X0 ) def y_scale_factors(self, ep, ch): dY = self.header.dY_ar[ch - 1] Y0 = self.header.Y0_ar[ch - 1] # TODO: see why this kind of exception exists if dY is None or Y0 is None: raise Exception('bad Y-scale factors for episode {} channel {}'.format(ep, ch)) return ElphyScaleFactor(dY, Y0) def x_unit(self, ep, ch): return self.header.x_unit def y_unit(self, ep, ch): return self.header.y_units[ch - 1] 
@property def ep_size(self): return self.header.ep_size @property def file_duration(self): return self.header.dX * self.n_samples def get_tag(self, episode, tag_channel): return None def create_channel_mask(self, ep): return np.arange(1, self.header.n_channels + 1) class DAC2GSLayout(ElphyLayout): """ A subclass of :class:`ElphyLayout` to know how the 'DAC2 / GS / 2000' format is organised. Extends :class:`ElphyLayout` to store the offset used to retrieve directly raw data : ``data_offset`` : an offset to jump directly after the 'MAIN' block where 'DAC2SEQ' blocks start. ``main_block```: a shortcut to access 'MAIN' block. ``episode_blocks`` : a shortcut to access blocks corresponding to episodes. """ def __init__(self, fileobj, data_offset): super().__init__(fileobj) self.data_offset = data_offset self.main_block = None self.episode_blocks = None def get_blocks_end(self): return self.file_size # data_offset def is_continuous(self): main_block = self.main_block return main_block.continuous if main_block else False def get_episode_blocks(self): raise NotImplementedError() def set_main_block(self): main_block = self.get_blocks_of_type('MAIN') self.main_block = main_block[0] if main_block else None def set_episode_blocks(self): ep_blocks = self.get_blocks_of_type('DAC2SEQ') self.episode_blocks = ep_blocks if ep_blocks else None def set_info_block(self): i_blks = self.get_blocks_of_type('USER INFO') assert len(i_blks) < 2, "too many info blocks" if len(i_blks): self.info_block = i_blks[0] def set_data_blocks(self): data_blocks = list() identifier = 'DAC2GSData' size = self.main_block.n_samples * self.main_block.sample_size * self.main_block.n_channels if not self.is_continuous(): blocks = self.get_blocks_of_type('DAC2SEQ') for block in blocks: start = block.start + self.main_block.preSeqI data_blocks.append(DummyDataBlock(self, identifier, start, size)) else: start = self.blocks[-1].end + 1 + self.main_block.preSeqI data_blocks.append(DummyDataBlock(self, identifier, 
start, size)) self.data_blocks = data_blocks def get_data_blocks(self, ep): return [self.data_blocks[ep - 1]] def episode_block(self, ep): return self.main_block if self.is_continuous() else self.episode_blocks[ep - 1] def tag_mode(self, ep): return 1 if self.main_block.withTags else 0 def tag_shift(self, ep): return self.main_block.tagShift def get_channel_for_tags(self, ep): return 1 def sample_type(self, ep, ch): return self.main_block.tpData def sample_size(self, ep, ch): size = super().sample_size(ep, ch) assert size == 2, "sample size is always 2 bytes for DAC2/GS/2000 format" return size def sampling_period(self, ep, ch): block = self.episode_block(ep) return block.dX def x_tag_scale_factors(self, ep): block = self.episode_block(ep) return ElphyScaleFactor( block.dX, block.X0, ) def x_scale_factors(self, ep, ch): block = self.episode_block(ep) return ElphyScaleFactor( block.dX, block.X0, ) def y_scale_factors(self, ep, ch): block = self.episode_block(ep) return ElphyScaleFactor( block.dY_ar[ch - 1], block.Y0_ar[ch - 1] ) def x_unit(self, ep, ch): block = self.episode_block(ep) return block.x_unit def y_unit(self, ep, ch): block = self.episode_block(ep) return block.y_units[ch - 1] def n_samples(self, ep, ch): return self.main_block.n_samples def ep_size(self, ep): return self.main_block.ep_size @property def n_episodes(self): return self.main_block.n_episodes def n_channels(self, episode): return self.main_block.n_channels def n_tags(self, episode): return 2 if self.main_block.withTags else 0 @property def file_duration(self): return self.main_block.dX * self.n_samples def get_tag(self, episode, tag_channel): assert episode in range(1, self.n_episodes + 1) # there are none or 2 tag channels if self.tag_mode(episode) == 1: assert tag_channel in range(1, 3), "DAC2/GS/2000 format support only 2 tag channels" block = self.episode_block(episode) t_stop = self.main_block.n_samples * block.dX return ElphyTag(self, episode, tag_channel, block.x_unit, 1.0 / block.dX, 
                            0, t_stop)
        else:
            return None

    def n_tag_samples(self, ep, tag_channel):
        return self.main_block.n_samples

    def get_tag_data(self, episode, tag_channel):
        # memorise some useful properties
        block = self.episode_block(episode)
        sample_size = self.sample_size(episode, tag_channel)
        sample_symbol = self.sample_symbol(episode, tag_channel)
        # create a bit mask to define which
        # sample to keep from the file
        channel_mask = self.create_channel_mask(episode)
        bit_mask = self.create_bit_mask(channel_mask, 1)
        # get bytes from the file
        data_block = self.data_blocks[episode - 1]
        n_bytes = data_block.size
        self.file.seek(data_block.start)
        databytes = np.frombuffer(self.file.read(n_bytes), dtype='<i1')
        # keep only the bytes that belong to the tag channel; the dtype and
        # the three masking lines below were reconstructed to mirror
        # ElphyLayout.filter_bytes, as the original text was corrupted here
        mask = np.ones(len(databytes), dtype=int)
        np.putmask(mask, mask, bit_mask)
        to_keep = np.where(mask > 0)[0]
        raw = databytes.take(to_keep)
        # integer division: the result is a reshape dimension
        raw = raw.reshape([len(raw) // sample_size, sample_size])
        # create a recarray containing data
        # NB: dtype.newbyteorder returns a new dtype and must be assigned
        # back (the original call discarded its result)
        dt = np.dtype(numpy_map[sample_symbol]).newbyteorder('<')
        tag_mask = 0b01 if (tag_channel == 1) else 0b10
        y_data = np.frombuffer(raw, dt) & tag_mask
        x_data = np.arange(0, len(y_data)) * block.dX + block.X0
        data = np.recarray(len(y_data), dtype=[('x', b_float), ('y', b_int)])
        data['x'] = x_data
        data['y'] = y_data
        return data

    def create_channel_mask(self, ep):
        return np.arange(1, self.main_block.n_channels + 1)


class DAC2Layout(ElphyLayout):
    """
    A subclass of :class:`ElphyLayout` to know
    how the Elphy format is organised.

    Whereas other formats store raw data at the end of the file,
    the 'DAC2 objects' format spreads them over multiple blocks :

    ``episode_blocks`` : a shortcut to access blocks
    corresponding to episodes.
""" def __init__(self, fileobj): super().__init__(fileobj) self.episode_blocks = None def get_blocks_end(self): return self.file_size def is_continuous(self): ep_blocks = [k for k in self.blocks if k.identifier.startswith('B_Ep')] if ep_blocks: ep_block = ep_blocks[0] ep_sub_block = ep_block.sub_blocks[0] return ep_sub_block.continuous else: return False def set_episode_blocks(self): self.episode_blocks = [k for k in self.blocks if str(k.identifier).startswith('B_Ep')] def set_info_block(self): # in fact the file info are contained into a single sub-block with an USR identifier i_blks = self.get_blocks_of_type('B_Finfo') assert len(i_blks) < 2, "too many info blocks" if len(i_blks): i_blk = i_blks[0] sub_blocks = i_blk.sub_blocks if len(sub_blocks): self.info_block = sub_blocks[0] def set_data_blocks(self): data_blocks = list() blocks = self.get_blocks_of_type('RDATA') for block in blocks: start = block.data_start size = block.end + 1 - start data_blocks.append(DummyDataBlock(self, 'RDATA', start, size)) self.data_blocks = data_blocks def get_data_blocks(self, ep): return self.group_blocks_of_type(ep, 'RDATA') def group_blocks_of_type(self, ep, identifier): ep_blocks = list() blocks = [k for k in self.get_blocks_stored_in_episode(ep) if k.identifier == identifier] for block in blocks: start = block.data_start size = block.end + 1 - start ep_blocks.append(DummyDataBlock(self, identifier, start, size)) return ep_blocks def get_blocks_stored_in_episode(self, ep): data_blocks = [k for k in self.blocks if k.identifier == 'RDATA'] n_ep = self.n_episodes blk_1 = self.episode_block(ep) blk_2 = self.episode_block((ep + 1) % n_ep) i_1 = self.blocks.index(blk_1) i_2 = self.blocks.index(blk_2) if (blk_1 == blk_2) or (i_2 < i_1): return [k for k in data_blocks if self.blocks.index(k) > i_1] else: return [k for k in data_blocks if self.blocks.index(k) in range(i_1, i_2)] def set_cyberk_blocks(self): ck_blocks = list() blocks = self.get_blocks_of_type('RCyberTag') for block in 
blocks: start = block.data_start size = block.end + 1 - start ck_blocks.append(DummyDataBlock(self, 'RCyberTag', start, size)) self.ck_blocks = ck_blocks def episode_block(self, ep): return self.episode_blocks[ep - 1] @property def n_episodes(self): return len(self.episode_blocks) def analog_index(self, episode): """ Return indices relative to channels used for analog signals. """ block = self.episode_block(episode) tag_mode = block.ep_block.tag_mode an_index = np.where(np.array(block.ks_block.k_sampling) > 0) if tag_mode == 2: an_index = an_index[:-1] return an_index def n_channels(self, episode): """ Return the number of channels used for analog signals but also events. NB : in Elphy this 2 kinds of channels are not differenciated. """ block = self.episode_block(episode) tag_mode = block.ep_block.tag_mode n_channels = len(block.ks_block.k_sampling) return n_channels if tag_mode != 2 else n_channels - 1 def n_tags(self, episode): block = self.episode_block(episode) tag_mode = block.ep_block.tag_mode tag_map = {0: 0, 1: 2, 2: 16, 3: 16} return tag_map.get(tag_mode, 0) def n_events(self, episode): """ Return the number of channels dedicated to events. """ block = self.episode_block(episode) return block.ks_block.k_sampling.count(0) def n_spiketrains(self, episode): spk_blocks = [k for k in self.blocks if k.identifier == 'RSPK'] return spk_blocks[0].n_evt_channels if spk_blocks else 0 def sub_sampling(self, ep, ch): """ Return the sub-sampling factor for the specified episode and channel. 
""" block = self.episode_block(ep) return block.ks_block.k_sampling[ch - 1] if block.ks_block else 1 def aggregate_size(self, block, ep): ag_size = 0 for ch in range(1, len(block.ks_block.k_sampling)): if block.ks_block.k_sampling[ch - 1] != 0: ag_size += self.sample_size(ep, ch) return ag_size def n_samples(self, ep, ch): block = self.episode_block(ep) if not block.ep_block.continuous: return block.ep_block.nbpt / self.sub_sampling(ep, ch) else: # for continuous case there isn't any place # in the file that contains the number of # samples unlike the episode case ... data_blocks = self.get_data_blocks(ep) total_size = np.sum([k.size for k in data_blocks]) # count the number of samples in an # aggregate and compute its size in order # to determine the size of an aggregate ag_count = self.aggregate_sample_count(block) ag_size = self.aggregate_size(block, ep) n_ag = total_size / ag_size # the number of samples is equal # to the number of aggregates ... n_samples = n_ag n_chunks = total_size % ag_size # ... but not when there exists # a incomplete aggregate at the # end of the file, consequently # the preeceeding computed number # of samples must be incremented # by one only if the channel map # to a sample in the last aggregate # ... 
maybe this last part should be # deleted because the n_chunks is always # null in continuous mode if n_chunks: last_ag_size = total_size - n_ag * ag_count size = 0 for i in range(0, ch): size += self.sample_size(ep, i + 1) if size <= last_ag_size: n_samples += 1 return n_samples def sample_type(self, ep, ch): block = self.episode_block(ep) return block.kt_block.k_types[ch - 1] if block.kt_block else block.ep_block.tpData def sampling_period(self, ep, ch): block = self.episode_block(ep) return block.ep_block.dX * self.sub_sampling(ep, ch) def x_tag_scale_factors(self, ep): block = self.episode_block(ep) return ElphyScaleFactor( block.ep_block.dX, block.ep_block.X0 ) def x_scale_factors(self, ep, ch): block = self.episode_block(ep) return ElphyScaleFactor( block.ep_block.dX * block.ks_block.k_sampling[ch - 1], block.ep_block.X0, ) def y_scale_factors(self, ep, ch): block = self.episode_block(ep) return ElphyScaleFactor( block.ch_block.dY_ar[ch - 1], block.ch_block.Y0_ar[ch - 1] ) def x_unit(self, ep, ch): block = self.episode_block(ep) return block.ep_block.x_unit def y_unit(self, ep, ch): block = self.episode_block(ep) return block.ch_block.y_units[ch - 1] def tag_mode(self, ep): block = self.episode_block(ep) return block.ep_block.tag_mode def tag_shift(self, ep): block = self.episode_block(ep) return block.ep_block.tag_shift def get_channel_for_tags(self, ep): block = self.episode_block(ep) tag_mode = self.tag_mode(ep) if tag_mode == 1: ks = np.array(block.ks_block.k_sampling) mins = np.where(ks == ks.min())[0] + 1 return mins[0] elif tag_mode == 2: return block.ep_block.n_channels else: return None def aggregate_sample_count(self, block): """ Return the number of sample in an aggregate. 
""" # compute the least common multiple # for channels having block.ks_block.k_sampling[ch] > 0 lcm0 = 1 for i in range(0, block.ep_block.n_channels): if block.ks_block.k_sampling[i] > 0: lcm0 = least_common_multiple(lcm0, block.ks_block.k_sampling[i]) # sum quotients lcm / KSampling count = 0 for i in range(0, block.ep_block.n_channels): if block.ks_block.k_sampling[i] > 0: count += int(lcm0 / block.ks_block.k_sampling[i]) return count def create_channel_mask(self, ep): """ Return the minimal pattern of channel numbers representing the succession of channels in the multiplexed data. It is useful to do the mapping between a sample stored in the file and its relative channel. NB : This function has been converted from the 'TseqBlock.BuildMask' method of the file 'ElphyFormat.pas' stored in Elphy source code. """ block = self.episode_block(ep) ag_count = self.aggregate_sample_count(block) mask_ar = np.zeros(ag_count, dtype='i') ag_size = 0 i = 0 k = 0 while k < ag_count: for j in range(0, block.ep_block.n_channels): if (block.ks_block.k_sampling[j] != 0) and (i % block.ks_block.k_sampling[j] == 0): mask_ar[k] = j + 1 ag_size += self.sample_size(ep, j + 1) k += 1 if k >= ag_count: break i += 1 return mask_ar def get_signal(self, episode, channel): block = self.episode_block(episode) k_sampling = np.array(block.ks_block.k_sampling) evt_channels = np.where(k_sampling == 0)[0] if channel not in evt_channels: return super().get_signal(episode, channel) else: k_sampling[channel - 1] = -1 return self.get_event(episode, channel, k_sampling) def get_tag(self, episode, tag_channel): """ Return a :class:`ElphyTag` which is a descriptor of the specified event channel. 
""" assert episode in range(1, self.n_episodes + 1) # there are none, 2 or 16 tag # channels depending on tag_mode tag_mode = self.tag_mode(episode) if tag_mode: block = self.episode_block(episode) x_unit = block.ep_block.x_unit # verify the validity of the tag channel if tag_mode == 1: assert tag_channel in range( 1, 3), "Elphy format support only 2 tag channels for tag_mode == 1" elif tag_mode == 2: assert tag_channel in range( 1, 17), "Elphy format support only 16 tag channels for tag_mode == 2" elif tag_mode == 3: assert tag_channel in range( 1, 17), "Elphy format support only 16 tag channels for tag_mode == 3" smp_period = block.ep_block.dX smp_freq = 1.0 / smp_period if tag_mode != 3: ch = self.get_channel_for_tags(episode) n_samples = self.n_samples(episode, ch) t_stop = (n_samples - 1) * smp_freq else: # get the max of n_samples multiplied by the sampling # period done on every analog channels in order to avoid # the selection of a channel without concrete signals t_max = list() for ch in self.analog_index(episode): n_samples = self.n_samples(episode, ch) factors = self.x_scale_factors(episode, ch) chtime = n_samples * factors.delta t_max.append(chtime) time_max = max(t_max) # as (n_samples_tag - 1) * dX_tag # and time_max = n_sample_tag * dX_tag # it comes the following duration t_stop = time_max - smp_period return ElphyTag(self, episode, tag_channel, x_unit, smp_freq, 0, t_stop) else: return None def get_event(self, ep, ch, marked_ks): """ Return a :class:`ElphyEvent` which is a descriptor of the specified event channel. 
""" assert ep in range(1, self.n_episodes + 1) assert ch in range(1, self.n_channels + 1) # find the event channel number evt_channel = np.where(marked_ks == -1)[0][0] assert evt_channel in range(1, self.n_events(ep) + 1) block = self.episode_block(ep) ep_blocks = self.get_blocks_stored_in_episode(ep) evt_blocks = [k for k in ep_blocks if k.identifier == 'REVT'] n_events = np.sum([k.n_events[evt_channel - 1] for k in evt_blocks], dtype=int) x_unit = block.ep_block.x_unit return ElphyEvent(self, ep, evt_channel, x_unit, n_events, ch_number=ch) def load_encoded_events(self, episode, evt_channel, identifier): """ Return times stored as a 4-bytes integer in the specified event channel. """ data_blocks = self.group_blocks_of_type(episode, identifier) ep_blocks = self.get_blocks_stored_in_episode(episode) evt_blocks = [k for k in ep_blocks if k.identifier == identifier] # compute events on each channel n_events = np.sum([k.n_events for k in evt_blocks], dtype=int, axis=0) pre_events = np.sum(n_events[0:evt_channel - 1], dtype=int) start = pre_events end = start + n_events[evt_channel - 1] expected_size = 4 * np.sum(n_events, dtype=int) return self.load_bytes(data_blocks, dtype=' 0: name = names[episode - 1] start = name.size + 1 - name.data_size + 1 end = name.end - name.start + 1 chars = self.load_bytes([name], dtype='uint8', start=start, end=end, expected_size=name.size).tolist() # print "chars[%s:%s]: %s" % (start,end,chars) episode_name = ''.join([chr(k) for k in chars]) return episode_name def get_event_data(self, episode, evt_channel): """ Return times contained in the specified event channel. This function is triggered when the 'times' property of an :class:`ElphyEvent` descriptor instance is accessed. 
""" times = self.load_encoded_events(episode, evt_channel, "REVT") block = self.episode_block(episode) return times * block.ep_block.dX / len(block.ks_block.k_sampling) def get_spiketrain(self, episode, electrode_id): """ Return a :class:`Spike` which is a descriptor of the specified spike channel. """ assert episode in range(1, self.n_episodes + 1) assert electrode_id in range(1, self.n_spiketrains(episode) + 1) # get some properties stored in the episode sub-block block = self.episode_block(episode) x_unit = block.ep_block.x_unit x_unit_wf = getattr(block.ep_block, 'x_unit_wf', None) y_unit_wf = getattr(block.ep_block, 'y_unit_wf', None) # number of spikes in the entire episode spk_blocks = [k for k in self.blocks if k.identifier == 'RSPK'] n_events = np.sum([k.n_events[electrode_id - 1] for k in spk_blocks], dtype=int) # number of samples in a waveform wf_sampling_frequency = 1.0 / block.ep_block.dX wf_blocks = [k for k in self.blocks if k.identifier == 'RspkWave'] if wf_blocks: wf_samples = wf_blocks[0].wavelength t_start = wf_blocks[0].pre_trigger * block.ep_block.dX else: wf_samples = 0 t_start = 0 return ElphySpikeTrain(self, episode, electrode_id, x_unit, n_events, wf_sampling_frequency, wf_samples, x_unit_wf, y_unit_wf, t_start) def get_spiketrain_data(self, episode, electrode_id): """ Return times contained in the specified spike channel. This function is triggered when the 'times' property of an :class:`Spike` descriptor instance is accessed. NB : The 'RSPK' block is not actually identical to the 'EVT' one, because all units relative to a time are stored directly after all event times, 1 byte for each. This function doesn't return these units. 
        But they could be retrieved from the 'RspkWave' block with
        the 'get_waveform_data' function.
        """
        block = self.episode_block(episode)
        times = self.load_encoded_spikes(episode, electrode_id, "RSPK")
        return times * block.ep_block.dX

    def load_encoded_waveforms(self, episode, electrode_id):
        """
        Return times on which waveforms are defined
        and a numpy recarray containing all the data
        stored in the RspkWave block.
        """
        # load data corresponding to the RspkWave block
        identifier = "RspkWave"
        data_blocks = self.group_blocks_of_type(episode, identifier)
        databytes = self.load_bytes(data_blocks)
        # select only data corresponding
        # to the specified spk_channel
        ep_blocks = self.get_blocks_stored_in_episode(episode)
        wf_blocks = [k for k in ep_blocks if k.identifier == identifier]
        wf_samples = wf_blocks[0].wavelength
        events = np.sum([k.n_spikes for k in wf_blocks], dtype=int, axis=0)
        n_events = events[electrode_id - 1]
        pre_events = np.sum(events[0:electrode_id - 1], dtype=int)
        start = pre_events
        end = start + n_events
        # data must be reshaped before use
        dtype = [
            # the time of the spike arrival
            ('elphy_time', 'u4', (1,)),
            ('device_time', 'u4', (1,)),
            # the identifier of the electrode;
            # would also be the 'trodalness',
            # but these tetrode devices are not
            # implemented in Elphy
            ('channel_id', 'u2', (1,)),
            # the 'category' of the waveform
            ('unit_id', 'u1', (1,)),
            # not used
            ('dummy', 'u1', (13,)),
            # samples of the waveform
            ('waveform', 'i2', (wf_samples,))
        ]
        x_start = wf_blocks[0].pre_trigger
        x_stop = wf_samples - x_start
        return np.arange(-x_start, x_stop), np.frombuffer(databytes, dtype=dtype)[start:end]

    def get_waveform_data(self, episode, electrode_id):
        """
        Return waveforms corresponding to the specified
        spike channel. This function is triggered when the
        ``waveforms`` property of an :class:`Spike` descriptor
        instance is accessed.
""" block = self.episode_block(episode) times, databytes = self.load_encoded_waveforms(episode, electrode_id) n_events, = databytes.shape wf_samples = databytes['waveform'].shape[1] dtype = [ ('time', float), ('electrode_id', int), ('unit_id', int), ('waveform', float, (wf_samples, 2)) ] data = np.empty(n_events, dtype=dtype) data['electrode_id'] = databytes['channel_id'][:, 0] data['unit_id'] = databytes['unit_id'][:, 0] data['time'] = databytes['elphy_time'][:, 0] * block.ep_block.dX data['waveform'][:, :, 0] = times * block.ep_block.dX data['waveform'][:, :, 1] = databytes['waveform'] * \ block.ep_block.dY_wf + block.ep_block.Y0_wf return data def get_rspk_data(self, spk_channel): """ Return times stored as a 4-bytes integer in the specified event channel. """ evt_blocks = self.get_blocks_of_type('RSPK') # compute events on each channel n_events = np.sum([k.n_events for k in evt_blocks], dtype=int, axis=0) # sum of array values up to spk_channel-1!!!! pre_events = np.sum(n_events[0:spk_channel], dtype=int) start = pre_events + (7 + len(n_events)) # rspk header end = start + n_events[spk_channel] expected_size = 4 * np.sum(n_events, dtype=int) # constant return self.load_bytes(evt_blocks, dtype='= layout.data_offset)): block = self.factory.create_block(layout, offset) # create the sub blocks if it is DAC2 objects format # this is only done for B_Ep and B_Finfo blocks for # DAC2 objects format, maybe it could be useful to # spread this to other block types. 
# if isinstance(header, DAC2Header) and (block.identifier in ['B_Ep']) : if isinstance(header, DAC2Header) and (block.identifier in ['B_Ep', 'B_Finfo']): sub_offset = block.data_offset while sub_offset < block.start + block.size: sub_block = self.factory.create_sub_block(block, sub_offset) block.add_sub_block(sub_block) sub_offset += sub_block.size # set up some properties of some DAC2Layout sub-blocks if isinstance(sub_block, ( DAC2EpSubBlock, DAC2AdcSubBlock, DAC2KSampSubBlock, DAC2KTypeSubBlock)): block.set_episode_block() block.set_channel_block() block.set_sub_sampling_block() block.set_sample_size_block() # SpikeTrain # if isinstance(header, DAC2Header) and (block.identifier in ['RSPK']) : # print "\nElphyFile.create_layout() - RSPK" # print "ElphyFile.create_layout() - n_events",block.n_events # print "ElphyFile.create_layout() - n_evt_channels",block.n_evt_channels layout.add_block(block) offset += block.size # set up as soon as possible the shortcut # to the main block of a DAC2GSLayout if (not detect_main and isinstance(layout, DAC2GSLayout) and isinstance(block, DAC2GSMainBlock)): layout.set_main_block() detect_main = True # detect if the file is continuous when # the 'MAIN' block has been parsed if not detect_continuous: is_continuous = isinstance(header, DAC2GSHeader) and layout.is_continuous() # set up the shortcut to blocks corresponding # to episodes, only available for DAC2Layout # and also DAC2GSLayout if not continuous if isinstance(layout, DAC2Layout) or ( isinstance(layout, DAC2GSLayout) and not layout.is_continuous()): layout.set_episode_blocks() layout.set_data_blocks() # finally set up the user info block of the layout layout.set_info_block() self.file.seek(0) return layout def is_continuous(self): return self.layout.is_continuous() @property def n_episodes(self): """ Return the number of recording sequences. 
""" return self.layout.n_episodes def n_channels(self, episode): """ Return the number of recording channels involved in data acquisition and relative to the specified episode : ``episode`` : the recording sequence identifier. """ return self.layout.n_channels(episode) def n_tags(self, episode): """ Return the number of tag channels relative to the specified episode : ``episode`` : the recording sequence identifier. """ return self.layout.n_tags(episode) def n_events(self, episode): """ Return the number of event channels relative to the specified episode : ``episode`` : the recording sequence identifier. """ return self.layout.n_events(episode) def n_spiketrains(self, episode): """ Return the number of event channels relative to the specified episode : ``episode`` : the recording sequence identifier. """ return self.layout.n_spiketrains(episode) def n_waveforms(self, episode): """ Return the number of waveform channels : """ return self.layout.n_waveforms(episode) def get_signal(self, episode, channel): """ Return the signal or event descriptor relative to the specified episode and channel : ``episode`` : the recording sequence identifier. ``channel`` : the analog channel identifier. NB : For 'DAC2 objects' format, it could be also used to retrieve events. """ return self.layout.get_signal(episode, channel) def get_tag(self, episode, tag_channel): """ Return the tag descriptor relative to the specified episode and tag channel : ``episode`` : the recording sequence identifier. ``tag_channel`` : the tag channel identifier. NB : There isn't any tag channels for 'Acquis1' format. ElphyTag channels appeared after 'DAC2/GS/2000' release. They are also present in 'DAC2 objects' format. """ return self.layout.get_tag(episode, tag_channel) def get_event(self, episode, evt_channel): """ Return the event relative the specified episode and event channel. `episode`` : the recording sequence identifier. ``tag_channel`` : the tag channel identifier. 
""" return self.layout.get_event(episode, evt_channel) def get_spiketrain(self, episode, electrode_id): """ Return the spiketrain relative to the specified episode and electrode_id. ``episode`` : the recording sequence identifier. ``electrode_id`` : the identifier of the electrode providing the spiketrain. NB : Available only for 'DAC2 objects' format. This descriptor can return the times of a spiketrain and waveforms relative to each of these times. """ return self.layout.get_spiketrain(episode, electrode_id) @property def comments(self): raise NotImplementedError() def get_user_file_info(self): """ Return user defined file metadata. """ if not self.layout.info_block: return dict() else: return self.layout.info_block.get_user_file_info() @property def episode_info(self, ep_number): raise NotImplementedError() def get_signals(self): """ Get all available analog or event channels stored into an Elphy file. """ signals = list() for ep in range(1, self.n_episodes + 1): for ch in range(1, self.n_channels(ep) + 1): signal = self.get_signal(ep, ch) signals.append(signal) return signals def get_tags(self): """ Get all available tag channels stored into an Elphy file. """ tags = list() for ep in range(1, self.n_episodes + 1): for tg in range(1, self.n_tags(ep) + 1): tag = self.get_tag(ep, tg) tags.append(tag) return tags def get_spiketrains(self): """ Get all available spiketrains stored into an Elphy file. """ spiketrains = list() for ep in range(1, self.n_episodes + 1): for ch in range(1, self.n_spiketrains(ep) + 1): spiketrain = self.get_spiketrain(ep, ch) spiketrains.append(spiketrain) return spiketrains def get_rspk_spiketrains(self): """ Get all available spiketrains stored into an Elphy file. 
""" spiketrains = list() spk_blocks = self.layout.get_blocks_of_type('RSPK') for bl in spk_blocks: # print "ElphyFile.get_spiketrains() - identifier:",bl.identifier for ch in range(0, bl.n_evt_channels): spiketrain = self.layout.get_rspk_data(ch) spiketrains.append(spiketrain) return spiketrains def get_names(self): com_blocks = list() com_blocks = self.layout.get_blocks_of_type('COM') return com_blocks # -------------------------------------------------------- class ElphyIO(BaseIO): """ Class for reading from and writing to an Elphy file. It enables reading: - :class:`Block` - :class:`Segment` - :class:`Event` - :class:`SpikeTrain` Usage: >>> from neo import io >>> r = io.ElphyIO(filename='ElphyExample.DAT') >>> seg = r.read_block() >>> print(seg.analogsignals) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE >>> print(seg.spiketrains) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE >>> print(seg.events) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE >>> print(anasig._data_description) >>> anasig = r.read_analogsignal() >>> bl = Block() >>> # creating segments, their contents and append to bl >>> r.write_block( bl ) """ is_readable = True # This class can read data is_writable = False # This class can write data # This class is able to directly or indirectly handle the following objects supported_objects = [Block, Segment, AnalogSignal, SpikeTrain] # This class can return a Block readable_objects = [Block] # This class is not able to write objects writeable_objects = [] has_header = False is_streameable = False # This is for GUI stuff : a definition for parameters when reading. # This dict should be keyed by object (`Block`). Each entry is a list # of tuple. The first entry in each tuple is the parameter name. The # second entry is a dict with keys 'value' (for default value), # and 'label' (for a descriptive name). # Note that if the highest-level object requires parameters, # common_io_test will be skipped. 
read_params = { } # do not supported write so no GUI stuff write_params = { } name = 'Elphy IO' extensions = ['DAT'] # mode can be 'file' or 'dir' or 'fake' or 'database' mode = 'file' # internal serialized representation of neo data serialized = None def __init__(self, filename=None): """ Arguments: filename : the filename to read """ BaseIO.__init__(self) self.filename = filename self.elphy_file = ElphyFile(self.filename) def read_block(self, lazy=False, ): """ Return :class:`Block`. Parameters: lazy : postpone actual reading of the file. """ assert not lazy, 'Do not support lazy' # basic block = Block(name=None) # get analog and tag channels try: self.elphy_file.open() except Exception as e: self.elphy_file.close() raise Exception("cannot open file {} : {}".format(self.filename, e)) # create a segment containing all analog, # tag and event channels for the episode if self.elphy_file.n_episodes in [None, 0]: print("File '%s' appears to have no episodes" % (self.filename)) return block for episode in range(1, self.elphy_file.n_episodes + 1): segment = self.read_segment(episode) segment.block = block block.segments.append(segment) # close file self.elphy_file.close() # result return block def write_block(self, block): """ Write a given Neo Block to an Elphy file, its structure being, for example: Neo -> Elphy -------------------------------------------------------------- Block File Segment Episode Block (B_Ep) AnalogSignalArray Episode Descriptor (Ep + Adc + Ksamp + Ktype) multichannel RDATA (with a ChannelMask multiplexing channels) 2D NumPy Array ... AnalogSignalArray AnalogSignal AnalogSignal ... ... SpikeTrain Event Block (RSPK) SpikeTrain ... Arguments:: block: the block to be saved """ # Serialize Neo structure into Elphy file # each analog signal will be serialized as elphy Episode Block (with its subblocks) # then all spiketrains will be serialized into an Rspk Block (an Event Block with addons). 
        # Serialize (and size) all Neo structures before writing them to file.
        # Since writing each Elphy Block requires knowing its size in advance,
        # which includes that of its subblocks, the lowest-level structures
        # must be serialized first.
        # Iterate over block structures
        elphy_limit = 256
        All = ''
        # print "\n\n--------------------------------------------\n"
        # print "write_block() - n_segments:",len(block.segments)
        for seg in block.segments:
            analogsignals = 0  # init
            nbchan = 0
            nbpt = 0
            chls = 0
            Dxu = 1e-8
            Rxu = 1e+8
            X0uSpk = 0.0
            CyberTime = 0.0
            aa_units = []
            NbEv = []
            serialized_analog_data = ''
            serialized_spike_data = ''
            # AnalogSignals
            # Neo signalarrays are 2D numpy arrays where each row is an array of samples for a
            # channel:
            #   signalarray A = [[ 1,  2,  3,  4 ],
            #                    [ 5,  6,  7,  8 ]]
            #   signalarray B = [[  9, 10, 11, 12 ],
            #                    [ 13, 14, 15, 16 ]]
            # Neo Segments can have more than one signalarray.
            # To be converted into Elphy analog channels they need to be all in one 2D array,
            # not in several 2D arrays.
            # Concatenate all analogsignalarrays into one and then flatten it.
# Elphy RDATA blocks contain Fortran styled samples: # 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15, 4, 8, 12, 16 # AnalogSignalArrays -> analogsignals # get the first to have analogsignals with the right shape # Annotations for analogsignals array come as a list of int being source ids # here, put each source id on a separate dict entry in order to have a matching # afterwards idx = 0 annotations = dict() # get all the others # print "write_block() - n_analogsignals:",len(seg.analogsignals) # print "write_block() - n_analogsignalarrays:",len(seg.analogsignalarrays) for asigar in seg.analogsignalarrays: idx, annotations = self.get_annotations_dict( annotations, "analogsignal", asigar.annotations.items(), asigar.name, idx) # array structure _, chls = asigar.shape # units for _ in range(chls): aa_units.append(asigar.units) Dxu = asigar.sampling_period Rxu = asigar.sampling_rate if isinstance(analogsignals, np.ndarray): analogsignals = np.hstack((analogsignals, asigar)) else: analogsignals = asigar # first time # collect and reshape all analogsignals if isinstance(analogsignals, np.ndarray): # transpose matrix since in Neo channels are column-wise while in Elphy are # row-wise analogsignals = analogsignals.T # get dimensions nbchan, nbpt = analogsignals.shape # serialize AnalogSignal analog_data_fmt = '<' + str(analogsignals.size) + 'f' # serialized flattened numpy channels in 'F'ortran style analog_data_64 = analogsignals.flatten('F') # elphy normally uses float32 values (for performance reasons) analog_data = np.array(analog_data_64, dtype=np.float32) serialized_analog_data += struct.pack(analog_data_fmt, *analog_data) # SpikeTrains # Neo spiketrains are stored as a one-dimensional array of times # [ 0.11, 1.23, 2.34, 3.45, 4.56, 5.67, 6.78, 7.89 ... 
] # These are converted into Elphy Rspk Block which will contain all of them # RDATA + NbVeV:integer for the number of channels (spiketrains) # + NbEv:integer[] for the number of event per channel # followed by the actual arrays of integer containing spike times # spiketrains = seg.spiketrains # ... but consider elphy loading limitation: NbVeV = len(seg.spiketrains) # print "write_block() - n_spiketrains:",NbVeV if len(seg.spiketrains) > elphy_limit: NbVeV = elphy_limit # serialize format spiketrain_data_fmt = '<' spiketrains = [] for idx, train in enumerate(seg.spiketrains[:NbVeV]): # print "write_block() - train.size:", train.size,idx # print "write_block() - train:", train fake, annotations = self.get_annotations_dict( annotations, "spiketrain", train.annotations.items(), '', idx) # annotations.update( dict( [("spiketrain-"+str(idx), # train.annotations['source_id'])] ) ) # print "write_block() - train[%s].annotation['source_id']:%s" # "" % (idx,train.annotations['source_id']) # total number of events format + blackrock sorting mark (0 for neo) spiketrain_data_fmt += str(train.size) + "i" + str(train.size) + "B" # get starting time X0uSpk = train.t_start.item() CyberTime = train.t_stop.item() # count number of events per train NbEv.append(train.size) # multiply by sampling period train = train * Rxu # all flattened spike train # blackrock acquisition card also adds a byte for each event to sort it spiketrains.extend([spike.item() for spike in train] + [0 for _ in range(train.size)]) # Annotations # print annotations # using DBrecord elphy block, they will be available as values in elphy environment # separate keys and values in two separate serialized strings ST_sub = '' st_fmt = '' st_data = [] BUF_sub = '' serialized_ST_data = '' serialized_BUF_data = '' for key in sorted(annotations.iterkeys()): # take all values, get their type and concatenate fmt = '' data = [] value = annotations[key] if isinstance(value, (int, np.int32, np.int64)): # elphy type 2 fmt = ' 
0 else "episode %s" % str(episode + 1) segment = Segment(name=name) # create an analog signal for # each channel in the episode for channel in range(1, self.elphy_file.n_channels(episode) + 1): signal = self.elphy_file.get_signal(episode, channel) x_unit = signal.x_unit.strip().decode() analog_signal = AnalogSignal( signal.data['y'], units=signal.y_unit, t_start=signal.t_start * getattr(pq, signal.x_unit.strip().decode()), t_stop=signal.t_stop * getattr(pq, signal.x_unit.strip().decode()), # sampling_rate = signal.sampling_frequency * pq.kHz, sampling_period=signal.sampling_period * getattr(pq, x_unit), channel_name="episode {}, channel {}".format(int(episode + 1), int(channel + 1)) ) analog_signal.segment = segment segment.analogsignals.append(analog_signal) # create a spiketrain for each # spike channel in the episode # in case of multi-electrode # acquisition context n_spikes = self.elphy_file.n_spiketrains(episode) # print "read_segment() - n_spikes:",n_spikes if n_spikes > 0: for spk in range(1, n_spikes + 1): spiketrain = self.read_spiketrain(episode, spk) spiketrain.segment = segment segment.spiketrains.append(spiketrain) # segment return segment def read_event(self, episode, evt): """ Internal method used to return a list of elphy :class:`EventArray` acquired from event channels. Parameters: elphy_file : is the elphy object. episode : number of elphy episode, roughly corresponding to a segment. evt : index of the event. """ event = self.elphy_file.get_event(episode, evt) neo_event = Event( times=event.times * pq.s, channel_name="episode {}, event channel {}".format(episode + 1, evt + 1) ) return neo_event def read_spiketrain(self, episode, spk): """ Internal method used to return an elphy object :class:`SpikeTrain`. Parameters: elphy_file : is the elphy object. episode : number of elphy episode, roughly corresponding to a segment. spk : index of the spike array. 
""" block = self.elphy_file.layout.episode_block(episode) spike = self.elphy_file.get_spiketrain(episode, spk) spikes = spike.times * pq.s # print "read_spiketrain() - spikes: %s" % (len(spikes)) # print "read_spiketrain() - spikes:",spikes dct = { 'times': spikes, # check 't_start': block.ep_block.X0_wf if block.ep_block.X0_wf < spikes[0] else spikes[0], 't_stop': block.ep_block.cyber_time if block.ep_block.cyber_time > spikes[-1] else spikes[-1], 'units': 's', # special keywords to identify the # electrode providing the spiketrain # event though it is redundant with # waveforms 'label': "episode {}, electrode {}".format(episode, spk), 'electrode_id': spk } # new spiketrain return SpikeTrain(**dct) neo-0.10.0/neo/io/exampleio.py0000644000076700000240000000157514060430470016511 0ustar andrewstaff00000000000000""" neo.io have been split in 2 level API: * neo.io: this API give neo object * neo.rawio: this API give raw data as they are in files. Developper are encourage to use neo.rawio. When this is done the neo.io is done automagically with this king of following code. Author: sgarcia """ from neo.io.basefromrawio import BaseFromRaw from neo.rawio.examplerawio import ExampleRawIO class ExampleIO(ExampleRawIO, BaseFromRaw): name = 'example IO' description = "Fake IO" # This is an inportant choice when there are several channels. 
# 'split-all' : 1 AnalogSignal each 1 channel # 'group-by-same-units' : one 2D AnalogSignal for each group of channel with same units _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename=''): ExampleRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/igorproio.py0000644000076700000240000001446614066375716016563 0ustar andrewstaff00000000000000""" Class for reading data created by IGOR Pro (WaveMetrics, Inc., Portland, OR, USA) Depends on: igor (https://pypi.python.org/pypi/igor/) Supported: Read Author: Andrew Davison Also contributing: Rick Gerkin """ from warnings import warn import pathlib import quantities as pq from neo.io.baseio import BaseIO from neo.core import Block, Segment, AnalogSignal try: import igor.binarywave as bw import igor.packed as pxp from igor.record.wave import WaveRecord HAVE_IGOR = True except ImportError: HAVE_IGOR = False class IgorIO(BaseIO): """ Class for reading Igor Binary Waves (.ibw) or Packed Experiment (.pxp) files written by WaveMetrics’ IGOR Pro software. It requires the `igor` Python package by W. Trevor King. Usage: >>> from neo import io >>> r = io.IgorIO(filename='...ibw') """ is_readable = True # This class can only read data is_writable = False # write is not supported supported_objects = [Block, Segment, AnalogSignal] readable_objects = [Block, Segment, AnalogSignal] writeable_objects = [] has_header = False is_streameable = False name = 'igorpro' extensions = ['ibw', 'pxp'] mode = 'file' def __init__(self, filename=None, parse_notes=None): """ Arguments: filename: the filename parse_notes: (optional) A function which will parse the 'notes' field in the file header and return a dictionary which will be added to the object annotations. 
""" BaseIO.__init__(self) filename = pathlib.Path(filename) assert filename.suffix[1:] in self.extensions, \ "Only the following extensions are supported: %s" % self.extensions self.filename = filename self.extension = filename.suffix[1:] self.parse_notes = parse_notes self._filesystem = None def read_block(self, lazy=False): assert not lazy, 'This IO does not support lazy mode' block = Block(file_origin=str(self.filename)) block.segments.append(self.read_segment(lazy=lazy)) block.segments[-1].block = block return block def read_segment(self, lazy=False): assert not lazy, 'This IO does not support lazy mode' segment = Segment(file_origin=str(self.filename)) if self.extension == 'pxp': if not self._filesystem: _, self.filesystem = pxp.load(str(self.filename)) def callback(dirpath, key, value): if isinstance(value, WaveRecord): signal = self._wave_to_analogsignal(value.wave['wave'], dirpath) signal.segment = segment segment.analogsignals.append(signal) pxp.walk(self.filesystem, callback) else: segment.analogsignals.append( self.read_analogsignal(lazy=lazy)) segment.analogsignals[-1].segment = segment return segment def read_analogsignal(self, path=None, lazy=False): assert not lazy, 'This IO does not support lazy mode' if not HAVE_IGOR: raise Exception("`igor` package not installed. " "Try `pip install igor`") if self.extension == 'ibw': data = bw.load(str(self.filename)) version = data['version'] if version > 5: raise IOError("Igor binary wave file format version {} " "is not supported.".format(version)) elif self.extension == 'pxp': assert type(path) is str, \ "A colon-separated Igor-style path must be provided." 
            # cache the parsed filesystem so repeated reads do not reload the file
            if not self._filesystem:
                _, self._filesystem = pxp.load(str(self.filename))
            path = path.split(':')
            location = self._filesystem['root']
            for element in path:
                if element != 'root':
                    location = location[element.encode('utf8')]
            data = location.wave
        return self._wave_to_analogsignal(data['wave'], [])

    def _wave_to_analogsignal(self, content, dirpath):
        if "padding" in content:
            assert content['padding'].size == 0, \
                "Cannot handle non-empty padding"
        signal = content['wData']
        note = content['note']
        header = content['wave_header']
        name = str(header['bname'].decode('utf-8'))
        units = "".join([x.decode() for x in header['dataUnits']])
        try:
            time_units = "".join([x.decode() for x in header['xUnits']])
            assert len(time_units)
        except:
            time_units = "s"
        try:
            t_start = pq.Quantity(header['hsB'], time_units)
        except KeyError:
            t_start = pq.Quantity(header['sfB'][0], time_units)
        try:
            sampling_period = pq.Quantity(header['hsA'], time_units)
        except:
            sampling_period = pq.Quantity(header['sfA'][0], time_units)
        if self.parse_notes:
            try:
                annotations = self.parse_notes(note)
            except ValueError:
                warn("Couldn't parse notes field.")
                annotations = {'note': note}
        else:
            annotations = {'note': note}
        annotations["igor_path"] = ":".join(item.decode('utf-8') for item in dirpath)

        signal = AnalogSignal(signal, units=units, copy=False, t_start=t_start,
                              sampling_period=sampling_period, name=name,
                              file_origin=str(self.filename), **annotations)
        return signal


# the following function is to handle the annotations in the
# Igor data files from the Blue Brain Project NMC Portal
def key_value_string_parser(itemsep=";", kvsep=":"):
    """
    Parses a string into a dict.

    Arguments:
        itemsep - character which separates items
        kvsep - character which separates the key and value within an item

    Returns:
        a function which takes the string to be parsed as the sole argument
        and returns a dict.
    Example:
        >>> parse = key_value_string_parser(itemsep=";", kvsep=":")
        >>> parse("a:2;b:3")
        {'a': '2', 'b': '3'}
    """
    def parser(s):
        items = s.split(itemsep)
        return dict(item.split(kvsep, 1) for item in items if item)
    return parser
neo-0.10.0/neo/io/intanio.py0000644000076700000240000000054114060430470016163 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw
from neo.rawio.intanrawio import IntanRawIO


class IntanIO(IntanRawIO, BaseFromRaw):
    __doc__ = IntanRawIO.__doc__
    _prefered_signal_group_mode = 'group-by-same-units'

    def __init__(self, filename):
        IntanRawIO.__init__(self, filename=filename)
        BaseFromRaw.__init__(self, filename)
neo-0.10.0/neo/io/klustakwikio.py0000644000076700000240000004155014066375716017265 0ustar andrewstaff00000000000000"""
Reading and writing from KlustaKwik-format files.
Ref: http://klusters.sourceforge.net/UserManual/data-files.html

Supported : Read, Write

Author : Chris Rodgers

TODO:
* When reading, put the Unit into the RCG, RC hierarchy
* When writing, figure out how to get group and cluster
  if those annotations weren't set. Consider removing those annotations
  if they are redundant.
* Load features in addition to spiketimes.
"""

import glob
import logging
import os.path
import shutil

# note: neo.core needs only numpy and quantities
import numpy as np

try:
    import matplotlib.mlab as mlab
except ImportError as err:
    HAVE_MLAB = False
    MLAB_ERR = err
else:
    HAVE_MLAB = True
    MLAB_ERR = None

# I need to subclass BaseIO
from neo.io.baseio import BaseIO

from neo.core import Block, Segment, Group, SpikeTrain

# Pasted version of feature file format spec
"""
The Feature File

Generic file name: base.fet.n

Format: ASCII, integer values

The feature file lists for each spike the PCA coefficients for each
electrode, followed by the timestamp of the spike (more features can
be inserted between the PCA coefficients and the timestamp).
The first line contains the number of dimensions.
Assuming N1 spikes (spike1...spikeN1), N2 electrodes (e1...eN2) and N3 coefficients (c1...cN3), this file looks like: nbDimensions c1_e1_spk1 c2_e1_spk1 ... cN3_e1_spk1 c1_e2_spk1 ... cN3_eN2_spk1 timestamp_spk1 c1_e1_spk2 c2_e1_spk2 ... cN3_e1_spk2 c1_e2_spk2 ... cN3_eN2_spk2 timestamp_spk2 ... c1_e1_spkN1 c2_e1_spkN1 ... cN3_e1_spkN1 c1_e2_spkN1 ... cN3_eN2_spkN1 timestamp_spkN1 The timestamp is expressed in multiples of the sampling interval. For instance, for a 20kHz recording (50 microsecond sampling interval), a timestamp of 200 corresponds to 200x0.000050s=0.01s from the beginning of the recording session. Notice that the last line must end with a newline or carriage return. """ class KlustaKwikIO(BaseIO): """Reading and writing from KlustaKwik-format files.""" # Class variables demonstrating capabilities of this IO is_readable = True is_writable = True # This IO can only manipulate objects relating to spike times supported_objects = [Block, SpikeTrain] # Keep things simple by always returning a block readable_objects = [Block] # And write a block writeable_objects = [Block] # Not sure what these do, if anything has_header = False is_streameable = False # GUI params read_params = {} # GUI params write_params = {} # The IO name and the file extensions it uses name = 'KlustaKwik' extensions = ['fet', 'clu', 'res', 'spk'] # Operates on directories mode = 'file' def __init__(self, filename, sampling_rate=30000.): """Create a new IO to operate on a directory filename : the directory to contain the files basename : string, basename of KlustaKwik format, or None sampling_rate : in Hz, necessary because the KlustaKwik files stores data in samples. 
""" if not HAVE_MLAB: raise MLAB_ERR BaseIO.__init__(self) # self.filename = os.path.normpath(filename) self.filename, self.basename = os.path.split(os.path.abspath(filename)) self.sampling_rate = float(sampling_rate) # error check if not os.path.isdir(self.filename): raise ValueError("filename must be a directory") # initialize a helper object to parse filenames self._fp = FilenameParser(dirname=self.filename, basename=self.basename) def read_block(self, lazy=False): """Returns a Block containing spike information. There is no obvious way to infer the segment boundaries from raw spike times, so for now all spike times are returned in one big segment. The way around this would be to specify the segment boundaries, and then change this code to put the spikes in the right segments. """ assert not lazy, 'Do not support lazy' # Create block and segment to hold all the data block = Block() # Search data directory for KlustaKwik files. # If nothing found, return empty block self._fetfiles = self._fp.read_filenames('fet') self._clufiles = self._fp.read_filenames('clu') if len(self._fetfiles) == 0: return block # Create a single segment to hold all of the data seg = Segment(name='seg0', index=0, file_origin=self.filename) block.segments.append(seg) # Load spike times from each group and store in a dict, keyed # by group number self.spiketrains = dict() for group in sorted(self._fetfiles.keys()): # Load spike times fetfile = self._fetfiles[group] spks, features = self._load_spike_times(fetfile) # Load cluster ids or generate if group in self._clufiles: clufile = self._clufiles[group] uids = self._load_unit_id(clufile) else: # unclustered data, assume all zeros uids = np.zeros(spks.shape, dtype=np.int32) # error check if len(spks) != len(uids): raise ValueError("lengths of fet and clu files are different") # Create Group for each cluster unique_unit_ids = np.unique(uids) for unit_id in sorted(unique_unit_ids): # Initialize the unit u = Group(name=('unit %d from group %d' % 
(unit_id, group)), index=unit_id, group=group) # Initialize a new SpikeTrain for the spikes from this unit st = SpikeTrain( times=spks[uids == unit_id] / self.sampling_rate, units='sec', t_start=0.0, t_stop=spks.max() / self.sampling_rate, name=('unit %d from group %d' % (unit_id, group))) st.annotations['cluster'] = unit_id st.annotations['group'] = group # put features in if len(features) != 0: st.annotations['waveform_features'] = features # Link u.add(st) seg.spiketrains.append(st) block.create_many_to_one_relationship() return block # Helper hidden functions for reading def _load_spike_times(self, fetfilename): """Reads and returns the spike times and features""" with open(fetfilename, mode='r') as f: # Number of clustering features is integer on first line nbFeatures = int(f.readline().strip()) # Each subsequent line consists of nbFeatures values, followed by # the spike time in samples. names = ['fet%d' % n for n in range(nbFeatures)] names.append('spike_time') # Load into recarray data = np.recfromtxt(fetfilename, names=names, skip_header=1, delimiter=' ') # get features features = np.array([data['fet%d' % n] for n in range(nbFeatures)]) # Return the spike_time column return data['spike_time'], features.transpose() def _load_unit_id(self, clufilename): """Reads and return the cluster ids as int32""" with open(clufilename, mode='r') as f: # Number of clusters on this tetrode is integer on first line nbClusters = int(f.readline().strip()) # Read each cluster name as a string cluster_names = f.readlines() # Convert names to integers # I think the spec requires cluster names to be integers, but # this code could be modified to support string names which are # auto-numbered. 
try: cluster_ids = [int(name) for name in cluster_names] except ValueError: raise ValueError( "Could not convert cluster name to integer in %s" % clufilename) # convert to numpy array and error check cluster_ids = np.array(cluster_ids, dtype=np.int32) if len(np.unique(cluster_ids)) != nbClusters: logging.warning("warning: I got %d clusters instead of %d in %s" % ( len(np.unique(cluster_ids)), nbClusters, clufilename)) return cluster_ids # writing functions def write_block(self, block): """Write spike times and unit ids to disk. Currently descends hierarchy from block to segment to spiketrain. Then gets group and cluster information from spiketrain. Then writes the time and cluster info to the file associated with that group. The group and cluster information are extracted from annotations, eg `sptr.annotations['group']`. If no cluster information exists, it is assigned to cluster 0. Note that all segments are essentially combined in this process, since the KlustaKwik format does not allow for segment boundaries. As implemented currently, does not use the `Unit` object at all. We first try to use the sampling rate of each SpikeTrain, or if this is not set, we use `self.sampling_rate`. If the files already exist, backup copies are created by appending the filenames with a "~". 
""" # set basename if self.basename is None: logging.warning("warning: no basename provided, using `basename`") self.basename = 'basename' # First create file handles for each group which will be stored self._make_all_file_handles(block) # We'll detect how many features belong in each group self._group2features = {} # Iterate through segments in this block for seg in block.segments: # Write each spiketrain of the segment for st in seg.spiketrains: # Get file handles for this spiketrain using its group group = self.st2group(st) fetfilehandle = self._fetfilehandles[group] clufilehandle = self._clufilehandles[group] # Get the id to write to clu file for this spike train cluster = self.st2cluster(st) # Choose sampling rate to convert to samples try: sr = st.annotations['sampling_rate'] except KeyError: sr = self.sampling_rate # Convert to samples spike_times_in_samples = np.rint( np.array(st) * sr).astype(np.int64) # Try to get features from spiketrain try: all_features = st.annotations['waveform_features'] except KeyError: # Use empty all_features = [ [] for _ in range(len(spike_times_in_samples))] all_features = np.asarray(all_features) if all_features.ndim != 2: raise ValueError("waveform features should be 2d array") # Check number of features we're supposed to have try: n_features = self._group2features[group] except KeyError: # First time through .. 
set number of features n_features = all_features.shape[1] self._group2features[group] = n_features # and write to first line of file fetfilehandle.write("%d\n" % n_features) if n_features != all_features.shape[1]: raise ValueError("inconsistent number of features: " + "supposed to be %d but I got %d" % (n_features, all_features.shape[1])) # Write features and time for each spike for stt, features in zip(spike_times_in_samples, all_features): # first features for val in features: fetfilehandle.write(str(val)) fetfilehandle.write(" ") # now time fetfilehandle.write("%d\n" % stt) # and cluster id clufilehandle.write("%d\n" % cluster) # We're done, so close the files self._close_all_files() # Helper functions for writing def st2group(self, st): # Not sure this is right so make it a method in case we change it try: return st.annotations['group'] except KeyError: return 0 def st2cluster(self, st): # Not sure this is right so make it a method in case we change it try: return st.annotations['cluster'] except KeyError: return 0 def _make_all_file_handles(self, block): """Get the tetrode (group) of each neuron (cluster) by descending the hierarchy through segment and block. 
Store in a dict {group_id: list_of_clusters_in_that_group} """ group2clusters = {} for seg in block.segments: for st in seg.spiketrains: group = self.st2group(st) cluster = self.st2cluster(st) if group in group2clusters: if cluster not in group2clusters[group]: group2clusters[group].append(cluster) else: group2clusters[group] = [cluster] # Make new file handles for each group self._fetfilehandles, self._clufilehandles = {}, {} for group, clusters in group2clusters.items(): self._new_group(group, nbClusters=len(clusters)) def _new_group(self, id_group, nbClusters): # generate filenames fetfilename = os.path.join(self.filename, self.basename + ('.fet.%d' % id_group)) clufilename = os.path.join(self.filename, self.basename + ('.clu.%d' % id_group)) # back up before overwriting if os.path.exists(fetfilename): shutil.copyfile(fetfilename, fetfilename + '~') if os.path.exists(clufilename): shutil.copyfile(clufilename, clufilename + '~') # create file handles self._fetfilehandles[id_group] = open(fetfilename, mode='w') self._clufilehandles[id_group] = open(clufilename, mode='w') # write out first line # self._fetfilehandles[id_group].write("0\n") # Number of features self._clufilehandles[id_group].write("%d\n" % nbClusters) def _close_all_files(self): for val in self._fetfilehandles.values(): val.close() for val in self._clufilehandles.values(): val.close() class FilenameParser: """Simple class to interpret user's requests into KlustaKwik filenames""" def __init__(self, dirname, basename=None): """Initialize a new parser for a directory containing files dirname: directory containing files basename: basename in KlustaKwik format spec If basename is left None, then files with any basename in the directory will be used. An error is raised if files with multiple basenames exist in the directory. 
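The `basename.typestring.N` convention described here can be matched with the same kind of regular expression `read_filenames` uses; a standalone sketch:

```python
import re

def match_klustakwik_name(filename, typestring='fet'):
    """Return (basename, group_number) when filename follows the
    KlustaKwik convention 'basename.typestring.N', else None."""
    m = re.search(r'^(\w+)\.%s\.(\d+)$' % typestring, filename)
    if m is None:
        return None
    return m.group(1), int(m.group(2))

hit = match_klustakwik_name('mydata.fet.3')
miss = match_klustakwik_name('mydata.fet.x')
```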
""" self.dirname = os.path.normpath(dirname) self.basename = basename # error check if not os.path.isdir(self.dirname): raise ValueError("filename must be a directory") def read_filenames(self, typestring='fet'): """Returns filenames in the data directory matching the type. Generally, `typestring` is one of the following: 'fet', 'clu', 'spk', 'res' Returns a dict {group_number: filename}, e.g.: { 0: 'basename.fet.0', 1: 'basename.fet.1', 2: 'basename.fet.2'} 'basename' can be any string not containing whitespace. Only filenames that begin with "basename.typestring." and end with a sequence of digits are valid. The digits are converted to an integer and used as the group number. """ all_filenames = glob.glob(os.path.join(self.dirname, '*')) # Fill the dict with valid filenames d = {} for v in all_filenames: # Test whether matches format, ie ends with digits split_fn = os.path.split(v)[1] m = glob.re.search((r'^(\w+)\.%s\.(\d+)$' % typestring), split_fn) if m is not None: # get basename from first hit if not specified if self.basename is None: self.basename = m.group(1) # return files with correct basename if self.basename == m.group(1): # Key the group number to the filename # This conversion to int should always work since only # strings of digits will match the regex tetn = int(m.group(2)) d[tetn] = v return d neo-0.10.0/neo/io/kwikio.py0000644000076700000240000001520214066375716016034 0ustar andrewstaff00000000000000""" Class for reading data from a .kwik dataset Depends on: scipy phy Supported: Read Author: Mikkel E. 
Lepperød @CINPLA """ # TODO: writing to file import numpy as np import quantities as pq import os try: from scipy import stats except ImportError as err: HAVE_SCIPY = False SCIPY_ERR = err else: HAVE_SCIPY = True SCIPY_ERR = None try: from klusta import kwik except ImportError as err: HAVE_KWIK = False KWIK_ERR = err else: HAVE_KWIK = True KWIK_ERR = None # I need to subclass BaseIO from neo.io.baseio import BaseIO # to import from core from neo.core import Segment, SpikeTrain, Epoch, AnalogSignal, Block, Group import neo.io.tools class KwikIO(BaseIO): """ Class for "reading" experimental data from a .kwik file. Generates a :class:`Segment` with a :class:`AnalogSignal` """ is_readable = True # This class can only read data is_writable = False # write is not supported supported_objects = [Block, Segment, SpikeTrain, AnalogSignal, Group] # This class can return either a Block or a Segment # The first one is the default ( self.read ) # These lists should go from highest object to lowest object because # common_io_test assumes it. 
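The guarded imports above (record the `ImportError` at module load, raise it only when the IO is actually constructed) keep `neo.io` importable when optional dependencies like `klusta` or `scipy` are missing. A generic sketch of the pattern, where `some_optional_dep` is a deliberately nonexistent placeholder module:

```python
try:
    import some_optional_dep  # placeholder; stands in for klusta, scipy, ...
except ImportError as err:
    HAVE_DEP = False
    DEP_ERR = err
else:
    HAVE_DEP = True
    DEP_ERR = None

class NeedsDep:
    def __init__(self):
        # fail at construction time, not at module import time
        if not HAVE_DEP:
            raise DEP_ERR

try:
    NeedsDep()
    construction_failed = False
except ImportError:
    construction_failed = True
```

The payoff is that users who never touch this particular IO pay no cost for the missing dependency.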
readable_objects = [Block] # This class is not able to write objects writeable_objects = [] has_header = False is_streameable = False name = 'Kwik' description = 'This IO reads experimental data from a .kwik dataset' extensions = ['kwik'] mode = 'file' def __init__(self, filename): """ Arguments: filename : the filename """ if not HAVE_KWIK: raise KWIK_ERR BaseIO.__init__(self) self.filename = os.path.abspath(filename) model = kwik.KwikModel(self.filename) # TODO this group is loaded twice self.models = [kwik.KwikModel(self.filename, channel_group=grp) for grp in model.channel_groups] def read_block(self, lazy=False, get_waveforms=True, cluster_group=None, raw_data_units='uV', get_raw_data=False, ): """ Reads a block with segments and groups Parameters: get_waveforms: bool, default = True Whether or not to get the waveforms get_raw_data: bool, default = False Whether or not to get the raw traces raw_data_units: str, default = "uV" SI units of the raw trace according to voltage_gain given to klusta cluster_group: str, default = None Which clusters to load, possibilities are "noise", "unsorted", "good", if None, all are loaded.
""" assert not lazy, 'Do not support lazy' blk = Block() seg = Segment(file_origin=self.filename) blk.segments += [seg] for model in self.models: group_id = model.channel_group group_meta = {'group_id': group_id} group_meta.update(model.metadata) chx = Group(name='channel group #{}'.format(group_id), index=model.channels, **group_meta) blk.groups.append(chx) clusters = model.spike_clusters for cluster_id in model.cluster_ids: meta = model.cluster_metadata[cluster_id] if cluster_group is None: pass elif cluster_group != meta: continue sptr = self.read_spiketrain(cluster_id=cluster_id, model=model, get_waveforms=get_waveforms, raw_data_units=raw_data_units) sptr.annotations.update({'cluster_group': meta, 'group_id': model.channel_group}) unit = Group(cluster_group=meta, group_id=model.channel_group, name='unit #{}'.format(cluster_id)) unit.add(sptr) chx.add(unit) seg.spiketrains.append(sptr) if get_raw_data: ana = self.read_analogsignal(model, units=raw_data_units) chx.add(ana) seg.analogsignals.append(ana) seg.duration = model.duration * pq.s blk.create_many_to_one_relationship() return blk def read_analogsignal(self, model, units='uV', lazy=False): """ Reads analogsignals Parameters: units: str, default = "uV" SI units of the raw trace according to voltage_gain given to klusta """ assert not lazy, 'Do not support lazy' arr = model.traces[:] * model.metadata['voltage_gain'] ana = AnalogSignal(arr, sampling_rate=model.sample_rate * pq.Hz, units=units, file_origin=model.metadata['raw_data_files']) return ana def read_spiketrain(self, cluster_id, model, lazy=False, get_waveforms=True, raw_data_units=None ): """ Reads sorted spiketrains Parameters: get_waveforms: bool, default = False Wether or not to get the waveforms cluster_id: int, Which cluster to load, according to cluster id from klusta model: klusta.kwik.KwikModel A KwikModel object obtained by klusta.kwik.KwikModel(fname) """ try: if (not (cluster_id in model.cluster_ids)): raise ValueError except ValueError: 
print("Exception: cluster_id (%d) not found !! " % cluster_id) return clusters = model.spike_clusters idx = np.nonzero(clusters == cluster_id) if get_waveforms: w = model.all_waveforms[idx] # klusta: num_spikes, samples_per_spike, num_chans = w.shape w = w.swapaxes(1, 2) w = pq.Quantity(w, raw_data_units) else: w = None if model.duration > 0.: t_stop = model.duration else: t_stop = np.max(model.spike_times[idx]) sptr = SpikeTrain(times=model.spike_times[idx], t_stop=t_stop, waveforms=w, units='s', sampling_rate=model.sample_rate * pq.Hz, file_origin=self.filename, **{'cluster_id': cluster_id}) return sptr neo-0.10.0/neo/io/maxwellio.py0000644000076700000240000000044414066375716016542 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.maxwellrawio import MaxwellRawIO class MaxwellIO(MaxwellRawIO, BaseFromRaw): mode = 'file' def __init__(self, filename): MaxwellRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/mearecio.py0000644000076700000240000000050114066374330016306 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.mearecrawio import MEArecRawIO class MEArecIO(MEArecRawIO, BaseFromRaw): __doc__ = MEArecRawIO.__doc__ mode = 'file' def __init__(self, filename): MEArecRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/micromedio.py0000644000076700000240000000071514060430470016650 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.micromedrawio import MicromedRawIO from neo.core import Segment, AnalogSignal, Epoch, Event class MicromedIO(MicromedRawIO, BaseFromRaw): """Class for reading/writing data from Micromed files (.trc).""" _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): MicromedRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) 
neo-0.10.0/neo/io/neomatlabio.py0000644000076700000240000003525314060430470017020 0ustar andrewstaff00000000000000""" Module for reading/writing Neo objects in MATLAB format (.mat) versions 5 to 7.2. This module is a bridge for MATLAB users who want to adopt the Neo object representation. The nomenclature is the same but using Matlab structs and cell arrays. With this module MATLAB users can use neo.io to read a format and convert it to .mat. Supported : Read/Write Author: sgarcia, Robert Pröpper """ from datetime import datetime from distutils import version import re import numpy as np import quantities as pq # check scipy try: import scipy.io import scipy.version except ImportError as err: HAVE_SCIPY = False SCIPY_ERR = err else: if version.LooseVersion(scipy.version.version) < '0.12.0': HAVE_SCIPY = False SCIPY_ERR = ImportError("your scipy version is too old to support " + "MatlabIO, you need at least 0.12.0. " + "You have %s" % scipy.version.version) else: HAVE_SCIPY = True SCIPY_ERR = None from neo.io.baseio import BaseIO from neo.core import (Block, Segment, AnalogSignal, Event, Epoch, SpikeTrain, objectnames, class_by_name) classname_lower_to_upper = {} for k in objectnames: classname_lower_to_upper[k.lower()] = k class NeoMatlabIO(BaseIO): """ Class for reading/writing Neo objects in MATLAB format (.mat) versions 5 to 7.2. This module is a bridge for MATLAB users who want to adopt the Neo object representation. The nomenclature is the same but using Matlab structs and cell arrays. With this module MATLAB users can use neo.io to read a format and convert it to .mat. Rules of conversion: * Neo classes are converted to MATLAB structs. e.g., a Block is a struct with attributes "name", "file_datetime", ... * Neo one_to_many relationships are cellarrays in MATLAB. e.g., ``seg.analogsignals[2]`` in Python Neo will be ``seg.analogsignals{3}`` in MATLAB. * Quantity attributes are represented by 2 fields in MATLAB. 
e.g., ``anasig.t_start = 1.5 * s`` in Python will be ``anasig.t_start = 1.5`` and ``anasig.t_start_unit = 's'`` in MATLAB. * classes that inherit from Quantity (AnalogSignal, SpikeTrain, ...) in Python will have 2 fields (array and units) in the MATLAB struct. e.g.: ``AnalogSignal( [1., 2., 3.], 'V')`` in Python will be ``anasig.array = [1. 2. 3]`` and ``anasig.units = 'V'`` in MATLAB. 1 - **Scenario 1: create data in MATLAB and read them in Python** This MATLAB code generates a block:: block = struct(); block.segments = { }; block.name = 'my block with matlab'; for s = 1:3 seg = struct(); seg.name = strcat('segment ',num2str(s)); seg.analogsignals = { }; for a = 1:5 anasig = struct(); anasig.signal = rand(100,1); anasig.signal_units = 'mV'; anasig.t_start = 0; anasig.t_start_units = 's'; anasig.sampling_rate = 100; anasig.sampling_rate_units = 'Hz'; seg.analogsignals{a} = anasig; end seg.spiketrains = { }; for t = 1:7 sptr = struct(); sptr.times = rand(30,1)*10; sptr.times_units = 'ms'; sptr.t_start = 0; sptr.t_start_units = 'ms'; sptr.t_stop = 10; sptr.t_stop_units = 'ms'; seg.spiketrains{t} = sptr; end event = struct(); event.times = [0, 10, 30]; event.times_units = 'ms'; event.labels = ['trig0'; 'trig1'; 'trig2']; seg.events{1} = event; epoch = struct(); epoch.times = [10, 20]; epoch.times_units = 'ms'; epoch.durations = [4, 10]; epoch.durations_units = 'ms'; epoch.labels = ['a0'; 'a1']; seg.epochs{1} = epoch; block.segments{s} = seg; end save 'myblock.mat' block -V7 This code reads it in Python:: import neo r = neo.io.NeoMatlabIO(filename='myblock.mat') bl = r.read_block() print bl.segments[1].analogsignals[2] print bl.segments[1].spiketrains[4] 2 - **Scenario 2: create data in Python and read them in MATLAB** This Python code generates the same block as in the previous scenario:: import neo import quantities as pq from scipy import rand, array bl = neo.Block(name='my block with neo') for s in range(3): seg = neo.Segment(name='segment' + str(s)) 
bl.segments.append(seg) for a in range(5): anasig = neo.AnalogSignal(rand(100)*pq.mV, t_start=0*pq.s, sampling_rate=100*pq.Hz) seg.analogsignals.append(anasig) for t in range(7): sptr = neo.SpikeTrain(rand(40)*pq.ms, t_start=0*pq.ms, t_stop=10*pq.ms) seg.spiketrains.append(sptr) ev = neo.Event([0, 10, 30]*pq.ms, labels=array(['trig0', 'trig1', 'trig2'])) ep = neo.Epoch([10, 20]*pq.ms, durations=[4, 10]*pq.ms, labels=array(['a0', 'a1'])) seg.events.append(ev) seg.epochs.append(ep) from neo.io.neomatlabio import NeoMatlabIO w = NeoMatlabIO(filename='myblock.mat') w.write_block(bl) This MATLAB code reads it:: load 'myblock.mat' block.name block.segments{2}.analogsignals{3}.signal block.segments{2}.analogsignals{3}.signal_units block.segments{2}.analogsignals{3}.t_start block.segments{2}.analogsignals{3}.t_start_units 3 - **Scenario 3: conversion** This Python code converts a Spike2 file to MATLAB:: from neo import Block from neo.io import Spike2IO, NeoMatlabIO r = Spike2IO(filename='spike2.smr') w = NeoMatlabIO(filename='convertedfile.mat') blocks = r.read() w.write(blocks[0]) """ is_readable = True is_writable = True supported_objects = [Block, Segment, AnalogSignal, Epoch, Event, SpikeTrain] readable_objects = [Block] writeable_objects = [Block] has_header = False is_streameable = False read_params = {Block: []} write_params = {Block: []} name = 'neomatlab' extensions = ['mat'] mode = 'file' def __init__(self, filename=None): """ This class read/write neo objects in matlab 5 to 7.2 format. 
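The two-field rule from the class docstring (numeric magnitude in one struct field, unit string in a companion `<name>_units` field) is what `create_struct_from_obj` implements for quantity attributes; a standalone sketch of the rule using a plain value/unit pair instead of a `pq.Quantity`:

```python
def quantity_to_struct_fields(name, magnitude, units):
    """Apply NeoMatlabIO's convention: store the numeric value under
    the attribute name and the unit string under '<name>_units'."""
    return {name: magnitude, name + '_units': units}

fields = quantity_to_struct_fields('t_start', 1.5, 's')
```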
Arguments: filename : the filename to read """ if not HAVE_SCIPY: raise SCIPY_ERR BaseIO.__init__(self) self.filename = filename def read_block(self, lazy=False): """ Arguments: """ assert not lazy, 'Do not support lazy' d = scipy.io.loadmat(self.filename, struct_as_record=False, squeeze_me=True, mat_dtype=True) if 'block' not in d: self.logger.exception('No block in ' + self.filename) return None bl_struct = d['block'] bl = self.create_ob_from_struct( bl_struct, 'Block') bl.create_many_to_one_relationship() return bl def write_block(self, bl, **kargs): """ Arguments: bl: the block to b saved """ bl_struct = self.create_struct_from_obj(bl) for seg in bl.segments: seg_struct = self.create_struct_from_obj(seg) bl_struct['segments'].append(seg_struct) for anasig in seg.analogsignals: anasig_struct = self.create_struct_from_obj(anasig) seg_struct['analogsignals'].append(anasig_struct) for ea in seg.events: ea_struct = self.create_struct_from_obj(ea) seg_struct['events'].append(ea_struct) for ea in seg.epochs: ea_struct = self.create_struct_from_obj(ea) seg_struct['epochs'].append(ea_struct) for sptr in seg.spiketrains: sptr_struct = self.create_struct_from_obj(sptr) seg_struct['spiketrains'].append(sptr_struct) scipy.io.savemat(self.filename, {'block': bl_struct}, oned_as='row') def create_struct_from_obj(self, ob): struct = {} # relationship for childname in getattr(ob, '_single_child_containers', []): supported_containers = [subob.__name__.lower() + 's' for subob in self.supported_objects] if childname in supported_containers: struct[childname] = [] # attributes all_attrs = list(ob._all_attrs) if hasattr(ob, 'annotations'): all_attrs.append(('annotations', type(ob.annotations))) for i, attr in enumerate(all_attrs): attrname, attrtype = attr[0], attr[1] # ~ if attrname =='': # ~ struct['array'] = ob.magnitude # ~ struct['units'] = ob.dimensionality.string # ~ continue if (hasattr(ob, '_quantity_attr') and ob._quantity_attr == attrname): struct[attrname] = ob.magnitude 
struct[attrname + '_units'] = ob.dimensionality.string continue if not (attrname in ob.annotations or hasattr(ob, attrname)): continue if getattr(ob, attrname) is None: continue if attrtype == pq.Quantity: # ndim = attr[2] struct[attrname] = getattr(ob, attrname).magnitude struct[attrname + '_units'] = getattr( ob, attrname).dimensionality.string elif attrtype == datetime: struct[attrname] = str(getattr(ob, attrname)) else: struct[attrname] = getattr(ob, attrname) return struct def create_ob_from_struct(self, struct, classname): cl = class_by_name[classname] # check if inherits Quantity # ~ is_quantity = False # ~ for attr in cl._necessary_attrs: # ~ if attr[0] == '' and attr[1] == pq.Quantity: # ~ is_quantity = True # ~ break # ~ is_quantiy = hasattr(cl, '_quantity_attr') # ~ if is_quantity: if hasattr(cl, '_quantity_attr'): quantity_attr = cl._quantity_attr arr = getattr(struct, quantity_attr) # ~ data_complement = dict(units=str(struct.units)) data_complement = dict(units=str( getattr(struct, quantity_attr + '_units'))) if "sampling_rate" in (at[0] for at in cl._necessary_attrs): # put fake value for now, put correct value later data_complement["sampling_rate"] = 0 * pq.kHz try: len(arr) except TypeError: # strange scipy.io behavior: if len is 1 we get a float arr = np.array(arr) arr = arr.reshape((-1,)) # new view with one dimension if "t_stop" in (at[0] for at in cl._necessary_attrs): if len(arr) > 0: data_complement["t_stop"] = arr.max() else: data_complement["t_stop"] = 0.0 if "t_start" in (at[0] for at in cl._necessary_attrs): if len(arr) > 0: data_complement["t_start"] = arr.min() else: data_complement["t_start"] = 0.0 ob = cl(arr, **data_complement) else: ob = cl() for attrname in struct._fieldnames: # check children if attrname in getattr(ob, '_single_child_containers', []): child_struct = getattr(struct, attrname) try: # try must only surround len() or other errors are captured child_len = len(child_struct) except TypeError: # strange scipy.io behavior: 
if len is 1 there is no len() child = self.create_ob_from_struct( child_struct, classname_lower_to_upper[attrname[:-1]]) getattr(ob, attrname.lower()).append(child) else: for c in range(child_len): child = self.create_ob_from_struct( child_struct[c], classname_lower_to_upper[attrname[:-1]]) getattr(ob, attrname.lower()).append(child) continue # attributes if attrname.endswith('_units') or attrname == 'units': # linked with another field continue if hasattr(cl, '_quantity_attr') and cl._quantity_attr == attrname: continue item = getattr(struct, attrname) attributes = cl._necessary_attrs + cl._recommended_attrs \ + (('annotations', dict),) dict_attributes = dict([(a[0], a[1:]) for a in attributes]) if attrname in dict_attributes: attrtype = dict_attributes[attrname][0] if attrtype == datetime: m = r'(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+).(\d+)' r = re.findall(m, str(item)) if len(r) == 1: item = datetime(*[int(e) for e in r[0]]) else: item = None elif attrtype == np.ndarray: dt = dict_attributes[attrname][2] item = item.astype(dt) elif attrtype == pq.Quantity: ndim = dict_attributes[attrname][1] units = str(getattr(struct, attrname + '_units')) if ndim == 0: item = pq.Quantity(item, units) else: item = pq.Quantity(item, units) elif attrtype == dict: # FIXME: works but doesn't convert nested struct to dict item = {fn: getattr(item, fn) for fn in item._fieldnames} else: item = attrtype(item) setattr(ob, attrname, item) return ob neo-0.10.0/neo/io/nestio.py0000644000076700000240000007627714060430470016042 0ustar andrewstaff00000000000000""" Class for reading output files from NEST simulations ( http://www.nest-simulator.org/ ). 
Tested with NEST2.10.0 Depends on: numpy, quantities Supported: Read Authors: Julia Sprenger, Maximilian Schmidt, Johanna Senk """ # needed for Python3 compatibility import os.path import warnings from datetime import datetime import numpy as np import quantities as pq from neo.io.baseio import BaseIO from neo.core import Block, Segment, SpikeTrain, AnalogSignal value_type_dict = {'V': pq.mV, 'I': pq.pA, 'g': pq.CompoundUnit("10^-9*S"), 'no type': pq.dimensionless} class NestIO(BaseIO): """ Class for reading NEST output files. GDF files for the spike data and DAT files for analog signals are possible. Usage: >>> from neo.io.nestio import NestIO >>> files = ['membrane_voltages-1261-0.dat', 'spikes-1258-0.gdf'] >>> r = NestIO(filenames=files) >>> seg = r.read_segment(gid_list=[], t_start=400 * pq.ms, t_stop=600 * pq.ms, id_column_gdf=0, time_column_gdf=1, id_column_dat=0, time_column_dat=1, value_columns_dat=2) """ is_readable = True # class supports reading, but not writing is_writable = False supported_objects = [SpikeTrain, AnalogSignal, Segment, Block] readable_objects = [SpikeTrain, AnalogSignal, Segment, Block] has_header = False is_streameable = False write_params = None # writing is not supported name = 'nest' extensions = ['gdf', 'dat'] mode = 'file' def __init__(self, filenames=None): """ Parameters ---------- filenames: string or list of strings, default=None The filename or list of filenames to load. """ if isinstance(filenames, str): filenames = [filenames] self.filenames = filenames self.avail_formats = {} self.avail_IOs = {} for filename in filenames: path, ext = os.path.splitext(filename) ext = ext.strip('.') if ext in self.extensions: if ext in self.avail_IOs: raise ValueError('Received multiple files with "%s" ' 'extention. Can only load single file of ' 'this type.' 
% ext) self.avail_IOs[ext] = ColumnIO(filename) self.avail_formats[ext] = path def __read_analogsignals(self, gid_list, time_unit, t_start=None, t_stop=None, sampling_period=None, id_column=0, time_column=1, value_columns=2, value_types=None, value_units=None): """ Internal function called by read_analogsignal() and read_segment(). """ if 'dat' not in self.avail_formats: raise ValueError('Can not load analogsignals. No DAT file ' 'provided.') # checking gid input parameters gid_list, id_column = self._check_input_gids(gid_list, id_column) # checking time input parameters t_start, t_stop = self._check_input_times(t_start, t_stop, mandatory=False) # checking value input parameters (value_columns, value_types, value_units) = \ self._check_input_values_parameters(value_columns, value_types, value_units) # defining standard column order for internal usage # [id_column, time_column, value_column1, value_column2, ...] column_ids = [id_column, time_column] + value_columns for i, cid in enumerate(column_ids): if cid is None: column_ids[i] = -1 # assert that no single column is assigned twice column_list = [id_column, time_column] + value_columns column_list_no_None = [c for c in column_list if c is not None] if len(np.unique(column_list_no_None)) < len(column_list_no_None): raise ValueError( 'One or more columns have been specified to contain ' 'the same data. Columns were specified to %s.' 
'' % column_list_no_None) # extracting condition and sorting parameters for raw data loading (condition, condition_column, sorting_column) = self._get_conditions_and_sorting(id_column, time_column, gid_list, t_start, t_stop) # loading raw data columns data = self.avail_IOs['dat'].get_columns( column_ids=column_ids, condition=condition, condition_column=condition_column, sorting_columns=sorting_column) sampling_period = self._check_input_sampling_period(sampling_period, time_column, time_unit, data) analogsignal_list = [] # extracting complete gid list for anasig generation if (gid_list == []) and id_column is not None: gid_list = np.unique(data[:, id_column]) # generate analogsignals for each neuron ID for i in gid_list: selected_ids = self._get_selected_ids( i, id_column, time_column, t_start, t_stop, time_unit, data) # extract starting time of analogsignal if (time_column is not None) and data.size: anasig_start_time = data[selected_ids[0], 1] * time_unit else: # set t_start equal to sampling_period because NEST starts # recording only after 1 sampling_period anasig_start_time = 1. * sampling_period # create one analogsignal per value column requested for v_id, value_column in enumerate(value_columns): signal = data[ selected_ids[0]:selected_ids[1], value_column] # create AnalogSignal objects and annotate them with # the neuron ID analogsignal_list.append(AnalogSignal( signal * value_units[v_id], sampling_period=sampling_period, t_start=anasig_start_time, id=i, type=value_types[v_id])) # check for correct length of analogsignal assert (analogsignal_list[-1].t_stop == anasig_start_time + len(signal) * sampling_period) return analogsignal_list def __read_spiketrains(self, gdf_id_list, time_unit, t_start, t_stop, id_column, time_column, **args): """ Internal function for reading multiple spiketrains at once. This function is called by read_spiketrain() and read_segment(). """ if 'gdf' not in self.avail_IOs: raise ValueError('Can not load spiketrains. 
No GDF file provided.') # assert that the file contains spike times if time_column is None: raise ValueError('Time column is None. No spike times to ' 'be read in.') gdf_id_list, id_column = self._check_input_gids(gdf_id_list, id_column) t_start, t_stop = self._check_input_times(t_start, t_stop, mandatory=True) # assert that no single column is assigned twice if id_column == time_column: raise ValueError('One or more columns have been specified to ' 'contain the same data.') # defining standard column order for internal usage # [id_column, time_column, value_column1, value_column2, ...] column_ids = [id_column, time_column] for i, cid in enumerate(column_ids): if cid is None: column_ids[i] = -1 (condition, condition_column, sorting_column) = \ self._get_conditions_and_sorting(id_column, time_column, gdf_id_list, t_start, t_stop) data = self.avail_IOs['gdf'].get_columns( column_ids=column_ids, condition=condition, condition_column=condition_column, sorting_columns=sorting_column) # create a list of SpikeTrains for all neuron IDs in gdf_id_list # assign spike times to neuron IDs if id_column is given if id_column is not None: if (gdf_id_list == []) and id_column is not None: gdf_id_list = np.unique(data[:, id_column]) spiketrain_list = [] for nid in gdf_id_list: selected_ids = self._get_selected_ids(nid, id_column, time_column, t_start, t_stop, time_unit, data) times = data[selected_ids[0]:selected_ids[1], time_column] spiketrain_list.append(SpikeTrain( times, units=time_unit, t_start=t_start, t_stop=t_stop, id=nid, **args)) # if id_column is not given, all spike times are collected in one # spike train with id=None else: train = data[:, time_column] spiketrain_list = [SpikeTrain(train, units=time_unit, t_start=t_start, t_stop=t_stop, id=None, **args)] return spiketrain_list def _check_input_times(self, t_start, t_stop, mandatory=True): """ Checks input times for existence and setting default values if necessary. 
t_start: pq.quantity.Quantity, start time of the time range to load. t_stop: pq.quantity.Quantity, stop time of the time range to load. mandatory: bool, if True times cannot be None and an error will be raised. if False, time values of None will be replaced by -infinity or infinity, respectively. default: True. """ if t_stop is None: if mandatory: raise ValueError('No t_stop specified.') else: t_stop = np.inf * pq.s if t_start is None: if mandatory: raise ValueError('No t_start specified.') else: t_start = -np.inf * pq.s for time in (t_start, t_stop): if not isinstance(time, pq.quantity.Quantity): raise TypeError('Time value (%s) is not a quantity.' % time) return t_start, t_stop def _check_input_values_parameters(self, value_columns, value_types, value_units): """ Checks value parameters for consistency. value_columns: int, column id containing the value to load. value_types: list of strings, type of values. value_units: list of units of the value columns. Returns adjusted list of [value_columns, value_types, value_units] """ if value_columns is None: raise ValueError('No value column provided.') if isinstance(value_columns, int): value_columns = [value_columns] if value_types is None: value_types = ['no type'] * len(value_columns) elif isinstance(value_types, str): value_types = [value_types] # translating value types into units as far as possible if value_units is None: short_value_types = [vtype.split('_')[0] for vtype in value_types] if not all([svt in value_type_dict for svt in short_value_types]): raise ValueError('Can not interpret value types ' '"%s"' % value_types) value_units = [value_type_dict[svt] for svt in short_value_types] # checking for same number of value types, units and columns if not (len(value_types) == len(value_units) == len(value_columns)): raise ValueError('Length of value types, units and columns does ' 'not match (%i,%i,%i)' % (len(value_types), len(value_units), len(value_columns))) if not all([isinstance(vunit, pq.UnitQuantity) for
vunit in value_units]): raise ValueError('No value unit or standard value type specified.') return value_columns, value_types, value_units def _check_input_gids(self, gid_list, id_column): """ Checks gid values and column for consistency. gid_list: list of int or None, gid to load. id_column: int, id of the column containing the gids. Returns adjusted list of [gid_list, id_column]. """ if gid_list is None: gid_list = [gid_list] if None in gid_list and id_column is not None: raise ValueError('No neuron IDs specified but file contains ' 'neuron IDs in column %s. Specify empty list to ' 'retrieve spiketrains of all neurons.' '' % str(id_column)) if gid_list != [None] and id_column is None: raise ValueError('Specified neuron IDs to be %s, but no ID column ' 'specified.' % gid_list) return gid_list, id_column def _check_input_sampling_period(self, sampling_period, time_column, time_unit, data): """ Checks sampling period, times and time unit for consistency. sampling_period: pq.quantity.Quantity, sampling period of data to load. time_column: int, column id of times in data to load. time_unit: pq.quantity.Quantity, unit of time used in the data to load. data: numpy array, the data to be loaded / interpreted. Returns pq.quantities.Quantity object, the updated sampling period. 
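Example of the inference performed when sampling_period is None (a minimal sketch with hypothetical time stamps; real values come from the time column of the DAT file):

```python
import numpy as np

# Unordered, duplicated time stamps in units of `time_unit` (hypothetical).
times = np.array([3.0, 1.0, 2.0, 4.0, 2.0])
# The sampling period is the unique difference between consecutive
# sorted, de-duplicated time values.
data_sampling = np.unique(np.diff(sorted(np.unique(times))))
if len(data_sampling) > 1:
    raise ValueError('Different sampling distances found in data set')
dt = data_sampling[0]  # combined with time_unit, e.g. pq.CompoundUnit('1.0*ms')
```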
""" if sampling_period is None: if time_column is not None: data_sampling = np.unique( np.diff(sorted(np.unique(data[:, 1])))) if len(data_sampling) > 1: raise ValueError('Different sampling distances found in ' 'data set (%s)' % data_sampling) else: dt = data_sampling[0] else: raise ValueError('Can not estimate sampling rate without time ' 'column id provided.') sampling_period = pq.CompoundUnit(str(dt) + '*' + time_unit.units.u_symbol) elif not isinstance(sampling_period, pq.UnitQuantity): raise ValueError("sampling_period is not specified as a unit.") return sampling_period def _get_conditions_and_sorting(self, id_column, time_column, gid_list, t_start, t_stop): """ Calculates the condition, condition_column and sorting_column based on other parameters supplied for loading the data. id_column: int, id of the column containing gids. time_column: int, id of the column containing times. gid_list: list of int, gid to be loaded. t_start: pq.quantity.Quantity, start of the time range to be loaded. t_stop: pq.quantity.Quantity, stop of the time range to be loaded. Returns updated [condition, condition_column, sorting_column]. 
""" condition, condition_column = None, None sorting_column = [] curr_id = 0 if ((gid_list != [None]) and (gid_list is not None)): if gid_list != []: def condition(x): return x in gid_list condition_column = id_column sorting_column.append(curr_id) # Sorting according to gids first curr_id += 1 if time_column is not None: sorting_column.append(curr_id) # Sorting according to time curr_id += 1 elif t_start != -np.inf and t_stop != np.inf: warnings.warn('Ignoring t_start and t_stop parameters, because no ' 'time column id is provided.') if sorting_column == []: sorting_column = None else: sorting_column = sorting_column[::-1] return condition, condition_column, sorting_column def _get_selected_ids(self, gid, id_column, time_column, t_start, t_stop, time_unit, data): """ Calculates the data range to load depending on the selected gid and the provided time range (t_start, t_stop) gid: int, gid to be loaded. id_column: int, id of the column containing gids. time_column: int, id of the column containing times. t_start: pq.quantity.Quantity, start of the time range to load. t_stop: pq.quantity.Quantity, stop of the time range to load. time_unit: pq.quantity.Quantity, time unit of the data to load. data: numpy array, data to load. 
Returns list of selected gids """ gid_ids = np.array([0, data.shape[0]]) if id_column is not None: gid_ids = np.array([np.searchsorted(data[:, 0], gid, side='left'), np.searchsorted(data[:, 0], gid, side='right')]) gid_data = data[gid_ids[0]:gid_ids[1], :] # select only requested time range id_shifts = np.array([0, 0]) if time_column is not None: id_shifts[0] = np.searchsorted(gid_data[:, 1], t_start.rescale( time_unit).magnitude, side='left') id_shifts[1] = (np.searchsorted(gid_data[:, 1], t_stop.rescale( time_unit).magnitude, side='left') - gid_data.shape[0]) selected_ids = gid_ids + id_shifts return selected_ids def read_block(self, gid_list=None, time_unit=pq.ms, t_start=None, t_stop=None, sampling_period=None, id_column_dat=0, time_column_dat=1, value_columns_dat=2, id_column_gdf=0, time_column_gdf=1, value_types=None, value_units=None, lazy=False): assert not lazy, 'Do not support lazy' seg = self.read_segment(gid_list, time_unit, t_start, t_stop, sampling_period, id_column_dat, time_column_dat, value_columns_dat, id_column_gdf, time_column_gdf, value_types, value_units) blk = Block(file_origin=seg.file_origin, file_datetime=seg.file_datetime) blk.segments.append(seg) seg.block = blk return blk def read_segment(self, gid_list=None, time_unit=pq.ms, t_start=None, t_stop=None, sampling_period=None, id_column_dat=0, time_column_dat=1, value_columns_dat=2, id_column_gdf=0, time_column_gdf=1, value_types=None, value_units=None, lazy=False): """ Reads a Segment which contains SpikeTrain(s) with specified neuron IDs from the GDF data. Arguments ---------- gid_list : list, default: None A list of GDF IDs of which to return SpikeTrain(s). gid_list must be specified if the GDF file contains neuron IDs, the default None then raises an error. Specify an empty list [] to retrieve the spike trains of all neurons. time_unit : Quantity (time), optional, default: quantities.ms The time unit of recorded time stamps in DAT as well as GDF files. 
t_start : Quantity (time), optional, default: 0 * pq.ms Start time of the data to load. t_stop : Quantity (time), default: None Stop time of the data to load. t_stop must be specified, the default None raises an error. sampling_period : Quantity (time), optional, default: None Sampling period of the recorded data. id_column_dat : int, optional, default: 0 Column index of neuron IDs in the DAT file. time_column_dat : int, optional, default: 1 Column index of time stamps in the DAT file. value_columns_dat : int, optional, default: 2 Column index of the analog values recorded in the DAT file. id_column_gdf : int, optional, default: 0 Column index of neuron IDs in the GDF file. time_column_gdf : int, optional, default: 1 Column index of time stamps in the GDF file. value_types : str, optional, default: None NEST data type of the analog values recorded, e.g. 'V_m', 'I', 'g_e' value_units : Quantity (amplitude), default: None The physical unit of the recorded signal values. lazy : bool, optional, default: False Returns ------- seg : Segment The Segment contains one SpikeTrain and one AnalogSignal for each ID in gid_list.
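Example of the accepted gid_list forms; an inclusive (first, last) tuple of neuron IDs is expanded internally, as sketched here with hypothetical IDs:

```python
# gid_list may also be an inclusive (first, last) tuple of neuron IDs;
# read_segment() expands it to the full range before reading.
gid_list = (10, 13)
if gid_list[0] > gid_list[1]:
    raise ValueError('The second entry in gid_list must be '
                     'greater or equal to the first entry.')
gids = list(range(gid_list[0], gid_list[1] + 1))
```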
""" assert not lazy, 'Do not support lazy' if isinstance(gid_list, tuple): if gid_list[0] > gid_list[1]: raise ValueError('The second entry in gid_list must be ' 'greater or equal to the first entry.') gid_list = range(gid_list[0], gid_list[1] + 1) # __read_xxx() needs a list of IDs if gid_list is None: gid_list = [None] # create an empty Segment seg = Segment(file_origin=",".join(self.filenames)) seg.file_datetime = datetime.fromtimestamp(os.stat(self.filenames[0]).st_mtime) # todo: rather than take the first file for the timestamp, we should take the oldest # in practice, there won't be much difference # Load analogsignals and attach to Segment if 'dat' in self.avail_formats: seg.analogsignals = self.__read_analogsignals( gid_list, time_unit, t_start, t_stop, sampling_period=sampling_period, id_column=id_column_dat, time_column=time_column_dat, value_columns=value_columns_dat, value_types=value_types, value_units=value_units) if 'gdf' in self.avail_formats: seg.spiketrains = self.__read_spiketrains( gid_list, time_unit, t_start, t_stop, id_column=id_column_gdf, time_column=time_column_gdf) return seg def read_analogsignal(self, gid=None, time_unit=pq.ms, t_start=None, t_stop=None, sampling_period=None, id_column=0, time_column=1, value_column=2, value_type=None, value_unit=None, lazy=False): """ Reads an AnalogSignal with specified neuron ID from the DAT data. Arguments ---------- gid : int, default: None The GDF ID of the returned SpikeTrain. gdf_id must be specified if the GDF file contains neuron IDs, the default None then raises an error. Specify an empty list [] to retrieve the spike trains of all neurons. time_unit : Quantity (time), optional, default: quantities.ms The time unit of recorded time stamps. t_start : Quantity (time), optional, default: 0 * pq.ms Start time of SpikeTrain. t_stop : Quantity (time), default: None Stop time of SpikeTrain. t_stop must be specified, the default None raises an error. 
sampling_period : Quantity (time), optional, default: None Sampling period of the recorded data. id_column : int, optional, default: 0 Column index of neuron IDs. time_column : int, optional, default: 1 Column index of time stamps. value_column : int, optional, default: 2 Column index of the analog values recorded. value_type : str, optional, default: None NEST data type of the analog values recorded, e.g. 'V_m', 'I', 'g_e'. value_unit : Quantity (amplitude), default: None The physical unit of the recorded signal values. lazy : bool, optional, default: False Returns ------- anasig : AnalogSignal The requested AnalogSignal object with an annotation 'id' corresponding to the gid parameter. """ assert not lazy, 'Do not support lazy' # __read_analogsignals() needs a list of IDs return self.__read_analogsignals([gid], time_unit, t_start, t_stop, sampling_period=sampling_period, id_column=id_column, time_column=time_column, value_columns=value_column, value_types=value_type, value_units=value_unit)[0] def read_spiketrain( self, gdf_id=None, time_unit=pq.ms, t_start=None, t_stop=None, id_column=0, time_column=1, lazy=False, **args): """ Reads a SpikeTrain with specified neuron ID from the GDF data. Arguments ---------- gdf_id : int, default: None The GDF ID of the returned SpikeTrain. gdf_id must be specified if the GDF file contains neuron IDs. Providing [] loads all available IDs. time_unit : Quantity (time), optional, default: quantities.ms The time unit of recorded time stamps. t_start : Quantity (time), default: None Start time of SpikeTrain. t_start must be specified. t_stop : Quantity (time), default: None Stop time of SpikeTrain. t_stop must be specified. id_column : int, optional, default: 0 Column index of neuron IDs. time_column : int, optional, default: 1 Column index of time stamps. lazy : bool, optional, default: False Returns ------- spiketrain : SpikeTrain The requested SpikeTrain object with an annotation 'id' corresponding to the gdf_id parameter.
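The ID column is assumed to be sorted; a minimal sketch (hypothetical rows) of how the contiguous block of spike rows for one gid is located internally with np.searchsorted:

```python
import numpy as np

# Hypothetical spike rows [gid, time], sorted by gid; slice out the
# block belonging to one neuron ID.
data = np.array([[1, 0.5], [2, 0.1], [2, 0.7], [3, 0.2]])
gid = 2
lo = np.searchsorted(data[:, 0], gid, side='left')
hi = np.searchsorted(data[:, 0], gid, side='right')
spike_times = data[lo:hi, 1]  # times for gid 2 only
```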
""" assert not lazy, 'Do not support lazy' if (not isinstance(gdf_id, int)) and gdf_id is not None: raise ValueError('gdf_id has to be of type int or None.') if gdf_id is None and id_column is not None: raise ValueError('No neuron ID specified but file contains ' 'neuron IDs in column ' + str(id_column) + '.') return self.__read_spiketrains([gdf_id], time_unit, t_start, t_stop, id_column, time_column, **args)[0] class ColumnIO: ''' Class for reading an ASCII file containing multiple columns of data. ''' def __init__(self, filename): """ filename: string, path to ASCII file to read. """ self.filename = filename # read the first line to check the data type (int or float) of the data f = open(self.filename) line = f.readline() additional_parameters = {} if '.' not in line: additional_parameters['dtype'] = np.int32 self.data = np.loadtxt(self.filename, **additional_parameters) if len(self.data.shape) == 1: self.data = self.data[:, np.newaxis] def get_columns(self, column_ids='all', condition=None, condition_column=None, sorting_columns=None): """ column_ids : 'all' or list of int, the ids of columns to extract. condition : None or function, which is applied to each row to evaluate if it should be included in the result. Needs to return a bool value. condition_column : int, id of the column on which the condition function is applied to sorting_columns : int or list of int, column ids to sort by. List entries have to be ordered by increasing sorting priority! Returns ------- numpy array containing the requested data. """ if column_ids == [] or column_ids == 'all': column_ids = range(self.data.shape[-1]) if isinstance(column_ids, (int, float)): column_ids = [column_ids] column_ids = np.array(column_ids) if column_ids is not None: if max(column_ids) >= len(self.data) - 1: raise ValueError('Can not load column ID %i. 
File contains ' 'only %i columns' % (max(column_ids), len(self.data))) if sorting_columns is not None: if isinstance(sorting_columns, int): sorting_columns = [sorting_columns] if (max(sorting_columns) >= self.data.shape[1]): raise ValueError('Can not sort by column ID %i. File contains ' 'only %i columns' % (max(sorting_columns), self.data.shape[1])) # Starting with whole dataset being selected for return selected_data = self.data # Apply filter condition to rows if condition and (condition_column is None): raise ValueError('Filter condition provided, but no ' 'condition_column ID provided') elif (condition_column is not None) and (condition is None): warnings.warn('Condition column ID provided, but no condition ' 'given. No filtering will be performed.') elif (condition is not None) and (condition_column is not None): condition_function = np.vectorize(condition) mask = condition_function( selected_data[:, condition_column]).astype(bool) selected_data = selected_data[mask, :] # Apply sorting if requested if sorting_columns is not None: values_to_sort = selected_data[:, sorting_columns].T ordered_ids = np.lexsort(tuple(values_to_sort[i] for i in range(len(values_to_sort)))) selected_data = selected_data[ordered_ids, :] # Select only requested columns selected_data = selected_data[:, column_ids] return selected_data neo-0.10.0/neo/io/neuralynxio.py0000644000076700000240000000322414060430470017074 0ustar andrewstaff00000000000000""" Class for reading data from Neuralynx files. This IO supports NCS, NEV and NSE file formats. Depends on: numpy Supported: Read Author: Julia Sprenger, Carlos Canova """ from neo.io.basefromrawio import BaseFromRaw from neo.rawio.neuralynxrawio.neuralynxrawio import NeuralynxRawIO class NeuralynxIO(NeuralynxRawIO, BaseFromRaw): """ Class for reading data from Neuralynx files. This IO supports NCS, NEV, NSE and NTT file formats. 
NCS contains signals for one channel NEV contains events NSE contains spikes and waveforms for mono electrodes NTT contains spikes and waveforms for tetrodes """ _prefered_signal_group_mode = 'group-by-same-units' mode = 'dir' def __init__(self, dirname, use_cache=False, cache_path='same_as_resource', keep_original_times=False): """ Initialise IO instance Parameters ---------- dirname : str Directory containing data files use_cache : bool, optional Cache results of initial file scans for faster loading in subsequent runs. Default: False cache_path : str, optional Folder path to use for cache files. Default: 'same_as_resource' keep_original_times : bool Preserve original time stamps as in data files. By default datasets are shifted to begin at t_start = 0*pq.second. Default: False """ NeuralynxRawIO.__init__(self, dirname=dirname, use_cache=use_cache, cache_path=cache_path, keep_original_times=keep_original_times) BaseFromRaw.__init__(self, dirname) neo-0.10.0/neo/io/neuroexplorerio.py0000644000076700000240000000064314066375716020003 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.neuroexplorerrawio import NeuroExplorerRawIO class NeuroExplorerIO(NeuroExplorerRawIO, BaseFromRaw): """Class for reading data from NeuroExplorer (.nex)""" _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): NeuroExplorerRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/neuroscopeio.py0000644000076700000240000000070014060430470017225 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.neuroscoperawio import NeuroScopeRawIO class NeuroScopeIO(NeuroScopeRawIO, BaseFromRaw): """ Reading from Neuroscope format files. 
Ref: http://neuroscope.sourceforge.net/ """ _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): NeuroScopeRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/neuroshareapiio.py0000644000076700000240000004675414060430470017733 0ustar andrewstaff00000000000000""" Class for "reading" data from Neuroshare compatible files (check neuroshare.org) It runs through the whole file and searches for: analog signals, spike cutouts, and trigger events (without duration) Depends on: Neuroshare API 0.9.1, numpy 1.6.1, quantities 0.10.1 Supported: Read Author: Andre Maia Chagas """ # note neo.core needs only numpy and quantities import numpy as np import quantities as pq import os # check to see if the neuroshare bindings are properly imported try: import neuroshare as ns except ImportError as err: print(err) # print('\n neuroshare library not found, loading data will not work!' ) # print('\n be sure to install the library found at:') # print('\n www.http://pythonhosted.org/neuroshare/') else: pass # print('neuroshare library successfully imported') # import BaseIO from neo.io.baseio import BaseIO # import objects from neo.core from neo.core import Segment, AnalogSignal, SpikeTrain, Event, Epoch # create an object based on BaseIO class NeuroshareapiIO(BaseIO): # setting some class parameters is_readable = True # This class can only read data is_writable = False # write is not supported supported_objects = [Segment, AnalogSignal, SpikeTrain, Event, Epoch] has_header = False is_streameable = False readable_objects = [Segment, AnalogSignal, SpikeTrain, Event, Epoch] # This class is not able to write objects writeable_objects = [] # # This is for GUI stuff : a definition for parameters when reading. # # This dict should be keyed by object (`Block`). Each entry is a list # # of tuple. The first entry in each tuple is the parameter name. 
The # # second entry is a dict with keys 'value' (for default value), # # and 'label' (for a descriptive name). # # Note that if the highest-level object requires parameters, # # common_io_test will be skipped. read_params = { Segment: [ ("segment_duration", {"value": 0., "label": "Segment size (s.)"}), ("t_start", {"value": 0., "label": "start reading (s.)"}), # ("num_analogsignal", # {'value" : 8, "label" : "Number of recording points"}), # ("num_spiketrain_by_channel', # {"value" : 3, "label" : "Num of spiketrains"}), ], } # # do not supported write so no GUI stuff write_params = None name = "Neuroshare" extensions = [] # This object operates on neuroshare files mode = "file" def __init__(self, filename=None, dllpath=None): """ Arguments: filename : the filename The init function will run automatically upon calling of the class, as in: test = MultichannelIO(filename = filetoberead.mcd), therefore the first operations with the file are set here, so that the user doesn't have to remember to use another method, than the ones defined in the NEO library """ BaseIO.__init__(self) self.filename = filename # set the flags for each event type eventID = 1 analogID = 2 epochID = 3 # if a filename was given, create a dictionary with information that will # be needed later on. if self.filename is not None: if dllpath is not None: name = os.path.splitext(os.path.basename(dllpath))[0] library = ns.Library(name, dllpath) else: library = None self.fd = ns.File(self.filename, library=library) # get all the metadata from file self.metadata = self.fd.metadata_raw # get sampling rate self.metadata["sampRate"] = 1. 
/ self.metadata["TimeStampResolution"] # hz # create lists and array for electrode, spike cutouts and trigger channels self.metadata["elecChannels"] = list() self.metadata["elecChanId"] = list() self.metadata["num_analogs"] = 0 self.metadata["spkChannels"] = list() self.metadata["spkChanId"] = list() self.metadata["num_spkChans"] = 0 self.metadata["triggers"] = list() self.metadata["triggersId"] = list() self.metadata["num_trigs"] = 0 self.metadata["digital epochs"] = list() self.metadata["digiEpochId"] = list() self.metadata["num_digiEpochs"] = 0 # loop through all entities in file to get the indexes for each entity # type, so that one can run through the indexes later, upon reading the # segment for entity in self.fd.entities: # if entity is analog and not the digital line recording # (stored as analog in neuroshare files) if entity.entity_type == analogID and entity.label[0:4] != "digi": # get the electrode number self.metadata["elecChannels"].append(entity.label[-4:]) # get the electrode index self.metadata["elecChanId"].append(entity.id) # increase the number of electrodes found self.metadata["num_analogs"] += 1 # if the entity is a event entitiy and a trigger if entity.entity_type == eventID and entity.label[0:4] == "trig": # get the digital bit/trigger number self.metadata["triggers"].append(entity.label[0:4] + entity.label[-4:]) # get the digital bit index self.metadata["triggersId"].append(entity.id) # increase the number of triggers found self.metadata["num_trigs"] += 1 # if the entity is non triggered digital values with duration if entity.entity_type == eventID and entity.label[0:4] == "digi": # get the digital bit number self.metadata["digital epochs"].append(entity.label[-5:]) # get the digital bit index self.metadata["digiEpochId"].append(entity.id) # increase the number of triggers found self.metadata["num_digiEpochs"] += 1 # if the entity is spike cutouts if entity.entity_type == epochID and entity.label[0:4] == "spks": 
self.metadata["spkChannels"].append(entity.label[-4:]) self.metadata["spkChanId"].append(entity.id) self.metadata["num_spkChans"] += 1 # function to create a block and read in a segment # def create_block(self, # # ): # # blk=Block(name = self.fileName+"_segment:", # file_datetime = str(self.metadata_raw["Time_Day"])+"/"+ # str(self.metadata_raw["Time_Month"])+"/"+ # str(self.metadata_raw["Time_Year"])+"_"+ # str(self.metadata_raw["Time_Hour"])+":"+ # str(self.metadata_raw["Time_Min"])) # # blk.rec_datetime = blk.file_datetime # return blk # create function to read segment def read_segment(self, lazy=False, # all following arguments are decided by this IO and are free t_start=0., segment_duration=0., ): """ Return a Segment containing all analog and spike channels, as well as all trigger events. Parameters: segment_duration :is the size in secend of the segment. num_analogsignal : number of AnalogSignal in this segment num_spiketrain : number of SpikeTrain in this segment """ assert not lazy, 'Do not support lazy' # if no segment duration is given, use the complete file if segment_duration == 0.: segment_duration = float(self.metadata["TimeSpan"]) # if the segment duration is bigger than file, use the complete file if segment_duration >= float(self.metadata["TimeSpan"]): segment_duration = float(self.metadata["TimeSpan"]) # if the time sum of start point and segment duration is bigger than # the file time span, cap it at the end if segment_duration + t_start > float(self.metadata["TimeSpan"]): segment_duration = float(self.metadata["TimeSpan"]) - t_start # create an empty segment seg = Segment(name="segment from the NeuroshareapiIO") # read nested analosignal if self.metadata["num_analogs"] == 0: print("no analog signals in this file!") else: # run through the number of analog channels found at the __init__ function for i in range(self.metadata["num_analogs"]): # create an analog signal object for each channel found ana = 
self.read_analogsignal(channel_index=self.metadata["elecChanId"][i], segment_duration=segment_duration, t_start=t_start) # add analog signal read to segment object seg.analogsignals += [ana] # read triggers (in this case without any duration) for i in range(self.metadata["num_trigs"]): # create event object for each trigger/bit found eva = self.read_event(channel_index=self.metadata["triggersId"][i], segment_duration=segment_duration, t_start=t_start, ) # add event object to segment seg.events += [eva] # read epochs (digital events with duration) for i in range(self.metadata["num_digiEpochs"]): # create event object for each trigger/bit found epa = self.read_epoch(channel_index=self.metadata["digiEpochId"][i], segment_duration=segment_duration, t_start=t_start, ) # add event object to segment seg.epochs += [epa] # read nested spiketrain # run through all spike channels found for i in range(self.metadata["num_spkChans"]): # create spike object sptr = self.read_spiketrain(channel_index=self.metadata["spkChanId"][i], segment_duration=segment_duration, t_start=t_start) # add the spike object to segment seg.spiketrains += [sptr] seg.create_many_to_one_relationship() return seg """ With this IO AnalogSignal can be accessed directly with its channel number """ def read_analogsignal(self, lazy=False, # channel index as given by the neuroshare API channel_index=0, # time in seconds to be read segment_duration=0., # time in seconds to start reading from t_start=0., ): assert not lazy, 'Do not support lazy' # some controls: # if no segment duration is given, use the complete file if segment_duration == 0.: segment_duration = float(self.metadata["TimeSpan"]) # if the segment duration is bigger than file, use the complete file if segment_duration >= float(self.metadata["TimeSpan"]): segment_duration = float(self.metadata["TimeSpan"]) # get the analog object sig = self.fd.get_entity(channel_index) # get the units (V, mV etc) sigUnits = sig.units # get the electrode number 
chanName = sig.label[-4:] # transform t_start into index (reading will start from this index) startat = int(t_start * self.metadata["sampRate"]) # get the number of bins to read in bins = int(segment_duration * self.metadata["sampRate"]) # if the number of bins to read is bigger than # the total number of bins, read only till the end of analog object if startat + bins > sig.item_count: bins = sig.item_count - startat # read the data from the sig object sig, _, _ = sig.get_data(index=startat, count=bins) # store it to the 'AnalogSignal' object anasig = AnalogSignal(sig, units=sigUnits, sampling_rate=self.metadata["sampRate"] * pq.Hz, t_start=t_start * pq.s, t_stop=(t_start + segment_duration) * pq.s, channel_index=channel_index) # annotate from which electrode the signal comes from anasig.annotate(info="signal from channel %s" % chanName) return anasig # function to read spike trains def read_spiketrain(self, lazy=False, channel_index=0, segment_duration=0., t_start=0.): """ Function to read in spike trains. This API still does not support read in of specific channels as they are recorded. 
rather the function gets the entity set by 'channel_index' which is set in the __init__ function (all spike channels) """ assert not lazy, 'Do not support lazy' # sampling rate sr = self.metadata["sampRate"] # create a list to store spiketrain times times = list() # get the spike data from a specific channel index tempSpks = self.fd.get_entity(channel_index) # transform t_start into index (reading will start from this index) startat = tempSpks.get_index_by_time(t_start, 0) # zero means closest index to value # get the last index to read, using segment duration and t_start # -1 means last index before time endat = tempSpks.get_index_by_time(float(segment_duration + t_start), -1) numIndx = endat - startat # get the end point using segment duration # create a numpy empty array to store the waveforms waveforms = np.array(np.zeros([numIndx, tempSpks.max_sample_count])) # loop through the data from the specific channel index for i in range(startat, endat, 1): # get cutout, timestamp, cutout duration, and spike unit tempCuts, timeStamp, duration, unit = tempSpks.get_data(i) # save the cutout in the waveform matrix (row index relative to startat) waveforms[i - startat] = tempCuts[0] # append time stamp to list times.append(timeStamp) # create a spike train object spiketr = SpikeTrain(times, units=pq.s, t_stop=t_start + segment_duration, t_start=t_start * pq.s, name="spikes from electrode " + tempSpks.label[-3:], waveforms=waveforms * pq.volt, sampling_rate=sr * pq.Hz, file_origin=self.filename, annotate=("channel_index:" + str(channel_index))) return spiketr def read_event(self, lazy=False, channel_index=0, t_start=0., segment_duration=0.): """function to read digital timestamps. this function only reads the event onset.
        to get digital event durations, use the epoch function (to be
        implemented).
        """
        assert not lazy, 'Do not support lazy'

        # create temporary empty lists to store data
        tempNames = list()
        tempTimeStamp = list()

        # get entity from file
        trigEntity = self.fd.get_entity(channel_index)
        # transform t_start into index (reading will start from this index)
        startat = trigEntity.get_index_by_time(t_start, 0)  # zero means closest index to value
        # get the last index to read, using segment duration and t_start
        endat = trigEntity.get_index_by_time(
            float(segment_duration + t_start), -1)  # -1 means last index before time
        # numIndx = endat - startat
        # run through specified intervals in entity
        for i in range(startat, endat + 1, 1):  # trigEntity.item_count
            # get in which digital bit the trigger was detected
            tempNames.append(trigEntity.label[-8:])
            # get the time stamps of onset events
            tempData, onOrOff = trigEntity.get_data(i)
            # if this was an onset event, save it to the list.
            # On triggered recordings it seems that only onset events are
            # recorded. On continuous recordings both onset (==1)
            # and offset (==255) seem to be recorded.
            if onOrOff == 1:
                # append the time stamp to the list
                tempTimeStamp.append(tempData)

        # create an event array
        eva = Event(labels=np.array(tempNames, dtype="U"),
                    times=np.array(tempTimeStamp) * pq.s,
                    file_origin=self.filename,
                    description="the trigger events (without durations)")
        return eva

    def read_epoch(self, lazy=False, channel_index=0, t_start=0.,
                   segment_duration=0.):
        """
        Function to read digital timestamps. This function reads the event
        onset and offset and outputs onset and duration;
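`read_epoch` below pairs alternating onset/offset timestamps by walking odd indices and subtracting the preceding even-index timestamp to get a duration. The pairing logic on its own (a sketch over a plain list of hypothetical timestamps):

```python
def pair_onsets_offsets(timestamps):
    """Pair alternating onset/offset timestamps into (onset, duration)
    tuples, as read_epoch does with its even/odd index walk."""
    events = []
    # odd index i is an offset; i - 1 is the matching onset
    for i in range(1, len(timestamps), 2):
        onset = timestamps[i - 1]
        offset = timestamps[i]
        events.append((onset, offset - onset))
    return events

# onsets at 0.1 s and 0.5 s, offsets at 0.3 s and 0.9 s
pairs = pair_onsets_offsets([0.1, 0.3, 0.5, 0.9])
```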
        to get only onsets, use the event array function.
        """
        assert not lazy, 'Do not support lazy'

        # create temporary empty lists to store data
        tempNames = list()
        tempTimeStamp = list()
        durations = list()

        # get entity from file
        digEntity = self.fd.get_entity(channel_index)
        # transform t_start into index (reading will start from this index)
        startat = digEntity.get_index_by_time(t_start, 0)  # zero means closest index to value
        # get the last index to read, using segment duration and t_start
        # -1 means last index before time
        endat = digEntity.get_index_by_time(float(segment_duration + t_start), -1)

        # run through the entity using only odd "i"s
        for i in range(startat, endat + 1, 1):
            if i % 2 == 1:
                # get in which digital bit the trigger was detected
                tempNames.append(digEntity.label[-8:])
                # get the time stamps of even events
                tempData, onOrOff = digEntity.get_data(i - 1)
                # if this was an onset event, save it to the list.
                # On triggered recordings it seems that only onset events are
                # recorded. On continuous recordings both onset (==1)
                # and offset (==255) seem to be recorded.
                # if onOrOff == 1:
                # append the time stamp to the list
                tempTimeStamp.append(tempData)
                # get the time stamps of odd events
                tempData1, onOrOff = digEntity.get_data(i)
                # if onOrOff == 255:
                # pass
                durations.append(tempData1 - tempData)

        epa = Epoch(file_origin=self.filename,
                    times=np.array(tempTimeStamp) * pq.s,
                    durations=np.array(durations) * pq.s,
                    labels=np.array(tempNames, dtype="U"),
                    description="digital events with duration")
        return epa
neo-0.10.0/neo/io/neurosharectypesio.py0000644000076700000240000004143114060430470020454 0ustar andrewstaff00000000000000"""
NeuroshareIO is a wrapper, using ctypes, around the neuroshare DLLs.
Neuroshare is a C API for reading neural data.
Neuroshare also provides a Matlab and a Python API on top of that.

Neuroshare is an open-source API, but each DLL is provided directly by the
vendor.
The Neo user has to download the DLLs separately from the neuroshare website:
http://neuroshare.sourceforge.net/

For some vendors (Spike2/CED, Clampfit/Abf, ...), neo.io also provides pure
Python readers; Neo users should prefer them of course :)

Supported: Read

Author: sgarcia
"""

import sys
import ctypes
import os

import numpy as np
import quantities as pq

from neo.io.baseio import BaseIO
from neo.core import Segment, AnalogSignal, SpikeTrain, Event

ns_OK = 0          # Function successful
ns_LIBERROR = -1   # Generic linked library error
ns_TYPEERROR = -2  # Library unable to open file type
ns_FILEERROR = -3  # File access or read error
ns_BADFILE = -4    # Invalid file handle passed to function
ns_BADENTITY = -5  # Invalid or inappropriate entity identifier specified
ns_BADSOURCE = -6  # Invalid source identifier specified
ns_BADINDEX = -7   # Invalid entity index specified


class NeuroshareError(Exception):
    def __init__(self, lib, errno):
        self.lib = lib
        self.errno = errno
        pszMsgBuffer = ctypes.create_string_buffer(256)
        self.lib.ns_GetLastErrorMsg(pszMsgBuffer, ctypes.c_uint32(256))
        errstr = '{}: {}'.format(errno, pszMsgBuffer.value)
        Exception.__init__(self, errstr)


class DllWithError():
    def __init__(self, lib):
        self.lib = lib

    def __getattr__(self, attr):
        f = getattr(self.lib, attr)
        return self.decorate_with_error(f)

    def decorate_with_error(self, f):
        def func_with_error(*args):
            errno = f(*args)
            if errno != ns_OK:
                raise NeuroshareError(self.lib, errno)
            return errno
        return func_with_error


class NeurosharectypesIO(BaseIO):
    """
    Class for reading files through the neuroshare API.
    The user needs the DLLs in the path of the file format.
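The `DllWithError` wrapper above turns neuroshare's integer status codes into exceptions by intercepting attribute access and decorating every library call. The same pattern works for any object whose methods return status codes; a toy sketch (`FakeLib` and `WithStatusCheck` are invented for illustration, not part of this module):

```python
class StatusError(Exception):
    pass


class WithStatusCheck:
    """Proxy that wraps every callable attribute so a nonzero
    return value raises instead of being silently ignored."""

    def __init__(self, lib):
        self._lib = lib

    def __getattr__(self, attr):
        # only reached when normal attribute lookup fails,
        # so _lib itself is never wrapped
        f = getattr(self._lib, attr)

        def checked(*args):
            errno = f(*args)
            if errno != 0:
                raise StatusError(f"{attr} failed with code {errno}")
            return errno
        return checked


class FakeLib:
    def ok_call(self):
        return 0

    def bad_call(self):
        return -3


lib = WithStatusCheck(FakeLib())
lib.ok_call()        # returns 0, no exception
try:
    lib.bad_call()   # raises StatusError
except StatusError as exc:
    print(exc)
```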
    Usage:
        >>> from neo import io
        >>> r = io.NeuroshareIO(filename='a_file', dllname=the_name_of_dll)
        >>> seg = r.read_segment(import_neuroshare_segment=True)
        >>> print seg.analogsignals        # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
        [<AnalogSignal(...)>]
        >>> print seg.spiketrains
        []
        >>> print seg.events
        []

    Note:
        neuroshare.ns_ENTITY_EVENT: are converted to neo.EventArray
        neuroshare.ns_ENTITY_ANALOG: are converted to neo.AnalogSignal
        neuroshare.ns_ENTITY_NEURALEVENT: are converted to neo.SpikeTrain
        neuroshare.ns_ENTITY_SEGMENT: is something between a series of small
        AnalogSignals and a SpikeTrain with associated waveforms.
        It is arbitrarily converted as SpikeTrain.
    """

    is_readable = True
    is_writable = False

    supported_objects = [Segment, AnalogSignal, Event, SpikeTrain]
    readable_objects = [Segment]
    writeable_objects = []

    has_header = False
    is_streameable = False

    read_params = {Segment: []}
    write_params = None

    name = 'neuroshare'
    extensions = []
    mode = 'file'

    def __init__(self, filename='', dllname=''):
        """
        Arguments:
            filename: the file to read
            dllname: the name of the neuroshare dll to be used for this file
        """
        BaseIO.__init__(self)
        self.dllname = dllname
        self.filename = filename

    def read_segment(self, import_neuroshare_segment=True, lazy=False):
        """
        Arguments:
            import_neuroshare_segment: import neuroshare segment as SpikeTrain
                with associated waveforms, or do not import it at all.
        """
        assert not lazy, 'Do not support lazy'

        seg = Segment(file_origin=os.path.basename(self.filename), )

        if sys.platform.startswith('win'):
            neuroshare = ctypes.windll.LoadLibrary(self.dllname)
        elif sys.platform.startswith('linux'):
            neuroshare = ctypes.cdll.LoadLibrary(self.dllname)
        neuroshare = DllWithError(neuroshare)

        # elif sys.platform.startswith('darwin'):

        # API version
        info = ns_LIBRARYINFO()
        neuroshare.ns_GetLibraryInfo(ctypes.byref(info), ctypes.sizeof(info))
        seg.annotate(neuroshare_version=str(info.dwAPIVersionMaj) + '.'
                                          + str(info.dwAPIVersionMin))

        # open file
        hFile = ctypes.c_uint32(0)
        neuroshare.ns_OpenFile(ctypes.c_char_p(self.filename), ctypes.byref(hFile))
        fileinfo = ns_FILEINFO()
        neuroshare.ns_GetFileInfo(hFile, ctypes.byref(fileinfo), ctypes.sizeof(fileinfo))

        # read all entities
        for dwEntityID in range(fileinfo.dwEntityCount):
            entityInfo = ns_ENTITYINFO()
            neuroshare.ns_GetEntityInfo(hFile, dwEntityID, ctypes.byref(entityInfo),
                                        ctypes.sizeof(entityInfo))

            # EVENT
            if entity_types[entityInfo.dwEntityType] == 'ns_ENTITY_EVENT':
                pEventInfo = ns_EVENTINFO()
                neuroshare.ns_GetEventInfo(hFile, dwEntityID, ctypes.byref(pEventInfo),
                                           ctypes.sizeof(pEventInfo))

                if pEventInfo.dwEventType == 0:  # TEXT
                    pData = ctypes.create_string_buffer(pEventInfo.dwMaxDataLength)
                elif pEventInfo.dwEventType == 1:  # CSV
                    pData = ctypes.create_string_buffer(pEventInfo.dwMaxDataLength)
                elif pEventInfo.dwEventType == 2:  # 8 bit
                    pData = ctypes.c_byte(0)
                elif pEventInfo.dwEventType == 3:  # 16 bit
                    pData = ctypes.c_int16(0)
                elif pEventInfo.dwEventType == 4:  # 32 bit
                    pData = ctypes.c_int32(0)
                pdTimeStamp = ctypes.c_double(0.)
                pdwDataRetSize = ctypes.c_uint32(0)

                ea = Event(name=str(entityInfo.szEntityLabel), )
                times = []
                labels = []
                for dwIndex in range(entityInfo.dwItemCount):
                    neuroshare.ns_GetEventData(hFile, dwEntityID, dwIndex,
                                               ctypes.byref(pdTimeStamp),
                                               ctypes.byref(pData),
                                               ctypes.sizeof(pData),
                                               ctypes.byref(pdwDataRetSize))
                    times.append(pdTimeStamp.value)
                    labels.append(str(pData.value))
                ea.times = times * pq.s
                ea.labels = np.array(labels, dtype='U')
                seg.events.append(ea)

            # analog
            if entity_types[entityInfo.dwEntityType] == 'ns_ENTITY_ANALOG':
                pAnalogInfo = ns_ANALOGINFO()
                neuroshare.ns_GetAnalogInfo(hFile, dwEntityID, ctypes.byref(pAnalogInfo),
                                            ctypes.sizeof(pAnalogInfo))
                dwIndexCount = entityInfo.dwItemCount

                pdwContCount = ctypes.c_uint32(0)
                pData = np.zeros((entityInfo.dwItemCount,), dtype='float64')
                total_read = 0
                while total_read < entityInfo.dwItemCount:
                    dwStartIndex = ctypes.c_uint32(total_read)
                    dwStopIndex = ctypes.c_uint32(entityInfo.dwItemCount - total_read)
                    neuroshare.ns_GetAnalogData(hFile, dwEntityID, dwStartIndex,
                                                dwStopIndex, ctypes.byref(pdwContCount),
                                                pData[total_read:].ctypes.data_as(
                                                    ctypes.POINTER(ctypes.c_double)))
                    total_read += pdwContCount.value

                signal = pq.Quantity(pData, units=pAnalogInfo.szUnits, copy=False)

                # t_start
                dwIndex = 0
                pdTime = ctypes.c_double(0)
                neuroshare.ns_GetTimeByIndex(hFile, dwEntityID, dwIndex, ctypes.byref(pdTime))

                anaSig = AnalogSignal(signal,
                                      sampling_rate=pAnalogInfo.dSampleRate * pq.Hz,
                                      t_start=pdTime.value * pq.s,
                                      name=str(entityInfo.szEntityLabel), )
                anaSig.annotate(probe_info=str(pAnalogInfo.szProbeInfo))
                seg.analogsignals.append(anaSig)

            # segment
            if entity_types[
                    entityInfo.dwEntityType] == 'ns_ENTITY_SEGMENT' and import_neuroshare_segment:

                pdwSegmentInfo = ns_SEGMENTINFO()
                if not str(entityInfo.szEntityLabel).startswith('spks'):
                    continue

                neuroshare.ns_GetSegmentInfo(hFile, dwEntityID,
                                             ctypes.byref(pdwSegmentInfo),
                                             ctypes.sizeof(pdwSegmentInfo))
                nsource = pdwSegmentInfo.dwSourceCount

                pszMsgBuffer = ctypes.create_string_buffer(" " * 256)
                neuroshare.ns_GetLastErrorMsg(ctypes.byref(pszMsgBuffer), 256)

                for dwSourceID in range(pdwSegmentInfo.dwSourceCount):
                    pSourceInfo = ns_SEGSOURCEINFO()
                    neuroshare.ns_GetSegmentSourceInfo(hFile, dwEntityID, dwSourceID,
                                                       ctypes.byref(pSourceInfo),
                                                       ctypes.sizeof(pSourceInfo))

                pdTimeStamp = ctypes.c_double(0.)
                dwDataBufferSize = pdwSegmentInfo.dwMaxSampleCount * pdwSegmentInfo.dwSourceCount
                pData = np.zeros((dwDataBufferSize), dtype='float64')
                pdwSampleCount = ctypes.c_uint32(0)
                pdwUnitID = ctypes.c_uint32(0)

                nsample = int(dwDataBufferSize)
                times = np.empty((entityInfo.dwItemCount), dtype='f')
                waveforms = np.empty((entityInfo.dwItemCount, nsource, nsample), dtype='f')
                for dwIndex in range(entityInfo.dwItemCount):
                    neuroshare.ns_GetSegmentData(
                        hFile, dwEntityID, dwIndex, ctypes.byref(pdTimeStamp),
                        pData.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                        dwDataBufferSize * 8, ctypes.byref(pdwSampleCount),
                        ctypes.byref(pdwUnitID))

                    times[dwIndex] = pdTimeStamp.value
                    waveforms[dwIndex, :, :] = pData[:nsample * nsource].reshape(
                        nsample, nsource).transpose()

                sptr = SpikeTrain(times=pq.Quantity(times, units='s', copy=False),
                                  t_stop=times.max(),
                                  waveforms=pq.Quantity(waveforms, units=str(
                                      pdwSegmentInfo.szUnits), copy=False),
                                  left_sweep=nsample / 2. / float(pdwSegmentInfo.dSampleRate) * pq.s,
                                  sampling_rate=float(pdwSegmentInfo.dSampleRate) * pq.Hz,
                                  name=str(entityInfo.szEntityLabel), )
                seg.spiketrains.append(sptr)

            # neuralevent
            if entity_types[entityInfo.dwEntityType] == 'ns_ENTITY_NEURALEVENT':
                pNeuralInfo = ns_NEURALINFO()
                neuroshare.ns_GetNeuralInfo(hFile, dwEntityID,
                                            ctypes.byref(pNeuralInfo),
                                            ctypes.sizeof(pNeuralInfo))

                pData = np.zeros((entityInfo.dwItemCount,), dtype='float64')
                dwStartIndex = 0
                dwIndexCount = entityInfo.dwItemCount
                neuroshare.ns_GetNeuralData(hFile, dwEntityID, dwStartIndex, dwIndexCount,
                                            pData.ctypes.data_as(ctypes.POINTER(ctypes.c_double)))
                times = pData * pq.s
                t_stop = times.max()

                sptr = SpikeTrain(times, t_stop=t_stop,
                                  name=str(entityInfo.szEntityLabel), )
                seg.spiketrains.append(sptr)

        # close
        neuroshare.ns_CloseFile(hFile)

        seg.create_many_to_one_relationship()
        return seg


# neuroshare structures
class ns_FILEDESC(ctypes.Structure):
    _fields_ = [('szDescription', ctypes.c_char * 32),
                ('szExtension', ctypes.c_char * 8),
                ('szMacCodes', ctypes.c_char * 8),
                ('szMagicCode', ctypes.c_char * 16),
                ]


class ns_LIBRARYINFO(ctypes.Structure):
    _fields_ = [('dwLibVersionMaj', ctypes.c_uint32),
                ('dwLibVersionMin', ctypes.c_uint32),
                ('dwAPIVersionMaj', ctypes.c_uint32),
                ('dwAPIVersionMin', ctypes.c_uint32),
                ('szDescription', ctypes.c_char * 64),
                ('szCreator', ctypes.c_char * 64),
                ('dwTime_Year', ctypes.c_uint32),
                ('dwTime_Month', ctypes.c_uint32),
                ('dwTime_Day', ctypes.c_uint32),
                ('dwFlags', ctypes.c_uint32),
                ('dwMaxFiles', ctypes.c_uint32),
                ('dwFileDescCount', ctypes.c_uint32),
                ('FileDesc', ns_FILEDESC * 16),
                ]


class ns_FILEINFO(ctypes.Structure):
    _fields_ = [('szFileType', ctypes.c_char * 32),
                ('dwEntityCount', ctypes.c_uint32),
                ('dTimeStampResolution', ctypes.c_double),
                ('dTimeSpan', ctypes.c_double),
                ('szAppName', ctypes.c_char * 64),
                ('dwTime_Year', ctypes.c_uint32),
                ('dwTime_Month', ctypes.c_uint32),
                ('dwReserved', ctypes.c_uint32),
                ('dwTime_Day', ctypes.c_uint32),
                ('dwTime_Hour', ctypes.c_uint32),
                ('dwTime_Min', ctypes.c_uint32),
                ('dwTime_Sec', ctypes.c_uint32),
                ('dwTime_MilliSec', ctypes.c_uint32),
                ('szFileComment', ctypes.c_char * 256),
                ]


class ns_ENTITYINFO(ctypes.Structure):
    _fields_ = [('szEntityLabel', ctypes.c_char * 32),
                ('dwEntityType', ctypes.c_uint32),
                ('dwItemCount', ctypes.c_uint32),
                ]


entity_types = {0: 'ns_ENTITY_UNKNOWN',
                1: 'ns_ENTITY_EVENT',
                2: 'ns_ENTITY_ANALOG',
                3: 'ns_ENTITY_SEGMENT',
                4: 'ns_ENTITY_NEURALEVENT',
                }


class ns_EVENTINFO(ctypes.Structure):
    _fields_ = [
        ('dwEventType', ctypes.c_uint32),
        ('dwMinDataLength', ctypes.c_uint32),
        ('dwMaxDataLength', ctypes.c_uint32),
        ('szCSVDesc', ctypes.c_char * 128),
    ]


class ns_ANALOGINFO(ctypes.Structure):
    _fields_ = [
        ('dSampleRate', ctypes.c_double),
        ('dMinVal', ctypes.c_double),
        ('dMaxVal', ctypes.c_double),
        ('szUnits', ctypes.c_char * 16),
        ('dResolution', ctypes.c_double),
        ('dLocationX', ctypes.c_double),
        ('dLocationY', ctypes.c_double),
        ('dLocationZ', ctypes.c_double),
        ('dLocationUser', ctypes.c_double),
        ('dHighFreqCorner', ctypes.c_double),
        ('dwHighFreqOrder', ctypes.c_uint32),
        ('szHighFilterType', ctypes.c_char * 16),
        ('dLowFreqCorner', ctypes.c_double),
        ('dwLowFreqOrder', ctypes.c_uint32),
        ('szLowFilterType', ctypes.c_char * 16),
        ('szProbeInfo', ctypes.c_char * 128),
    ]


class ns_SEGMENTINFO(ctypes.Structure):
    _fields_ = [
        ('dwSourceCount', ctypes.c_uint32),
        ('dwMinSampleCount', ctypes.c_uint32),
        ('dwMaxSampleCount', ctypes.c_uint32),
        ('dSampleRate', ctypes.c_double),
        ('szUnits', ctypes.c_char * 32),
    ]


class ns_SEGSOURCEINFO(ctypes.Structure):
    _fields_ = [
        ('dMinVal', ctypes.c_double),
        ('dMaxVal', ctypes.c_double),
        ('dResolution', ctypes.c_double),
        ('dSubSampleShift', ctypes.c_double),
        ('dLocationX', ctypes.c_double),
        ('dLocationY', ctypes.c_double),
        ('dLocationZ', ctypes.c_double),
        ('dLocationUser', ctypes.c_double),
        ('dHighFreqCorner', ctypes.c_double),
        ('dwHighFreqOrder', ctypes.c_uint32),
        ('szHighFilterType', ctypes.c_char * 16),
        ('dLowFreqCorner', ctypes.c_double),
        ('dwLowFreqOrder', ctypes.c_uint32),
        ('szLowFilterType', ctypes.c_char * 16),
        ('szProbeInfo', ctypes.c_char * 128),
    ]


class ns_NEURALINFO(ctypes.Structure):
    _fields_ = [
        ('dwSourceEntityID', ctypes.c_uint32),
        ('dwSourceUnitID', ctypes.c_uint32),
        ('szProbeInfo', ctypes.c_char * 128),
    ]
neo-0.10.0/neo/io/nixio.py0000644000076700000240000016137214066375716015673 0ustar andrewstaff00000000000000# Copyright (c) 2016, German Neuroinformatics Node (G-Node)
#                     Achilleas Koutsou
#
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted under the terms of the BSD License. See
# LICENSE file in the root of the Project.
"""
Module for reading data from files in the NIX format.

Author: Achilleas Koutsou

This IO supports both writing and reading of NIX files. Reading is supported
only if the NIX file was created using this IO.

Details on how the Neo object tree is mapped to NIX, as well as details on
behaviours specific to this IO, can be found on the wiki of the G-Node fork of
Neo: https://github.com/G-Node/python-neo/wiki
"""

from datetime import date, time, datetime
from collections.abc import Iterable
from collections import OrderedDict
import itertools
from uuid import uuid4
import warnings
from distutils.version import LooseVersion as Version
from itertools import chain

import quantities as pq
import numpy as np

from .baseio import BaseIO
from ..core import (Block, Segment, AnalogSignal, IrregularlySampledSignal,
                    Epoch, Event, SpikeTrain, ImageSequence, ChannelView, Group)
from ..io.proxyobjects import BaseProxy
from ..version import version as neover

try:
    import nixio as nix
    HAVE_NIX = True
except ImportError:
    HAVE_NIX = False

datetime_types = (date, time, datetime)

EMPTYANNOTATION = "EMPTYLIST"
ARRAYANNOTATION = "ARRAYANNOTATION"
DATETIMEANNOTATION = "DATETIME"
DATEANNOTATION = "DATE"
TIMEANNOTATION = "TIME"

MIN_NIX_VER = Version("1.5.0")

datefmt = "%Y-%m-%d"
timefmt = "%H:%M:%S.%f"
datetimefmt = datefmt \
+ "T" + timefmt


def stringify(value):
    if value is None:
        return value
    if isinstance(value, bytes):
        value = value.decode()
    return str(value)


def create_quantity(values, unitstr):
    if "*" in unitstr:
        unit = pq.CompoundUnit(stringify(unitstr))
    else:
        unit = unitstr
    return pq.Quantity(values, unit)


def units_to_string(pqunit):
    dim = str(pqunit.dimensionality)
    if dim.startswith("(") and dim.endswith(")"):
        return dim.strip("()")
    return dim


def dt_to_nix(dt):
    """
    Converts date, time, and datetime objects to an ISO string representation
    appropriate for storing in NIX. Returns the converted value and the
    annotation type definition for converting back to the original value type.
    """
    if isinstance(dt, datetime):
        return dt.strftime(datetimefmt), DATETIMEANNOTATION
    if isinstance(dt, date):
        return dt.strftime(datefmt), DATEANNOTATION
    if isinstance(dt, time):
        return dt.strftime(timefmt), TIMEANNOTATION
    # Unknown: returning as is
    return dt


def dt_from_nix(nixdt, annotype):
    """
    Inverse function of 'dt_to_nix()'. Requires the stored annotation type to
    distinguish between the three source types (date, time, and datetime).
    """
    if annotype == DATEANNOTATION:
        dt = datetime.strptime(nixdt, datefmt)
        return dt.date()
    if annotype == TIMEANNOTATION:
        dt = datetime.strptime(nixdt, timefmt)
        return dt.time()
    if annotype == DATETIMEANNOTATION:
        dt = datetime.strptime(nixdt, datetimefmt)
        return dt
    # Unknown type: older (or newer) IO version?
    # Returning as is to avoid data loss.
    return nixdt


def check_nix_version():
    if not HAVE_NIX:
        raise Exception(
            "Failed to import NIX. "
            "The NixIO requires the Python package for NIX "
            "(nixio on PyPi). Try `pip install nixio`."
        )

    # nixio version numbers have a 'v' prefix which breaks the comparison
    nixverstr = nix.__version__.lstrip("v")
    try:
        nixver = Version(nixverstr)
    except ValueError:
        warnings.warn(
            f"Could not understand NIX Python version {nixverstr}. "
            f"The NixIO requires version {MIN_NIX_VER} of the Python package for NIX. "
            "The IO may not work correctly."
        )
        return

    if nixver < MIN_NIX_VER:
        raise Exception(
            "NIX version not supported. "
            f"The NixIO requires version {MIN_NIX_VER} or higher of the Python package "
            f"for NIX. Found version {nixverstr}"
        )


class NixIO(BaseIO):
    """
    Class for reading and writing NIX files.
    """

    is_readable = True
    is_writable = True

    supported_objects = [Block, Segment, Group, ChannelView,
                         AnalogSignal, IrregularlySampledSignal,
                         Epoch, Event, SpikeTrain]
    readable_objects = [Block]
    writeable_objects = [Block]

    name = "NIX"
    extensions = ["h5", "nix"]
    mode = "file"

    def __init__(self, filename, mode="rw"):
        """
        Initialise IO instance and NIX file.

        :param filename: Full path to the file
        """
        check_nix_version()
        BaseIO.__init__(self, filename)
        self.filename = str(filename)
        if mode == "ro":
            filemode = nix.FileMode.ReadOnly
        elif mode == "rw":
            filemode = nix.FileMode.ReadWrite
        elif mode == "ow":
            filemode = nix.FileMode.Overwrite
        else:
            raise ValueError(f"Invalid mode specified '{mode}'. "
                             "Valid modes: 'ro' (ReadOnly)', 'rw' (ReadWrite),"
                             " 'ow' (Overwrite).")
        self.nix_file = nix.File.open(self.filename, filemode)

        if self.nix_file.mode == nix.FileMode.ReadOnly:
            self._file_version = '0.5.2'
            if "neo" in self.nix_file.sections:
                self._file_version = self.nix_file.sections["neo"]["version"]
        elif self.nix_file.mode == nix.FileMode.ReadWrite:
            if "neo" in self.nix_file.sections:
                self._file_version = self.nix_file.sections["neo"]["version"]
            else:
                self._file_version = '0.5.2'
                filemd = self.nix_file.create_section("neo", "neo.metadata")
                filemd["version"] = self._file_version
        else:
            # new file
            filemd = self.nix_file.create_section("neo", "neo.metadata")
            filemd["version"] = neover
            self._file_version = neover

        self._block_read_counter = 0

        # helper maps
        self._neo_map = dict()
        self._ref_map = dict()
        self._signal_map = dict()
        self._view_map = dict()

        # _names_ok is used to guard against name check duplication
        self._names_ok = False

    def __enter__(self):
        return self

    def __exit__(self, *args):
self.close() def read_all_blocks(self, lazy=False): if lazy: raise Exception("Lazy loading is not supported for NixIO") return list(self._nix_to_neo_block(blk) for blk in self.nix_file.blocks) def read_block(self, index=None, nixname=None, neoname=None, lazy=False): """ Loads a Block from the NIX file along with all contained child objects and returns the equivalent Neo Block. The Block to read can be specified in one of three ways: - Index (position) in the file - Name of the NIX Block (see [...] for details on the naming) - Name of the original Neo Block If no arguments are specified, the first Block is returned and consecutive calls to the function return the next Block in the file. After all Blocks have been loaded this way, the function returns None. If more than one argument is specified, the precedence order is: index, nixname, neoname Note that Neo objects can be anonymous or have non-unique names, so specifying a Neo name may be ambiguous. See also :meth:`NixIO.iter_blocks`. :param index: The position of the Block to be loaded (creation order) :param nixname: The name of the Block in NIX :param neoname: The name of the original Neo Block """ if lazy: raise Exception("Lazy loading is not supported for NixIO") nix_block = None if index is not None: nix_block = self.nix_file.blocks[index] elif nixname is not None: nix_block = self.nix_file.blocks[nixname] elif neoname is not None: for blk in self.nix_file.blocks: if ("neo_name" in blk.metadata and blk.metadata["neo_name"] == neoname): nix_block = blk break else: raise KeyError(f"Block with Neo name '{neoname}' does not exist") else: index = self._block_read_counter if index >= len(self.nix_file.blocks): return None nix_block = self.nix_file.blocks[index] self._block_read_counter += 1 return self._nix_to_neo_block(nix_block) def iter_blocks(self): """ Returns an iterator which can be used to consecutively load and convert all Blocks from the NIX File. 
""" for blk in self.nix_file.blocks: yield self._nix_to_neo_block(blk) def _nix_to_neo_block(self, nix_block): neo_attrs = self._nix_attr_to_neo(nix_block) neo_block = Block(**neo_attrs) neo_block.rec_datetime = datetime.fromtimestamp(nix_block.created_at) # descend into Groups groups_to_resolve = [] for grp in nix_block.groups: if grp.type == "neo.segment": newseg = self._nix_to_neo_segment(grp) neo_block.segments.append(newseg) # parent reference newseg.block = neo_block elif grp.type == "neo.group": newgrp, parent_name = self._nix_to_neo_group(grp) assert parent_name is None neo_block.groups.append(newgrp) # parent reference newgrp.block = neo_block elif grp.type == "neo.subgroup": newgrp, parent_name = self._nix_to_neo_group(grp) groups_to_resolve.append((newgrp, parent_name)) else: raise Exception("Unexpected group type") # link subgroups to parents for newgrp, parent_name in groups_to_resolve: parent = self._neo_map[parent_name] parent.groups.append(newgrp) # find free floating (Groupless) signals and spiketrains blockdas = self._group_signals(nix_block.data_arrays) for name, das in blockdas.items(): if name not in self._neo_map: if das[0].type == "neo.analogsignal": self._nix_to_neo_analogsignal(das) elif das[0].type == "neo.irregularlysampledsignal": self._nix_to_neo_irregularlysampledsignal(das) elif das[0].type == "neo.imagesequence": self._nix_to_neo_imagesequence(das) for mt in nix_block.multi_tags: if mt.type == "neo.spiketrain" and mt.name not in self._neo_map: self._nix_to_neo_spiketrain(mt) # create object links neo_block.create_relationship() # reset maps self._neo_map = dict() self._ref_map = dict() self._signal_map = dict() self._view_map = dict() return neo_block def _nix_to_neo_segment(self, nix_group): neo_attrs = self._nix_attr_to_neo(nix_group) neo_segment = Segment(**neo_attrs) neo_segment.rec_datetime = datetime.fromtimestamp(nix_group.created_at) self._neo_map[nix_group.name] = neo_segment # this will probably get all the DAs anyway, but 
if we change any part # of the mapping to add other kinds of DataArrays to a group, such as # MultiTag positions and extents, this filter will be necessary dataarrays = list(filter( lambda da: da.type in ("neo.analogsignal", "neo.irregularlysampledsignal", "neo.imagesequence",), nix_group.data_arrays)) dataarrays = self._group_signals(dataarrays) # descend into DataArrays for name, das in dataarrays.items(): if das[0].type == "neo.analogsignal": newasig = self._nix_to_neo_analogsignal(das) neo_segment.analogsignals.append(newasig) # parent reference newasig.segment = neo_segment elif das[0].type == "neo.irregularlysampledsignal": newisig = self._nix_to_neo_irregularlysampledsignal(das) neo_segment.irregularlysampledsignals.append(newisig) # parent reference newisig.segment = neo_segment elif das[0].type == "neo.imagesequence": new_imgseq = self._nix_to_neo_imagesequence(das) neo_segment.imagesequences.append(new_imgseq) # parent reference new_imgseq.segment = neo_segment # descend into MultiTags for mtag in nix_group.multi_tags: if mtag.type == "neo.event": newevent = self._nix_to_neo_event(mtag) neo_segment.events.append(newevent) # parent reference newevent.segment = neo_segment elif mtag.type == "neo.epoch": newepoch = self._nix_to_neo_epoch(mtag) neo_segment.epochs.append(newepoch) # parent reference newepoch.segment = neo_segment elif mtag.type == "neo.spiketrain": newst = self._nix_to_neo_spiketrain(mtag) neo_segment.spiketrains.append(newst) # parent reference newst.segment = neo_segment return neo_segment def _nix_to_neo_group(self, nix_group): neo_attrs = self._nix_attr_to_neo(nix_group) parent_name = neo_attrs.pop("neo_parent", None) neo_group = Group(**neo_attrs) self._neo_map[nix_group.name] = neo_group dataarrays = list(filter( lambda da: da.type in ("neo.analogsignal", "neo.irregularlysampledsignal", "neo.imagesequence",), nix_group.data_arrays)) dataarrays = self._group_signals(dataarrays) # descend into DataArrays for name in dataarrays: obj = 
self._neo_map[name] neo_group.add(obj) # descend into MultiTags for mtag in nix_group.multi_tags: if mtag.type == "neo.channelview" and mtag.name not in self._neo_map: self._nix_to_neo_channelview(mtag) obj = self._neo_map[mtag.name] neo_group.add(obj) return neo_group, parent_name def _nix_to_neo_channelview(self, nix_mtag): neo_attrs = self._nix_attr_to_neo(nix_mtag) index = nix_mtag.positions nix_name, = self._group_signals(nix_mtag.references).keys() obj = self._neo_map[nix_name] neo_chview = ChannelView(obj, index, **neo_attrs) self._neo_map[nix_mtag.name] = neo_chview return neo_chview def _nix_to_neo_analogsignal(self, nix_da_group): """ Convert a group of NIX DataArrays to a Neo AnalogSignal. This method expects a list of data arrays that all represent the same, multidimensional Neo AnalogSignal object. :param nix_da_group: a list of NIX DataArray objects :return: a Neo AnalogSignal object """ neo_attrs = self._nix_attr_to_neo(nix_da_group[0]) metadata = nix_da_group[0].metadata neo_attrs["nix_name"] = metadata.name # use the common base name unit = nix_da_group[0].unit signaldata = np.array([d[:] for d in nix_da_group]).transpose() signaldata = create_quantity(signaldata, unit) timedim = self._get_time_dimension(nix_da_group[0]) sampling_period = create_quantity(timedim.sampling_interval, timedim.unit) # t_start should have been added to neo_attrs via the NIX # object's metadata. This may not be present since in older # versions, we didn't store t_start in the metadata when it # wasn't necessary, such as when the timedim.offset and unit # did not require rescaling. 
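`_nix_to_neo_analogsignal` rebuilds a multi-channel signal by stacking the per-channel NIX DataArrays and transposing, so the result has shape `(samples, channels)` as Neo expects. The shape bookkeeping in plain NumPy (the channel data here is invented for illustration):

```python
import numpy as np

# three hypothetical single-channel DataArrays, 5 samples each
channels = [np.arange(5, dtype=float) + 10 * ch for ch in range(3)]

# stack to (channels, samples), then transpose to (samples, channels)
signaldata = np.array(channels).transpose()
print(signaldata.shape)  # -> (5, 3)
print(signaldata[0])     # first sample across all three channels
```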
if "t_start" in neo_attrs: t_start = neo_attrs["t_start"] del neo_attrs["t_start"] else: t_start = create_quantity(timedim.offset, timedim.unit) neo_signal = AnalogSignal(signal=signaldata, sampling_period=sampling_period, t_start=t_start, **neo_attrs) self._neo_map[neo_attrs["nix_name"]] = neo_signal # all DAs reference the same sources srcnames = list(src.name for src in nix_da_group[0].sources) for n in srcnames: if n not in self._ref_map: self._ref_map[n] = list() self._ref_map[n].append(neo_signal) return neo_signal def _nix_to_neo_imagesequence(self, nix_da_group): """ Convert a group of NIX DataArrays to a Neo ImageSequence. This method expects a list of data arrays that all represent the same, multidimensional Neo ImageSequence object. :param nix_da_group: a list of NIX DataArray objects :return: a Neo ImageSequence object """ neo_attrs = self._nix_attr_to_neo(nix_da_group[0]) metadata = nix_da_group[0].metadata neo_attrs["nix_name"] = metadata.name # use the common base name unit = nix_da_group[0].unit imgseq = np.array([d[:] for d in nix_da_group]).transpose() sampling_rate = neo_attrs["sampling_rate"] del neo_attrs["sampling_rate"] spatial_scale = neo_attrs["spatial_scale"] del neo_attrs["spatial_scale"] if "t_start" in neo_attrs: t_start = neo_attrs["t_start"] del neo_attrs["t_start"] else: t_start = 0.0 * pq.ms neo_seq = ImageSequence(image_data=imgseq, sampling_rate=sampling_rate, spatial_scale=spatial_scale, units=unit, t_start=t_start, **neo_attrs) self._neo_map[neo_attrs["nix_name"]] = neo_seq # all DAs reference the same sources srcnames = list(src.name for src in nix_da_group[0].sources) for n in srcnames: if n not in self._ref_map: self._ref_map[n] = list() self._ref_map[n].append(neo_seq) return neo_seq def _nix_to_neo_irregularlysampledsignal(self, nix_da_group): """ Convert a group of NIX DataArrays to a Neo IrregularlySampledSignal. 
        This method expects a list of data arrays that all represent the
        same, multidimensional Neo IrregularlySampledSignal object.

        :param nix_da_group: a list of NIX DataArray objects
        :return: a Neo IrregularlySampledSignal object
        """
        neo_attrs = self._nix_attr_to_neo(nix_da_group[0])
        metadata = nix_da_group[0].metadata
        neo_attrs["nix_name"] = metadata.name  # use the common base name
        unit = nix_da_group[0].unit
        signaldata = np.array([d[:] for d in nix_da_group]).transpose()
        signaldata = create_quantity(signaldata, unit)
        timedim = self._get_time_dimension(nix_da_group[0])
        times = create_quantity(timedim.ticks, timedim.unit)

        neo_signal = IrregularlySampledSignal(
            signal=signaldata, times=times, **neo_attrs
        )
        self._neo_map[neo_attrs["nix_name"]] = neo_signal
        # all DAs reference the same sources
        srcnames = list(src.name for src in nix_da_group[0].sources)
        for n in srcnames:
            if n not in self._ref_map:
                self._ref_map[n] = list()
            self._ref_map[n].append(neo_signal)
        return neo_signal

    def _nix_to_neo_event(self, nix_mtag):
        neo_attrs = self._nix_attr_to_neo(nix_mtag)
        time_unit = nix_mtag.positions.unit
        times = create_quantity(nix_mtag.positions, time_unit)
        labels = np.array(nix_mtag.positions.dimensions[0].labels, dtype="U")
        neo_event = Event(times=times, labels=labels, **neo_attrs)
        self._neo_map[nix_mtag.name] = neo_event
        return neo_event

    def _nix_to_neo_epoch(self, nix_mtag):
        neo_attrs = self._nix_attr_to_neo(nix_mtag)
        time_unit = nix_mtag.positions.unit
        times = create_quantity(nix_mtag.positions, time_unit)
        durations = create_quantity(nix_mtag.extents, nix_mtag.extents.unit)

        if len(nix_mtag.positions.dimensions[0].labels) > 0:
            labels = np.array(nix_mtag.positions.dimensions[0].labels,
                              dtype="U")
        else:
            labels = None
        neo_epoch = Epoch(times=times, durations=durations, labels=labels,
                          **neo_attrs)
        self._neo_map[nix_mtag.name] = neo_epoch
        return neo_epoch

    def _nix_to_neo_spiketrain(self, nix_mtag):
        neo_attrs = self._nix_attr_to_neo(nix_mtag)
        time_unit = nix_mtag.positions.unit
        times = create_quantity(nix_mtag.positions, time_unit)

        neo_spiketrain = SpikeTrain(times=times, **neo_attrs)
        if nix_mtag.features:
            wfda = nix_mtag.features[0].data
            wftime = self._get_time_dimension(wfda)
            neo_spiketrain.waveforms = create_quantity(wfda, wfda.unit)
            interval_units = wftime.unit
            neo_spiketrain.sampling_period = create_quantity(
                wftime.sampling_interval, interval_units
            )
            left_sweep_units = wftime.unit
            if "left_sweep" in wfda.metadata:
                neo_spiketrain.left_sweep = create_quantity(
                    wfda.metadata["left_sweep"], left_sweep_units
                )
        self._neo_map[nix_mtag.name] = neo_spiketrain

        srcnames = list(src.name for src in nix_mtag.sources)
        for n in srcnames:
            if n not in self._ref_map:
                self._ref_map[n] = list()
            self._ref_map[n].append(neo_spiketrain)
        return neo_spiketrain

    def write_all_blocks(self, neo_blocks, use_obj_names=False):
        """
        Convert all ``neo_blocks`` to the NIX equivalent and write them to
        the file.

        :param neo_blocks: List (or iterable) containing Neo blocks
        :param use_obj_names: If True, will not generate unique object names
        but will instead try to use the name of each Neo object. If these
        are not unique, an exception will be raised.
        """
        if use_obj_names:
            self._use_obj_names(neo_blocks)
            self._names_ok = True
        for bl in neo_blocks:
            self.write_block(bl, use_obj_names)

    def write_block(self, block, use_obj_names=False):
        """
        Convert the provided Neo Block to a NIX Block and write it to the
        NIX file.

        :param block: Neo Block to be written
        :param use_obj_names: If True, will not generate unique object names
        but will instead try to use the name of each Neo object. If these
        are not unique, an exception will be raised.
        """
        if use_obj_names:
            if not self._names_ok:
                # _names_ok guards against check duplication
                # If it's False, it means write_block() was called directly
                self._use_obj_names([block])
        if "nix_name" in block.annotations:
            nix_name = block.annotations["nix_name"]
        else:
            nix_name = f"neo.block.{self._generate_nix_name()}"
            block.annotate(nix_name=nix_name)

        if nix_name in self.nix_file.blocks:
            nixblock = self.nix_file.blocks[nix_name]
            del self.nix_file.blocks[nix_name]
            del self.nix_file.sections[nix_name]

        nixblock = self.nix_file.create_block(nix_name, "neo.block")
        nixblock.metadata = self.nix_file.create_section(
            nix_name, "neo.block.metadata"
        )
        metadata = nixblock.metadata
        neoname = block.name if block.name is not None else ""
        metadata["neo_name"] = neoname
        nixblock.definition = block.description
        if block.rec_datetime:
            nix_rec_dt = int(block.rec_datetime.strftime("%s"))
            nixblock.force_created_at(nix_rec_dt)
        if block.file_datetime:
            fdt, annotype = dt_to_nix(block.file_datetime)
            fdtprop = metadata.create_property("file_datetime", fdt)
            fdtprop.definition = annotype
        if block.annotations:
            for k, v in block.annotations.items():
                self._write_property(metadata, k, v)

        # descend into Segments
        for seg in block.segments:
            self._write_segment(seg, nixblock)

        # descend into Neo Groups
        for group in block.groups:
            self._write_group(group, nixblock)

    def _write_channelview(self, chview, nixblock, nixgroup):
        """
        Convert the provided Neo ChannelView to a NIX MultiTag and write it
        to the NIX file.

        :param chview: The Neo ChannelView to be written
        :param nixblock: NIX Block where the MultiTag will be created
        """
        if "nix_name" in chview.annotations:
            nix_name = chview.annotations["nix_name"]
        else:
            nix_name = "neo.channelview.{}".format(self._generate_nix_name())
            chview.annotate(nix_name=nix_name)

        # create a new data array if this channelview was not saved yet
        if not nix_name in self._view_map:
            channels = nixblock.create_data_array(
                "{}.index".format(nix_name), "neo.channelview.index",
                data=chview.index
            )
            nixmt = nixblock.create_multi_tag(nix_name, "neo.channelview",
                                              positions=channels)
            nixmt.metadata = nixgroup.metadata.create_section(
                nix_name, "neo.channelview.metadata"
            )
            metadata = nixmt.metadata
            neoname = chview.name if chview.name is not None else ""
            metadata["neo_name"] = neoname
            nixmt.definition = chview.description
            if chview.annotations:
                for k, v in chview.annotations.items():
                    self._write_property(metadata, k, v)

            self._view_map[nix_name] = nixmt

            # link tag to the data array for the ChannelView's signal
            if not ("nix_name" in chview.obj.annotations
                    and chview.obj.annotations["nix_name"]
                    in self._signal_map):
                # the following restriction could be relaxed later
                # but for a first pass this simplifies my mental model
                raise Exception("Need to save signals before saving views")
            nix_name = chview.obj.annotations["nix_name"]
            nixmt.references.extend(self._signal_map[nix_name])
        else:
            nixmt = self._view_map[nix_name]

        nixgroup.multi_tags.append(nixmt)

    def _write_segment(self, segment, nixblock):
        """
        Convert the provided Neo Segment to a NIX Group and write it to the
        NIX file.

        :param segment: Neo Segment to be written
        :param nixblock: NIX Block where the Group will be created
        """
        if "nix_name" in segment.annotations:
            nix_name = segment.annotations["nix_name"]
        else:
            nix_name = f"neo.segment.{self._generate_nix_name()}"
            segment.annotate(nix_name=nix_name)

        nixgroup = nixblock.create_group(nix_name, "neo.segment")
        nixgroup.metadata = nixblock.metadata.create_section(
            nix_name, "neo.segment.metadata"
        )
        metadata = nixgroup.metadata
        neoname = segment.name if segment.name is not None else ""
        metadata["neo_name"] = neoname
        nixgroup.definition = segment.description
        if segment.rec_datetime:
            nix_rec_dt = int(segment.rec_datetime.strftime("%s"))
            nixgroup.force_created_at(nix_rec_dt)
        if segment.file_datetime:
            fdt, annotype = dt_to_nix(segment.file_datetime)
            fdtprop = metadata.create_property("file_datetime", fdt)
            fdtprop.definition = annotype
        if segment.annotations:
            for k, v in segment.annotations.items():
                self._write_property(metadata, k, v)

        # write signals, events, epochs, and spiketrains
        for asig in segment.analogsignals:
            self._write_analogsignal(asig, nixblock, nixgroup)
        for isig in segment.irregularlysampledsignals:
            self._write_irregularlysampledsignal(isig, nixblock, nixgroup)
        for event in segment.events:
            self._write_event(event, nixblock, nixgroup)
        for epoch in segment.epochs:
            self._write_epoch(epoch, nixblock, nixgroup)
        for spiketrain in segment.spiketrains:
            self._write_spiketrain(spiketrain, nixblock, nixgroup)
        for imagesequence in segment.imagesequences:
            self._write_imagesequence(imagesequence, nixblock, nixgroup)

    def _write_group(self, neo_group, nixblock, parent=None):
        """
        Convert the provided Neo Group to a NIX Group and write it to the
        NIX file.

        :param neo_group: Neo Group to be written
        :param nixblock: NIX Block where the NIX Group will be created
        :param parent: for sub-groups, the parent Neo Group
        """
        if parent:
            label = "neo.subgroup"
            # note that the use of a different label for top-level groups
            # and sub-groups is not strictly necessary, the presence of the
            # "neo_parent" annotation is sufficient. However, I think it
            # adds clarity and helps in debugging and testing.
        else:
            label = "neo.group"
        if "nix_name" in neo_group.annotations:
            nix_name = neo_group.annotations["nix_name"]
        else:
            nix_name = "{}.{}".format(label, self._generate_nix_name())
            neo_group.annotate(nix_name=nix_name)

        nixgroup = nixblock.create_group(nix_name, label)
        nixgroup.metadata = nixblock.metadata.create_section(
            nix_name, f"{label}.metadata"
        )
        metadata = nixgroup.metadata
        neoname = neo_group.name if neo_group.name is not None else ""
        metadata["neo_name"] = neoname
        if parent:
            metadata["neo_parent"] = parent.annotations["nix_name"]
        nixgroup.definition = neo_group.description
        if neo_group.annotations:
            for k, v in neo_group.annotations.items():
                self._write_property(metadata, k, v)

        # link signals and image sequences
        objnames = []
        for obj in chain(
            neo_group.analogsignals,
            neo_group.irregularlysampledsignals,
            neo_group.imagesequences,
        ):
            if not ("nix_name" in obj.annotations
                    and obj.annotations["nix_name"] in self._signal_map):
                # the following restriction could be relaxed later
                # but for a first pass this simplifies my mental model
                raise Exception(
                    "Orphan signals/image sequences cannot be stored, "
                    "needs to belong to a Segment"
                )
            objnames.append(obj.annotations["nix_name"])
        for name in objnames:
            for da in self._signal_map[name]:
                nixgroup.data_arrays.append(da)

        # link events, epochs and spiketrains
        objnames = []
        for obj in chain(
            neo_group.events,
            neo_group.epochs,
            neo_group.spiketrains,
        ):
            if not ("nix_name" in obj.annotations
                    and obj.annotations["nix_name"] in nixblock.multi_tags):
                # the following restriction could be relaxed later
                # but for a first pass this simplifies my mental model
                raise Exception(
                    "Orphan epochs/events/spiketrains cannot be stored, "
                    "needs to belong to a Segment"
                )
            objnames.append(obj.annotations["nix_name"])
        for name in objnames:
            mt = nixblock.multi_tags[name]
            nixgroup.multi_tags.append(mt)

        # save channel views
        for chview in neo_group.channelviews:
            self._write_channelview(chview, nixblock, nixgroup)

        # save sub-groups
        for subgroup in neo_group.groups:
            self._write_group(subgroup, nixblock, parent=neo_group)

    def _write_analogsignal(self, anasig, nixblock, nixgroup):
        """
        Convert the provided ``anasig`` (AnalogSignal) to a list of NIX
        DataArray objects and write them to the NIX file. All DataArray
        objects created from the same AnalogSignal have their metadata
        section point to the same object.

        :param anasig: The Neo AnalogSignal to be written
        :param nixblock: NIX Block where the DataArrays will be created
        :param nixgroup: NIX Group where the DataArrays will be attached
        """
        if "nix_name" in anasig.annotations:
            nix_name = anasig.annotations["nix_name"]
        else:
            nix_name = f"neo.analogsignal.{self._generate_nix_name()}"
            anasig.annotate(nix_name=nix_name)

        if f"{nix_name}.0" in nixblock.data_arrays and nixgroup:
            # AnalogSignal is in multiple Segments.
            # Append DataArrays to Group and return.
            dalist = list()
            for idx in itertools.count():
                daname = f"{nix_name}.{idx}"
                if daname in nixblock.data_arrays:
                    dalist.append(nixblock.data_arrays[daname])
                else:
                    break
            nixgroup.data_arrays.extend(dalist)
            return

        if isinstance(anasig, BaseProxy):
            data = np.transpose(anasig.load()[:].magnitude)
        else:
            data = np.transpose(anasig[:].magnitude)

        parentmd = nixgroup.metadata if nixgroup else nixblock.metadata
        metadata = parentmd.create_section(nix_name,
                                           "neo.analogsignal.metadata")
        nixdas = list()
        for idx, row in enumerate(data):
            daname = f"{nix_name}.{idx}"
            da = nixblock.create_data_array(daname, "neo.analogsignal",
                                            data=row)
            da.metadata = metadata
            da.definition = anasig.description
            da.unit = units_to_string(anasig.units)

            sampling_period = anasig.sampling_period.magnitude.item()
            timedim = da.append_sampled_dimension(sampling_period)
            timedim.unit = units_to_string(anasig.sampling_period.units)
            tstart = anasig.t_start
            metadata["t_start"] = tstart.magnitude.item()
            metadata.props["t_start"].unit = units_to_string(tstart.units)
            timedim.offset = tstart.rescale(timedim.unit).magnitude.item()
            timedim.label = "time"

            nixdas.append(da)
            if nixgroup:
                nixgroup.data_arrays.append(da)

        neoname = anasig.name if anasig.name is not None else ""
        metadata["neo_name"] = neoname
        if anasig.annotations:
            for k, v in anasig.annotations.items():
                self._write_property(metadata, k, v)
        if anasig.array_annotations:
            for k, v in anasig.array_annotations.items():
                p = self._write_property(metadata, k, v)
                p.type = ARRAYANNOTATION

        self._signal_map[nix_name] = nixdas

    def _write_imagesequence(self, imgseq, nixblock, nixgroup):
        """
        Convert the provided ``imgseq`` (ImageSequence) to a list of NIX
        DataArray objects and write them to the NIX file. All DataArray
        objects created from the same ImageSequence have their metadata
        section point to the same object.

        :param imgseq: The Neo ImageSequence to be written
        :param nixblock: NIX Block where the DataArrays will be created
        :param nixgroup: NIX Group where the DataArrays will be attached
        """
        if "nix_name" in imgseq.annotations:
            nix_name = imgseq.annotations["nix_name"]
        else:
            nix_name = f"neo.imagesequence.{self._generate_nix_name()}"
            imgseq.annotate(nix_name=nix_name)

        if f"{nix_name}.0" in nixblock.data_arrays and nixgroup:
            dalist = list()
            for idx in itertools.count():
                daname = f"{nix_name}.{idx}"
                if daname in nixblock.data_arrays:
                    dalist.append(nixblock.data_arrays[daname])
                else:
                    break
            nixgroup.data_arrays.extend(dalist)
            return

        if isinstance(imgseq, BaseProxy):
            data = np.transpose(imgseq.load()[:].magnitude)
        else:
            data = np.transpose(imgseq[:].magnitude)

        parentmd = nixgroup.metadata if nixgroup else nixblock.metadata
        metadata = parentmd.create_section(nix_name,
                                           "neo.imagesequence.metadata")
        nixdas = list()
        for idx, row in enumerate(data):
            daname = f"{nix_name}.{idx}"
            da = nixblock.create_data_array(daname, "neo.imagesequence",
                                            data=row)
            da.metadata = metadata
            da.definition = imgseq.description
            da.unit = units_to_string(imgseq.units)

            metadata["sampling_rate"] = imgseq.sampling_rate.magnitude.item()
            units = imgseq.sampling_rate.units
            metadata.props["sampling_rate"].unit = units_to_string(units)
            metadata["spatial_scale"] = imgseq.spatial_scale.magnitude.item()
            units = imgseq.spatial_scale.units
            metadata.props["spatial_scale"].unit = units_to_string(units)
            metadata["t_start"] = imgseq.t_start.magnitude.item()
            units = imgseq.t_start.units
            metadata.props["t_start"].unit = units_to_string(units)

            nixdas.append(da)
            if nixgroup:
                nixgroup.data_arrays.append(da)
        neoname = imgseq.name if imgseq.name is not None else ""
        metadata["neo_name"] = neoname
        if imgseq.annotations:
            for k, v in imgseq.annotations.items():
                self._write_property(metadata, k, v)
        self._signal_map[nix_name] = nixdas

    def _write_irregularlysampledsignal(self, irsig, nixblock, nixgroup):
        """
        Convert the provided ``irsig`` (IrregularlySampledSignal) to a list
        of NIX DataArray objects and write them to the NIX file at the
        location. All DataArray objects created from the same
        IrregularlySampledSignal have their metadata section point to the
        same object.

        :param irsig: The Neo IrregularlySampledSignal to be written
        :param nixblock: NIX Block where the DataArrays will be created
        :param nixgroup: NIX Group where the DataArrays will be attached
        """
        if "nix_name" in irsig.annotations:
            nix_name = irsig.annotations["nix_name"]
        else:
            nix_name = (
                f"neo.irregularlysampledsignal.{self._generate_nix_name()}"
            )
            irsig.annotate(nix_name=nix_name)

        if f"{nix_name}.0" in nixblock.data_arrays and nixgroup:
            # IrregularlySampledSignal is in multiple Segments.
            # Append DataArrays to Group and return.
            dalist = list()
            for idx in itertools.count():
                daname = f"{nix_name}.{idx}"
                if daname in nixblock.data_arrays:
                    dalist.append(nixblock.data_arrays[daname])
                else:
                    break
            nixgroup.data_arrays.extend(dalist)
            return

        if isinstance(irsig, BaseProxy):
            data = np.transpose(irsig.load()[:].magnitude)
        else:
            data = np.transpose(irsig[:].magnitude)

        parentmd = nixgroup.metadata if nixgroup else nixblock.metadata
        metadata = parentmd.create_section(
            nix_name, "neo.irregularlysampledsignal.metadata"
        )
        nixdas = list()
        for idx, row in enumerate(data):
            daname = f"{nix_name}.{idx}"
            da = nixblock.create_data_array(
                daname, "neo.irregularlysampledsignal", data=row
            )
            da.metadata = metadata
            da.definition = irsig.description
            da.unit = units_to_string(irsig.units)

            timedim = da.append_range_dimension(irsig.times.magnitude)
            timedim.unit = units_to_string(irsig.times.units)
            timedim.label = "time"

            nixdas.append(da)
            if nixgroup:
                nixgroup.data_arrays.append(da)

        neoname = irsig.name if irsig.name is not None else ""
        metadata["neo_name"] = neoname
        if irsig.annotations:
            for k, v in irsig.annotations.items():
                self._write_property(metadata, k, v)
        if irsig.array_annotations:
            for k, v in irsig.array_annotations.items():
                p = self._write_property(metadata, k, v)
                p.type = ARRAYANNOTATION

        self._signal_map[nix_name] = nixdas

    def _write_event(self, event, nixblock, nixgroup):
        """
        Convert the provided Neo Event to a NIX MultiTag and write it to the
        NIX file.

        :param event: The Neo Event to be written
        :param nixblock: NIX Block where the MultiTag will be created
        :param nixgroup: NIX Group where the MultiTag will be attached
        """
        if "nix_name" in event.annotations:
            nix_name = event.annotations["nix_name"]
        else:
            nix_name = f"neo.event.{self._generate_nix_name()}"
            event.annotate(nix_name=nix_name)

        if nix_name in nixblock.multi_tags:
            # Event is in multiple Segments. Append to Group and return.
            mt = nixblock.multi_tags[nix_name]
            nixgroup.multi_tags.append(mt)
            return

        if isinstance(event, BaseProxy):
            event = event.load()

        times = event.times.magnitude
        units = units_to_string(event.times.units)
        labels = event.labels
        timesda = nixblock.create_data_array(f"{nix_name}.times",
                                             "neo.event.times", data=times)
        timesda.unit = units
        nixmt = nixblock.create_multi_tag(nix_name, "neo.event",
                                          positions=timesda)
        nixmt.metadata = nixgroup.metadata.create_section(
            nix_name, "neo.event.metadata"
        )
        metadata = nixmt.metadata

        labeldim = timesda.append_set_dimension()
        labeldim.labels = labels

        neoname = event.name if event.name is not None else ""
        metadata["neo_name"] = neoname
        nixmt.definition = event.description
        if event.annotations:
            for k, v in event.annotations.items():
                self._write_property(metadata, k, v)
        if event.array_annotations:
            for k, v in event.array_annotations.items():
                p = self._write_property(metadata, k, v)
                p.type = ARRAYANNOTATION

        nixgroup.multi_tags.append(nixmt)

        # reference all AnalogSignals and IrregularlySampledSignals in Group
        for da in nixgroup.data_arrays:
            if da.type in ("neo.analogsignal",
                           "neo.irregularlysampledsignal"):
                nixmt.references.append(da)

    def _write_epoch(self, epoch, nixblock, nixgroup):
        """
        Convert the provided Neo Epoch to a NIX MultiTag and write it to the
        NIX file.
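
        The snippet below is not part of the original source; it is a
        standalone sketch (plain NumPy, hypothetical values) of how an
        Epoch's times and durations map onto a MultiTag's positions and
        extents, i.e. each tagged interval is [position, position + extent].

        ```python
        import numpy as np

        # Hypothetical epoch data: start times and durations in seconds.
        times = np.array([0.5, 2.0, 3.5])
        durations = np.array([0.2, 0.2, 0.3])

        # In the NIX mapping, `times` becomes the MultiTag's positions
        # DataArray and `durations` its extents DataArray.
        intervals = np.column_stack((times, times + durations))
        print(intervals)
        ```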
        :param epoch: The Neo Epoch to be written
        :param nixblock: NIX Block where the MultiTag will be created
        :param nixgroup: NIX Group where the MultiTag will be attached
        """
        if "nix_name" in epoch.annotations:
            nix_name = epoch.annotations["nix_name"]
        else:
            nix_name = f"neo.epoch.{self._generate_nix_name()}"
            epoch.annotate(nix_name=nix_name)

        if nix_name in nixblock.multi_tags:
            # Epoch is in multiple Segments. Append to Group and return.
            mt = nixblock.multi_tags[nix_name]
            nixgroup.multi_tags.append(mt)
            return

        if isinstance(epoch, BaseProxy):
            epoch = epoch.load()

        times = epoch.times.magnitude
        tunits = units_to_string(epoch.times.units)
        durations = epoch.durations.magnitude
        dunits = units_to_string(epoch.durations.units)

        timesda = nixblock.create_data_array(f"{nix_name}.times",
                                             "neo.epoch.times", data=times)
        timesda.unit = tunits
        nixmt = nixblock.create_multi_tag(nix_name, "neo.epoch",
                                          positions=timesda)

        durada = nixblock.create_data_array(f"{nix_name}.durations",
                                            "neo.epoch.durations",
                                            data=durations)
        durada.unit = dunits
        nixmt.extents = durada

        nixmt.metadata = nixgroup.metadata.create_section(
            nix_name, "neo.epoch.metadata"
        )
        metadata = nixmt.metadata

        labeldim = timesda.append_set_dimension()
        labeldim.labels = epoch.labels

        neoname = epoch.name if epoch.name is not None else ""
        metadata["neo_name"] = neoname
        nixmt.definition = epoch.description
        if epoch.annotations:
            for k, v in epoch.annotations.items():
                self._write_property(metadata, k, v)
        if epoch.array_annotations:
            for k, v in epoch.array_annotations.items():
                p = self._write_property(metadata, k, v)
                p.type = ARRAYANNOTATION

        nixgroup.multi_tags.append(nixmt)

        # reference all AnalogSignals and IrregularlySampledSignals in Group
        for da in nixgroup.data_arrays:
            if da.type in ("neo.analogsignal",
                           "neo.irregularlysampledsignal"):
                nixmt.references.append(da)

    def _write_spiketrain(self, spiketrain, nixblock, nixgroup):
        """
        Convert the provided Neo SpikeTrain to a NIX MultiTag and write it
        to the NIX file.

        :param spiketrain: The Neo SpikeTrain to be written
        :param nixblock: NIX Block where the MultiTag will be created
        :param nixgroup: NIX Group where the MultiTag will be attached
        """
        if "nix_name" in spiketrain.annotations:
            nix_name = spiketrain.annotations["nix_name"]
        else:
            nix_name = f"neo.spiketrain.{self._generate_nix_name()}"
            spiketrain.annotate(nix_name=nix_name)

        if nix_name in nixblock.multi_tags and nixgroup:
            # SpikeTrain is in multiple Segments. Append to Group and return.
            mt = nixblock.multi_tags[nix_name]
            nixgroup.multi_tags.append(mt)
            return

        if isinstance(spiketrain, BaseProxy):
            spiketrain = spiketrain.load()

        times = spiketrain.times.magnitude
        tunits = units_to_string(spiketrain.times.units)
        waveforms = spiketrain.waveforms

        timesda = nixblock.create_data_array(
            f"{nix_name}.times", "neo.spiketrain.times", data=times
        )
        timesda.unit = tunits
        nixmt = nixblock.create_multi_tag(nix_name, "neo.spiketrain",
                                          positions=timesda)

        parentmd = nixgroup.metadata if nixgroup else nixblock.metadata
        nixmt.metadata = parentmd.create_section(nix_name,
                                                 "neo.spiketrain.metadata")
        metadata = nixmt.metadata

        neoname = spiketrain.name if spiketrain.name is not None else ""
        metadata["neo_name"] = neoname
        nixmt.definition = spiketrain.description

        self._write_property(metadata, "t_start", spiketrain.t_start)
        self._write_property(metadata, "t_stop", spiketrain.t_stop)

        if spiketrain.annotations:
            for k, v in spiketrain.annotations.items():
                self._write_property(metadata, k, v)
        if spiketrain.array_annotations:
            for k, v in spiketrain.array_annotations.items():
                p = self._write_property(metadata, k, v)
                p.type = ARRAYANNOTATION

        if nixgroup:
            nixgroup.multi_tags.append(nixmt)

        if waveforms is not None:
            wfdata = list(wf.magnitude for wf in
                          list(wfgroup for wfgroup in spiketrain.waveforms))
            wfunits = units_to_string(spiketrain.waveforms.units)
            wfda = nixblock.create_data_array(f"{nix_name}.waveforms",
                                              "neo.waveforms", data=wfdata)
            wfda.unit = wfunits
            wfda.metadata = nixmt.metadata.create_section(
                wfda.name, "neo.waveforms.metadata"
            )
            nixmt.create_feature(wfda, nix.LinkType.Indexed)
            # TODO: Move time dimension first for PR #457
            # https://github.com/NeuralEnsemble/python-neo/pull/457
            wfda.append_set_dimension()
            wfda.append_set_dimension()
            wftime = wfda.append_sampled_dimension(
                spiketrain.sampling_period.magnitude.item()
            )
            wftime.unit = units_to_string(spiketrain.sampling_period.units)
            wftime.label = "time"

            if spiketrain.left_sweep is not None:
                self._write_property(wfda.metadata, "left_sweep",
                                     spiketrain.left_sweep)

    @staticmethod
    def _generate_nix_name():
        return uuid4().hex

    def _write_property(self, section, name, v):
        """
        Create a metadata property with a given name and value on the
        provided metadata section.

        :param section: The metadata section to hold the new property
        :param name: The name of the property
        :param v: The value to write
        :return: The newly created property
        """
        if isinstance(v, pq.Quantity):
            if len(v.shape):
                section.create_property(name, tuple(v.magnitude))
            else:
                section.create_property(name, v.magnitude.item())
            section.props[name].unit = str(v.dimensionality)
        elif isinstance(v, datetime_types):
            value, annotype = dt_to_nix(v)
            prop = section.create_property(name, value)
            prop.definition = annotype
        elif isinstance(v, str):
            if len(v):
                section.create_property(name, v)
            else:
                section.create_property(name, nix.DataType.String)
        elif isinstance(v, bytes):
            section.create_property(name, v.decode())
        elif isinstance(v, Iterable):
            values = []
            unit = None
            definition = None
            if len(v) == 0:
                # NIX supports empty properties but dtype must be specified
                # Defaulting to String and using definition to signify empty
                # iterable as opposed to empty string
                values = nix.DataType.String
                definition = EMPTYANNOTATION
            elif hasattr(v, "ndim") and v.ndim == 0:
                values = v.item()
                if isinstance(v, pq.Quantity):
                    unit = str(v.dimensionality)
            else:
                for item in v:
                    if isinstance(item, str):
                        item = item
                    elif isinstance(item, pq.Quantity):
                        unit = str(item.dimensionality)
                        item = item.magnitude.item()
                    elif isinstance(item, Iterable):
                        self.logger.warn("Multidimensional arrays and nested "
                                         "containers are not currently "
                                         "supported when writing to NIX.")
                        return None
                    else:
                        item = item
                    values.append(item)
            section.create_property(name, values)
            section.props[name].unit = unit
            section.props[name].definition = definition
        elif type(v).__module__ == "numpy":
            section.create_property(name, v.item())
        else:
            section.create_property(name, v)
        return section.props[name]

    @staticmethod
    def _nix_attr_to_neo(nix_obj):
        """
        Reads common attributes and metadata from a NIX object and populates
        a dictionary with Neo-compatible attributes and annotations.

        Common attributes: neo_name, nix_name, description,
                           file_datetime (if applicable).

        Metadata: For properties that specify a 'unit', a Quantity object
                  is created.
        """
        neo_attrs = dict()
        neo_attrs["nix_name"] = nix_obj.name
        neo_attrs["description"] = stringify(nix_obj.definition)
        if nix_obj.metadata:
            for prop in nix_obj.metadata.inherited_properties():
                values = list(prop.values)
                if prop.unit:
                    units = prop.unit
                    values = create_quantity(values, units)
                if not len(values):
                    if prop.definition == EMPTYANNOTATION:
                        values = list()
                    elif prop.data_type == nix.DataType.String:
                        values = ""
                elif len(values) == 1:
                    values = values[0]
                if prop.definition in (DATEANNOTATION, TIMEANNOTATION,
                                       DATETIMEANNOTATION):
                    values = dt_from_nix(values, prop.definition)
                if prop.type == ARRAYANNOTATION:
                    if 'array_annotations' in neo_attrs:
                        neo_attrs['array_annotations'][prop.name] = values
                    else:
                        neo_attrs['array_annotations'] = {prop.name: values}
                else:
                    neo_attrs[prop.name] = values

        # since the 'neo_name' NIX property becomes the actual object's name,
        # there's no reason to keep it in the annotations
        neo_attrs["name"] = stringify(neo_attrs.pop("neo_name", None))

        return neo_attrs

    @staticmethod
    def _group_signals(dataarrays):
        """
        Groups data arrays that were generated by the same Neo Signal
        object.
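
        Not part of the original source: a standalone sketch of the grouping
        rule, using hypothetical DataArray names. Each signal's DataArrays
        are named ``<basename>.<idx>``, so stripping the trailing index and
        bucketing by the remaining base name recovers the per-signal groups.

        ```python
        from collections import OrderedDict

        def group_by_basename(names):
            # Strip the trailing ".<idx>" suffix and bucket names
            # by the remaining base name, preserving insertion order.
            groups = OrderedDict()
            for name in names:
                basename = ".".join(name.split(".")[:-1])
                groups.setdefault(basename, []).append(name)
            return groups

        names = ["neo.analogsignal.ab12.0", "neo.analogsignal.ab12.1",
                 "neo.irregularlysampledsignal.cd34.0"]
        print(group_by_basename(names))
        ```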
        The collection can contain both AnalogSignals and
        IrregularlySampledSignals.

        :param dataarrays: A collection of DataArray objects to group
        :return: A dictionary mapping a base name to a list of DataArrays
        which belong to the same Signal
        """
        # now start grouping
        groups = OrderedDict()
        for da in dataarrays:
            basename = ".".join(da.name.split(".")[:-1])
            if basename not in groups:
                groups[basename] = list()
            groups[basename].append(da)
        return groups

    @staticmethod
    def _get_time_dimension(obj):
        for dim in obj.dimensions:
            if hasattr(dim, "label") and dim.label == "time":
                return dim
        return None

    def _use_obj_names(self, blocks):
        errmsg = "use_obj_names enabled: found conflict or anonymous object"

        allobjs = []

        def check_unique(objs):
            names = list(o.name for o in objs)
            if None in names or "" in names:
                raise ValueError(names)
            if len(names) != len(set(names)):
                self._names_ok = False
                raise ValueError(names)
            # collect objs if ok
            allobjs.extend(objs)

        try:
            check_unique(blocks)
        except ValueError as exc:
            raise ValueError(f"{errmsg} in Blocks") from exc

        for blk in blocks:
            try:
                # Segments
                check_unique(blk.segments)
            except ValueError as exc:
                raise ValueError(
                    f"{errmsg} at Block '{blk.name}' > segments"
                ) from exc

            # collect all signals in all segments
            signals = []
            # collect all events, epochs, and spiketrains in all segments
            eests = []
            for seg in blk.segments:
                signals.extend(seg.analogsignals)
                signals.extend(seg.irregularlysampledsignals)
                signals.extend(seg.imagesequences)
                eests.extend(seg.events)
                eests.extend(seg.epochs)
                eests.extend(seg.spiketrains)

            try:
                # AnalogSignals and IrregularlySampledSignals
                check_unique(signals)
            except ValueError as exc:
                raise ValueError(
                    f"{errmsg} in Signal names of Block '{blk.name}'"
                ) from exc

            try:
                # Events, Epochs, and SpikeTrains
                check_unique(eests)
            except ValueError as exc:
                raise ValueError(
                    f"{errmsg} in Event, Epoch, and Spiketrain names "
                    f"of Block '{blk.name}'"
                ) from exc

            # groups
            groups = []
            for grp in blk.groups:
                groups.extend(list(grp.walk()))
            try:
                check_unique(groups)
            except ValueError as exc:
                raise ValueError(
                    f"{errmsg} in Group names of Block '{blk.name}'"
                ) from exc

        # names are OK: assign annotations
        for o in allobjs:
            o.annotations["nix_name"] = o.name

    def close(self):
        """
        Closes the open nix file and resets maps.
        """
        if (hasattr(self, "nix_file") and self.nix_file
                and self.nix_file.is_open()):
            self.nix_file.close()
            self.nix_file = None
            self._neo_map = None
            self._ref_map = None
            self._signal_map = None
            self._view_map = None
            self._block_read_counter = None

    def __del__(self):
        self.close()
neo-0.10.0/neo/io/nixio_fr.py0000644000076700000240000000110414066375716016350 0ustar andrewstaff00000000000000
from neo.io.basefromrawio import BaseFromRaw
from neo.rawio.nixrawio import NIXRawIO


# This class is subject to limitations when there are multiple asymmetric
# blocks
class NixIO(NIXRawIO, BaseFromRaw):

    name = 'NIX IO'

    _prefered_signal_group_mode = 'group-by-same-units'
    _prefered_units_group_mode = 'all-in-one'

    def __init__(self, filename):
        NIXRawIO.__init__(self, filename)
        BaseFromRaw.__init__(self, filename)

    def __enter__(self):
        return self

    def __exit__(self, *args):
        self.header = None
        self.file.close()
neo-0.10.0/neo/io/nwbio.py0000644000076700000240000010551114077554132015651 0ustar andrewstaff00000000000000
"""
NWBIO
=====

IO class for reading data from a Neurodata Without Borders (NWB) dataset

Documentation : https://www.nwb.org/
Depends on: h5py, nwb, dateutil
Supported: Read, Write
Python API - https://pynwb.readthedocs.io
Sample datasets from CRCNS - https://crcns.org/NWB
Sample datasets from Allen Institute -
http://alleninstitute.github.io/AllenSDK/cell_types.html#neurodata-without-borders
"""

from __future__ import absolute_import, division

import json
import logging
import os
from collections import defaultdict
from itertools import chain
from json.decoder import JSONDecodeError

import numpy as np
import quantities as pq

from neo.core import (Segment, SpikeTrain, Epoch, Event, AnalogSignal,
                      IrregularlySampledSignal, Block, ImageSequence)
from neo.io.baseio import BaseIO
from neo.io.proxyobjects import (
    AnalogSignalProxy as BaseAnalogSignalProxy,
    EventProxy as BaseEventProxy,
    EpochProxy as BaseEpochProxy,
    SpikeTrainProxy as BaseSpikeTrainProxy
)

# PyNWB imports
try:
    import pynwb
    from pynwb import NWBFile, TimeSeries
    from pynwb.base import ProcessingModule
    from pynwb.ecephys import ElectricalSeries, Device, EventDetection
    from pynwb.behavior import SpatialSeries
    from pynwb.misc import AnnotationSeries
    from pynwb import image
    from pynwb.image import ImageSeries
    from pynwb.spec import (NWBAttributeSpec, NWBDatasetSpec, NWBGroupSpec,
                            NWBNamespace, NWBNamespaceBuilder)
    from pynwb.device import Device
    # For calcium imaging data
    from pynwb.ophys import (TwoPhotonSeries, OpticalChannel,
                             ImageSegmentation, Fluorescence)
    have_pynwb = True
except ImportError:
    have_pynwb = False

# hdmf imports
try:
    from hdmf.spec import (LinkSpec, GroupSpec, DatasetSpec, SpecNamespace,
                           NamespaceBuilder, AttributeSpec, DtypeSpec,
                           RefSpec)
    have_hdmf = True
except ImportError:
    have_hdmf = False
except SyntaxError:
    have_hdmf = False

logger = logging.getLogger("Neo")

GLOBAL_ANNOTATIONS = (
    "session_start_time", "identifier", "timestamps_reference_time",
    "experimenter", "experiment_description", "session_id", "institution",
    "keywords", "notes", "pharmacology", "protocol", "related_publications",
    "slices", "source_script", "source_script_file_name", "data_collection",
    "surgery", "virus", "stimulus_notes", "lab", "session_description"
)

POSSIBLE_JSON_FIELDS = (
    "source_script", "description"
)

prefix_map = {
    1e9: 'giga',
    1e6: 'mega',
    1e3: 'kilo',
    1: '',
    1e-3: 'milli',
    1e-6: 'micro',
    1e-9: 'nano',
    1e-12: 'pico'
}


def try_json_field(content):
    """
    Try to interpret a string as JSON data.
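
    Not part of the original source: a standalone sketch of the same
    parse-or-fall-back pattern (the function name is hypothetical; the
    original `try_json_field` above does the same thing).

    ```python
    import json
    from json.decoder import JSONDecodeError

    def try_json_field_sketch(content):
        # Return parsed JSON (dict or list) when possible,
        # otherwise hand back the raw string unchanged.
        try:
            return json.loads(content)
        except JSONDecodeError:
            return content

    print(try_json_field_sketch('{"a": 1}'))
    print(try_json_field_sketch('not json'))
    ```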
If successful, return the JSON data (dict or list) If unsuccessful, return the original string """ try: return json.loads(content) except JSONDecodeError: return content def get_class(module, name): """ Given a module path and a class name, return the class object """ module_path = module.split(".") assert len(module_path) == 2 # todo: handle the general case where this isn't 2 return getattr(getattr(pynwb, module_path[1]), name) def statistics(block): # todo: move this to be a property of Block """ Return simple statistics about a Neo Block. """ stats = { "SpikeTrain": {"count": 0}, "AnalogSignal": {"count": 0}, "IrregularlySampledSignal": {"count": 0}, "Epoch": {"count": 0}, "Event": {"count": 0}, } for segment in block.segments: stats["SpikeTrain"]["count"] += len(segment.spiketrains) stats["AnalogSignal"]["count"] += len(segment.analogsignals) stats["IrregularlySampledSignal"]["count"] += len(segment.irregularlysampledsignals) stats["Epoch"]["count"] += len(segment.epochs) stats["Event"]["count"] += len(segment.events) return stats def get_units_conversion(signal, timeseries_class): """ Given a quantity array and a TimeSeries subclass, return the conversion factor and the expected units """ # it would be nice if the expected units was an attribute of the PyNWB class if "CurrentClamp" in timeseries_class.__name__: expected_units = pq.volt elif "VoltageClamp" in timeseries_class.__name__: expected_units = pq.ampere else: # todo: warn that we don't handle this subclass yet expected_units = signal.units return float((signal.units / expected_units).simplified.magnitude), expected_units def time_in_seconds(t): return float(t.rescale("second")) def _decompose_unit(unit): """ Given a quantities unit object, return a base unit name and a conversion factor. 
    Example:

    >>> _decompose_unit(pq.mV)
    ('volt', 0.001)
    """
    assert isinstance(unit, pq.quantity.Quantity)
    assert unit.magnitude == 1
    conversion = 1.0

    def _decompose(unit):
        dim = unit.dimensionality
        if len(dim) != 1:
            raise NotImplementedError("Compound units not yet supported")  # e.g. volt-metre
        uq, n = list(dim.items())[0]
        if n != 1:
            raise NotImplementedError("Compound units not yet supported")  # e.g. volt^2
        uq_def = uq.definition
        return float(uq_def.magnitude), uq_def

    conv, unit2 = _decompose(unit)
    while conv != 1:
        conversion *= conv
        unit = unit2
        conv, unit2 = _decompose(unit)
    return list(unit.dimensionality.keys())[0].name, conversion


def _recompose_unit(base_unit_name, conversion):
    """
    Given a base unit name and a conversion factor, return a quantities unit object.

    Example:

    >>> _recompose_unit("ampere", 1e-9)
    UnitCurrent('nanoampere', 0.001 * uA, 'nA')
    """
    unit_name = None
    for cf in prefix_map:
        # conversion may have a different float precision to the keys in
        # prefix_map, so we can't just use `prefix_map[conversion]`
        if abs(conversion - cf) / cf < 1e-6:
            unit_name = prefix_map[cf] + base_unit_name
    if unit_name is None:
        raise ValueError(f"Can't handle this conversion factor: {conversion}")

    if unit_name[-1] == "s":  # strip trailing 's', e.g. "volts" --> "volt"
        unit_name = unit_name[:-1]
    try:
        return getattr(pq, unit_name)
    except AttributeError:
        logger.warning(f"Can't handle unit '{unit_name}'. Returning dimensionless")
        return pq.dimensionless


class NWBIO(BaseIO):
    """
    Class for "reading" experimental data from a .nwb file,
    and "writing" a .nwb file from Neo
    """
    supported_objects = [Block, Segment, AnalogSignal, IrregularlySampledSignal,
                         SpikeTrain, Epoch, Event, ImageSequence]
    readable_objects = supported_objects
    writeable_objects = supported_objects

    has_header = False
    support_lazy = True

    name = 'NeoNWB IO'
    description = 'This IO reads/writes experimental data from/to an .nwb dataset'
    extensions = ['nwb']
    mode = 'one-file'

    is_readable = True
    is_writable = True
    is_streameable = False

    def __init__(self, filename, mode='r'):
        """
        Arguments:
            filename : the filename
        """
        if not have_pynwb:
            raise Exception("Please install the pynwb package to use NWBIO")
        if not have_hdmf:
            raise Exception("Please install the hdmf package to use NWBIO")
        BaseIO.__init__(self, filename=filename)
        self.filename = filename
        self.blocks_written = 0
        self.nwb_file_mode = mode

    def read_all_blocks(self, lazy=False, **kwargs):
        """
        Load all blocks in the file.
        """
        assert self.nwb_file_mode in ('r',)
        io = pynwb.NWBHDF5IO(self.filename, mode=self.nwb_file_mode,
                             load_namespaces=True)  # Open a file with NWBHDF5IO
        self._file = io.read()

        self.global_block_metadata = {}
        for annotation_name in GLOBAL_ANNOTATIONS:
            value = getattr(self._file, annotation_name, None)
            if value is not None:
                if annotation_name in POSSIBLE_JSON_FIELDS:
                    value = try_json_field(value)
                self.global_block_metadata[annotation_name] = value
        if "session_description" in self.global_block_metadata:
            self.global_block_metadata["description"] = self.global_block_metadata[
                "session_description"]
        self.global_block_metadata["file_origin"] = self.filename
        if "session_start_time" in self.global_block_metadata:
            self.global_block_metadata["rec_datetime"] = self.global_block_metadata[
                "session_start_time"]
        if "file_create_date" in self.global_block_metadata:
            self.global_block_metadata["file_datetime"] = self.global_block_metadata[
                "file_create_date"]

        self._blocks = {}
        self._read_acquisition_group(lazy=lazy)
        self._read_stimulus_group(lazy)
        self._read_units(lazy=lazy)
        self._read_epochs_group(lazy)
        return list(self._blocks.values())

    def read_block(self, lazy=False, block_index=0, **kargs):
        """
        Load the block at the given index in the file (by default, the first block).
        """
        return self.read_all_blocks(lazy=lazy)[block_index]

    def _get_segment(self, block_name, segment_name):
        # If we've already created a Block with the given name return it,
        # otherwise create it now and store it in self._blocks.
        # If we've already created a Segment in the given block, return it,
        # otherwise create it now and return it.
        if block_name in self._blocks:
            block = self._blocks[block_name]
        else:
            block = Block(name=block_name, **self.global_block_metadata)
            self._blocks[block_name] = block
        segment = None
        for seg in block.segments:
            if segment_name == seg.name:
                segment = seg
                break
        if segment is None:
            segment = Segment(name=segment_name)
            segment.block = block
            block.segments.append(segment)
        return segment

    def _read_epochs_group(self, lazy):
        if self._file.epochs is not None:
            try:
                # NWB files created by Neo store the segment, block and epoch names
                # as extra columns
                segment_names = self._file.epochs.segment[:]
                block_names = self._file.epochs.block[:]
                epoch_names = self._file.epochs._name[:]
            except AttributeError:
                epoch_names = None

            if epoch_names is not None:
                unique_epoch_names = np.unique(epoch_names)
                for epoch_name in unique_epoch_names:
                    index, = np.where((epoch_names == epoch_name))
                    epoch = EpochProxy(self._file.epochs, epoch_name, index)
                    if not lazy:
                        epoch = epoch.load()
                    segment_name = np.unique(segment_names[index])
                    block_name = np.unique(block_names[index])
                    assert segment_name.size == block_name.size == 1
                    segment = self._get_segment(block_name[0], segment_name[0])
                    segment.epochs.append(epoch)
                    epoch.segment = segment
            else:
                epoch = EpochProxy(self._file.epochs)
                if not lazy:
                    epoch = epoch.load()
                segment = self._get_segment("default", "default")
                segment.epochs.append(epoch)
                epoch.segment = segment

    def _read_timeseries_group(self, group_name, lazy):
        group = getattr(self._file, group_name)
        for timeseries in group.values():
            try:
                # NWB files created by Neo store the segment and block names
                # in the comments field
                hierarchy = json.loads(timeseries.comments)
            except JSONDecodeError:
                # For NWB files created with other applications, we put everything
                # in a single segment in a single block
                # todo: investigate whether there is a reliable way to create
                #       multiple segments, e.g.
                #       using Trial information
                block_name = "default"
                segment_name = "default"
            else:
                block_name = hierarchy["block"]
                segment_name = hierarchy["segment"]
            segment = self._get_segment(block_name, segment_name)
            if isinstance(timeseries, AnnotationSeries):
                event = EventProxy(timeseries, group_name)
                if not lazy:
                    event = event.load()
                segment.events.append(event)
                event.segment = segment
            elif timeseries.rate:  # AnalogSignal
                signal = AnalogSignalProxy(timeseries, group_name)
                if not lazy:
                    signal = signal.load()
                segment.analogsignals.append(signal)
                signal.segment = segment
            else:  # IrregularlySampledSignal
                signal = AnalogSignalProxy(timeseries, group_name)
                if not lazy:
                    signal = signal.load()
                segment.irregularlysampledsignals.append(signal)
                signal.segment = segment

    def _read_units(self, lazy):
        if self._file.units:
            for id in range(len(self._file.units)):
                try:
                    # NWB files created by Neo store the segment and block names
                    # as extra columns
                    segment_name = self._file.units.segment[id]
                    block_name = self._file.units.block[id]
                except AttributeError:
                    # For NWB files created with other applications, we put everything
                    # in a single segment in a single block
                    segment_name = "default"
                    block_name = "default"
                segment = self._get_segment(block_name, segment_name)
                spiketrain = SpikeTrainProxy(self._file.units, id)
                if not lazy:
                    spiketrain = spiketrain.load()
                segment.spiketrains.append(spiketrain)
                spiketrain.segment = segment

    def _read_acquisition_group(self, lazy):
        self._read_timeseries_group("acquisition", lazy)

    def _read_stimulus_group(self, lazy):
        self._read_timeseries_group("stimulus", lazy)

    def write_all_blocks(self, blocks, **kwargs):
        """
        Write a list of blocks to the file.
        """
        # todo: allow metadata in NWBFile constructor to be taken from kwargs
        annotations = defaultdict(set)
        for annotation_name in GLOBAL_ANNOTATIONS:
            if annotation_name in kwargs:
                annotations[annotation_name] = kwargs[annotation_name]
            else:
                for block in blocks:
                    if annotation_name in block.annotations:
                        try:
                            annotations[annotation_name].add(block.annotations[annotation_name])
                        except TypeError:
                            if annotation_name in POSSIBLE_JSON_FIELDS:
                                encoded = json.dumps(block.annotations[annotation_name])
                                annotations[annotation_name].add(encoded)
                            else:
                                raise
            if annotation_name in annotations:
                if len(annotations[annotation_name]) > 1:
                    raise NotImplementedError(
                        "We don't yet support multiple values for {}".format(annotation_name))
                # take single value from set
                annotations[annotation_name], = annotations[annotation_name]
        if "identifier" not in annotations:
            annotations["identifier"] = self.filename
        if "session_description" not in annotations:
            annotations["session_description"] = blocks[0].description or self.filename
            # todo: concatenate descriptions of multiple blocks if different
        if "session_start_time" not in annotations:
            raise Exception("Writing to NWB requires an annotation 'session_start_time'")
        # todo: handle subject
        # todo: store additional Neo annotations somewhere in NWB file
        nwbfile = NWBFile(**annotations)

        assert self.nwb_file_mode in ('w',)  # possibly expand to 'a'ppend later
        if self.nwb_file_mode == "w" and os.path.exists(self.filename):
            os.remove(self.filename)
        io_nwb = pynwb.NWBHDF5IO(self.filename, mode=self.nwb_file_mode)

        if sum(statistics(block)["SpikeTrain"]["count"] for block in blocks) > 0:
            nwbfile.add_unit_column('_name', 'the name attribute of the SpikeTrain')
            # nwbfile.add_unit_column('_description',
            #                         'the description attribute of the SpikeTrain')
            nwbfile.add_unit_column(
                'segment', 'the name of the Neo Segment to which the SpikeTrain belongs')
            nwbfile.add_unit_column(
                'block', 'the name of the Neo Block to which the SpikeTrain belongs')

        if sum(statistics(block)["Epoch"]["count"] for block in blocks) > 0:
            nwbfile.add_epoch_column('_name', 'the name attribute of the Epoch')
            # nwbfile.add_epoch_column('_description', 'the description attribute of the Epoch')
            nwbfile.add_epoch_column(
                'segment', 'the name of the Neo Segment to which the Epoch belongs')
            nwbfile.add_epoch_column(
                'block', 'the name of the Neo Block to which the Epoch belongs')

        for i, block in enumerate(blocks):
            self.write_block(nwbfile, block)
        io_nwb.write(nwbfile)
        io_nwb.close()

        with pynwb.NWBHDF5IO(self.filename, "r") as io_validate:
            errors = pynwb.validate(io_validate, namespace="core")
            if errors:
                raise Exception(f"Errors found when validating {self.filename}")

    def write_block(self, nwbfile, block, **kwargs):
        """
        Write a Block to the file

        :param block: Block to be written
        :param nwbfile: Representation of an NWB file
        """
        electrodes = self._write_electrodes(nwbfile, block)
        if not block.name:
            block.name = "block%d" % self.blocks_written
        for i, segment in enumerate(block.segments):
            assert segment.block is block
            if not segment.name:
                segment.name = "%s : segment%d" % (block.name, i)
            self._write_segment(nwbfile, segment, electrodes)
        self.blocks_written += 1

    def _write_electrodes(self, nwbfile, block):
        # this handles only icephys_electrode for now
        electrodes = {}
        devices = {}
        for segment in block.segments:
            for signal in chain(segment.analogsignals, segment.irregularlysampledsignals):
                if "nwb_electrode" in signal.annotations:
                    elec_meta = signal.annotations["nwb_electrode"].copy()
                    if elec_meta["name"] not in electrodes:
                        # todo: check for consistency if the name is already there
                        if elec_meta["device"]["name"] in devices:
                            device = devices[elec_meta["device"]["name"]]
                        else:
                            device = nwbfile.create_device(**elec_meta["device"])
                            devices[elec_meta["device"]["name"]] = device
                        elec_meta.pop("device")
                        electrodes[elec_meta["name"]] = nwbfile.create_icephys_electrode(
                            device=device, **elec_meta
                        )
        return electrodes

    def _write_segment(self, nwbfile, segment, electrodes):
        # maybe use NWB trials to store Segment metadata?
        for i, signal in enumerate(
                chain(segment.analogsignals, segment.irregularlysampledsignals)):
            assert signal.segment is segment
            if not signal.name:
                signal.name = "%s : analogsignal%d" % (segment.name, i)
            self._write_signal(nwbfile, signal, electrodes)

        for i, train in enumerate(segment.spiketrains):
            assert train.segment is segment
            if not train.name:
                train.name = "%s : spiketrain%d" % (segment.name, i)
            self._write_spiketrain(nwbfile, train)

        for i, event in enumerate(segment.events):
            assert event.segment is segment
            if not event.name:
                event.name = "%s : event%d" % (segment.name, i)
            self._write_event(nwbfile, event)

        for i, epoch in enumerate(segment.epochs):
            if not epoch.name:
                epoch.name = "%s : epoch%d" % (segment.name, i)
            self._write_epoch(nwbfile, epoch)

    def _write_signal(self, nwbfile, signal, electrodes):
        hierarchy = {'block': signal.segment.block.name, 'segment': signal.segment.name}
        if "nwb_neurodata_type" in signal.annotations:
            timeseries_class = get_class(*signal.annotations["nwb_neurodata_type"])
        else:
            timeseries_class = TimeSeries  # default
        additional_metadata = {name[4:]: value
                               for name, value in signal.annotations.items()
                               if name.startswith("nwb:")}
        if "nwb_electrode" in signal.annotations:
            electrode_name = signal.annotations["nwb_electrode"]["name"]
            additional_metadata["electrode"] = electrodes[electrode_name]
        if timeseries_class != TimeSeries:
            conversion, units = get_units_conversion(signal, timeseries_class)
            additional_metadata["conversion"] = conversion
        else:
            units = signal.units
        if isinstance(signal, AnalogSignal):
            sampling_rate = signal.sampling_rate.rescale("Hz")
            tS = timeseries_class(
                name=signal.name,
                starting_time=time_in_seconds(signal.t_start),
                data=signal,
                unit=units.dimensionality.string,
                rate=float(sampling_rate),
                comments=json.dumps(hierarchy),
                **additional_metadata)
            # todo: try to add array_annotations via "control" attribute
        elif isinstance(signal, IrregularlySampledSignal):
            tS = timeseries_class(
                name=signal.name,
                data=signal,
                unit=units.dimensionality.string,
                timestamps=signal.times.rescale('second').magnitude,
                comments=json.dumps(hierarchy),
                **additional_metadata)
        else:
            raise TypeError(
                "signal has type {0}, should be AnalogSignal or IrregularlySampledSignal".format(
                    signal.__class__.__name__))
        nwb_group = signal.annotations.get("nwb_group", "acquisition")
        add_method_map = {
            "acquisition": nwbfile.add_acquisition,
            "stimulus": nwbfile.add_stimulus
        }
        if nwb_group in add_method_map:
            add_time_series = add_method_map[nwb_group]
        else:
            raise NotImplementedError("NWB group '{}' not yet supported".format(nwb_group))
        add_time_series(tS)
        return tS

    def _write_spiketrain(self, nwbfile, spiketrain):
        nwbfile.add_unit(spike_times=spiketrain.rescale('s').magnitude,
                         obs_intervals=[[float(spiketrain.t_start.rescale('s')),
                                         float(spiketrain.t_stop.rescale('s'))]],
                         _name=spiketrain.name,
                         # _description=spiketrain.description,
                         segment=spiketrain.segment.name,
                         block=spiketrain.segment.block.name)
        # todo: handle annotations (using add_unit_column()?)
        # todo: handle Neo Units
        # todo: handle spike waveforms, if any (see SpikeEventSeries)
        return nwbfile.units

    def _write_event(self, nwbfile, event):
        hierarchy = {'block': event.segment.block.name, 'segment': event.segment.name}
        tS_evt = AnnotationSeries(
            name=event.name,
            data=event.labels,
            timestamps=event.times.rescale('second').magnitude,
            description=event.description or "",
            comments=json.dumps(hierarchy))
        nwbfile.add_acquisition(tS_evt)
        return tS_evt

    def _write_epoch(self, nwbfile, epoch):
        for t_start, duration, label in zip(epoch.rescale('s').magnitude,
                                            epoch.durations.rescale('s').magnitude,
                                            epoch.labels):
            nwbfile.add_epoch(t_start, t_start + duration, [label], [],
                              _name=epoch.name,
                              segment=epoch.segment.name,
                              block=epoch.segment.block.name)
        return nwbfile.epochs


class AnalogSignalProxy(BaseAnalogSignalProxy):
    common_metadata_fields = (
        # fields that are the same for all TimeSeries subclasses
        "comments", "description", "unit", "starting_time", "timestamps", "rate",
        "data", "starting_time_unit", "timestamps_unit", "electrode"
    )

    def __init__(self, timeseries, nwb_group):
        self._timeseries = timeseries
        self.units = timeseries.unit
        if timeseries.conversion:
            self.units = _recompose_unit(timeseries.unit, timeseries.conversion)
        if timeseries.starting_time is not None:
            self.t_start = timeseries.starting_time * pq.s
        else:
            self.t_start = timeseries.timestamps[0] * pq.s
        if timeseries.rate:
            self.sampling_rate = timeseries.rate * pq.Hz
        else:
            self.sampling_rate = None
        self.name = timeseries.name
        self.annotations = {"nwb_group": nwb_group}
        self.description = try_json_field(timeseries.description)
        if isinstance(self.description, dict):
            self.annotations["notes"] = self.description
            if "name" in self.annotations:
                self.annotations.pop("name")
            self.description = None
        self.shape = self._timeseries.data.shape
        if len(self.shape) == 1:
            self.shape = (self.shape[0], 1)
        metadata_fields = list(timeseries.__nwbfields__)
        for field_name in self.__class__.common_metadata_fields:  # already handled
            try:
                metadata_fields.remove(field_name)
            except ValueError:
                pass
        for field_name in metadata_fields:
            value = getattr(timeseries, field_name)
            if value is not None:
                self.annotations[f"nwb:{field_name}"] = value
        self.annotations["nwb_neurodata_type"] = (
            timeseries.__class__.__module__,
            timeseries.__class__.__name__
        )
        if hasattr(timeseries, "electrode"):
            # todo: once the Group class is available, we could add electrode metadata
            #       to a Group containing all signals that share that electrode.
            #       This would reduce the amount of redundancy
            #       (repeated metadata in every signal)
            electrode_metadata = {"device": {}}
            metadata_fields = list(timeseries.electrode.__class__.__nwbfields__) + ["name"]
            metadata_fields.remove("device")  # needs special handling
            for field_name in metadata_fields:
                value = getattr(timeseries.electrode, field_name)
                if value is not None:
                    electrode_metadata[field_name] = value
            for field_name in timeseries.electrode.device.__class__.__nwbfields__:
                value = getattr(timeseries.electrode.device, field_name)
                if value is not None:
                    electrode_metadata["device"][field_name] = value
            self.annotations["nwb_electrode"] = electrode_metadata

    def load(self, time_slice=None, strict_slicing=True):
        """
        Load AnalogSignalProxy

        args:
            :param time_slice: None or tuple of the time slice expressed with quantities.
                               None is the entire signal.
            :param strict_slicing: True by default.
                Control whether an error is raised when one of the time_slice members
                (t_start or t_stop) is outside the real time range of the segment.
        """
        i_start, i_stop, sig_t_start = None, None, self.t_start
        if time_slice:
            if self.sampling_rate is None:
                i_start, i_stop = np.searchsorted(self._timeseries.timestamps, time_slice)
            else:
                i_start, i_stop, sig_t_start = self._time_slice_indices(
                    time_slice, strict_slicing=strict_slicing)
        signal = self._timeseries.data[i_start: i_stop]
        if self.sampling_rate is None:
            return IrregularlySampledSignal(
                self._timeseries.timestamps[i_start:i_stop] * pq.s,
                signal,
                units=self.units,
                t_start=sig_t_start,
                sampling_rate=self.sampling_rate,
                name=self.name,
                description=self.description,
                array_annotations=None,
                **self.annotations)  # todo: timeseries.control / control_description
        else:
            return AnalogSignal(
                signal,
                units=self.units,
                t_start=sig_t_start,
                sampling_rate=self.sampling_rate,
                name=self.name,
                description=self.description,
                array_annotations=None,
                **self.annotations)  # todo: timeseries.control / control_description


class EventProxy(BaseEventProxy):

    def __init__(self, timeseries, nwb_group):
        self._timeseries = timeseries
        self.name = timeseries.name
        self.annotations = {"nwb_group": nwb_group}
        self.description = try_json_field(timeseries.description)
        if isinstance(self.description, dict):
            self.annotations.update(self.description)
            self.description = None
        self.shape = self._timeseries.data.shape

    def load(self, time_slice=None, strict_slicing=True):
        """
        Load EventProxy

        args:
            :param time_slice: None or tuple of the time slice expressed with quantities.
                               None is the entire signal.
            :param strict_slicing: True by default.
                Control whether an error is raised when one of the time_slice members
                (t_start or t_stop) is outside the real time range of the segment.
        """
        if time_slice:
            raise NotImplementedError("todo")
        else:
            times = self._timeseries.timestamps[:]
            labels = self._timeseries.data[:]
        return Event(times * pq.s,
                     labels=labels,
                     name=self.name,
                     description=self.description,
                     **self.annotations)


class EpochProxy(BaseEpochProxy):

    def __init__(self, time_intervals, epoch_name=None, index=None):
        """
        :param time_intervals: An epochs table, which is a specific TimeIntervals table
            that stores info about long periods
        :param epoch_name: (str) Name of the epoch object
        :param index: (np.array, slice) Slice object or array of bool values masking
            time_intervals to be used. In the case of an array, it has to have the same
            shape as `time_intervals`.
        """
        self._time_intervals = time_intervals
        if index is not None:
            self._index = index
            self.shape = (index.sum(),)
        else:
            self._index = slice(None)
            self.shape = (len(time_intervals),)
        self.name = epoch_name

    def load(self, time_slice=None, strict_slicing=True):
        """
        Load EpochProxy

        args:
            :param time_slice: None or tuple of the time slice expressed with quantities.
                               None is all of the intervals.
            :param strict_slicing: True by default.
                Control whether an error is raised when one of the time_slice members
                (t_start or t_stop) is outside the real time range of the segment.
        """
        if time_slice:
            raise NotImplementedError("todo")
        else:
            start_times = self._time_intervals.start_time[self._index]
            stop_times = self._time_intervals.stop_time[self._index]
            durations = stop_times - start_times
            labels = self._time_intervals.tags[self._index]

        return Epoch(times=start_times * pq.s,
                     durations=durations * pq.s,
                     labels=labels,
                     name=self.name)


class SpikeTrainProxy(BaseSpikeTrainProxy):

    def __init__(self, units_table, id):
        """
        :param units_table: A Units table
            (see https://pynwb.readthedocs.io/en/stable/pynwb.misc.html#pynwb.misc.Units)
        :param id: the cell/unit ID (integer)
        """
        self._units_table = units_table
        self.id = id
        self.units = pq.s
        obs_intervals = units_table.get_unit_obs_intervals(id)
        if len(obs_intervals) == 0:
            t_start, t_stop = None, None
        elif len(obs_intervals) == 1:
            t_start, t_stop = obs_intervals[0]
        else:
            raise NotImplementedError("Can't yet handle multiple observation intervals")
        self.t_start = t_start * pq.s
        self.t_stop = t_stop * pq.s
        self.annotations = {"nwb_group": "acquisition"}
        try:
            # NWB files created by Neo store the name as an extra column
            self.name = units_table._name[id]
        except AttributeError:
            self.name = None
        self.shape = None  # no way to get this without reading the data

    def load(self, time_slice=None, strict_slicing=True):
        """
        Load SpikeTrainProxy

        args:
            :param time_slice: None or tuple of the time slice expressed with quantities.
                               None is the entire spike train.
            :param strict_slicing: True by default.
                Control whether an error is raised when one of the time_slice members
                (t_start or t_stop) is outside the real time range of the segment.
        """
        interval = None
        if time_slice:
            interval = (float(t) for t in time_slice)  # convert from quantities
        spike_times = self._units_table.get_unit_spike_times(self.id, in_interval=interval)
        return SpikeTrain(
            spike_times * self.units,
            self.t_stop,
            units=self.units,
            # sampling_rate=array(1.) * Hz,
            t_start=self.t_start,
            # waveforms=None,
            # left_sweep=None,
            name=self.name,
            # file_origin=None,
            # description=None,
            # array_annotations=None,
            **self.annotations)


# === neo-0.10.0/neo/io/openephysbinaryio.py ===

from neo.io.basefromrawio import BaseFromRaw
from neo.rawio.openephysbinaryrawio import OpenEphysBinaryRawIO


class OpenEphysBinaryIO(OpenEphysBinaryRawIO, BaseFromRaw):
    _prefered_signal_group_mode = 'group-by-same-units'
    mode = 'dir'

    def __init__(self, dirname):
        OpenEphysBinaryRawIO.__init__(self, dirname=dirname)
        BaseFromRaw.__init__(self, dirname)


# === neo-0.10.0/neo/io/openephysio.py ===

from neo.io.basefromrawio import BaseFromRaw
from neo.rawio.openephysrawio import OpenEphysRawIO


class OpenEphysIO(OpenEphysRawIO, BaseFromRaw):
    _prefered_signal_group_mode = 'group-by-same-units'
    mode = 'dir'

    def __init__(self, dirname):
        OpenEphysRawIO.__init__(self, dirname=dirname)
        BaseFromRaw.__init__(self, dirname)


# === neo-0.10.0/neo/io/phyio.py ===

from neo.io.basefromrawio import BaseFromRaw
from neo.rawio.phyrawio import PhyRawIO


class PhyIO(PhyRawIO, BaseFromRaw):
    name = 'Phy IO'
    description = "Phy IO"
    mode = 'dir'

    def __init__(self, dirname):
        PhyRawIO.__init__(self, dirname=dirname)
        BaseFromRaw.__init__(self, dirname)


# === neo-0.10.0/neo/io/pickleio.py ===

"""
Module for reading/writing data from/to Python pickle format.

Class:
    PickleIO

Supported: Read/Write

Authors: Andrew Davison
"""

try:
    import cPickle as pickle  # Python 2
except ImportError:
    import pickle  # Python 3

from neo.io.baseio import BaseIO
from neo.core import (Block, Segment, AnalogSignal, SpikeTrain)


class PickleIO(BaseIO):
    """
    A class for reading and writing Neo data from/to the Python "pickle" format.
    Note that files in this format may not be readable if using a different version
    of Neo to that used to create the file. It should therefore not be used for
    long-term storage, but rather for intermediate results in a pipeline.
    """
    is_readable = True
    is_writable = True
    has_header = False
    is_streameable = False  # TODO - correct spelling to "is_streamable"
    # should extend to other classes.
    supported_objects = [Block, Segment, AnalogSignal, SpikeTrain]
    readable_objects = supported_objects
    writeable_objects = supported_objects
    mode = 'file'
    name = "Python pickle file"
    extensions = ['pkl', 'pickle']

    def read_block(self, lazy=False):
        assert not lazy, 'Does not support lazy'
        with open(self.filename, "rb") as fp:
            block = pickle.load(fp)
        return block

    def write_block(self, block):
        with open(self.filename, "wb") as fp:
            pickle.dump(block, fp)


# === neo-0.10.0/neo/io/plexonio.py ===

from neo.io.basefromrawio import BaseFromRaw
from neo.rawio.plexonrawio import PlexonRawIO


class PlexonIO(PlexonRawIO, BaseFromRaw):
    """
    Class for reading the old data format from the Plexon acquisition system (.plx).

    Note that Plexon now uses a new format, PL2, which is NOT supported by this IO.

    Compatible with versions 100 to 106. Other versions have not been tested.
    """
    _prefered_signal_group_mode = 'group-by-same-units'

    def __init__(self, filename):
        PlexonRawIO.__init__(self, filename=filename)
        BaseFromRaw.__init__(self, filename)


# === neo-0.10.0/neo/io/proxyobjects.py ===

"""
Here is a list of proxy objects that can be used when lazy=True at the neo.io level.

The idea is to be able to postpone the real in-memory loading
for objects that contain big data (AnalogSignal, SpikeTrain, Event, Epoch).

The implementation relies on neo.rawio, so it is available only
for neo.io classes that inherit from neo.rawio.
"""

import numpy as np
import quantities as pq
import logging

from neo.core.baseneo import BaseNeo
from neo.core import (AnalogSignal, Epoch, Event, SpikeTrain)
from neo.core.dataobject import ArrayDict

logger = logging.getLogger("Neo")


class BaseProxy(BaseNeo):
    def __init__(self, array_annotations=None, **annotations):
        # this is for py27 str vs py3 str compatibility in neo attributes
        annotations = check_annotations(annotations)
        if 'file_origin' not in annotations:
            # the str() is to make this compatible with neo_py27, where the attribute
            # used to be str, so raw bytes
            annotations['file_origin'] = str(self._rawio.source_name())
        if array_annotations is None:
            array_annotations = {}
        for k, v in array_annotations.items():
            array_annotations[k] = np.asarray(v)
        # clean array annotations that are not 1D
        # TODO remove this once multi-dimensional array_annotations are possible
        array_annotations = {k: v for k, v in array_annotations.items() if v.ndim == 1}

        # this mocks the array annotations to avoid inheriting from DataObject
        self.array_annotations = ArrayDict(self.shape[-1])
        self.array_annotations.update(array_annotations)
        BaseNeo.__init__(self, **annotations)

    def load(self, time_slice=None, **kwargs):
        # should be implemented by subclass
        raise NotImplementedError

    def time_slice(self, t_start, t_stop):
        '''
        Load the proxy object within the specified time range.
        Has the same call signature as AnalogSignal.time_slice, Epoch.time_slice, etc.
        '''
        return self.load(time_slice=(t_start, t_stop))


class AnalogSignalProxy(BaseProxy):
    '''
    This object mimics AnalogSignal except that it does not
    have the signal array itself. All attributes and annotations are here.

    The goal is to postpone the loading of data into memory
    when reading a file with the new lazy load system based on neo.rawio.

    This object must not be constructed directly, but is given
    by neo.io when lazy=True instead of a true AnalogSignal.
    The AnalogSignalProxy is able to load:
      * only a slice of time
      * only a subset of channels
      * an internal raw magnitude identical to the file (int16) with
        a pq.CompoundUnit().

    Usage:
    >>> proxy_anasig = AnalogSignalProxy(rawio=self.reader,
    ...                                  global_channel_indexes=None,
    ...                                  block_index=0,
    ...                                  seg_index=0)
    >>> anasig = proxy_anasig.load()
    >>> slice_of_anasig = proxy_anasig.load(time_slice=(1. * pq.s, 2. * pq.s))
    >>> some_channel_of_anasig = proxy_anasig.load(channel_indexes=[0, 5, 10])
    '''
    _parent_objects = ('Segment',)
    _necessary_attrs = (('sampling_rate', pq.Quantity, 0), ('t_start', pq.Quantity, 0))
    _recommended_attrs = BaseNeo._recommended_attrs
    proxy_for = AnalogSignal

    def __init__(self, rawio=None, stream_index=None, inner_stream_channels=None,
                 block_index=0, seg_index=0):
        # stream_index: indicates the stream, so that stream_id can be retrieved easily
        # inner_stream_channels: channel indices inside the stream; None means all channels
        # if inner_stream_channels is not None:
        #   * then this is a "substream"
        #   * handles the case where channels have different units inside a stream
        #   * is related to BaseFromRaw.get_sub_signal_streams()
        self._rawio = rawio
        self._block_index = block_index
        self._seg_index = seg_index
        self._stream_index = stream_index
        if inner_stream_channels is None:
            inner_stream_channels = slice(inner_stream_channels)  # i.e. slice(None)
        self._inner_stream_channels = inner_stream_channels

        signal_streams = self._rawio.header['signal_streams']
        stream_id = signal_streams[stream_index]['id']
        signal_channels = self._rawio.header['signal_channels']
        global_inds, = np.nonzero(signal_channels['stream_id'] == stream_id)
        self._nb_total_chann_in_stream = global_inds.size
        self._global_channel_indexes = global_inds[inner_stream_channels]
        self._nb_chan = self._global_channel_indexes.size

        sig_chans = signal_channels[self._global_channel_indexes]
        assert np.unique(sig_chans['units']).size == 1, \
            'Channels do not have the same units'
        assert np.unique(sig_chans['dtype']).size == 1, \
            'Channels do not have the same dtype'
        assert np.unique(sig_chans['sampling_rate']).size == 1, \
            'Channels do not have the same sampling_rate'

        self.units = ensure_signal_units(sig_chans['units'][0])
        self.dtype = sig_chans['dtype'][0]
        self.sampling_rate = sig_chans['sampling_rate'][0] * pq.Hz
        self.sampling_period = 1. / self.sampling_rate
        sigs_size = self._rawio.get_signal_size(block_index=block_index,
                                                seg_index=seg_index,
                                                stream_index=stream_index)
        self.shape = (sigs_size, self._nb_chan)
        self.t_start = self._rawio.get_signal_t_start(block_index, seg_index,
                                                      stream_index) * pq.s

        # magnitude_mode='raw' is supported only if all offsets are 0
        # and all gains are the same
        support_raw_magnitude = np.all(sig_chans['gain'] == sig_chans['gain'][0]) and \
            np.all(sig_chans['offset'] == 0.)

        if support_raw_magnitude:
            str_units = ensure_signal_units(sig_chans['units'][0]).units.dimensionality.string
            gain0 = sig_chans['gain'][0]
            self._raw_units = pq.CompoundUnit(f'{gain0}*{str_units}')
        else:
            self._raw_units = None

        # retrieve annotations and array annotations
        seg_ann = self._rawio.raw_annotations['blocks'][block_index]['segments'][seg_index]
        annotations = seg_ann['signals'][stream_index].copy()
        array_annotations = annotations.pop('__array_annotations__')
        array_annotations = {k: v[inner_stream_channels]
                             for k, v in array_annotations.items()}

        BaseProxy.__init__(self, array_annotations=array_annotations, **annotations)

    @property
    def duration(self):
        '''Signal duration'''
        return self.shape[0] / self.sampling_rate

    @property
    def t_stop(self):
        '''Time when signal ends'''
        return self.t_start + self.duration

    def _time_slice_indices(self, time_slice, strict_slicing=True):
        """
        Calculate the start and end indices for the slice.
Also returns t_start """ if time_slice is None: i_start, i_stop = None, None sig_t_start = self.t_start else: sr = self.sampling_rate t_start, t_stop = time_slice if t_start is None: i_start = None sig_t_start = self.t_start else: t_start = ensure_second(t_start) if strict_slicing: assert self.t_start <= t_start <= self.t_stop, 't_start is outside' else: t_start = max(t_start, self.t_start) # i_start must be ceiled so the first sample is at or after t_start i_start = int(np.ceil((t_start - self.t_start).magnitude * sr.magnitude)) # this is needed to get the real t_start of the first sample, # because it does not necessarily match what was requested sig_t_start = self.t_start + i_start / sr if t_stop is None: i_stop = None else: t_stop = ensure_second(t_stop) if strict_slicing: assert self.t_start <= t_stop <= self.t_stop, 't_stop is outside' else: t_stop = min(t_stop, self.t_stop) i_stop = int((t_stop - self.t_start).magnitude * sr.magnitude) return i_start, i_stop, sig_t_start def load(self, time_slice=None, strict_slicing=True, channel_indexes=None, magnitude_mode='rescaled'): ''' *Args*: :time_slice: None or tuple of the time slice expressed with quantities. None loads the entire signal. :channel_indexes: None or list. Channels to load. None loads all channels. Be careful: channel_indexes refers to the local channel index inside the AnalogSignal, not the global_channel_indexes used in rawio. :magnitude_mode: 'rescaled' or 'raw'. For instance, if the internal dtype is int16: * **rescaled** gives [1.,2.,3.]*pq.uV and the dtype is float32 * **raw** gives [10, 20, 30]*pq.CompoundUnit('0.1*uV') The CompoundUnit with magnitude_mode='raw' is useful to postpone the scaling while keeping an internal dtype=int16, but it is less intuitive if you are not familiar with quantities. :strict_slicing: True by default. Controls whether an error is raised when one of the time_slice members (t_start or t_stop) is outside the real time range of the segment. 
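As a standalone illustration (plain numpy, not part of neo), the index arithmetic used above maps a requested time window to sample indices roughly like this; the helper name and the values are invented for the example:

```python
import numpy as np

# invented helper mirroring the index arithmetic in _time_slice_indices;
# times in seconds, sr in Hz, all values illustrative
def time_slice_to_indices(sig_t0, sr, t_start, t_stop):
    # ceil so the first returned sample is at or after the requested t_start
    i_start = int(np.ceil((t_start - sig_t0) * sr))
    i_stop = int((t_stop - sig_t0) * sr)
    real_t0 = sig_t0 + i_start / sr   # actual time of the first sample
    return i_start, i_stop, real_t0

# requesting 0.375 s at 4 Hz lands between samples 1 and 2 -> index 2, real start 0.5 s
print(time_slice_to_indices(0.0, 4.0, 0.375, 2.0))
```

This is why the proxy returns a `sig_t_start` that may differ slightly from the requested `t_start`.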
''' # fixed_chan_indexes is the channel index (or slice) in the stream # channel_indexes is the channel index (or slice) in the substream if isinstance(self._inner_stream_channels, slice): if self._inner_stream_channels == slice(None): # sub-stream is the entire stream if channel_indexes is None: fixed_chan_indexes = None else: fixed_chan_indexes = channel_indexes else: # sub-stream is part of the stream, selected with a slice if channel_indexes is None: fixed_chan_indexes = self._inner_stream_channels else: global_inds = np.arange(self._nb_total_chann_in_stream) fixed_chan_indexes = global_inds[self._inner_stream_channels][channel_indexes] else: # sub-stream is part of the stream, selected with indexes if channel_indexes is None: fixed_chan_indexes = self._inner_stream_channels else: fixed_chan_indexes = self._inner_stream_channels[channel_indexes] i_start, i_stop, sig_t_start = self._time_slice_indices(time_slice, strict_slicing=strict_slicing) raw_signal = self._rawio.get_analogsignal_chunk(block_index=self._block_index, seg_index=self._seg_index, i_start=i_start, i_stop=i_stop, stream_index=self._stream_index, channel_indexes=fixed_chan_indexes) # if slicing by channel: change name and array_annotations if raw_signal.shape[1] != self._nb_chan: name = 'slice of ' + self.name channel_indexes2 = channel_indexes if channel_indexes2 is None: channel_indexes2 = slice(None) array_annotations = {k: v[channel_indexes2] for k, v in self.array_annotations.items()} else: name = self.name array_annotations = self.array_annotations if magnitude_mode == 'raw': assert self._raw_units is not None,\ 'raw magnitude is not supported: gains are not the same for all channels or offset is not 0' sig = raw_signal units = self._raw_units elif magnitude_mode == 'rescaled': # dtype is float32 when internally it is float32 or int16 if self.dtype == 'float64': dtype = 'float64' else: dtype = 'float32' sig = self._rawio.rescale_signal_raw_to_float(raw_signal, dtype=dtype, stream_index=self._stream_index, 
channel_indexes=fixed_chan_indexes) units = self.units anasig = AnalogSignal(sig, units=units, copy=False, t_start=sig_t_start, sampling_rate=self.sampling_rate, name=name, file_origin=self.file_origin, description=self.description, array_annotations=array_annotations, **self.annotations) return anasig class SpikeTrainProxy(BaseProxy): ''' This object mimics SpikeTrain except that it holds neither the spike times nor the waveforms. All other attributes and annotations are here. The goal is to postpone the loading of data into memory when reading a file with the new lazy load system based on neo.rawio. This object must not be constructed directly but is given by neo.io when lazy=True instead of a true SpikeTrain. The SpikeTrainProxy is able to load: * only a slice of time * waveforms or not * an internal raw magnitude identical to the file (generally clock ticks as int64), or rescaled to seconds. Usage: >>> proxy_sptr = SpikeTrainProxy(rawio=self.reader, unit_channel=0, block_index=0, seg_index=0,) >>> sptr = proxy_sptr.load() >>> slice_of_sptr = proxy_sptr.load(time_slice=(1.*pq.s, 2.*pq.s)) ''' _parent_objects = ('Segment', 'Unit') _quantity_attr = 'times' _necessary_attrs = (('t_start', pq.Quantity, 0), ('t_stop', pq.Quantity, 0)) _recommended_attrs = () proxy_for = SpikeTrain def __init__(self, rawio=None, spike_channel_index=None, block_index=0, seg_index=0): self._rawio = rawio self._block_index = block_index self._seg_index = seg_index self._spike_channel_index = spike_channel_index nb_spike = self._rawio.spike_count(block_index=block_index, seg_index=seg_index, spike_channel_index=spike_channel_index) self.shape = (nb_spike, ) self.t_start = self._rawio.segment_t_start(block_index, seg_index) * pq.s self.t_stop = self._rawio.segment_t_stop(block_index, seg_index) * pq.s seg_ann = self._rawio.raw_annotations['blocks'][block_index]['segments'][seg_index] annotations = seg_ann['spikes'][spike_channel_index].copy() array_annotations = 
annotations.pop('__array_annotations__') h = self._rawio.header['spike_channels'][spike_channel_index] wf_sampling_rate = h['wf_sampling_rate'] if not np.isnan(wf_sampling_rate) and wf_sampling_rate > 0: self.sampling_rate = wf_sampling_rate * pq.Hz self.left_sweep = (h['wf_left_sweep'] / self.sampling_rate).rescale('s') self._wf_units = ensure_signal_units(h['wf_units']) else: self.sampling_rate = None self.left_sweep = None BaseProxy.__init__(self, array_annotations=array_annotations, **annotations) def load(self, time_slice=None, strict_slicing=True, magnitude_mode='rescaled', load_waveforms=False): ''' *Args*: :time_slice: None or tuple of the time slice expressed with quantities. None loads the entire spike train. :strict_slicing: True by default. Controls whether an error is raised when one of the time_slice members (t_start or t_stop) is outside the real time range of the segment. :magnitude_mode: 'rescaled' or 'raw'. :load_waveforms: bool. Load waveforms or not. ''' t_start, t_stop = consolidate_time_slice(time_slice, self.t_start, self.t_stop, strict_slicing) _t_start, _t_stop = prepare_time_slice(time_slice) spike_timestamps = self._rawio.get_spike_timestamps(block_index=self._block_index, seg_index=self._seg_index, spike_channel_index=self._spike_channel_index, t_start=_t_start, t_stop=_t_stop) if magnitude_mode == 'raw': # the neo.rawio interface would need a small change to also expose the # spike timestamps' underlying clock, which is not always the same as the signals' raise NotImplementedError elif magnitude_mode == 'rescaled': dtype = 'float64' spike_times = self._rawio.rescale_spike_timestamp(spike_timestamps, dtype=dtype) units = 's' if load_waveforms: assert self.sampling_rate is not None, 'This spike channel does not have waveforms' raw_wfs = self._rawio.get_spike_raw_waveforms(block_index=self._block_index, seg_index=self._seg_index, spike_channel_index=self._spike_channel_index, t_start=_t_start, t_stop=_t_stop) if magnitude_mode == 'rescaled': float_wfs = 
self._rawio.rescale_waveforms_to_float(raw_wfs, dtype='float32', spike_channel_index=self._spike_channel_index) waveforms = pq.Quantity(float_wfs, units=self._wf_units, dtype='float32', copy=False) elif magnitude_mode == 'raw': # we could also use a CompoundUnit here, but that would be overkill, # so we use dimensionless instead waveforms = pq.Quantity(raw_wfs, units='', dtype=raw_wfs.dtype, copy=False) else: waveforms = None sptr = SpikeTrain(spike_times, t_stop, units=units, dtype=dtype, t_start=t_start, copy=False, sampling_rate=self.sampling_rate, waveforms=waveforms, left_sweep=self.left_sweep, name=self.name, file_origin=self.file_origin, description=self.description, **self.annotations) if time_slice is None: sptr.array_annotate(**self.array_annotations) else: # TODO handle array_annotations with time_slice pass return sptr class _EventOrEpoch(BaseProxy): _parent_objects = ('Segment',) _quantity_attr = 'times' def __init__(self, rawio=None, event_channel_index=None, block_index=0, seg_index=0): self._rawio = rawio self._block_index = block_index self._seg_index = seg_index self._event_channel_index = event_channel_index nb_event = self._rawio.event_count(block_index=block_index, seg_index=seg_index, event_channel_index=event_channel_index) self.shape = (nb_event, ) self.t_start = self._rawio.segment_t_start(block_index, seg_index) * pq.s self.t_stop = self._rawio.segment_t_stop(block_index, seg_index) * pq.s # both necessary attrs and annotations annotations = {} for k in ('name', 'id'): annotations[k] = self._rawio.header['event_channels'][event_channel_index][k] seg_ann = self._rawio.raw_annotations['blocks'][block_index]['segments'][seg_index] ann = seg_ann['events'][event_channel_index] annotations = ann.copy() array_annotations = annotations.pop('__array_annotations__') BaseProxy.__init__(self, array_annotations=array_annotations, **annotations) def load(self, time_slice=None, strict_slicing=True): ''' *Args*: :time_slice: None or tuple of the time slice expressed with 
quantities. None loads the entire signal. :strict_slicing: True by default. Controls whether an error is raised when one of the time_slice members (t_start or t_stop) is outside the real time range of the segment. ''' t_start, t_stop = consolidate_time_slice(time_slice, self.t_start, self.t_stop, strict_slicing) _t_start, _t_stop = prepare_time_slice(time_slice) timestamp, durations, labels = self._rawio.get_event_timestamps(block_index=self._block_index, seg_index=self._seg_index, event_channel_index=self._event_channel_index, t_start=_t_start, t_stop=_t_stop) dtype = 'float64' times = self._rawio.rescale_event_timestamp(timestamp, dtype=dtype) units = 's' if durations is not None: durations = self._rawio.rescale_epoch_duration(durations, dtype=dtype) * pq.s h = self._rawio.header['event_channels'][self._event_channel_index] if h['type'] == b'event': ret = Event(times=times, labels=labels, units='s', name=self.name, file_origin=self.file_origin, description=self.description, **self.annotations) elif h['type'] == b'epoch': ret = Epoch(times=times, durations=durations, labels=labels, units='s', name=self.name, file_origin=self.file_origin, description=self.description, **self.annotations) if time_slice is None: ret.array_annotate(**self.array_annotations) else: # TODO handle array_annotations with time_slice pass return ret class EventProxy(_EventOrEpoch): ''' This object mimics Event except that it holds neither the times nor the labels. All other attributes and annotations are here. The goal is to postpone the loading of data into memory when reading a file with the new lazy load system based on neo.rawio. This object must not be constructed directly but is given by neo.io when lazy=True instead of a true Event. 
The EventProxy is able to load: * only a slice of time Usage: >>> proxy_event = EventProxy(rawio=self.reader, event_channel_index=0, block_index=0, seg_index=0,) >>> event = proxy_event.load() >>> slice_of_event = proxy_event.load(time_slice=(1.*pq.s, 2.*pq.s)) ''' _necessary_attrs = (('times', pq.Quantity, 1), ('labels', np.ndarray, 1, np.dtype('U'))) proxy_for = Event class EpochProxy(_EventOrEpoch): ''' This object mimics Epoch except that it holds neither the times, labels, nor durations. All other attributes and annotations are here. The goal is to postpone the loading of data into memory when reading a file with the new lazy load system based on neo.rawio. This object must not be constructed directly but is given by neo.io when lazy=True instead of a true Epoch. The EpochProxy is able to load: * only a slice of time Usage: >>> proxy_epoch = EpochProxy(rawio=self.reader, event_channel_index=0, block_index=0, seg_index=0,) >>> epoch = proxy_epoch.load() >>> slice_of_epoch = proxy_epoch.load(time_slice=(1.*pq.s, 2.*pq.s)) ''' _necessary_attrs = (('times', pq.Quantity, 1), ('durations', pq.Quantity, 1), ('labels', np.ndarray, 1, np.dtype('U'))) proxy_for = Epoch proxyobjectlist = [AnalogSignalProxy, SpikeTrainProxy, EventProxy, EpochProxy] unit_convert = {'Volts': 'V', 'volts': 'V', 'Volt': 'V', 'volt': 'V', ' Volt': 'V', 'microV': 'uV', # note that "micro" and "mu" are two different characters in Unicode # although they mostly look the same. Here we accept both. 'µV': 'uV', 'μV': 'uV'} def ensure_signal_units(units): # test units units = units.replace(' ', '') if units in unit_convert: units = unit_convert[units] try: units = pq.Quantity(1, units) except Exception: logger.warning('Units "{}" cannot be converted to a quantity. 
Using dimensionless ' 'instead'.format(units)) units = '' units = pq.Quantity(1, units) return units def check_annotations(annotations): # force type to str for some keys # imposed for tests for k in ('name', 'description', 'file_origin'): if k in annotations: annotations[k] = str(annotations[k]) if 'coordinates' in annotations: # some rawios expose coordinates in annotations, but this is not standardized # ((x, y, z) or polar); at the moment it is more reasonable to remove them annotations.pop('coordinates') return annotations def ensure_second(v): if isinstance(v, float): return v * pq.s elif isinstance(v, pq.Quantity): return v.rescale('s') elif isinstance(v, int): return float(v) * pq.s def prepare_time_slice(time_slice): """ Return a clean time slice, keeping None values for the rawio slicing call. """ if time_slice is None: t_start, t_stop = None, None else: t_start, t_stop = time_slice if t_start is not None: t_start = ensure_second(t_start).rescale('s').magnitude if t_stop is not None: t_stop = ensure_second(t_stop).rescale('s').magnitude return (t_start, t_stop) def consolidate_time_slice(time_slice, seg_t_start, seg_t_stop, strict_slicing): """ Return a clean time slice, as quantities, for the t_start/t_stop of the object. None is replaced by the segment limits. """ if time_slice is None: t_start, t_stop = None, None else: t_start, t_stop = time_slice if t_start is None: t_start = seg_t_start else: if strict_slicing: assert seg_t_start <= t_start <= seg_t_stop, 't_start is outside' else: t_start = max(t_start, seg_t_start) t_start = ensure_second(t_start) if t_stop is None: t_stop = seg_t_stop else: if strict_slicing: assert seg_t_start <= t_stop <= seg_t_stop, 't_stop is outside' else: t_stop = min(t_stop, seg_t_stop) t_stop = ensure_second(t_stop) return (t_start, t_stop) neo-0.10.0/neo/io/rawbinarysignalio.py0000644000076700000240000000655214066375716020273 0ustar andrewstaff00000000000000""" Class for reading/writing data in a raw binary interleaved compact file. 
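As a rough standalone sketch of this interleaved layout (plain numpy, nothing here depends on neo; the file path and parameters are illustrative):

```python
import os
import tempfile
import numpy as np

# parameters are illustrative, matching the IO's defaults in spirit
nb_channel, dtype = 2, 'int16'
sigs = np.arange(12, dtype=dtype).reshape(6, nb_channel)   # (samples, channels)

path = os.path.join(tempfile.mkdtemp(), 'example.raw')
sigs.tofile(path)    # channels interleaved sample by sample, no header
raw = np.fromfile(path, dtype=dtype).reshape(-1, nb_channel)
assert np.array_equal(raw, sigs)
```

Because the file carries no header, the reader can only recover the data if nb_channel and dtype are supplied correctly.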
Sampling rate, units, number of channels and dtype must be known externally. This generic format is quite widely used in old acquisition systems and is quite universal for sharing data. Supported: Read/Write Author: sgarcia """ import os import numpy as np import quantities as pq from neo.io.baseio import BaseIO from neo.core import Segment, AnalogSignal from neo.io.basefromrawio import BaseFromRaw from neo.rawio.rawbinarysignalrawio import RawBinarySignalRawIO class RawBinarySignalIO(RawBinarySignalRawIO, BaseFromRaw): """ Class for reading/writing data in a raw binary interleaved compact file. **Important release note** Since neo version 0.6.0 and the neo.rawio API, arguments of the IO (dtype, nb_channel, sampling_rate) must be given at __init__ and not at read_segment(), because there is no read_segment() in neo.rawio classes. So the usage is now: >>> r = io.RawBinarySignalIO(filename='file.raw', dtype='int16', nb_channel=16, sampling_rate=10000.) """ _prefered_signal_group_mode = 'group-by-same-units' is_readable = True is_writable = True supported_objects = [Segment, AnalogSignal] readable_objects = [Segment] writeable_objects = [Segment] def __init__(self, filename, dtype='int16', sampling_rate=10000., nb_channel=2, signal_gain=1., signal_offset=0., bytesoffset=0): RawBinarySignalRawIO.__init__(self, filename=filename, dtype=dtype, sampling_rate=sampling_rate, nb_channel=nb_channel, signal_gain=signal_gain, signal_offset=signal_offset, bytesoffset=bytesoffset) BaseFromRaw.__init__(self, filename) def write_segment(self, segment): """ **Arguments** segment : the segment to write. Only analog signals will be written. 
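For illustration, the inverse gain/offset scaling applied on write can be sketched standalone (plain numpy; the gain, offset, and sample values are invented for the example, not taken from a real file):

```python
import numpy as np

# invented gain/offset values for illustration
signal_gain, signal_offset = 0.5, 1.0
floats = np.array([[1.0, 2.0], [3.0, 5.0]])   # signal in physical units
# write path: undo the offset, then the gain, then cast to the file dtype
raw = ((floats - signal_offset) / signal_gain).astype('int16')
# a reader would restore physical units with: raw * gain + offset
restored = raw.astype('float64') * signal_gain + signal_offset
assert np.array_equal(restored, floats)
```

This round-trips exactly only when the float values are representable on the raw integer grid; otherwise the cast truncates.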
Support only 2 cases: * segment.analogsignals have one 2D AnalogSignal * segment.analogsignals have several 1D AnalogSignal with same length/sampling_rate/dtype """ if self.bytesoffset: raise NotImplementedError('bytesoffset values other than 0 ' + 'not supported') anasigs = segment.analogsignals assert len(anasigs) > 0, 'No AnalogSignal' anasig0 = anasigs[0] if len(anasigs) == 1 and anasig0.ndim == 2: numpy_sigs = anasig0.magnitude else: assert anasig0.ndim == 1 or (anasig0.ndim == 2 and anasig0.shape[1] == 1) # all AnaologSignal from Segment must have the same length/sampling_rate/dtype for anasig in anasigs[1:]: assert anasig.shape == anasig0.shape assert anasig.sampling_rate == anasig0.sampling_rate assert anasig.dtype == anasig0.dtype numpy_sigs = np.empty((anasig0.size, len(anasigs))) for i, anasig in enumerate(anasigs): numpy_sigs[:, i] = anasig.magnitude.flatten() numpy_sigs -= self.signal_offset numpy_sigs /= self.signal_gain numpy_sigs = numpy_sigs.astype(self.dtype) with open(self.filename, 'wb') as f: f.write(numpy_sigs.tobytes()) neo-0.10.0/neo/io/rawmcsio.py0000644000076700000240000000050514060430470016342 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.rawmcsrawio import RawMCSRawIO class RawMCSIO(RawMCSRawIO, BaseFromRaw): _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): RawMCSRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/spike2io.py0000644000076700000240000000052714060430470016247 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.spike2rawio import Spike2RawIO class Spike2IO(Spike2RawIO, BaseFromRaw): _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename, **kargs): Spike2RawIO.__init__(self, filename=filename, **kargs) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/spikegadgetsio.py0000644000076700000240000000052214077527451017535 0ustar 
andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.spikegadgetsrawio import SpikeGadgetsRawIO class SpikeGadgetsIO(SpikeGadgetsRawIO, BaseFromRaw): __doc__ = SpikeGadgetsRawIO.__doc__ def __init__(self, filename): SpikeGadgetsRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/spikeglxio.py0000644000076700000240000000051014066374330016700 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.spikeglxrawio import SpikeGLXRawIO class SpikeGLXIO(SpikeGLXRawIO, BaseFromRaw): __doc__ = SpikeGLXRawIO.__doc__ mode = 'dir' def __init__(self, dirname): SpikeGLXRawIO.__init__(self, dirname=dirname) BaseFromRaw.__init__(self, dirname) neo-0.10.0/neo/io/stimfitio.py0000644000076700000240000001166414060430470016535 0ustar andrewstaff00000000000000""" README =============================================================================== This is an adapter to represent stfio objects as neo objects. stfio is a standalone file i/o Python module that ships with the Stimfit program (http://www.stimfit.org). It's a Python wrapper around Stimfit's file i/o library (libstfio) that natively provides support for the following file types: - ABF (Axon binary file format; pClamp 6--9) - ABF2 (Axon binary file format 2; pClamp 10+) - ATF (Axon text file format) - AXGX/AXGD (Axograph X file format) - CFS (Cambridge electronic devices filing system) - HEKA (HEKA binary file format) - HDF5 (Hierarchical data format 5; only hdf5 files written by Stimfit or stfio are supported) In addition, libstfio can use the biosig file i/o library as an additional file handling backend (http://biosig.sourceforge.net/), extending support to more than 30 additional file formats (http://pub.ist.ac.at/~schloegl/biosig/TESTED). Based on exampleio.py and axonio.py from neo.io 08 Feb 2014, C. 
Schmidt-Hieber, University College London """ import numpy as np import quantities as pq from neo.io.baseio import BaseIO from neo.core import Block, Segment, AnalogSignal try: import stfio except ImportError as err: HAS_STFIO = False STFIO_ERR = err else: HAS_STFIO = True STFIO_ERR = None class StimfitIO(BaseIO): """ Class for converting a stfio Recording to a Neo object. Provides a standardized representation of the data as defined by the neo project; this is useful to explore the data with an increasing number of electrophysiology software tools that rely on the Neo standard. stfio is a standalone file i/o Python module that ships with the Stimfit program (http://www.stimfit.org). It is a Python wrapper around Stimfit's file i/o library (libstfio) that natively provides support for the following file types: - ABF (Axon binary file format; pClamp 6--9) - ABF2 (Axon binary file format 2; pClamp 10+) - ATF (Axon text file format) - AXGX/AXGD (Axograph X file format) - CFS (Cambridge electronic devices filing system) - HEKA (HEKA binary file format) - HDF5 (Hierarchical data format 5; only hdf5 files written by Stimfit or stfio are supported) In addition, libstfio can use the biosig file i/o library as an additional file handling backend (http://biosig.sourceforge.net/), extending support to more than 30 additional file formats (http://pub.ist.ac.at/~schloegl/biosig/TESTED). 
Example usage: >>> import neo >>> neo_obj = neo.io.StimfitIO("file.abf") or >>> import stfio >>> stfio_obj = stfio.read("file.abf") >>> neo_obj = neo.io.StimfitIO(stfio_obj) """ is_readable = True is_writable = False supported_objects = [Block, Segment, AnalogSignal] readable_objects = [Block] writeable_objects = [] has_header = False is_streameable = False read_params = {Block: []} write_params = None name = 'Stimfit' extensions = ['abf', 'dat', 'axgx', 'axgd', 'cfs'] mode = 'file' def __init__(self, filename=None): """ Arguments: filename : Either a filename or a stfio Recording object """ if not HAS_STFIO: raise STFIO_ERR BaseIO.__init__(self) if hasattr(filename, 'lower'): self.filename = filename self.stfio_rec = None else: self.stfio_rec = filename self.filename = None def read_block(self, lazy=False): assert not lazy, 'Do not support lazy' if self.filename is not None: self.stfio_rec = stfio.read(self.filename) bl = Block() bl.description = self.stfio_rec.file_description bl.annotate(comment=self.stfio_rec.comment) try: bl.rec_datetime = self.stfio_rec.datetime except: bl.rec_datetime = None dt = np.round(self.stfio_rec.dt * 1e-3, 9) * pq.s # ms to s sampling_rate = 1.0 / dt t_start = 0 * pq.s # iterate over sections first: for j, recseg in enumerate(self.stfio_rec[0]): seg = Segment(index=j) length = len(recseg) # iterate over channels: for i, recsig in enumerate(self.stfio_rec): name = recsig.name unit = recsig.yunits try: pq.Quantity(1, unit) except: unit = '' signal = pq.Quantity(recsig[j], unit) anaSig = AnalogSignal(signal, sampling_rate=sampling_rate, t_start=t_start, name=str(name), channel_index=i) seg.analogsignals.append(anaSig) bl.segments.append(seg) t_start = t_start + length * dt bl.create_many_to_one_relationship() return bl neo-0.10.0/neo/io/tdtio.py0000644000076700000240000000113214066375716015657 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.tdtrawio import TdtRawIO class TdtIO(TdtRawIO, 
BaseFromRaw): """ Class for reading data from Tucker Davis TTank format. Terminology: TDT holds data in tanks (actually a directory). And tanks hold sub blocks (sub directories). Tanks correspond to Neo Blocks and TDT blocks correspond to Neo Segments. """ _prefered_signal_group_mode = 'group-by-same-units' mode = 'dir' def __init__(self, dirname): TdtRawIO.__init__(self, dirname=dirname) BaseFromRaw.__init__(self, dirname) neo-0.10.0/neo/io/tiffio.py0000644000076700000240000001045614077527451016022 0ustar andrewstaff00000000000000""" Neo IO module for optical imaging data stored as a folder of TIFF images. """ import os try: from PIL import Image have_pil = True except ImportError: have_pil = False import numpy as np from neo.core import ImageSequence, Segment, Block from .baseio import BaseIO import glob import re class TiffIO(BaseIO): """ Neo IO module for optical imaging data stored as a folder of TIFF images. *Usage*: >>> from neo import io >>> import quantities as pq >>> r = io.TiffIO("dir_tiff",spatial_scale=1.0*pq.mm, units='V', ... sampling_rate=1.0*pq.Hz) >>> block = r.read_block() read block creating segment returning block >>> block Block with 1 segments file_origin: 'test' # segments (N=1) 0: Segment with 1 imagesequences annotations: {'tiff_file_names': ['file_tif_1_.tiff', 'file_tif_2.tiff', 'file_tif_3.tiff', 'file_tif_4.tiff', 'file_tif_5.tiff', 'file_tif_6.tiff', 'file_tif_7.tiff', 'file_tif_8.tiff', 'file_tif_9.tiff', 'file_tif_10.tiff', 'file_tif_11.tiff', 'file_tif_12.tiff', 'file_tif_13.tiff', 'file_tif_14.tiff']} # analogsignals (N=0) """ name = 'TIFF IO' description = "Neo IO module for optical imaging data stored as a folder of TIFF images." 
_prefered_signal_group_mode = 'group-by-same-units' is_readable = True is_writable = False supported_objects = [Block, Segment, ImageSequence] readable_objects = supported_objects writeable_objects = [] support_lazy = False read_params = {} write_params = {} extensions = [] mode = 'dir' def __init__(self, directory_path=None, units=None, sampling_rate=None, spatial_scale=None, **kwargs): if not have_pil: raise Exception("Please install the pillow package to use TiffIO") BaseIO.__init__(self, directory_path, **kwargs) self.units = units self.sampling_rate = sampling_rate self.spatial_scale = spatial_scale def read_block(self, lazy=False, **kwargs): # to sort file def natural_sort(l): convert = lambda text: int(text) if text.isdigit() else text.lower() alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)] return sorted(l, key=alphanum_key) # find all the images in the given directory file_name_list = [] # name of extensions to track types = ["*.tif", "*.tiff"] for file in types: file_name_list.append(glob.glob(self.filename+"/"+file)) # flatten list file_name_list = [item for sublist in file_name_list for item in sublist] # delete path in the name of file file_name_list = [file_name[len(self.filename)+1::] for file_name in file_name_list] # sorting file file_name_list = natural_sort(file_name_list) list_data_image = [] for file_name in file_name_list: data = np.array(Image.open(self.filename + "/" + file_name)).astype(np.float32) list_data_image.append(data) list_data_image = np.array(list_data_image) if len(list_data_image.shape) == 4: list_data_image = [] for file_name in file_name_list: image = Image.open(self.filename + "/" + file_name).convert('L') data = np.array(image).astype(np.float32) list_data_image.append(data) print("read block") image_sequence = ImageSequence(np.stack(list_data_image), units=self.units, sampling_rate=self.sampling_rate, spatial_scale=self.spatial_scale) print("creating segment") segment = 
Segment(file_origin=self.filename) segment.annotate(tiff_file_names=file_name_list) segment.imagesequences = [image_sequence] block = Block(file_origin=self.filename) segment.block = block block.segments.append(segment) print("returning block") return block neo-0.10.0/neo/io/tools.py0000644000076700000240000000465714066375716015713 0ustar andrewstaff00000000000000""" Tools for IO coder: * Creating RecordingChannel and making links with AnalogSignals and SPikeTrains """ try: from collections.abc import MutableSequence except ImportError: from collections import MutableSequence import numpy as np from neo.core import (AnalogSignal, Block, Epoch, Event, IrregularlySampledSignal, Group, ChannelView, Segment, SpikeTrain) class LazyList(MutableSequence): """ An enhanced list that can load its members on demand. Behaves exactly like a regular list for members that are Neo objects. Each item should contain the information that ``load_lazy_cascade`` needs to load the respective object. """ _container_objects = { Block, Segment, Group} _neo_objects = _container_objects.union( [AnalogSignal, Epoch, Event, ChannelView, IrregularlySampledSignal, SpikeTrain]) def __init__(self, io, lazy, items=None): """ :param io: IO instance that can load items. :param lazy: Lazy parameter with which the container object using the list was loaded. :param items: Optional, initial list of items. 
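A minimal standalone sketch of this load-on-first-access pattern (FakeIO and MiniLazyList are invented stand-ins for illustration, not neo classes; the real LazyList additionally supports slicing and the full MutableSequence interface):

```python
class FakeIO:
    """Stand-in for an IO; load_lazy_cascade is invoked on first access."""
    def __init__(self):
        self.calls = 0
    def load_lazy_cascade(self, item, lazy):
        self.calls += 1
        return ('loaded', item)

class MiniLazyList:
    def __init__(self, io, lazy, items):
        self._io, self._lazy, self._data = io, lazy, list(items)
    def __getitem__(self, index):
        item = self._data[index]
        if isinstance(item, tuple) and item and item[0] == 'loaded':
            return item                    # already a real object
        loaded = self._io.load_lazy_cascade(item, self._lazy)
        self._data[index] = loaded         # cache so each item loads only once
        return loaded

io = FakeIO()
ll = MiniLazyList(io, lazy=False, items=['key0', 'key1'])
assert ll[0] == ('loaded', 'key0')
assert ll[0] == ('loaded', 'key0') and io.calls == 1   # cached, not reloaded
```

The key point is the in-place replacement in `__getitem__`: after the first access, the stored placeholder is swapped for the loaded object.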
""" if items is None: self._data = [] else: self._data = items self._lazy = lazy self._io = io def __getitem__(self, index): item = self._data.__getitem__(index) if isinstance(index, slice): return LazyList(self._io, item) if type(item) in self._neo_objects: return item loaded = self._io.load_lazy_cascade(item, self._lazy) self._data[index] = loaded return loaded def __delitem__(self, index): self._data.__delitem__(index) def __len__(self): return self._data.__len__() def __setitem__(self, index, value): self._data.__setitem__(index, value) def insert(self, index, value): self._data.insert(index, value) def append(self, value): self._data.append(value) def reverse(self): self._data.reverse() def extend(self, values): self._data.extend(values) def remove(self, value): self._data.remove(value) def __str__(self): return '<' + self.__class__.__name__ + '>' + self._data.__str__() def __repr__(self): return '<' + self.__class__.__name__ + '>' + self._data.__repr__() neo-0.10.0/neo/io/winedrio.py0000644000076700000240000000077214066375716016365 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.winedrrawio import WinEdrRawIO class WinEdrIO(WinEdrRawIO, BaseFromRaw): """ Class for reading data from WinEdr, a software tool written by John Dempster. WinEdr is free: http://spider.science.strath.ac.uk/sipbs/software.htm """ _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): WinEdrRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/io/winwcpio.py0000644000076700000240000000077214066375716016404 0ustar andrewstaff00000000000000from neo.io.basefromrawio import BaseFromRaw from neo.rawio.winwcprawio import WinWcpRawIO class WinWcpIO(WinWcpRawIO, BaseFromRaw): """ Class for reading data from WinWCP, a software tool written by John Dempster. 
WinWCP is free: http://spider.science.strath.ac.uk/sipbs/software.htm """ _prefered_signal_group_mode = 'group-by-same-units' def __init__(self, filename): WinWcpRawIO.__init__(self, filename=filename) BaseFromRaw.__init__(self, filename) neo-0.10.0/neo/rawio/0000755000076700000240000000000014077757142014676 5ustar andrewstaff00000000000000neo-0.10.0/neo/rawio/__init__.py0000644000076700000240000001214714077527451017012 0ustar andrewstaff00000000000000""" :mod:`neo.rawio` provides classes for reading with low level API electrophysiological data files. :attr:`neo.rawio.rawiolist` provides a list of successfully imported rawio classes. Functions: .. autofunction:: neo.rawio.get_rawio_class Classes: * :attr:`AxographRawIO` * :attr:`AxonaRawIO` * :attr:`AxonRawIO` * :attr:`BlackrockRawIO` * :attr:`BrainVisionRawIO` * :attr:`CedRawIO` * :attr:`ElanRawIO` * :attr:`IntanRawIO` * :attr:`MaxwellRawIO` * :attr:`MEArecRawIO` * :attr:`MicromedRawIO` * :attr:`NeuralynxRawIO` * :attr:`NeuroExplorerRawIO` * :attr:`NeuroScopeRawIO` * :attr:`NIXRawIO` * :attr:`OpenEphysRawIO` * :attr:`OpenEphysBinaryRawIO` * :attr:'PhyRawIO' * :attr:`PlexonRawIO` * :attr:`RawBinarySignalRawIO` * :attr:`RawMCSRawIO` * :attr:`Spike2RawIO` * :attr:`SpikeGadgetsRawIO` * :attr:`SpikeGLXRawIO` * :attr:`TdtRawIO` * :attr:`WinEdrRawIO` * :attr:`WinWcpRawIO` .. autoclass:: neo.rawio.AxographRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.AxonaRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.AxonRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.BlackrockRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.BrainVisionRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.CedRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.ElanRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.IntanRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.MaxwellRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.MEArecRawIO .. 
autoattribute:: extensions .. autoclass:: neo.rawio.MicromedRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.NeuralynxRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.NeuroExplorerRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.NeuroScopeRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.NIXRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.OpenEphysRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.OpenEphysBinaryRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.PhyRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.PlexonRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.RawBinarySignalRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.RawMCSRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.Spike2RawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.SpikeGadgetsRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.SpikeGLXRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.TdtRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.WinEdrRawIO .. autoattribute:: extensions .. autoclass:: neo.rawio.WinWcpRawIO .. 
autoattribute:: extensions """ import os from neo.rawio.axographrawio import AxographRawIO from neo.rawio.axonarawio import AxonaRawIO from neo.rawio.axonrawio import AxonRawIO from neo.rawio.blackrockrawio import BlackrockRawIO from neo.rawio.brainvisionrawio import BrainVisionRawIO from neo.rawio.cedrawio import CedRawIO from neo.rawio.elanrawio import ElanRawIO from neo.rawio.examplerawio import ExampleRawIO from neo.rawio.intanrawio import IntanRawIO from neo.rawio.maxwellrawio import MaxwellRawIO from neo.rawio.mearecrawio import MEArecRawIO from neo.rawio.micromedrawio import MicromedRawIO from neo.rawio.neuralynxrawio import NeuralynxRawIO from neo.rawio.neuroexplorerrawio import NeuroExplorerRawIO from neo.rawio.neuroscoperawio import NeuroScopeRawIO from neo.rawio.nixrawio import NIXRawIO from neo.rawio.openephysrawio import OpenEphysRawIO from neo.rawio.openephysbinaryrawio import OpenEphysBinaryRawIO from neo.rawio.phyrawio import PhyRawIO from neo.rawio.plexonrawio import PlexonRawIO from neo.rawio.rawbinarysignalrawio import RawBinarySignalRawIO from neo.rawio.rawmcsrawio import RawMCSRawIO from neo.rawio.spike2rawio import Spike2RawIO from neo.rawio.spikegadgetsrawio import SpikeGadgetsRawIO from neo.rawio.spikeglxrawio import SpikeGLXRawIO from neo.rawio.tdtrawio import TdtRawIO from neo.rawio.winedrrawio import WinEdrRawIO from neo.rawio.winwcprawio import WinWcpRawIO rawiolist = [ AxographRawIO, AxonaRawIO, AxonRawIO, BlackrockRawIO, BrainVisionRawIO, CedRawIO, ElanRawIO, IntanRawIO, MicromedRawIO, MaxwellRawIO, MEArecRawIO, NeuralynxRawIO, NeuroExplorerRawIO, NeuroScopeRawIO, NIXRawIO, OpenEphysRawIO, OpenEphysBinaryRawIO, PhyRawIO, PlexonRawIO, RawBinarySignalRawIO, RawMCSRawIO, Spike2RawIO, SpikeGadgetsRawIO, SpikeGLXRawIO, TdtRawIO, WinEdrRawIO, WinWcpRawIO, ] def get_rawio_class(filename_or_dirname): """ Return a neo.rawio class guessed from the file extension.
""" _, ext = os.path.splitext(filename_or_dirname) ext = ext[1:] possibles = [] for rawio in rawiolist: if any(ext.lower() == ext2.lower() for ext2 in rawio.extensions): possibles.append(rawio) if len(possibles) == 1: return possibles[0] else: return None neo-0.10.0/neo/rawio/axographrawio.py0000644000076700000240000017453514066374330020132 0ustar andrewstaff00000000000000""" AxographRawIO ============= RawIO class for reading AxoGraph files (.axgd, .axgx) Original author: Jeffrey Gill Documentation of the AxoGraph file format provided by the developer is incomplete and in some cases incorrect. The primary sources of official documentation are found in two out-of-date documents: - AxoGraph X User Manual, provided with AxoGraph and also available online: https://axograph.com/documentation/AxoGraph%20User%20Manual.pdf - AxoGraph_ReadWrite.h, a header file that is part of a C++ program provided with AxoGraph, and which is also available here: https://github.com/CWRUChielLab/axographio/blob/master/ axographio/include/axograph_readwrite/AxoGraph_ReadWrite.h These were helpful starting points for building this RawIO, especially for reading the beginnings of AxoGraph files, but much of the rest of the file format was deciphered by reverse engineering and guess work. Some portions of the file format remain undeciphered. The AxoGraph file format is versatile in that it can represent both time series data collected during data acquisition and non-time series data generated through manual or automated analysis, such as power spectrum analysis. This implementation of an AxoGraph file format reader makes no effort to decoding non-time series data. For simplicity, it makes several assumptions that should be valid for any file generated directly from episodic or continuous data acquisition without significant post-acquisition modification by the user. 
Detailed logging is provided during header parsing for debugging purposes: >>> import logging >>> r = AxographRawIO(filename) >>> r.logger.setLevel(logging.DEBUG) >>> r.parse_header() Background and Terminology -------------------------- Acquisition modes: AxoGraph can operate in two main data acquisition modes: - Episodic "protocol-driven" acquisition mode, in which the program records from specified signal channels for a fixed duration each time a trigger is detected. Each trigger-activated recording is called an "episode". From files acquired in this mode, AxographRawIO creates multiple Neo Segments, one for each episode, unless force_single_segment=True. - Continuous "chart recorder" acquisition mode, in which it creates a continuous recording that can be paused and continued by the user whenever they like. From files acquired in this mode, AxographRawIO creates a single Neo Segment. "Episode": analogous to a Neo Segment See descriptions of acquisition modes above and of groups below. "Column": analogous to a Quantity array A column is a 1-dimensional array of data, stored in any one of a number of data types (e.g., scaled ints or floats). In the oldest version of the AxoGraph file format, even time was stored as a 1-dimensional array. In newer versions, time is stored as a special type of "column" that is really just a starting time and a sampling period. Column data appears in series in the file, i.e., all of the first column's data appears before the second column's. As an aside, because of this design choice AxoGraph cannot write data to disk as it is collected but must store it all in memory until data acquisition ends. This also affected how file slicing was implemented for this RawIO: Instead of using a single memmap to address into a 2-dimensional block of data, AxographRawIO constructs multiple 1-dimensional memmaps, one for each column, each with its own offset.
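The per-column memmap design described above can be sketched in a few lines of NumPy. This is a hedged, self-contained illustration, not neo's code: the helper name `column_memmaps` and the `(offset, dtype, n_points)` column descriptions are invented for this example; in the real RawIO the offsets come from walking the file header.

```python
import numpy as np

def column_memmaps(filename, columns):
    """Hypothetical sketch: one 1-D memmap per data column.

    columns: list of (byte_offset, dtype_string, n_points) tuples, one
    per column, mirroring how AxographRawIO addresses a file in which
    each column's data is stored contiguously at its own offset.
    """
    return [np.memmap(filename, mode='r', dtype=dt, offset=off, shape=(n,))
            for off, dt, n in columns]
```

Because each column gets its own memmap, slicing one channel touches only that channel's bytes on disk, which is the point of the design choice discussed above.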
Each column's data array is preceded by a header containing the column title, which normally contains the units (e.g., "Current (nA)"). Data recorded in episodic acquisition mode will contain a repeating sequence of column names, where each repetition corresponds to an episode (e.g., "Time", "Column A", "Column B", "Column A", "Column B", etc.). AxoGraph offers a spreadsheet view for viewing all column data. "Trace": analogous to a single-channel Neo AnalogSignal A trace is a 2-dimensional series. Raw data is not stored in the part of the file concerned with traces. Instead, in the header for each trace are indexes pointing to two data columns, defined earlier in the file, corresponding to the trace's x and y data. These indexes can be changed in AxoGraph under the "Assign X and Y Columns" tool, though doing so may violate assumptions made by AxographRawIO. For time series data collected under the usual data acquisition modes that has not been modified after collection by the user, the x-index always points to the time column; one trace exists for each non-time column, with the y-index pointing to that column. Traces are analogous to AnalogSignals in Neo. However, for simplicity of implementation, AxographRawIO does not actually check the pairing of columns in the trace headers. Instead it assumes the default pairing described above when it creates signal channels while scanning through columns. Older versions of the AxoGraph file format lack trace headers entirely, so this is the most general solution. Trace headers contain additional information about the series, such as plot style, which is parsed by AxographRawIO and made available in self.info['trace_header_info_list'] but is otherwise unused. "Group": analogous to a Neo ChannelIndex for matching channels across Segments A group is a collection of one or more traces. Like traces, raw data is not stored in the part of the file concerned with groups. 
Instead, each trace header contains an index pointing to the group it is assigned to. Group assignment of traces can be changed in AxoGraph under the "Group Traces" tool, or by using the "Merge Traces" or "Separate Traces" commands, though doing so may violate assumptions made by AxographRawIO. Files created in episodic acquisition mode contain multiple traces per group, one for each episode. In that mode, a group corresponds to a signal channel and is analogous to a ChannelIndex in Neo; the traces within the group represent the time series recorded for that channel across episodes and are analogous to AnalogSignals from multiple Segments in Neo. In contrast, files created in continuous acquisition mode contain one trace per group, each corresponding to a signal channel. In that mode, groups and traces are basically conceptually synonymous, though the former can still be thought of as analogous to ChannelIndexes in Neo for a single-Segment file. Group headers are only consulted by AxographRawIO to determine if it is safe to interpret a file as episodic and therefore translatable to multiple Segments in Neo. Certain criteria have to be met, such as all groups containing equal numbers of traces and each group having homogeneous signal parameters. If trace grouping was modified by the user after data acquisition, this may result in the file being interpreted as non-episodic. Older versions of the AxoGraph file format lack group headers entirely, so these files are never deemed safe to interpret as episodic, even if the column names follow a repeating sequence as described above. "Tag" / "Event marker": analogous to a Neo Event In continuous acquisition mode, the user can press a hot key to tag a moment in time with a short label. Additionally, if the user stops or restarts data acquisition in this mode, a tag is created automatically with the label "Stop" or "Start", respectively. These are displayed by AxoGraph as event markers.
AxographRawIO will organize all event markers into a single Neo Event channel with the name "AxoGraph Tags". In episodic acquisition mode, the tag hot key behaves differently. The current episode number is recorded in a user-editable notes section of the file, made available by AxographRawIO in self.info['notes']. Because these do not correspond to moments in time, they are not processed into Neo Events. "Interval bar": analogous to a Neo Epoch After data acquisition, the user can annotate an AxoGraph file with horizontal, labeled bars called interval bars that span a specified period of time. These are not episode specific. AxographRawIO will organize all interval bars into a single Neo Epoch channel with the name "AxoGraph Intervals". """ from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype, _spike_channel_dtype, _event_channel_dtype) import os from datetime import datetime from io import open, BufferedReader from struct import unpack, calcsize import numpy as np class AxographRawIO(BaseRawIO): """ RawIO class for reading AxoGraph files (.axgd, .axgx) Args: filename (string): File name of the AxoGraph file to read. force_single_segment (bool): Episodic files are normally read as multi-Segment Neo objects. This parameter can force AxographRawIO to put all signals into a single Segment. Default: False. Example: >>> import neo >>> r = neo.rawio.AxographRawIO(filename=filename) >>> r.parse_header() >>> print(r) >>> # get signals >>> raw_chunk = r.get_analogsignal_chunk( ... block_index=0, seg_index=0, ... i_start=0, i_stop=1024, ... channel_names=channel_names) >>> float_chunk = r.rescale_signal_raw_to_float( ... raw_chunk, ... dtype='float64', ... channel_names=channel_names) >>> print(float_chunk) >>> # get event markers >>> ev_raw_times, _, ev_labels = r.get_event_timestamps( ... event_channel_index=0) >>> ev_times = r.rescale_event_timestamp( ... 
ev_raw_times, dtype='float64') >>> print([ev for ev in zip(ev_times, ev_labels)]) >>> # get interval bars >>> ep_raw_times, ep_raw_durations, ep_labels = r.get_event_timestamps( ... event_channel_index=1) >>> ep_times = r.rescale_event_timestamp( ... ep_raw_times, dtype='float64') >>> ep_durations = r.rescale_epoch_duration( ... ep_raw_durations, dtype='float64') >>> print([ep for ep in zip(ep_times, ep_durations, ep_labels)]) >>> # get notes >>> print(r.info['notes']) >>> # get other miscellaneous info >>> print(r.info) """ name = 'AxographRawIO' description = 'This IO reads .axgd/.axgx files created with AxoGraph' extensions = ['axgd', 'axgx'] rawmode = 'one-file' def __init__(self, filename, force_single_segment=False): BaseRawIO.__init__(self) self.filename = filename self.force_single_segment = force_single_segment def _parse_header(self): self.header = {} self._scan_axograph_file() if not self.force_single_segment and self._safe_to_treat_as_episodic(): self.logger.debug('Will treat as episodic') self._convert_to_multi_segment() else: self.logger.debug('Will not treat as episodic') self.logger.debug('') self._generate_minimal_annotations() blk_annotations = self.raw_annotations['blocks'][0] blk_annotations['format_ver'] = self.info['format_ver'] blk_annotations['comment'] = self.info['comment'] if 'comment' in self.info else None blk_annotations['notes'] = self.info['notes'] if 'notes' in self.info else None blk_annotations['rec_datetime'] = self._get_rec_datetime() # modified time is not ideal but less prone to # cross-platform issues than created time (ctime) blk_annotations['file_datetime'] = datetime.fromtimestamp( os.path.getmtime(self.filename)) def _source_name(self): return self.filename def _segment_t_start(self, block_index, seg_index): # same for all segments return self._t_start def _segment_t_stop(self, block_index, seg_index): # same for all signals in all segments t_stop = self._t_start + \ len(self._raw_signals[seg_index][0]) * 
self._sampling_period return t_stop ### # signal and channel zone def _get_signal_size(self, block_index, seg_index, stream_index): # same for all signals in all segments return len(self._raw_signals[seg_index][0]) def _get_signal_t_start(self, block_index, seg_index, stream_index): # same for all signals in all segments return self._t_start def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): if channel_indexes is None or \ np.all(channel_indexes == slice(None, None, None)): channel_indexes = range(self.signal_channels_count(stream_index)) raw_signals = [self._raw_signals [seg_index] [channel_index] [slice(i_start, i_stop)] for channel_index in channel_indexes] raw_signals = np.array(raw_signals).T # loads data into memory return raw_signals ### # spiketrain and unit zone def _spike_count(self, block_index, seg_index, unit_index): # not supported return None def _get_spike_timestamps(self, block_index, seg_index, unit_index, t_start, t_stop): # not supported return None def _rescale_spike_timestamp(self, spike_timestamps, dtype): # not supported return None ### # spike waveforms zone def _get_spike_raw_waveforms(self, block_index, seg_index, unit_index, t_start, t_stop): # not supported return None ### # event and epoch zone def _event_count(self, block_index, seg_index, event_channel_index): # Retrieve size of either event or epoch channel: # event_channel_index: 0 AxoGraph Tags, 1 AxoGraph Intervals # AxoGraph tags can only be inserted in continuous data acquisition # mode. When the tag hot key is pressed in episodic acquisition mode, # the notes are updated with the current episode number instead of an # instantaneous event marker being created. This means that Neo-like # Events cannot be generated by AxoGraph for multi-Segment (episodic) # files. Furthermore, Neo-like Epochs (interval markers) are not # episode specific. For these reasons, this function ignores seg_index. 
return self._raw_event_epoch_timestamps[event_channel_index].size def _get_event_timestamps(self, block_index, seg_index, event_channel_index, t_start, t_stop): # Retrieve either event or epoch data, unscaled: # event_channel_index: 0 AxoGraph Tags, 1 AxoGraph Intervals # AxoGraph tags can only be inserted in continuous data acquisition # mode. When the tag hot key is pressed in episodic acquisition mode, # the notes are updated with the current episode number instead of an # instantaneous event marker being created. This means that Neo-like # Events cannot be generated by AxoGraph for multi-Segment (episodic) # files. Furthermore, Neo-like Epochs (interval markers) are not # episode specific. For these reasons, this function ignores seg_index. timestamps = self._raw_event_epoch_timestamps[event_channel_index] durations = self._raw_event_epoch_durations[event_channel_index] labels = self._event_epoch_labels[event_channel_index] if durations is None: # events if t_start is not None: # keep if event occurs after t_start ... keep = timestamps >= int(t_start / self._sampling_period) timestamps = timestamps[keep] labels = labels[keep] if t_stop is not None: # ... and before t_stop keep = timestamps <= int(t_stop / self._sampling_period) timestamps = timestamps[keep] labels = labels[keep] else: # epochs if t_start is not None: # keep if epoch ends after t_start ... keep = timestamps + durations >= \ int(t_start / self._sampling_period) timestamps = timestamps[keep] durations = durations[keep] labels = labels[keep] if t_stop is not None: # ... 
and starts before t_stop keep = timestamps <= int(t_stop / self._sampling_period) timestamps = timestamps[keep] durations = durations[keep] labels = labels[keep] return timestamps, durations, labels def _rescale_event_timestamp(self, event_timestamps, dtype, event_channel_index): # Scale either event or epoch start times from sample index to seconds # (t_start shouldn't be added) event_times = event_timestamps.astype(dtype) * self._sampling_period return event_times def _rescale_epoch_duration(self, raw_duration, dtype, event_channel_index): # Scale epoch durations from samples to seconds epoch_durations = raw_duration.astype(dtype) * self._sampling_period return epoch_durations ### # multi-segment zone def _safe_to_treat_as_episodic(self): """ The purpose of this function is to determine if the file contains any irregularities in its grouping of traces such that it cannot be treated as episodic. Even "continuous" recordings can be treated as single-episode recordings and could be identified as safe by this function. Recordings in which the user has changed groupings to create irregularities should be caught by this function. """ # First check: Old AxoGraph file formats do not contain enough metadata # to know for certain that the file is episodic. if self.info['format_ver'] < 3: self.logger.debug('Cannot treat as episodic because old format ' 'contains insufficient metadata') return False # Second check: If the file is episodic, it should report that it # contains more than 1 episode. if 'n_episodes' not in self.info: self.logger.debug('Cannot treat as episodic because episode ' 'metadata is missing or could not be parsed') return False if self.info['n_episodes'] == 1: self.logger.debug('Cannot treat as episodic because file reports ' 'one episode') return False # Third check: If the file is episodic, groups of traces should all # contain the same number of traces, one for each episode.
This is # generally true of "continuous" (single-episode) recordings as well, # which normally have 1 trace per group. if 'group_header_info_list' not in self.info: self.logger.debug('Cannot treat as episodic because group ' 'metadata is missing or could not be parsed') return False if 'trace_header_info_list' not in self.info: self.logger.debug('Cannot treat as episodic because trace ' 'metadata is missing or could not be parsed') return False group_id_to_col_indexes = {} for group_id in self.info['group_header_info_list']: col_indexes = [] for trace_header in self.info['trace_header_info_list'].values(): if trace_header['group_id_for_this_trace'] == group_id: col_indexes.append(trace_header['y_index']) group_id_to_col_indexes[group_id] = col_indexes n_traces_by_group = {k: len(v) for k, v in group_id_to_col_indexes.items()} all_groups_have_same_number_of_traces = len(np.unique(list( n_traces_by_group.values()))) == 1 if not all_groups_have_same_number_of_traces: self.logger.debug('Cannot treat as episodic because groups differ ' 'in number of traces') return False # Fourth check: The number of traces in each group should equal # n_episodes. n_traces_per_group = np.unique(list(n_traces_by_group.values())) if n_traces_per_group != self.info['n_episodes']: self.logger.debug('Cannot treat as episodic because n_episodes ' 'does not match number of traces per group') return False # Fifth check: If the file is episodic, all traces within a group # should have identical signal channel parameters (e.g., name, units) # except for their unique ids. This too is generally true of # "continuous" (single-episode) files, which normally have 1 trace per # group. 
signal_channels_with_ids_dropped = \ self.header['signal_channels'][ [n for n in self.header['signal_channels'].dtype.names if n != 'id']] group_has_uniform_signal_parameters = {} for group_id, col_indexes in group_id_to_col_indexes.items(): # subtract 1 from indexes in next statement because time is not # included in signal_channels signal_params_for_group = np.array( signal_channels_with_ids_dropped[np.array(col_indexes) - 1]) group_has_uniform_signal_parameters[group_id] = \ len(np.unique(signal_params_for_group)) == 1 all_groups_have_uniform_signal_parameters = \ np.all(list(group_has_uniform_signal_parameters.values())) if not all_groups_have_uniform_signal_parameters: self.logger.debug('Cannot treat as episodic because some groups ' 'have heterogeneous signal parameters') return False # all checks passed self.logger.debug('Can treat as episodic') return True def _convert_to_multi_segment(self): """ Reshape signal headers and signal data for an episodic file """ self.header['nb_segment'] = [self.info['n_episodes']] # drop repeated signal headers self.header['signal_channels'] = \ self.header['signal_channels'].reshape( self.info['n_episodes'], -1)[0] # reshape signal memmap list new_sig_memmaps = [] n_channels = len(self.header['signal_channels']) sig_memmaps = self._raw_signals[0] for first_index in np.arange(0, len(sig_memmaps), n_channels): new_sig_memmaps.append( sig_memmaps[first_index:first_index + n_channels]) self._raw_signals = new_sig_memmaps self.logger.debug('New number of segments: {}'.format( self.info['n_episodes'])) return def _get_rec_datetime(self): """ Determine the date and time at which the recording was started from automatically generated notes. How these notes should be parsed differs depending on whether the recording was obtained in episodic or continuous acquisition mode. 
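The two note formats described above can be illustrated concretely. This is a hedged, standalone sketch (the note text below is invented for the example; real notes come from the file) showing how each acquisition mode's notes map onto the strptime pattern this method uses:

```python
from datetime import datetime

# Episodic acquisition mode: the date and time appear on two separate,
# hypothetical note lines; the prefixes are sliced off before parsing.
date_string = 'Created on Thu Jan 15 2015'[len('Created on '):]
time_string = 'Start data acquisition at 10:12:33'[len('Start data acquisition at '):]
episodic_dt = datetime.strptime(' '.join([date_string, time_string]),
                                '%a %b %d %Y %H:%M:%S')

# Continuous acquisition mode: a single hypothetical note line holds both
# the date and the time.
continuous_dt = datetime.strptime(
    'Created : Thu Jan 15 2015 10:12:33'[len('Created : '):],
    '%a %b %d %Y %H:%M:%S')
```

Both paths converge on the same `'%a %b %d %Y %H:%M:%S'` format string, which is why the method can build one `datetime_string` regardless of acquisition mode.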
""" rec_datetime = None date_string = '' time_string = '' datetime_string = '' if 'notes' not in self.info: return None for note_line in self.info['notes'].split('\n'): # episodic acquisition mode if note_line.startswith('Created on '): date_string = note_line.strip('Created on ') if note_line.startswith('Start data acquisition at '): time_string = note_line.strip('Start data acquisition at ') # continuous acquisition mode if note_line.startswith('Created : '): datetime_string = note_line.strip('Created : ') if date_string and time_string: datetime_string = ' '.join([date_string, time_string]) if datetime_string: try: rec_datetime = datetime.strptime(datetime_string, '%a %b %d %Y %H:%M:%S') except ValueError: pass return rec_datetime def _scan_axograph_file(self): """ This function traverses the entire AxoGraph file, constructing memmaps for signals and collecting channel information and other metadata """ self.info = {} with open(self.filename, 'rb') as fid: f = StructFile(fid) self.logger.debug('filename: {}'.format(self.filename)) self.logger.debug('') # the first 4 bytes are always a 4-character file type identifier # - for early versions of AxoGraph, this identifier was 'AxGr' # - starting with AxoGraph X, the identifier is 'axgx' header_id = f.read(4).decode('utf-8') self.info['header_id'] = header_id assert header_id in ['AxGr', 'axgx'], \ 'not an AxoGraph binary file! 
"{}"'.format(self.filename) self.logger.debug('header_id: {}'.format(header_id)) # the next two numbers store the format version number and the # number of data columns to follow # - for 'AxGr' files, these numbers are 2-byte unsigned short ints # - for 'axgx' files, these numbers are 4-byte long ints # - the 4-character identifier changed from 'AxGr' to 'axgx' with # format version 3 if header_id == 'AxGr': format_ver, n_cols = f.read_f('HH') assert format_ver == 1 or format_ver == 2, \ 'mismatch between header identifier "{}" and format ' \ 'version "{}"!'.format(header_id, format_ver) elif header_id == 'axgx': format_ver, n_cols = f.read_f('ll') assert format_ver >= 3, \ 'mismatch between header identifier "{}" and format ' \ 'version "{}"!'.format(header_id, format_ver) else: raise NotImplementedError( 'unimplemented file header identifier "{}"!'.format( header_id)) self.info['format_ver'] = format_ver self.info['n_cols'] = n_cols self.logger.debug('format_ver: {}'.format(format_ver)) self.logger.debug('n_cols: {}'.format(n_cols)) self.logger.debug('') ############################################## # BEGIN COLUMNS sig_memmaps = [] sig_channels = [] for i in range(n_cols): self.logger.debug('== COLUMN INDEX {} =='.format(i)) ############################################## # NUMBER OF DATA POINTS IN COLUMN n_points = f.read_f('l') self.logger.debug('n_points: {}'.format(n_points)) ############################################## # COLUMN TYPE # depending on the format version, data columns may have a type # - prior to verion 3, column types did not exist and data was # stored in a fixed pattern # - beginning with version 3, several data types are available # as documented in AxoGraph_ReadWrite.h if format_ver == 1 or format_ver == 2: col_type = None elif format_ver >= 3: col_type = f.read_f('l') else: raise NotImplementedError( 'unimplemented file format version "{}"!'.format( format_ver)) self.logger.debug('col_type: {}'.format(col_type)) 
############################################## # COLUMN NAME AND UNITS # depending on the format version, column titles are stored # differently # - prior to version 3, column titles were stored as # fixed-length 80-byte Pascal strings # - beginning with version 3, column titles are stored as # variable-length strings (see StructFile.read_string for # details) if format_ver == 1 or format_ver == 2: title = f.read_f('80p').decode('utf-8') elif format_ver >= 3: title = f.read_f('S') else: raise NotImplementedError( 'unimplemented file format version "{}"!'.format( format_ver)) self.logger.debug('title: {}'.format(title)) # units are given in parentheses at the end of a column title, # unless units are absent if len(title.split()) > 0 and title.split()[-1][0] == '(' and \ title.split()[-1][-1] == ')': name = ' '.join(title.split()[:-1]) units = title.split()[-1].strip('()') else: name = title units = '' self.logger.debug('name: {}'.format(name)) self.logger.debug('units: {}'.format(units)) ############################################## # COLUMN DTYPE, SCALE, OFFSET if format_ver == 1: # for format version 1, all columns are arrays of floats dtype = 'f' gain, offset = 1, 0 # data is neither scaled nor off-set elif format_ver == 2: # for format version 2, the first column is a "series" of # regularly spaced values specified merely by a first value # and an increment, and all subsequent columns are arrays # of shorts with a scaling factor if i == 0: # series first_value, increment = f.read_f('ff') self.logger.debug( 'interval: {}, freq: {}'.format( increment, 1 / increment)) self.logger.debug( 'start: {}, end: {}'.format( first_value, first_value + increment * (n_points - 1))) # assume this is the time column t_start, sampling_period = first_value, increment self.info['t_start'] = t_start self.info['sampling_period'] = sampling_period self.logger.debug('') continue # skip memmap, chan info for time col else: # scaled short dtype = 'h' gain, offset = \ f.read_f('f'), 0 # 
data is scaled without offset elif format_ver >= 3: # for format versions 3 and later, the column type # determines how the data should be read # - column types 1, 2, 3, and 8 are not defined in # AxoGraph_ReadWrite.h # - column type 9 is different from the others in that it # represents regularly spaced values # (such as times at a fixed frequency) specified by a # first value and an increment, without storing a large # data array if col_type == 9: # series first_value, increment = f.read_f('dd') self.logger.debug( 'interval: {}, freq: {}'.format( increment, 1 / increment)) self.logger.debug( 'start: {}, end: {}'.format( first_value, first_value + increment * (n_points - 1))) if i == 0: # assume this is the time column t_start, sampling_period = first_value, increment self.info['t_start'] = t_start self.info['sampling_period'] = sampling_period self.logger.debug('') continue # skip memmap, chan info for time col else: raise NotImplementedError( 'series data are supported only for the first ' 'data column (time)!') elif col_type == 4: # short dtype = 'h' gain, offset = 1, 0 # data neither scaled nor off-set elif col_type == 5: # long dtype = 'l' gain, offset = 1, 0 # data neither scaled nor off-set elif col_type == 6: # float dtype = 'f' gain, offset = 1, 0 # data neither scaled nor off-set elif col_type == 7: # double dtype = 'd' gain, offset = 1, 0 # data neither scaled nor off-set elif col_type == 10: # scaled short dtype = 'h' gain, offset = f.read_f('dd') # data scaled w/ offset else: raise NotImplementedError( 'unimplemented column type "{}"!'.format(col_type)) else: raise NotImplementedError( 'unimplemented file format version "{}"!'.format( format_ver)) ############################################## # COLUMN MEMMAP AND CHANNEL INFO # create a memory map that allows accessing parts of the file # without loading it all into memory array = np.memmap( self.filename, mode='r', dtype=f.byte_order + dtype, offset=f.tell(), shape=n_points) # advance the file 
position to after the data array f.seek(array.nbytes, 1) if i == 0: # assume this is the time column containing n_points values # verify times are spaced regularly diffs = np.diff(array) increment = np.median(diffs) max_frac_step_deviation = np.max(np.abs( diffs / increment - 1)) tolerance = 1e-3 if max_frac_step_deviation > tolerance: self.logger.debug('largest proportional deviation ' 'from median step size in the first ' 'column exceeds the tolerance ' 'of ' + str(tolerance) + ':' ' ' + str(max_frac_step_deviation)) raise ValueError('first data column (assumed to be ' 'time) is not regularly spaced') first_value = array[0] self.logger.debug( 'interval: {}, freq: {}'.format( increment, 1 / increment)) self.logger.debug( 'start: {}, end: {}'.format( first_value, first_value + increment * (n_points - 1))) t_start, sampling_period = first_value, increment self.info['t_start'] = t_start self.info['sampling_period'] = sampling_period self.logger.debug('') continue # skip saving memmap, chan info for time col else: # not a time column self.logger.debug('gain: {}, offset: {}'.format(gain, offset)) self.logger.debug('initial data: {}'.format( array[:5] * gain + offset)) # channel_info will be cast to _signal_channel_dtype channel_info = ( name, str(i), 1 / sampling_period, f.byte_order + dtype, units, gain, offset, '0') self.logger.debug('channel_info: {}'.format(channel_info)) self.logger.debug('') sig_memmaps.append(array) sig_channels.append(channel_info) # END COLUMNS ############################################## # initialize lists for events and epochs raw_event_timestamps = [] raw_epoch_timestamps = [] raw_epoch_durations = [] event_labels = [] epoch_labels = [] # the remainder of the file may contain metadata, events and epochs try: ############################################## # COMMENT self.logger.debug('== COMMENT ==') comment = f.read_f('S') self.info['comment'] = comment self.logger.debug(comment if comment else 'no comment!') self.logger.debug('') 
############################################## # NOTES self.logger.debug('== NOTES ==') notes = f.read_f('S') self.info['notes'] = notes self.logger.debug(notes if notes else 'no notes!') self.logger.debug('') ############################################## # TRACES self.logger.debug('== TRACES ==') n_traces = f.read_f('l') self.info['n_traces'] = n_traces self.logger.debug('n_traces: {}'.format(n_traces)) self.logger.debug('') trace_header_info_list = {} group_ids = [] for i in range(n_traces): # AxoGraph traces are 1-indexed in GUI, so use i+1 below self.logger.debug('== TRACE #{} =='.format(i + 1)) trace_header_info = {} if format_ver < 6: # before format version 6, there was only one version # of the header, and version numbers were not provided trace_header_info['trace_header_version'] = 1 else: # for format versions 6 and later, the header version # must be read trace_header_info['trace_header_version'] = \ f.read_f('l') if trace_header_info['trace_header_version'] == 1: TraceHeaderDescription = TraceHeaderDescriptionV1 elif trace_header_info['trace_header_version'] == 2: TraceHeaderDescription = TraceHeaderDescriptionV2 else: raise NotImplementedError( 'unimplemented trace header version "{}"!'.format( trace_header_info['trace_header_version'])) for key, fmt in TraceHeaderDescription: trace_header_info[key] = f.read_f(fmt) # AxoGraph traces are 1-indexed in GUI, so use i+1 below trace_header_info_list[i + 1] = trace_header_info group_ids.append( trace_header_info['group_id_for_this_trace']) self.logger.debug(trace_header_info) self.logger.debug('') self.info['trace_header_info_list'] = trace_header_info_list ############################################## # GROUPS self.logger.debug('== GROUPS ==') n_groups = f.read_f('l') self.info['n_groups'] = n_groups group_ids = \ np.sort(list(set(group_ids))) # remove duplicates and sort assert n_groups == len(group_ids), \ 'expected group_ids to have length {}: {}'.format( n_groups, group_ids) 
self.logger.debug('n_groups: {}'.format(n_groups)) self.logger.debug('group_ids: {}'.format(group_ids)) self.logger.debug('') group_header_info_list = {} for i in group_ids: # AxoGraph groups are 0-indexed in GUI, so use i below self.logger.debug('== GROUP #{} =='.format(i)) group_header_info = {} if format_ver < 6: # before format version 6, there was only one version # of the header, and version numbers were not provided group_header_info['group_header_version'] = 1 else: # for format versions 6 and later, the header version # must be read group_header_info['group_header_version'] = \ f.read_f('l') if group_header_info['group_header_version'] == 1: GroupHeaderDescription = GroupHeaderDescriptionV1 else: raise NotImplementedError( 'unimplemented group header version "{}"!'.format( group_header_info['group_header_version'])) for key, fmt in GroupHeaderDescription: group_header_info[key] = f.read_f(fmt) # AxoGraph groups are 0-indexed in GUI, so use i below group_header_info_list[i] = group_header_info self.logger.debug(group_header_info) self.logger.debug('') self.info['group_header_info_list'] = group_header_info_list ############################################## # UNKNOWN self.logger.debug('>> UNKNOWN 1 <<') # 36 bytes of undeciphered data (types here are guesses) unknowns = f.read_f('9l') self.logger.debug(unknowns) self.logger.debug('') ############################################## # EPISODES self.logger.debug('== EPISODES ==') # a subset of episodes can be selected for "review", or # episodes can be paged through one by one, and the indexes of # those currently in review appear in this list episodes_in_review = [] n_episodes = f.read_f('l') self.info['n_episodes'] = n_episodes for i in range(n_episodes): episode_bool = f.read_f('Z') if episode_bool: episodes_in_review.append(i + 1) self.info['episodes_in_review'] = episodes_in_review self.logger.debug('n_episodes: {}'.format(n_episodes)) self.logger.debug('episodes_in_review: {}'.format( episodes_in_review)) 
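Each episode flag is read with the reader's 'Z' format, i.e. a 4-byte big-endian long where 1 means True and 0 means False. A minimal standalone decoding of such a flag list might look like this (the helper name and byte buffer are invented for illustration):

```python
import io
from struct import unpack

def read_episode_flags(f, n_episodes):
    # each flag is a 4-byte big-endian long: 1 = True, 0 = False,
    # matching the 'Z' format used by the AxoGraph reader
    in_review = []
    for i in range(n_episodes):
        flag = unpack('>l', f.read(4))[0]
        if flag:
            in_review.append(i + 1)  # episodes are 1-indexed in the GUI
    return in_review

buf = io.BytesIO(b'\x00\x00\x00\x01' b'\x00\x00\x00\x00' b'\x00\x00\x00\x01')
episodes = read_episode_flags(buf, 3)  # episodes in review: [1, 3]
```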
if format_ver == 5: # the test file for version 5 contains this extra list of # episode indexes with unknown purpose old_unknown_episode_list = [] n_episodes2 = f.read_f('l') for i in range(n_episodes2): episode_bool = f.read_f('Z') if episode_bool: old_unknown_episode_list.append(i + 1) self.logger.debug('old_unknown_episode_list: {}'.format( old_unknown_episode_list)) if n_episodes2 != n_episodes: self.logger.debug( 'n_episodes2 ({}) and n_episodes ({}) ' 'differ!'.format(n_episodes2, n_episodes)) # another list of episode indexes with unknown purpose unknown_episode_list = [] n_episodes3 = f.read_f('l') for i in range(n_episodes3): episode_bool = f.read_f('Z') if episode_bool: unknown_episode_list.append(i + 1) self.logger.debug('unknown_episode_list: {}'.format( unknown_episode_list)) if n_episodes3 != n_episodes: self.logger.debug( 'n_episodes3 ({}) and n_episodes ({}) ' 'differ!'.format(n_episodes3, n_episodes)) # episodes can be masked to be removed from the pool of # reviewable episodes completely until unmasked, and the # indexes of those currently masked appear in this list masked_episodes = [] n_episodes4 = f.read_f('l') for i in range(n_episodes4): episode_bool = f.read_f('Z') if episode_bool: masked_episodes.append(i + 1) self.info['masked_episodes'] = masked_episodes self.logger.debug('masked_episodes: {}'.format( masked_episodes)) if n_episodes4 != n_episodes: self.logger.debug( 'n_episodes4 ({}) and n_episodes ({}) ' 'differ!'.format(n_episodes4, n_episodes)) self.logger.debug('') ############################################## # UNKNOWN self.logger.debug('>> UNKNOWN 2 <<') # 68 bytes of undeciphered data (types here are guesses) unknowns = f.read_f('d 9l d 4l') self.logger.debug(unknowns) self.logger.debug('') ############################################## # FONTS if format_ver >= 6: font_categories = ['axis titles', 'axis labels (ticks)', 'notes', 'graph title'] else: # would need an old version of AxoGraph to determine how it # used these settings 
font_categories = ['everything (?)'] font_settings_info_list = {} for i in font_categories: self.logger.debug('== FONT SETTINGS FOR {} =='.format(i)) font_settings_info = {} for key, fmt in FontSettingsDescription: font_settings_info[key] = f.read_f(fmt) # I don't know why two arbitrary values were selected to # represent this switch, but it seems they were # - setting1 could contain other undeciphered data as a # bitmask, like setting2 assert font_settings_info['setting1'] in \ [FONT_BOLD, FONT_NOT_BOLD], \ 'expected setting1 ({}) to have value FONT_BOLD ' \ '({}) or FONT_NOT_BOLD ({})'.format( font_settings_info['setting1'], FONT_BOLD, FONT_NOT_BOLD) # size is stored 10 times bigger than real value font_settings_info['size'] = \ font_settings_info['size'] / 10.0 font_settings_info['bold'] = \ bool(font_settings_info['setting1'] == FONT_BOLD) font_settings_info['italics'] = \ bool(font_settings_info['setting2'] & FONT_ITALICS) font_settings_info['underline'] = \ bool(font_settings_info['setting2'] & FONT_UNDERLINE) font_settings_info['strikeout'] = \ bool(font_settings_info['setting2'] & FONT_STRIKEOUT) font_settings_info_list[i] = font_settings_info self.logger.debug(font_settings_info) self.logger.debug('') self.info['font_settings_info_list'] = font_settings_info_list ############################################## # X-AXIS SETTINGS self.logger.debug('== X-AXIS SETTINGS ==') x_axis_settings_info = {} for key, fmt in XAxisSettingsDescription: x_axis_settings_info[key] = f.read_f(fmt) self.info['x_axis_settings_info'] = x_axis_settings_info self.logger.debug(x_axis_settings_info) self.logger.debug('') ############################################## # UNKNOWN self.logger.debug('>> UNKNOWN 3 <<') # 108 bytes of undeciphered data (types here are guesses) unknowns = f.read_f('8l 3d 13l') self.logger.debug(unknowns) self.logger.debug('') ############################################## # EVENTS / TAGS self.logger.debug('=== EVENTS / TAGS ===') n_events, n_events_again = 
f.read_f('ll') self.info['n_events'] = n_events self.logger.debug('n_events: {}'.format(n_events)) # event / tag timing is stored as an index into time raw_event_timestamps = [] event_labels = [] for i in range(n_events_again): event_index = f.read_f('l') raw_event_timestamps.append(event_index) n_events_yet_again = f.read_f('l') for i in range(n_events_yet_again): title = f.read_f('S') event_labels.append(title) event_list = [] for event_label, event_index in \ zip(event_labels, raw_event_timestamps): # t_start shouldn't be added here event_time = event_index * sampling_period event_list.append({ 'title': event_label, 'index': event_index, 'time': event_time}) self.info['event_list'] = event_list for event in event_list: self.logger.debug(event) self.logger.debug('') ############################################## # UNKNOWN self.logger.debug('>> UNKNOWN 4 <<') # 28 bytes of undeciphered data (types here are guesses) unknowns = f.read_f('7l') self.logger.debug(unknowns) self.logger.debug('') ############################################## # EPOCHS / INTERVAL BARS self.logger.debug('=== EPOCHS / INTERVAL BARS ===') n_epochs = f.read_f('l') self.info['n_epochs'] = n_epochs self.logger.debug('n_epochs: {}'.format(n_epochs)) epoch_list = [] for i in range(n_epochs): epoch_info = {} for key, fmt in EpochInfoDescription: epoch_info[key] = f.read_f(fmt) epoch_list.append(epoch_info) self.info['epoch_list'] = epoch_list # epoch / interval bar timing and duration are stored in # seconds, so here they are converted to (possibly non-integer) # indexes into time to fit into the procrustean beds of # _rescale_event_timestamp and _rescale_epoch_duration raw_epoch_timestamps = [] raw_epoch_durations = [] epoch_labels = [] for epoch in epoch_list: raw_epoch_timestamps.append( epoch['t_start'] / sampling_period) raw_epoch_durations.append( (epoch['t_stop'] - epoch['t_start']) / sampling_period) epoch_labels.append(epoch['title']) self.logger.debug(epoch) self.logger.debug('') 
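The seconds-to-index conversion for epochs is plain division by the sampling period; a toy example with hypothetical numbers (the 10 kHz rate and epoch times are made up):

```python
sampling_period = 0.0001  # 10 kHz, for illustration only
epoch_list = [{'title': 'stim', 't_start': 0.25, 't_stop': 0.75}]

raw_epoch_timestamps = []
raw_epoch_durations = []
epoch_labels = []
for epoch in epoch_list:
    # convert seconds to (possibly non-integer) indexes into time
    raw_epoch_timestamps.append(epoch['t_start'] / sampling_period)
    raw_epoch_durations.append(
        (epoch['t_stop'] - epoch['t_start']) / sampling_period)
    epoch_labels.append(epoch['title'])
```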
############################################## # UNKNOWN self.logger.debug( '>> UNKNOWN 5 (includes y-axis plot ranges) <<') # lots of undeciphered data rest_of_the_file = f.read() self.logger.debug(rest_of_the_file) self.logger.debug('') self.logger.debug('End of file reached (expected)') except EOFError as e: if format_ver == 1 or format_ver == 2: # for format versions 1 and 2, metadata like graph display # information was stored separately in the "resource fork" # of the file, so reaching the end of the file before all # metadata is parsed is expected self.logger.debug('End of file reached (expected)') pass else: # for format versions 3 and later, there should be metadata # stored at the end of the file, so warn that something may # have gone wrong, but try to continue anyway self.logger.warning('End of file reached unexpectedly ' 'while parsing metadata, will attempt ' 'to continue') self.logger.debug(e, exc_info=True) pass except UnicodeDecodeError as e: # warn that something went wrong with reading a string, but try # to continue anyway self.logger.warning('Problem decoding text while parsing ' 'metadata, will ignore any remaining ' 'metadata and attempt to continue') self.logger.debug(e, exc_info=True) pass self.logger.debug('') ############################################## # RAWIO HEADER # event_channels will be cast to _event_channel_dtype event_channels = [] event_channels.append(('AxoGraph Tags', '', 'event')) event_channels.append(('AxoGraph Intervals', '', 'epoch')) if len(sig_channels) > 0: signal_streams = [('Signals', '0')] else: signal_streams = [] # organize header self.header['nb_block'] = 1 self.header['nb_segment'] = [1] self.header['signal_streams'] = np.array(signal_streams, dtype=_signal_stream_dtype) self.header['signal_channels'] = np.array(sig_channels, dtype=_signal_channel_dtype) self.header['event_channels'] = np.array(event_channels, dtype=_event_channel_dtype) self.header['spike_channels'] = np.array([], dtype=_spike_channel_dtype) 
        ##############################################
        # DATA OBJECTS

        # organize data
        self._sampling_period = sampling_period
        self._t_start = t_start
        self._raw_signals = [sig_memmaps]  # first index is seg_index
        self._raw_event_epoch_timestamps = [
            np.array(raw_event_timestamps),
            np.array(raw_epoch_timestamps)]
        self._raw_event_epoch_durations = [
            None,
            np.array(raw_epoch_durations)]
        self._event_epoch_labels = [
            np.array(event_labels, dtype='U'),
            np.array(epoch_labels, dtype='U')]


class StructFile(BufferedReader):
    """
    A container for the file buffer with some added convenience functions
    for reading AxoGraph files
    """

    def __init__(self, *args, **kwargs):
        # As far as I've seen, every AxoGraph file uses big-endian encoding,
        # regardless of the system architecture on which it was created, but
        # here I provide means for controlling byte ordering in case a
        # counter example is found.
        self.byte_order = kwargs.pop('byte_order', '>')
        if self.byte_order == '>':
            # big-endian
            self.utf_16_decoder = 'utf-16-be'
        elif self.byte_order == '<':
            # little-endian
            self.utf_16_decoder = 'utf-16-le'
        else:
            # unspecified
            self.utf_16_decoder = 'utf-16'
        super().__init__(*args, **kwargs)

    def read_and_unpack(self, fmt):
        """
        Calculate the number of bytes corresponding to the format string,
        read in that number of bytes, and unpack them according to the
        format string
        """
        try:
            return unpack(
                self.byte_order + fmt,
                self.read(calcsize(self.byte_order + fmt)))
        except Exception as e:
            if e.args[0].startswith('unpack requires a buffer of'):
                raise EOFError(e)
            else:
                raise

    def read_string(self):
        """
        The most common string format in AxoGraph files is a variable length
        string with UTF-16 encoding, preceded by a 4-byte integer (long)
        specifying the length of the string in bytes. Unlike a Pascal string
        ('p' format), these strings are not stored in a fixed number of
        bytes with padding at the end.
        This function reads in one of these variable length strings
        """

        # length may be -1, 0, or a positive integer
        length = self.read_and_unpack('l')[0]
        if length > 0:
            return self.read(length).decode(self.utf_16_decoder)
        else:
            return ''

    def read_bool(self):
        """
        AxoGraph files encode each boolean as a 4-byte integer (long) with
        value 1 = True, 0 = False. This function reads in one of these
        booleans.
        """
        return bool(self.read_and_unpack('l')[0])

    def read_f(self, fmt, offset=None):
        """
        This function is a wrapper for read_and_unpack that adds
        compatibility with two new format strings:
            'S': a variable length UTF-16 string, readable with read_string
            'Z': a boolean encoded as a 4-byte integer, readable with
                 read_bool
        This method does not implement support for numbers before the new
        format strings, such as '3Z' to represent 3 bools (use 'ZZZ'
        instead).
        """

        if offset is not None:
            self.seek(offset)

        # place commas before and after each instance of S or Z
        for special in ['S', 'Z']:
            fmt = fmt.replace(special, ',' + special + ',')

        # split S and Z into isolated strings
        fmt = fmt.split(',')

        # construct a tuple of unpacked data
        data = ()
        for subfmt in fmt:
            if subfmt == 'S':
                data += (self.read_string(),)
            elif subfmt == 'Z':
                data += (self.read_bool(),)
            else:
                data += self.read_and_unpack(subfmt)

        if len(data) == 1:
            return data[0]
        else:
            return data


FONT_BOLD = 75      # mysterious arbitrary constant
FONT_NOT_BOLD = 50  # mysterious arbitrary constant
FONT_ITALICS = 1
FONT_UNDERLINE = 2
FONT_STRIKEOUT = 4

TraceHeaderDescriptionV1 = [
    # documented in AxoGraph_ReadWrite.h
    ('x_index', 'l'),
    ('y_index', 'l'),
    ('err_bar_index', 'l'),
    ('group_id_for_this_trace', 'l'),
    ('hidden', 'Z'),  # AxoGraph_ReadWrite.h incorrectly states "shown" instead
    ('min_x', 'd'),
    ('max_x', 'd'),
    ('min_positive_x', 'd'),
    ('x_is_regularly_spaced', 'Z'),
    ('x_increases_monotonically', 'Z'),
    ('x_interval_if_regularly_spaced', 'd'),
    ('min_y', 'd'),
    ('max_y', 'd'),
    ('min_positive_y', 'd'),
    ('trace_color', 'xBBB'),
    ('display_joined_line_plot', 'Z'),
    ('line_thickness', 'd'),
    ('pen_style', 'l'),
    ('display_symbol_plot', 'Z'),
    ('symbol_type', 'l'),
    ('symbol_size', 'l'),
    ('draw_every_data_point', 'Z'),
    ('skip_points_by_distance_instead_of_pixels', 'Z'),
    ('pixels_between_symbols', 'l'),
    ('display_histogram_plot', 'Z'),
    ('histogram_type', 'l'),
    ('histogram_bar_separation', 'l'),
    ('display_error_bars', 'Z'),
    ('display_pos_err_bar', 'Z'),
    ('display_neg_err_bar', 'Z'),
    ('err_bar_width', 'l'),
]

# documented in AxoGraph_ReadWrite.h
# - only one difference exists between versions 1 and 2
TraceHeaderDescriptionV2 = list(TraceHeaderDescriptionV1)  # make a copy
TraceHeaderDescriptionV2.insert(3, ('neg_err_bar_index', 'l'))

GroupHeaderDescriptionV1 = [
    # undocumented and reverse engineered
    ('title', 'S'),
    ('unknown1', 'h'),    # 2 bytes of undeciphered data (types are guesses)
    ('units', 'S'),
    ('unknown2', 'hll'),  # 10 bytes of undeciphered data (types are guesses)
]

FontSettingsDescription = [
    # undocumented and reverse engineered
    ('font', 'S'),
    ('size', 'h'),        # divide this 2-byte integer by 10 to get font size
    ('unknown1', '5b'),   # 5 bytes of undeciphered data (types are guesses)
    ('setting1', 'B'),    # includes bold setting
    ('setting2', 'B'),    # italics, underline, strikeout specified in bitmap
]

XAxisSettingsDescription = [
    # undocumented and reverse engineered
    ('unknown1', '3l2d'),  # 28 bytes of undeciphered data (types are guesses)
    ('plotted_x_range', 'dd'),
    ('unknown2', 'd'),     # 8 bytes of undeciphered data (types are guesses)
    ('auto_x_ticks', 'Z'),
    ('x_minor_ticks', 'd'),
    ('x_major_ticks', 'd'),
    ('x_axis_title', 'S'),
    ('unknown3', 'h'),     # 2 bytes of undeciphered data (types are guesses)
    ('units', 'S'),
    ('unknown4', 'h'),     # 2 bytes of undeciphered data (types are guesses)
]

EpochInfoDescription = [
    # undocumented and reverse engineered
    ('title', 'S'),
    ('t_start', 'd'),
    ('t_stop', 'd'),
    ('y_pos', 'd'),
]

neo-0.10.0/neo/rawio/axonarawio.py
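The way read_f isolates the custom 'S' and 'Z' codes from ordinary struct format characters boils down to string splitting; a standalone sketch (the helper name is illustrative, and unlike read_f this version drops the empty fragments rather than passing them to the unpacker):

```python
def split_format(fmt):
    # isolate 'S' (string) and 'Z' (boolean) from the struct-compatible parts
    for special in ['S', 'Z']:
        fmt = fmt.replace(special, ',' + special + ',')
    return [part for part in fmt.split(',') if part]

parts = split_format('lSdZZ')  # -> ['l', 'S', 'd', 'Z', 'Z']
```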
"""
This class reads .set and .bin file data from the Axona acquisition system.

File format overview:
http://space-memory-navigation.org/DacqUSBFileFormats.pdf

In brief:
data.set - setup file containing all hardware setups related to the trial
data.bin - raw data file

There are many other data formats from Axona, which we do not consider (yet).
These are derived from the raw continuous data (.bin) and could in principle
be extracted from it (see file format overview for details).

Authors: Steffen Buergers, Julia Sprenger
"""

import datetime
import pathlib
import re

import numpy as np

from .baserawio import (BaseRawIO, _signal_channel_dtype,
                        _signal_stream_dtype, _spike_channel_dtype,
                        _event_channel_dtype)


class AxonaRawIO(BaseRawIO):
    """
    Class for reading raw, continuous data from the Axona dacqUSB system:
    http://space-memory-navigation.org/DacqUSBFileFormats.pdf

    The raw data is saved in .bin binary files with an accompanying .set
    file about the recording setup (see the above manual for details).

    Usage:
        import neo.rawio
        r = neo.rawio.AxonaRawIO(
            filename=os.path.join(dir_name, base_filename)
        )
        r.parse_header()
        print(r)
        raw_chunk = r.get_analogsignal_chunk(
            block_index=0, seg_index=0,
            i_start=0, i_stop=1024,
            channel_names=channel_names)
        float_chunk = r.rescale_signal_raw_to_float(
            raw_chunk, dtype='float64',
            channel_indexes=[0, 3, 6])
    """

    extensions = ['bin', 'set'] + [str(i) for i in range(1, 33)]  # Never used?
    rawmode = 'multi-file'

    # In the .bin file, channels are arranged in a strange order.
    # This list takes a channel index as input and returns the actual
    # offset for the channel in the memory map (self._raw_signals).
    channel_memory_offset = [
        32, 33, 34, 35, 36, 37, 38, 39, 0, 1, 2, 3, 4, 5, 6, 7,
        40, 41, 42, 43, 44, 45, 46, 47, 8, 9, 10, 11, 12, 13, 14, 15,
        48, 49, 50, 51, 52, 53, 54, 55, 16, 17, 18, 19, 20, 21, 22, 23,
        56, 57, 58, 59, 60, 61, 62, 63, 24, 25, 26, 27, 28, 29, 30, 31
    ]

    def __init__(self, filename):
        BaseRawIO.__init__(self)

        # Accepting filename with arbitrary suffix as input
        self.filename = pathlib.Path(filename).with_suffix('')
        self.set_file = self.filename.with_suffix('.set')
        self.bin_file = None
        self.tetrode_files = []

        # set file is mandatory for any recording
        if not self.set_file.exists():
            raise ValueError(f'Could not locate ".set" file. '
                             f'{self.filename.with_suffix(".set")} does not '
                             f'exist.')

        # detecting available files
        if self.filename.with_suffix('.bin').exists():
            self.bin_file = self.filename.with_suffix('.bin')
        for i in range(1, 33):
            unit_file = self.filename.with_suffix(f'.{i}')
            if unit_file.exists():
                self.tetrode_files.append(unit_file)
            else:
                break

    def _source_name(self):
        return self.filename

    def _parse_header(self):
        '''
        Read important information from .set header file, create memory map
        to raw data (.bin file) and prepare header dictionary in neo format.
''' unit_dtype = np.dtype([('spiketimes', '>i4'), ('samples', 'int8', (50,))]) # Utility collection of file parameters (general info and header data) params = {'bin': {'filename': self.bin_file, 'bytes_packet': 432, 'bytes_data': 384, 'bytes_head': 32, 'bytes_tail': 16, 'data_type': 'int16', 'header_size': 0, # bin files don't contain a file header 'header_encoding': None}, 'set': {'filename': self.set_file, 'header_encoding': 'cp1252'}, 'unit': {'data_type': unit_dtype, 'tetrode_ids': [], 'header_encoding': 'cp1252'}} self.file_parameters = params # SCAN SET FILE set_dict = self.get_header_parameters(self.set_file, 'set') params['set']['file_header'] = set_dict params['set']['sampling_rate'] = int(set_dict['rawRate']) # SCAN BIN FILE signal_streams = [] signal_channels = [] if self.bin_file: bin_dict = self.file_parameters['bin'] # add derived parameters from bin file bin_dict['num_channels'] = len(self.get_active_tetrode()) * 4 num_tot_packets = int(self.bin_file.stat().st_size / bin_dict['bytes_packet']) bin_dict['num_total_packets'] = num_tot_packets bin_dict['num_total_samples'] = num_tot_packets * 3 # Create np.memmap to .bin file self._raw_signals = np.memmap( self.bin_file, dtype=self.file_parameters['bin']['data_type'], mode='r', offset=self.file_parameters['bin']['header_size'] ) signal_streams = self._get_signal_streams_header() signal_channels = self._get_signal_chan_header() # SCAN TETRODE FILES # In this IO one tetrode corresponds to one unit as spikes are not # sorted yet. 
self._raw_spikes = [] spike_channels = [] if self.tetrode_files: for i, tetrode_file in enumerate(self.tetrode_files): # collecting tetrode specific parameters and dtype conversions tdict = self.get_header_parameters(tetrode_file, 'unit') tdict['filename'] = tetrode_file tdict['num_chans'] = int(tdict['num_chans']) tdict['num_spikes'] = int(tdict['num_spikes']) tdict['header_size'] = len( self.get_header_bstring(tetrode_file)) # memory mapping spiking data spikes = np.memmap( tetrode_file, dtype=self.file_parameters['unit']['data_type'], mode='r', offset=tdict['header_size'], shape=(tdict['num_spikes'], 4)) self._raw_spikes.append(spikes) unit_name = f'tetrode {i + 1}' unit_id = f'{i + 1}' wf_units = 'dimensionless' wf_gain = 1 wf_offset = 0. # left sweep information is only stored in set file wf_left_sweep = int(self.file_parameters['set']['file_header'] ['pretrigSamps']) # Extract waveform sample rate # 1st priority source: Spike2msMode (0 -> 48kHz; 1 -> 24kHz) # 2nd priority source: tetrode sample rate spikemode_to_sr = {0: 48000, 1: 24000} # spikemode->rate in Hz sm = self.file_parameters['set']['file_header'].get( 'Spike2msMode', -1) wf_sampling_rate = spikemode_to_sr.get(int(sm), None) if wf_sampling_rate is None: wf_sampling_rate = self._to_hz(tdict['sample_rate'], dtype=float) spike_channels.append((unit_name, unit_id, wf_units, wf_gain, wf_offset, wf_left_sweep, wf_sampling_rate)) self.file_parameters['unit']['tetrode_ids'].append(i + 1) self.file_parameters['unit'][i + 1] = tdict # propagate common tetrode parameters to global unit level units_dict = self.file_parameters['unit'] ids = units_dict['tetrode_ids'] copied_keys = [] if ids: for key, value in units_dict[ids[0]].items(): # copy key-value pair if present across all tetrodes if all([key in units_dict[t] for t in ids]) and \ all([units_dict[t][key] == value for t in ids]): self.file_parameters['unit'][key] = value copied_keys.append(key) # remove key from individual tetrode parameters for key in 
copied_keys: for t in ids: self.file_parameters['unit'][t].pop(key) # Create RawIO header dict self.header = {} self.header['nb_block'] = 1 self.header['nb_segment'] = [1] self.header['signal_streams'] = np.array(signal_streams, dtype=_signal_stream_dtype) self.header['signal_channels'] = np.array(signal_channels, dtype=_signal_channel_dtype) self.header['spike_channels'] = np.array(spike_channels, dtype=_spike_channel_dtype) self.header['event_channels'] = np.array([], dtype=_event_channel_dtype) # Annotations self._generate_minimal_annotations() # Adding custom annotations bl_ann = self.raw_annotations['blocks'][0] seg_ann = bl_ann['segments'][0] seg_ann['rec_datetime'] = self.read_datetime() if len(seg_ann['signals']): seg_ann['signals'][0]['__array_annotations__']['tetrode_id'] = \ [tetr for tetr in self.get_active_tetrode() for _ in range(4)] if len(seg_ann['spikes']): # adding segment annotations seg_keys = ['experimenter', 'comments', 'sw_version'] for seg_key in seg_keys: if seg_key in self.file_parameters['unit']: seg_ann[seg_key] = self.file_parameters['unit'][seg_key] def _get_signal_streams_header(self): # create signals stream information (we always expect a single stream) return np.array([('stream 0', '0')], dtype=_signal_stream_dtype) def _segment_t_start(self, block_index, seg_index): return 0. def _segment_t_stop(self, block_index, seg_index): t_stop = 0. 
if 'num_total_packets' in self.file_parameters['bin']: sr = self.file_parameters['set']['sampling_rate'] t_stop = self.file_parameters['bin']['num_total_samples'] / sr if self.file_parameters['unit']['tetrode_ids']: # get tetrode recording durations in seconds if 'duration' not in self.file_parameters['unit']: raise ValueError('Can not determine common tetrode recording' 'duration.') tetrode_duration = float(self.file_parameters['unit']['duration']) t_stop = max(t_stop, tetrode_duration) return t_stop def _get_signal_size(self, block_index, seg_index, channel_indexes=None): if 'num_total_packets' in self.file_parameters['bin']: return self.file_parameters['bin']['num_total_samples'] else: return 0 def _get_signal_t_start(self, block_index, seg_index, stream_index): return 0. def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): """ Return raw (continuous) signals as 2d numpy array (time x chan). Note that block_index and seg_index are always 1 (regardless of input). Raw data is in a single vector np.memmap with the following structure: Each byte packet (432 bytes) has header (32 bytes), footer (16 bytes) and three samples of 2 bytes each for 64 channels (384 bytes), which are jumbled up in a strange order. Each channel is remapped to a certain position (see get_channel_offset), and a channel's samples are allcoated as follows (example for channel 7): sample 1: 32b (head) + 2*38b (remappedID) and 2*38b + 1b (2nd byte) sample 2: 32b (head) + 128 (all chan. 1st entry) + 2*38b and ... sample 3: 32b (head) + 128*2 (all channels 1st and 2nd entry) + ... 
""" bin_dict = self.file_parameters['bin'] # Set default values if i_start is None: i_start = 0 if i_stop is None: i_stop = bin_dict['num_total_samples'] if channel_indexes is None: channel_indexes = [i for i in range(bin_dict['num_channels'])] num_samples = (i_stop - i_start) # Create base index vector for _raw_signals for time period of interest num_packets_oi = (num_samples + 2) // 3 offset = i_start // 3 * (bin_dict['bytes_packet'] // 2) rem = (i_start % 3) raw_samples = np.arange(num_packets_oi + 1, dtype=np.uint32) sample1 = raw_samples * (bin_dict['bytes_packet'] // 2) + \ bin_dict['bytes_head'] // 2 + offset sample2 = sample1 + 64 sample3 = sample2 + 64 sig_ids = np.empty((sample1.size + sample2.size + sample3.size,), dtype=sample1.dtype) sig_ids[0::3] = sample1 sig_ids[1::3] = sample2 sig_ids[2::3] = sample3 sig_ids = sig_ids[rem:(rem + num_samples)] # Read one channel at a time raw_signals = np.ndarray(shape=(num_samples, len(channel_indexes)), dtype=bin_dict['data_type']) for i, ch_idx in enumerate(channel_indexes): chan_offset = self.channel_memory_offset[ch_idx] raw_signals[:, i] = self._raw_signals[sig_ids + chan_offset] return raw_signals def _spike_count(self, block_index, seg_index, unit_index): tetrode_id = unit_index raw_spikes = self._raw_spikes[tetrode_id] nb_tetrode_spikes = raw_spikes.shape[0] # also take into account last, potentially incomplete set of spikes nb_unit_spikes = int(np.ceil(nb_tetrode_spikes)) return nb_unit_spikes def _get_spike_timestamps(self, block_index, seg_index, unit_index, t_start, t_stop): assert block_index == 0 assert seg_index == 0 tetrode_id = unit_index raw_spikes = self._raw_spikes[tetrode_id] # spike times are repeated for each contact -> use only first contact unit_spikes = raw_spikes['spiketimes'][:, 0] # slice spike times only if needed if t_start is None and t_stop is None: return unit_spikes if t_start is None: t_start = self._segment_t_start(block_index, seg_index) if t_stop is None: t_stop = 
self._segment_t_stop(block_index, seg_index) mask = self._get_temporal_mask(t_start, t_stop, tetrode_id) return unit_spikes[mask] def _rescale_spike_timestamp(self, spike_timestamps, dtype): spike_times = spike_timestamps.astype(dtype) spike_times /= self._to_hz(self.file_parameters['unit']['timebase'], dtype=int) return spike_times def _get_spike_raw_waveforms(self, block_index, seg_index, unit_index, t_start, t_stop): assert block_index == 0 assert seg_index == 0 tetrode_id = unit_index waveforms = self._raw_spikes[tetrode_id]['samples'] # slice timestamps / waveforms only when necessary if t_start is None and t_stop is None: return waveforms if t_start is None: t_start = self._segment_t_start(block_index, seg_index) if t_stop is None: t_stop = self._segment_t_stop(block_index, seg_index) mask = self._get_temporal_mask(t_start, t_stop, tetrode_id) waveforms = waveforms[mask] return waveforms def get_header_bstring(self, file): """ Scan file for the occurrence of 'data_start' and return the header as byte string Parameters ---------- file (str or path): file to be loaded Returns ------- str: header byte content """ header = b'' with open(file, 'rb') as f: for bin_line in f: if b'data_start' in bin_line: header += b'data_start' break else: header += bin_line return header # ------------------ HELPER METHODS -------------------- # This is largely based on code by Geoff Barrett from the Hussaini lab: # https://github.com/GeoffBarrett/BinConverter # Adapted or modified by Steffen Buergers, Julia Sprenger def _get_temporal_mask(self, t_start, t_stop, tetrode_id): # Convenience function for creating a temporal mask given # start time (t_start) and stop time (t_stop) # Used by _get_spike_raw_waveforms and _get_spike_timestamps # spike times are repeated for each contact -> use only first contact raw_spikes = self._raw_spikes[tetrode_id] unit_spikes = raw_spikes['spiketimes'][:, 0] # convert t_start and t_stop to sampling frequency # Note: this assumes no time offset! 
unit_params = self.file_parameters['unit'] lim0 = t_start * self._to_hz(unit_params['timebase'], dtype=int) lim1 = t_stop * self._to_hz(unit_params['timebase'], dtype=int) mask = (unit_spikes >= lim0) & (unit_spikes <= lim1) return mask def get_header_parameters(self, file, file_type): """ Extract header parameters as dictionary keys and following phrases as values (strings). Parameters ---------- file (str or path): file to be loaded file_type (str): type of file to be loaded ('set' or 'unit') Returns ------- header (dict): dictionary with keys being the parameters that were found & values being strings of the data. EXAMPLE self.get_header_parameters('file.set', 'set') """ params = {} encoding = self.file_parameters[file_type]['header_encoding'] header_string = self.get_header_bstring(file).decode(encoding) # omit the last line as this contains only `data_start` key for line in header_string.splitlines()[:-1]: key, value = line.split(' ', 1) params[key] = value return params def get_active_tetrode(self): """ Returns the ID numbers of the active tetrodes as a list. E.g.: [1,2,3,4] for a recording with 4 tetrodes (16 channels). """ active_tetrodes = [] for key, status in self.file_parameters['set']['file_header'].items(): # The pattern to look for is collectMask_X Y, # where X is the tetrode number, and Y is 1 or 0 if key.startswith('collectMask_'): if int(status): tetrode_id = int(key.strip('collectMask_')) active_tetrodes.append(tetrode_id) return active_tetrodes def _get_channel_from_tetrode(self, tetrode): """ This function will take the tetrode number and return the Axona channel numbers, i.e. Tetrode 1 = Ch0-Ch3, Tetrode 2 = Ch4-Ch7, etc. 
""" return np.arange(0, 4) + 4 * (int(tetrode) - 1) def read_datetime(self): """ Creates datetime object (y, m, d, h, m, s) from .set file header """ date_str = self.file_parameters['set']['file_header']['trial_date'] time_str = self.file_parameters['set']['file_header']['trial_time'] # extract core date string date_str = re.findall(r'\d+\s\w+\s\d{4}$', date_str)[0] return datetime.datetime.strptime(date_str + ', ' + time_str, "%d %b %Y, %H:%M:%S") def _get_channel_gain(self, bytes_per_sample=2): """ This is actually not the gain_ch value from the .set file, but the conversion factor from raw data to uV. Formula for conversion to uV: 1000 * adc_fullscale_mv / (gain_ch * max-value), with max_value = 2**(8 * bytes_per_sample - 1) Adapted from https://github.com/CINPLA/pyxona/blob/stable/pyxona/core.py """ gain_list = [] adc_fm = int( self.file_parameters['set']['file_header']['ADC_fullscale_mv']) for key, value in self.file_parameters['set']['file_header'].items(): if key.startswith('gain_ch'): gain_list.append(np.float32(value)) max_value = 2**(8 * bytes_per_sample - 1) gain_list = [1000 * adc_fm / (gain * max_value) for gain in gain_list] return gain_list def _get_signal_chan_header(self): """ Returns a 1 dimensional np.array of tuples with one entry per channel that recorded data. Each tuple contains the following information: channel name (1a, 1b, 1c, 1d, 2a, 2b, ...; num=tetrode, letter=elec), channel id (1, 2, 3, 4, 5, ... N), sampling rate, data type (int16), unit (uV), gain, offset, stream id """ active_tetrode_set = self.get_active_tetrode() num_active_tetrode = len(active_tetrode_set) elec_per_tetrode = 4 letters = ['a', 'b', 'c', 'd'] dtype = self.file_parameters['bin']['data_type'] units = 'uV' gain_list = self._get_channel_gain() offset = 0 # What is the offset? 
sig_channels = [] for itetr in range(num_active_tetrode): for ielec in range(elec_per_tetrode): cntr = (itetr * elec_per_tetrode) + ielec ch_name = '{}{}'.format(itetr + 1, letters[ielec]) chan_id = str(cntr) gain = gain_list[cntr] stream_id = '0' # the sampling rate information is stored in the set header # and not in the bin file sr = self.file_parameters['set']['sampling_rate'] sig_channels.append((ch_name, chan_id, sr, dtype, units, gain, offset, stream_id)) return np.array(sig_channels, dtype=_signal_channel_dtype) def _to_hz(self, param, dtype=float): return dtype(param.replace(' hz', '')) neo-0.10.0/neo/rawio/axonrawio.py0000644000076700000240000010132614066374330017252 0ustar andrewstaff00000000000000""" Class for reading data from pCLAMP and AxoScope files (.abf version 1 and 2), developed by Molecular device/Axon technologies. - abf = Axon binary file - atf is a text file based format from axon that could be read by AsciiIO (but this file is less efficient.) This code is a port of abfload and abf2load written in Matlab (BSD-2-Clause licence) by : - Copyright (c) 2009, Forrest Collman, fcollman@princeton.edu - Copyright (c) 2004, Harald Hentschke and available here: http://www.mathworks.com/matlabcentral/fileexchange/22114-abf2load Information on abf 1 and 2 formats is available here: http://www.moleculardevices.com/pages/software/developer_info.html This file supports the old (ABF1) and new (ABF2) format. 
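For int16 data (nDataFormat == 0), samples are later converted to physical units with a per-channel gain and offset; a minimal sketch of that conversion, with made-up gain/offset values rather than ones read from a file:

```python
import numpy as np

# raw ADC counts -> physical units: value = raw * gain + offset
raw = np.array([-1024, 0, 1024], dtype='int16')
gain, offset = 0.5, 0.0  # hypothetical per-channel calibration
scaled = raw.astype('float64') * gain + offset
print(scaled.tolist())  # [-512.0, 0.0, 512.0]
```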
ABF1 (clampfit <=9) and ABF2 (clampfit >10)

All modes are supported:
    - event-driven variable-length mode 1 -> returns several Segments per Block
    - event-driven fixed-length mode 2 or 5 -> returns several Segments
    - gap-free mode -> returns one (or several) Segment in the Block

Supported : Read

Author: Samuel Garcia, JS Nowacki

Note: j.s.nowacki@gmail.com has a C++ library with SWIG bindings which also
reads abf files - would be good to cross-check
"""

from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype,
                        _spike_channel_dtype, _event_channel_dtype)

import numpy as np
import struct
import datetime
import os
from io import open, BufferedReader


class AxonRawIO(BaseRawIO):
    extensions = ['abf']
    rawmode = 'one-file'

    def __init__(self, filename=''):
        BaseRawIO.__init__(self)
        self.filename = filename

    def _parse_header(self):
        info = self._axon_info = parse_axon_soup(self.filename)

        version = info['fFileVersionNumber']

        # file format
        if info['nDataFormat'] == 0:
            sig_dtype = np.dtype('i2')
        elif info['nDataFormat'] == 1:
            sig_dtype = np.dtype('f4')

        if version < 2.:
            nbchannel = info['nADCNumChannels']
            head_offset = info['lDataSectionPtr'] * BLOCKSIZE + info[
                'nNumPointsIgnored'] * sig_dtype.itemsize
            totalsize = info['lActualAcqLength']
        elif version >= 2.:
            nbchannel = info['sections']['ADCSection']['llNumEntries']
            head_offset = info['sections']['DataSection'][
                'uBlockIndex'] * BLOCKSIZE
            totalsize = info['sections']['DataSection']['llNumEntries']

        self._raw_data = np.memmap(self.filename, dtype=sig_dtype, mode='r',
                                   shape=(totalsize,), offset=head_offset)

        # 3 possible modes
        if version < 2.:
            mode = info['nOperationMode']
        elif version >= 2.:
            mode = info['protocol']['nOperationMode']
        assert mode in [1, 2, 3, 5], 'Mode {} is not supported'.format(mode)
        # event-driven variable-length mode (mode 1)
        # event-driven fixed-length mode (mode 2 or 5)
        # gap free mode (mode 3) can be in several episodes

        # read sweep pos
        if version < 2.:
            nbepisod =
info['lSynchArraySize'] offset_episode = info['lSynchArrayPtr'] * BLOCKSIZE elif version >= 2.: nbepisod = info['sections']['SynchArraySection'][ 'llNumEntries'] offset_episode = info['sections']['SynchArraySection'][ 'uBlockIndex'] * BLOCKSIZE if nbepisod > 0: episode_array = np.memmap( self.filename, [('offset', 'i4'), ('len', 'i4')], 'r', shape=nbepisod, offset=offset_episode) else: episode_array = np.empty(1, [('offset', 'i4'), ('len', 'i4')]) episode_array[0]['len'] = self._raw_data.size episode_array[0]['offset'] = 0 # sampling_rate if version < 2.: self._sampling_rate = 1. / (info['fADCSampleInterval'] * nbchannel * 1.e-6) elif version >= 2.: self._sampling_rate = 1.e6 / info['protocol']['fADCSequenceInterval'] # one sweep = one segment nb_segment = episode_array.size # Get raw data by segment self._raw_signals = {} self._t_starts = {} pos = 0 for seg_index in range(nb_segment): length = episode_array[seg_index]['len'] if version < 2.: fSynchTimeUnit = info['fSynchTimeUnit'] elif version >= 2.: fSynchTimeUnit = info['protocol']['fSynchTimeUnit'] if (fSynchTimeUnit != 0) and (mode == 1): length /= fSynchTimeUnit self._raw_signals[seg_index] = self._raw_data[pos:pos + length].reshape(-1, nbchannel) pos += length t_start = float(episode_array[seg_index]['offset']) if (fSynchTimeUnit == 0): t_start = t_start / self._sampling_rate else: t_start = t_start * fSynchTimeUnit * 1e-6 self._t_starts[seg_index] = t_start # Create channel header if version < 2.: channel_ids = [chan_num for chan_num in info['nADCSamplingSeq'] if chan_num >= 0] else: channel_ids = list(range(nbchannel)) signal_channels = [] adc_nums = [] for chan_index, chan_id in enumerate(channel_ids): if version < 2.: name = info['sADCChannelName'][chan_id].replace(b' ', b'') units = safe_decode_units(info['sADCUnits'][chan_id]) adc_num = info['nADCPtoLChannelMap'][chan_id] elif version >= 2.: ADCInfo = info['listADCInfo'][chan_id] name = ADCInfo['ADCChNames'].replace(b' ', b'') units = 
safe_decode_units(ADCInfo['ADCChUnits']) adc_num = ADCInfo['nADCNum'] adc_nums.append(adc_num) if info['nDataFormat'] == 0: # int16 gain/offset if version < 2.: gain = info['fADCRange'] gain /= info['fInstrumentScaleFactor'][chan_id] gain /= info['fSignalGain'][chan_id] gain /= info['fADCProgrammableGain'][chan_id] gain /= info['lADCResolution'] if info['nTelegraphEnable'][chan_id] == 0: pass elif info['nTelegraphEnable'][chan_id] == 1: gain /= info['fTelegraphAdditGain'][chan_id] else: self.logger.warning('ignoring buggy nTelegraphEnable') offset = info['fInstrumentOffset'][chan_id] offset -= info['fSignalOffset'][chan_id] elif version >= 2.: gain = info['protocol']['fADCRange'] gain /= info['listADCInfo'][chan_id]['fInstrumentScaleFactor'] gain /= info['listADCInfo'][chan_id]['fSignalGain'] gain /= info['listADCInfo'][chan_id]['fADCProgrammableGain'] gain /= info['protocol']['lADCResolution'] if info['listADCInfo'][chan_id]['nTelegraphEnable']: gain /= info['listADCInfo'][chan_id]['fTelegraphAdditGain'] offset = info['listADCInfo'][chan_id]['fInstrumentOffset'] offset -= info['listADCInfo'][chan_id]['fSignalOffset'] else: gain, offset = 1., 0. 
            stream_id = '0'
            signal_channels.append((name, str(chan_id), self._sampling_rate,
                                    sig_dtype, units, gain, offset, stream_id))
        signal_channels = np.array(signal_channels, dtype=_signal_channel_dtype)

        # one unique signal stream
        signal_streams = np.array([('Signals', '0')], dtype=_signal_stream_dtype)

        # only one event channel: tag
        # In ABF, timestamps are not attached to any particular segment,
        # so each segment accesses all events
        timestamps = []
        labels = []
        comments = []
        for i, tag in enumerate(info['listTag']):
            timestamps.append(tag['lTagTime'])
            labels.append(str(tag['nTagType']))
            comments.append(clean_string(tag['sComment']))
        self._raw_ev_timestamps = np.array(timestamps)
        self._ev_labels = np.array(labels, dtype='U')
        self._ev_comments = np.array(comments, dtype='U')
        event_channels = [('Tag', '', 'event')]
        event_channels = np.array(event_channels, dtype=_event_channel_dtype)

        # No spikes
        spike_channels = []
        spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype)

        # fill the header dict
        self.header = {}
        self.header['nb_block'] = 1
        self.header['nb_segment'] = [nb_segment]
        self.header['signal_streams'] = signal_streams
        self.header['signal_channels'] = signal_channels
        self.header['spike_channels'] = spike_channels
        self.header['event_channels'] = event_channels

        # insert some annotations in the appropriate places
        self._generate_minimal_annotations()
        bl_annotations = self.raw_annotations['blocks'][0]
        bl_annotations['rec_datetime'] = info['rec_datetime']
        bl_annotations['abf_version'] = version

        for seg_index in range(nb_segment):
            seg_annotations = bl_annotations['segments'][seg_index]
            seg_annotations['abf_version'] = version

            signal_an = self.raw_annotations['blocks'][0]['segments'][seg_index]['signals'][0]
            nADCNum = np.array([adc_nums[c] for c in range(signal_channels.size)])
            signal_an['__array_annotations__']['nADCNum'] = nADCNum

            for c in range(event_channels.size):
                ev_ann = seg_annotations['events'][c]
                ev_ann['comments'] = self._ev_comments

    def _source_name(self):
        return
        self.filename

    def _segment_t_start(self, block_index, seg_index):
        return self._t_starts[seg_index]

    def _segment_t_stop(self, block_index, seg_index):
        t_stop = self._t_starts[seg_index] + \
            self._raw_signals[seg_index].shape[0] / self._sampling_rate
        return t_stop

    def _get_signal_size(self, block_index, seg_index, stream_index):
        shape = self._raw_signals[seg_index].shape
        return shape[0]

    def _get_signal_t_start(self, block_index, seg_index, stream_index):
        return self._t_starts[seg_index]

    def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop,
                                stream_index, channel_indexes):
        if channel_indexes is None:
            channel_indexes = slice(None)
        raw_signals = self._raw_signals[seg_index][slice(i_start, i_stop), channel_indexes]
        return raw_signals

    def _event_count(self, block_index, seg_index, event_channel_index):
        return self._raw_ev_timestamps.size

    def _get_event_timestamps(self, block_index, seg_index,
                              event_channel_index, t_start, t_stop):
        # In ABF, timestamps are not attached to any particular segment,
        # so each segment accesses all events
        timestamp = self._raw_ev_timestamps
        labels = self._ev_labels
        durations = None

        if t_start is not None:
            keep = timestamp >= int(t_start * self._sampling_rate)
            timestamp = timestamp[keep]
            labels = labels[keep]

        if t_stop is not None:
            keep = timestamp <= int(t_stop * self._sampling_rate)
            timestamp = timestamp[keep]
            labels = labels[keep]

        return timestamp, durations, labels

    def _rescale_event_timestamp(self, event_timestamps, dtype, event_channel_index):
        event_times = event_timestamps.astype(dtype) / self._sampling_rate
        return event_times

    def read_raw_protocol(self):
        """
        Read the protocol waveform of the file, if present;
        this function works with ABF2 only. Protocols can be
        reconstructed from the ABF1 header.

        Returns: list of segments (one for every episode)
                 with list of analog signals (one for every DAC).
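The per-episode reconstruction can be sketched in isolation (illustrative holding level and epoch values, not read from a file):

```python
import numpy as np

# One episode's DAC waveform: start from the holding level,
# then overwrite the samples covered by each epoch with its level.
n_samples = 10
holding_level = 0.0
sig = np.ones(n_samples) * holding_level
i_begin, duration, level = 2, 4, 5.0  # hypothetical single epoch
sig[i_begin:i_begin + duration] = level
print(sig.tolist())  # [0.0, 0.0, 5.0, 5.0, 5.0, 5.0, 0.0, 0.0, 0.0, 0.0]
```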
Author: JS Nowacki """ info = self._axon_info if info['fFileVersionNumber'] < 2.: raise IOError("Protocol section is only present in ABF2 files.") nADC = info['sections']['ADCSection'][ 'llNumEntries'] # Number of ADC channels nDAC = info['sections']['DACSection'][ 'llNumEntries'] # Number of DAC channels nSam = int(info['protocol'][ 'lNumSamplesPerEpisode'] / nADC) # Number of samples per episode nEpi = info['lActualEpisodes'] # Actual number of episodes # Make a list of segments with analog signals with just holding levels # List of segments relates to number of episodes, as for recorded data sigs_by_segments = [] for epiNum in range(nEpi): # One analog signal for each DAC in segment (episode) signals = [] for DACNum in range(nDAC): sig = np.ones(nSam) * info['listDACInfo'][DACNum]['fDACHoldingLevel'] # If there are epoch infos for this DAC if DACNum in info['dictEpochInfoPerDAC']: # Save last sample index i_last = int(nSam * 15625 / 10 ** 6) # TODO guess for first holding # Go over EpochInfoPerDAC and change the analog signal # according to the epochs epochInfo = info['dictEpochInfoPerDAC'][DACNum] for epochNum, epoch in epochInfo.items(): i_begin = i_last i_end = i_last + epoch['lEpochInitDuration'] + \ epoch['lEpochDurationInc'] * epiNum dif = i_end - i_begin sig[i_begin:i_end] = np.ones(dif) * \ (epoch['fEpochInitLevel'] + epoch['fEpochLevelInc'] * epiNum) i_last += epoch['lEpochInitDuration'] + \ epoch['lEpochDurationInc'] * epiNum signals.append(sig) sigs_by_segments.append(signals) sig_names = [] sig_units = [] for DACNum in range(nDAC): name = info['listDACInfo'][DACNum]['DACChNames'].decode("utf-8") units = safe_decode_units(info['listDACInfo'][DACNum]['DACChUnits']) sig_names.append(name) sig_units.append(units) return sigs_by_segments, sig_names, sig_units def parse_axon_soup(filename): """ read the header of the file The strategy here differs from the original script under Matlab. 
In the original script for ABF2, it completes the header with information that is located in other structures. In ABF2 this function returns info with sub dict: sections (ABF2) protocol (ABF2) listTags (ABF1&2) listADCInfo (ABF2) listDACInfo (ABF2) dictEpochInfoPerDAC (ABF2) that contains more information. """ with open(filename, 'rb') as fid: f = StructFile(fid) # version f_file_signature = f.read(4) if f_file_signature == b'ABF ': header_description = headerDescriptionV1 elif f_file_signature == b'ABF2': header_description = headerDescriptionV2 else: return None # construct dict header = {} for key, offset, fmt in header_description: val = f.read_f(fmt, offset=offset) if len(val) == 1: header[key] = val[0] else: header[key] = np.array(val) # correction of version number and starttime if f_file_signature == b'ABF ': header['lFileStartTime'] += header[ 'nFileStartMillisecs'] * .001 elif f_file_signature == b'ABF2': n = header['fFileVersionNumber'] header['fFileVersionNumber'] = n[3] + 0.1 * n[2] + \ 0.01 * n[1] + 0.001 * n[0] header['lFileStartTime'] = header['uFileStartTimeMS'] * .001 if header['fFileVersionNumber'] < 2.: # tags listTag = [] for i in range(header['lNumTagEntries']): f.seek(header['lTagSectionPtr'] + i * 64) tag = {} for key, fmt in TagInfoDescription: val = f.read_f(fmt) if len(val) == 1: tag[key] = val[0] else: tag[key] = np.array(val) listTag.append(tag) header['listTag'] = listTag # protocol name formatting header['sProtocolPath'] = clean_string(header['sProtocolPath']) header['sProtocolPath'] = header['sProtocolPath']. 
\
            replace(b'\\', b'/')

        elif header['fFileVersionNumber'] >= 2.:
            # in abf2 some info is located in other places

            # sections
            sections = {}
            for s, sectionName in enumerate(sectionNames):
                uBlockIndex, uBytes, llNumEntries = \
                    f.read_f('IIl', offset=76 + s * 16)
                sections[sectionName] = {}
                sections[sectionName]['uBlockIndex'] = uBlockIndex
                sections[sectionName]['uBytes'] = uBytes
                sections[sectionName]['llNumEntries'] = llNumEntries
            header['sections'] = sections

            # strings sections
            # hack for reading channel names and units:
            # this section is not very detailed, so the code is
            # not very robust. The idea is to strip the first part
            # by finding one of the following keys; unfortunately
            # the later part contains the file path, which can by
            # accident also contain one of these keys...
            f.seek(sections['StringsSection']['uBlockIndex'] * BLOCKSIZE)
            big_string = f.read(sections['StringsSection']['uBytes'])
            goodstart = -1
            for key in [b'AXENGN', b'clampex', b'Clampex', b'EDR3',
                        b'CLAMPEX', b'axoscope', b'AxoScope', b'Clampfit']:
                # goodstart = big_string.lower().find(key)
                goodstart = big_string.find(b'\x00' + key)
                if goodstart != -1:
                    break
            assert goodstart != -1, \
                'This file does not contain clampex, axoscope or clampfit in the header'
            big_string = big_string[goodstart + 1:]
            strings = big_string.split(b'\x00')

            # ADC sections
            header['listADCInfo'] = []
            for i in range(sections['ADCSection']['llNumEntries']):
                # read ADCInfo
                f.seek(sections['ADCSection']['uBlockIndex'] *
                       BLOCKSIZE + sections['ADCSection']['uBytes'] * i)
                ADCInfo = {}
                for key, fmt in ADCInfoDescription:
                    val = f.read_f(fmt)
                    if len(val) == 1:
                        ADCInfo[key] = val[0]
                    else:
                        ADCInfo[key] = np.array(val)
                ADCInfo['ADCChNames'] = strings[ADCInfo['lADCChannelNameIndex'] - 1]
                ADCInfo['ADCChUnits'] = strings[ADCInfo['lADCUnitsIndex'] - 1]
                header['listADCInfo'].append(ADCInfo)

            # protocol sections
            protocol = {}
            f.seek(sections['ProtocolSection']['uBlockIndex'] * BLOCKSIZE)
            for key, fmt in protocolInfoDescription:
                val = f.read_f(fmt)
                if len(val) == 1:
protocol[key] = val[0] else: protocol[key] = np.array(val) header['protocol'] = protocol header['sProtocolPath'] = strings[header['uProtocolPathIndex'] - 1] # tags listTag = [] for i in range(sections['TagSection']['llNumEntries']): f.seek(sections['TagSection']['uBlockIndex'] * BLOCKSIZE + sections['TagSection']['uBytes'] * i) tag = {} for key, fmt in TagInfoDescription: val = f.read_f(fmt) if len(val) == 1: tag[key] = val[0] else: tag[key] = np.array(val) listTag.append(tag) header['listTag'] = listTag # DAC sections header['listDACInfo'] = [] for i in range(sections['DACSection']['llNumEntries']): # read DACInfo f.seek(sections['DACSection']['uBlockIndex'] * BLOCKSIZE + sections['DACSection']['uBytes'] * i) DACInfo = {} for key, fmt in DACInfoDescription: val = f.read_f(fmt) if len(val) == 1: DACInfo[key] = val[0] else: DACInfo[key] = np.array(val) DACInfo['DACChNames'] = strings[DACInfo['lDACChannelNameIndex'] - 1] DACInfo['DACChUnits'] = strings[ DACInfo['lDACChannelUnitsIndex'] - 1] header['listDACInfo'].append(DACInfo) # EpochPerDAC sections # header['dictEpochInfoPerDAC'] is dict of dicts: # - the first index is the DAC number # - the second index is the epoch number # It has to be done like that because data may not exist # and may not be in sorted order header['dictEpochInfoPerDAC'] = {} for i in range(sections['EpochPerDACSection']['llNumEntries']): # read DACInfo f.seek(sections['EpochPerDACSection']['uBlockIndex'] * BLOCKSIZE + sections['EpochPerDACSection']['uBytes'] * i) EpochInfoPerDAC = {} for key, fmt in EpochInfoPerDACDescription: val = f.read_f(fmt) if len(val) == 1: EpochInfoPerDAC[key] = val[0] else: EpochInfoPerDAC[key] = np.array(val) DACNum = EpochInfoPerDAC['nDACNum'] EpochNum = EpochInfoPerDAC['nEpochNum'] # Checking if the key exists, if not, the value is empty # so we have to create empty dict to populate if DACNum not in header['dictEpochInfoPerDAC']: header['dictEpochInfoPerDAC'][DACNum] = {} 
header['dictEpochInfoPerDAC'][DACNum][EpochNum] = \ EpochInfoPerDAC # Epoch sections header['EpochInfo'] = [] for i in range(sections['EpochSection']['llNumEntries']): # read EpochInfo f.seek(sections['EpochSection']['uBlockIndex'] * BLOCKSIZE + sections['EpochSection']['uBytes'] * i) EpochInfo = {} for key, fmt in EpochInfoDescription: val = f.read_f(fmt) if len(val) == 1: EpochInfo[key] = val[0] else: EpochInfo[key] = np.array(val) header['EpochInfo'].append(EpochInfo) # date and time if header['fFileVersionNumber'] < 2.: YY = 1900 MM = 1 DD = 1 hh = int(header['lFileStartTime'] / 3600.) mm = int((header['lFileStartTime'] - hh * 3600) / 60) ss = header['lFileStartTime'] - hh * 3600 - mm * 60 ms = int(np.mod(ss, 1) * 1e6) ss = int(ss) elif header['fFileVersionNumber'] >= 2.: YY = int(header['uFileStartDate'] / 10000) MM = int((header['uFileStartDate'] - YY * 10000) / 100) DD = int(header['uFileStartDate'] - YY * 10000 - MM * 100) hh = int(header['uFileStartTimeMS'] / 1000. / 3600.) mm = int((header['uFileStartTimeMS'] / 1000. - hh * 3600) / 60) ss = header['uFileStartTimeMS'] / 1000. 
- hh * 3600 - mm * 60 ms = int(np.mod(ss, 1) * 1e6) ss = int(ss) header['rec_datetime'] = datetime.datetime(YY, MM, DD, hh, mm, ss, ms) return header class StructFile(BufferedReader): def read_f(self, fmt, offset=None): if offset is not None: self.seek(offset) return struct.unpack(fmt, self.read(struct.calcsize(fmt))) def clean_string(s): s = s.rstrip(b'\x00') s = s.rstrip(b' ') return s def safe_decode_units(s): s = s.replace(b' ', b'') s = s.replace(b'\xb5', b'u') # \xb5 is µ s = s.replace(b'\xb0', b'\xc2\xb0') # \xb0 is ° s = s.decode('utf-8') return s BLOCKSIZE = 512 headerDescriptionV1 = [ ('fFileSignature', 0, '4s'), ('fFileVersionNumber', 4, 'f'), ('nOperationMode', 8, 'h'), ('lActualAcqLength', 10, 'i'), ('nNumPointsIgnored', 14, 'h'), ('lActualEpisodes', 16, 'i'), ('lFileStartTime', 24, 'i'), ('lDataSectionPtr', 40, 'i'), ('lTagSectionPtr', 44, 'i'), ('lNumTagEntries', 48, 'i'), ('lSynchArrayPtr', 92, 'i'), ('lSynchArraySize', 96, 'i'), ('nDataFormat', 100, 'h'), ('nADCNumChannels', 120, 'h'), ('fADCSampleInterval', 122, 'f'), ('fSynchTimeUnit', 130, 'f'), ('lNumSamplesPerEpisode', 138, 'i'), ('lPreTriggerSamples', 142, 'i'), ('lEpisodesPerRun', 146, 'i'), ('fADCRange', 244, 'f'), ('lADCResolution', 252, 'i'), ('nFileStartMillisecs', 366, 'h'), ('nADCPtoLChannelMap', 378, '16h'), ('nADCSamplingSeq', 410, '16h'), ('sADCChannelName', 442, '10s' * 16), ('sADCUnits', 602, '8s' * 16), ('fADCProgrammableGain', 730, '16f'), ('fInstrumentScaleFactor', 922, '16f'), ('fInstrumentOffset', 986, '16f'), ('fSignalGain', 1050, '16f'), ('fSignalOffset', 1114, '16f'), ('nDigitalEnable', 1436, 'h'), ('nActiveDACChannel', 1440, 'h'), ('nDigitalHolding', 1584, 'h'), ('nDigitalInterEpisode', 1586, 'h'), ('nDigitalValue', 2588, '10h'), ('lDACFilePtr', 2048, '2i'), ('lDACFileNumEpisodes', 2056, '2i'), ('fDACCalibrationFactor', 2074, '4f'), ('fDACCalibrationOffset', 2090, '4f'), ('nWaveformEnable', 2296, '2h'), ('nWaveformSource', 2300, '2h'), ('nInterEpisodeLevel', 2304, '2h'), 
('nEpochType', 2308, '20h'), ('fEpochInitLevel', 2348, '20f'), ('fEpochLevelInc', 2428, '20f'), ('lEpochInitDuration', 2508, '20i'), ('lEpochDurationInc', 2588, '20i'), ('nTelegraphEnable', 4512, '16h'), ('fTelegraphAdditGain', 4576, '16f'), ('sProtocolPath', 4898, '384s'), ] headerDescriptionV2 = [ ('fFileSignature', 0, '4s'), ('fFileVersionNumber', 4, '4b'), ('uFileInfoSize', 8, 'I'), ('lActualEpisodes', 12, 'I'), ('uFileStartDate', 16, 'I'), ('uFileStartTimeMS', 20, 'I'), ('uStopwatchTime', 24, 'I'), ('nFileType', 28, 'H'), ('nDataFormat', 30, 'H'), ('nSimultaneousScan', 32, 'H'), ('nCRCEnable', 34, 'H'), ('uFileCRC', 36, 'I'), ('FileGUID', 40, 'I'), ('uCreatorVersion', 56, 'I'), ('uCreatorNameIndex', 60, 'I'), ('uModifierVersion', 64, 'I'), ('uModifierNameIndex', 68, 'I'), ('uProtocolPathIndex', 72, 'I'), ] sectionNames = [ 'ProtocolSection', 'ADCSection', 'DACSection', 'EpochSection', 'ADCPerDACSection', 'EpochPerDACSection', 'UserListSection', 'StatsRegionSection', 'MathSection', 'StringsSection', 'DataSection', 'TagSection', 'ScopeSection', 'DeltaSection', 'VoiceTagSection', 'SynchArraySection', 'AnnotationSection', 'StatsSection', ] protocolInfoDescription = [ ('nOperationMode', 'h'), ('fADCSequenceInterval', 'f'), ('bEnableFileCompression', 'b'), ('sUnused1', '3s'), ('uFileCompressionRatio', 'I'), ('fSynchTimeUnit', 'f'), ('fSecondsPerRun', 'f'), ('lNumSamplesPerEpisode', 'i'), ('lPreTriggerSamples', 'i'), ('lEpisodesPerRun', 'i'), ('lRunsPerTrial', 'i'), ('lNumberOfTrials', 'i'), ('nAveragingMode', 'h'), ('nUndoRunCount', 'h'), ('nFirstEpisodeInRun', 'h'), ('fTriggerThreshold', 'f'), ('nTriggerSource', 'h'), ('nTriggerAction', 'h'), ('nTriggerPolarity', 'h'), ('fScopeOutputInterval', 'f'), ('fEpisodeStartToStart', 'f'), ('fRunStartToStart', 'f'), ('lAverageCount', 'i'), ('fTrialStartToStart', 'f'), ('nAutoTriggerStrategy', 'h'), ('fFirstRunDelayS', 'f'), ('nChannelStatsStrategy', 'h'), ('lSamplesPerTrace', 'i'), ('lStartDisplayNum', 'i'), 
('lFinishDisplayNum', 'i'), ('nShowPNRawData', 'h'), ('fStatisticsPeriod', 'f'), ('lStatisticsMeasurements', 'i'), ('nStatisticsSaveStrategy', 'h'), ('fADCRange', 'f'), ('fDACRange', 'f'), ('lADCResolution', 'i'), ('lDACResolution', 'i'), ('nExperimentType', 'h'), ('nManualInfoStrategy', 'h'), ('nCommentsEnable', 'h'), ('lFileCommentIndex', 'i'), ('nAutoAnalyseEnable', 'h'), ('nSignalType', 'h'), ('nDigitalEnable', 'h'), ('nActiveDACChannel', 'h'), ('nDigitalHolding', 'h'), ('nDigitalInterEpisode', 'h'), ('nDigitalDACChannel', 'h'), ('nDigitalTrainActiveLogic', 'h'), ('nStatsEnable', 'h'), ('nStatisticsClearStrategy', 'h'), ('nLevelHysteresis', 'h'), ('lTimeHysteresis', 'i'), ('nAllowExternalTags', 'h'), ('nAverageAlgorithm', 'h'), ('fAverageWeighting', 'f'), ('nUndoPromptStrategy', 'h'), ('nTrialTriggerSource', 'h'), ('nStatisticsDisplayStrategy', 'h'), ('nExternalTagType', 'h'), ('nScopeTriggerOut', 'h'), ('nLTPType', 'h'), ('nAlternateDACOutputState', 'h'), ('nAlternateDigitalOutputState', 'h'), ('fCellID', '3f'), ('nDigitizerADCs', 'h'), ('nDigitizerDACs', 'h'), ('nDigitizerTotalDigitalOuts', 'h'), ('nDigitizerSynchDigitalOuts', 'h'), ('nDigitizerType', 'h'), ] ADCInfoDescription = [ ('nADCNum', 'h'), ('nTelegraphEnable', 'h'), ('nTelegraphInstrument', 'h'), ('fTelegraphAdditGain', 'f'), ('fTelegraphFilter', 'f'), ('fTelegraphMembraneCap', 'f'), ('nTelegraphMode', 'h'), ('fTelegraphAccessResistance', 'f'), ('nADCPtoLChannelMap', 'h'), ('nADCSamplingSeq', 'h'), ('fADCProgrammableGain', 'f'), ('fADCDisplayAmplification', 'f'), ('fADCDisplayOffset', 'f'), ('fInstrumentScaleFactor', 'f'), ('fInstrumentOffset', 'f'), ('fSignalGain', 'f'), ('fSignalOffset', 'f'), ('fSignalLowpassFilter', 'f'), ('fSignalHighpassFilter', 'f'), ('nLowpassFilterType', 'b'), ('nHighpassFilterType', 'b'), ('fPostProcessLowpassFilter', 'f'), ('nPostProcessLowpassFilterType', 'c'), ('bEnabledDuringPN', 'b'), ('nStatsChannelPolarity', 'h'), ('lADCChannelNameIndex', 'i'), ('lADCUnitsIndex', 
'i'), ] TagInfoDescription = [ ('lTagTime', 'i'), ('sComment', '56s'), ('nTagType', 'h'), ('nVoiceTagNumber_or_AnnotationIndex', 'h'), ] DACInfoDescription = [ ('nDACNum', 'h'), ('nTelegraphDACScaleFactorEnable', 'h'), ('fInstrumentHoldingLevel', 'f'), ('fDACScaleFactor', 'f'), ('fDACHoldingLevel', 'f'), ('fDACCalibrationFactor', 'f'), ('fDACCalibrationOffset', 'f'), ('lDACChannelNameIndex', 'i'), ('lDACChannelUnitsIndex', 'i'), ('lDACFilePtr', 'i'), ('lDACFileNumEpisodes', 'i'), ('nWaveformEnable', 'h'), ('nWaveformSource', 'h'), ('nInterEpisodeLevel', 'h'), ('fDACFileScale', 'f'), ('fDACFileOffset', 'f'), ('lDACFileEpisodeNum', 'i'), ('nDACFileADCNum', 'h'), ('nConditEnable', 'h'), ('lConditNumPulses', 'i'), ('fBaselineDuration', 'f'), ('fBaselineLevel', 'f'), ('fStepDuration', 'f'), ('fStepLevel', 'f'), ('fPostTrainPeriod', 'f'), ('fPostTrainLevel', 'f'), ('nMembTestEnable', 'h'), ('nLeakSubtractType', 'h'), ('nPNPolarity', 'h'), ('fPNHoldingLevel', 'f'), ('nPNNumADCChannels', 'h'), ('nPNPosition', 'h'), ('nPNNumPulses', 'h'), ('fPNSettlingTime', 'f'), ('fPNInterpulse', 'f'), ('nLTPUsageOfDAC', 'h'), ('nLTPPresynapticPulses', 'h'), ('lDACFilePathIndex', 'i'), ('fMembTestPreSettlingTimeMS', 'f'), ('fMembTestPostSettlingTimeMS', 'f'), ('nLeakSubtractADCIndex', 'h'), ('sUnused', '124s'), ] EpochInfoPerDACDescription = [ ('nEpochNum', 'h'), ('nDACNum', 'h'), ('nEpochType', 'h'), ('fEpochInitLevel', 'f'), ('fEpochLevelInc', 'f'), ('lEpochInitDuration', 'i'), ('lEpochDurationInc', 'i'), ('lEpochPulsePeriod', 'i'), ('lEpochPulseWidth', 'i'), ('sUnused', '18s'), ] EpochInfoDescription = [ ('nEpochNum', 'h'), ('nDigitalValue', 'h'), ('nDigitalTrainValue', 'h'), ('nAlternateDigitalValue', 'h'), ('nAlternateDigitalTrainValue', 'h'), ('bEpochCompression', 'b'), ('sUnused', '21s'), ] neo-0.10.0/neo/rawio/baserawio.py0000644000076700000240000010574414077527451017235 0ustar andrewstaff00000000000000""" baserawio ====== Classes ------- BaseRawIO abstract class which should be 
overridden to write a RawIO.

RawIO is a low-level API in neo that provides fast access to the raw data.
When possible, all IOs should implement this level, following these guidelines:
    * internal use of memmap (or hdf5)
    * fast reading of the header (do not read the complete file)
    * the neo tree object is symmetric and logical: same channels/units/events
      along all blocks and segments.

For this level, datasets of recordings are mapped as follows:

A channel refers to a physical channel of recording in an experiment. It is
identified by a channel_id. Recordings from a channel consist of sections of
samples which are recorded contiguously in time; in other words, a section of
a channel has a specific sampling_rate, start_time, and length (and thus also
stop_time, which is the time of the sample which would lie one sampling
interval beyond the last sample present in that section).

A stream consists of a set of channels which all have the same structure of
their sections of recording and the same data type of samples. Each stream has
a unique stream_id and has a name, which does not need to be unique. A stream
thus has multiple channels which all have the same sampling rate and are on
the same clock, have the same sections with t_starts and lengths, and the same
data type for their samples. The samples in a stream can thus be retrieved as
a NumPy array, a chunk of samples.

Channels within a stream can be accessed by either their channel_id, which
must be unique within a stream, or by their channel_index, which is a 0-based
index to all channels within the stream. Note that a single channel of
recording may be represented within multiple streams, and such is the case for
RawIOs which may have both unfiltered and filtered or downsampled versions of
the signals from a single recording channel. In such a case, a single channel
and channel_id may be represented by a different channel_index within
different streams.
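The channel_id / channel_index distinction can be sketched with a small structured array (a standalone illustration; field names mirror a subset of _signal_channel_dtype below):

```python
import numpy as np

# Two channels in one stream: channel_index is the position in the
# per-stream array, while 'id' must be unique within the stream.
dtype = [('name', 'U64'), ('id', 'U64'), ('stream_id', 'U64')]
channels = np.array([('1a', '0', '0'), ('1b', '1', '0')], dtype=dtype)
index_of_id = {ch['id']: i for i, ch in enumerate(channels)}
print(index_of_id['1'])  # 1
```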
Lists of channel_indexes are often convenient to pass around a selection of
channels within a stream.

At the neo.io level, one AnalogSignal with multiple channels can be created
for each stream. Such an AnalogSignal may have multiple Segments, with each
segment containing the sections from each channel with the same t_start and
length. Such multiple Segments for a RawIO will have the same sampling rate.
It is thus possible to retrieve the t_start and length of the sections of the
channels for a Block and Segment of a stream.

So this handles **only** one simplified but very frequent case of dataset:
    * Only one set of channels for AnalogSignal, stable across Segments
    * Only one set of channels for SpikeTrain, stable across Segments
    * AnalogSignals all have the same sampling_rate across all Segments
    * t_start/t_stop are the same for many objects (SpikeTrain, Event)
      inside a Segment

Signal channels are handled in groups ("streams"). One stream will result at
the neo.io level in one AnalogSignal with multiple channels.

A helper class `neo.io.basefromrawio.BaseFromRaw` transforms a RawIO into a
neo legacy IO. In short, all "neo.rawio" classes are also "neo.io" with lazy
reading capability.

With this API the IO has a `header` attribute with the necessary keys. This
`header` attribute is filled in by the `_parse_header(...)` method.
See ExampleRawIO as an example.

BaseRawIO also implements an optional persistent cache system that can be used
by some RawIOs to avoid a very long parse_header() call. The idea is that some
variable or vector can be stored somewhere (near the file, /tmp, any path) for
use across multiple constructions of a RawIO for a given set of data.
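The caching idea can be sketched with the standard library (the real implementation relies on joblib; the file name and dict content here are purely illustrative):

```python
import os
import pickle
import tempfile

# Store a (mock) parsed header next to some path and reload it, so a
# second construction of the IO could skip an expensive parse_header().
header = {'nb_block': 1, 'nb_segment': [3]}
cache_path = os.path.join(tempfile.mkdtemp(), 'header_cache.pkl')
with open(cache_path, 'wb') as f:
    pickle.dump(header, f)
with open(cache_path, 'rb') as f:
    cached = pickle.load(f)
print(cached == header)  # True
```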
""" import logging import numpy as np import os import sys from neo import logging_handler try: import joblib HAVE_JOBLIB = True except ImportError: HAVE_JOBLIB = False possible_raw_modes = ['one-file', 'multi-file', 'one-dir', ] # 'multi-dir', 'url', 'other' error_header = 'Header is not read yet, do parse_header() first' _signal_stream_dtype = [ ('name', 'U64'), # not necessarily unique ('id', 'U64'), # must be unique ] _signal_channel_dtype = [ ('name', 'U64'), # not necessarily unique ('id', 'U64'), # must be unique ('sampling_rate', 'float64'), ('dtype', 'U16'), ('units', 'U64'), ('gain', 'float64'), ('offset', 'float64'), ('stream_id', 'U64'), ] # TODO for later: add t_start and length in _signal_channel_dtype # this would simplify all t_start/t_stop stuff for each RawIO class _common_sig_characteristics = ['sampling_rate', 'dtype', 'stream_id'] _spike_channel_dtype = [ ('name', 'U64'), ('id', 'U64'), # for waveform ('wf_units', 'U64'), ('wf_gain', 'float64'), ('wf_offset', 'float64'), ('wf_left_sweep', 'int64'), ('wf_sampling_rate', 'float64'), ] # in rawio event and epoch are handled the same way # except, that duration is `None` for events _event_channel_dtype = [ ('name', 'U64'), ('id', 'U64'), ('type', 'S5'), # epoch or event ] class BaseRawIO: """ Generic class to handle. """ name = 'BaseIO' description = '' extensions = [] rawmode = None # one key from possible_raw_modes def __init__(self, use_cache=False, cache_path='same_as_resource', **kargs): """ :TODO: Why multi-file would have a single filename is confusing here - shouldn't the name of this argument be filenames_list or filenames_base or similar? When rawmode=='one-file' kargs MUST contains 'filename' the filename When rawmode=='multi-file' kargs MUST contains 'filename' one of the filenames. When rawmode=='one-dir' kargs MUST contains 'dirname' the dirname. """ # create a logger for the IO class fullname = self.__class__.__module__ + '.' 
+ self.__class__.__name__ self.logger = logging.getLogger(fullname) # Create a logger for 'neo' and add a handler to it if it doesn't have one already. # (it will also not add one if the root logger has a handler) corename = self.__class__.__module__.split('.')[0] corelogger = logging.getLogger(corename) rootlogger = logging.getLogger() if not corelogger.handlers and not rootlogger.handlers: corelogger.addHandler(logging_handler) self.use_cache = use_cache if use_cache: assert HAVE_JOBLIB, 'You need to install joblib to use the cache' self.setup_cache(cache_path) else: self._cache = None self.header = None def parse_header(self): """ This must parse the file header to gather everything needed for fast access later on. This must create self.header['nb_block'] self.header['nb_segment'] self.header['signal_streams'] self.header['signal_channels'] self.header['spike_channels'] self.header['event_channels'] """ self._parse_header() self._check_stream_signal_channel_characteristics() def source_name(self): """Return the fancy name of the file source""" return self._source_name() def __repr__(self): txt = '{}: {}\n'.format(self.__class__.__name__, self.source_name()) if self.header is not None: nb_block = self.block_count() txt += 'nb_block: {}\n'.format(nb_block) nb_seg = [self.segment_count(i) for i in range(nb_block)] txt += 'nb_segment: {}\n'.format(nb_seg) # signal streams v = [s['name'] + f' (chans: {self.signal_channels_count(i)})' for i, s in enumerate(self.header['signal_streams'])] v = pprint_vector(v) txt += f'signal_streams: {v}\n' for k in ('signal_channels', 'spike_channels', 'event_channels'): v = pprint_vector(self.header[k]['name']) txt += f'{k}: {v}\n' return txt def _generate_minimal_annotations(self): """ Helper function that generates a nested dict for annotations.
Must be called after self.header is filled in, i.e. once these functions return the correct values: * block_count() * segment_count() * signal_streams_count() * signal_channels_count() * spike_channels_count() * event_channels_count() There are several sources and kinds of annotations that will be forwarded to the neo.io level and used to enrich neo objects: * annotations of objects common across segments * signal_streams > neo.AnalogSignal annotations * signal_channels > neo.AnalogSignal array_annotations split by stream * spike_channels > neo.SpikeTrain * event_channels > neo.Event and neo.Epoch * annotations that depend on the block_id/segment_id of the object: * nested in raw_annotations['blocks'][block_index]['segments'][seg_index]['signals'] Usage: after a call to this function we can do this to populate more annotations: raw_annotations['blocks'][block_index]['nickname'] = 'super block' raw_annotations['blocks'][block_index]['segments'][seg_index]['important_key'] = 'important value' raw_annotations['blocks'][block_index]['segments'][seg_index]['signals'][stream_index]['nickname'] = 'super signals stream' raw_annotations['blocks'][block_index]['segments'][seg_index]['signals'][stream_index]['__array_annotations__']['channels_quality'] = ['bad', 'good', 'medium', 'good'] raw_annotations['blocks'][block_index]['segments'][seg_index]['spikes'][spike_chan]['nickname'] = 'super neuron' raw_annotations['blocks'][block_index]['segments'][seg_index]['spikes'][spike_chan]['__array_annotations__']['spike_amplitudes'] = [-1.2, -10., ...] raw_annotations['blocks'][block_index]['segments'][seg_index]['events'][ev_chan]['nickname'] = 'super trigger' raw_annotations['blocks'][block_index]['segments'][seg_index]['events'][ev_chan]['__array_annotations__']['additional_label'] = ['A', 'B', 'A', 'C', ...] These annotations will be used directly in objects at the neo.io API level. Standard annotations like name/id/file_origin are already generated here.
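Stripped of the header machinery, the nested layout this helper produces looks like the following plain-dict sketch; the block/segment counts, file name and the 'nickname' key are illustrative.

```python
# Minimal sketch of the raw_annotations layout: one block, two segments,
# one signal stream per segment.
signal_annotations = {'name': 'stream0', '__array_annotations__': {}}

raw_annotations = {'blocks': []}
for block_index in range(1):
    block_d = {'file_origin': 'myfile.dat', 'segments': []}
    for seg_index in range(2):
        seg_d = {
            'file_origin': 'myfile.dat',
            'signals': [dict(signal_annotations)],  # one entry per stream
            'spikes': [],
            'events': [],
        }
        block_d['segments'].append(seg_d)
    raw_annotations['blocks'].append(block_d)

# Populating extra annotations, as in the docstring usage above:
raw_annotations['blocks'][0]['nickname'] = 'super block'
raw_annotations['blocks'][0]['segments'][1]['signals'][0]['nickname'] = 'super signals stream'
```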
""" signal_streams = self.header['signal_streams'] signal_channels = self.header['signal_channels'] spike_channels = self.header['spike_channels'] event_channels = self.header['event_channels'] # use for AnalogSignal.annotations and AnalogSignal.array_annotations signal_stream_annotations = [] for c in range(signal_streams.size): stream_id = signal_streams[c]['id'] channels = signal_channels[signal_channels['stream_id'] == stream_id] d = {} d['name'] = signal_streams['name'][c] d['stream_id'] = stream_id d['file_origin'] = self._source_name() d['__array_annotations__'] = {} for key in ('name', 'id'): values = np.array([channels[key][chan] for chan in range(channels.size)]) d['__array_annotations__']['channel_' + key + 's'] = values signal_stream_annotations.append(d) # used for SpikeTrain.annotations and SpikeTrain.array_annotations spike_annotations = [] for c in range(spike_channels.size): # use for Unit.annotations d = {} d['name'] = spike_channels['name'][c] d['id'] = spike_channels['id'][c] d['file_origin'] = self._source_name() d['__array_annotations__'] = {} spike_annotations.append(d) # used for Event/Epoch.annotations and Event/Epoch.array_annotations event_annotations = [] for c in range(event_channels.size): # not used in neo.io at the moment could useful one day d = {} d['name'] = event_channels['name'][c] d['id'] = event_channels['id'][c] d['file_origin'] = self._source_name() d['__array_annotations__'] = {} event_annotations.append(d) # duplicate this signal_stream_annotations/spike_annotations/event_annotations # across blocks and segments and create annotations ann = {} ann['blocks'] = [] for block_index in range(self.block_count()): d = {} d['file_origin'] = self.source_name() d['segments'] = [] ann['blocks'].append(d) for seg_index in range(self.segment_count(block_index)): d = {} d['file_origin'] = self.source_name() # copy nested d['signals'] = signal_stream_annotations.copy() d['spikes'] = spike_annotations.copy() d['events'] = 
event_annotations.copy() ann['blocks'][block_index]['segments'].append(d) self.raw_annotations = ann def _repr_annotations(self): txt = 'Raw annotations\n' for block_index in range(self.block_count()): bl_a = self.raw_annotations['blocks'][block_index] txt += '*Block {}\n'.format(block_index) for k, v in bl_a.items(): if k in ('segments',): continue txt += ' -{}: {}\n'.format(k, v) for seg_index in range(self.segment_count(block_index)): seg_a = bl_a['segments'][seg_index] txt += ' *Segment {}\n'.format(seg_index) for k, v in seg_a.items(): if k in ('signals', 'spikes', 'events',): continue txt += ' -{}: {}\n'.format(k, v) # annotations by channels for spikes/events/epochs for child in ('signals', 'events', 'spikes', ): if child == 'signals': n = self.header['signal_streams'].shape[0] else: n = self.header[child[:-1] + '_channels'].shape[0] for c in range(n): neo_name = {'signals': 'AnalogSignal', 'spikes': 'SpikeTrain', 'events': 'Event/Epoch'}[child] txt += f' *{neo_name} {c}\n' child_a = seg_a[child][c] for k, v in child_a.items(): if k == '__array_annotations__': continue txt += f' -{k}: {v}\n' for k, values in child_a['__array_annotations__'].items(): values = ', '.join([str(v) for v in values[:4]]) values = '[ ' + values + ' ...' txt += f' -{k}: {values}\n' return txt def print_annotations(self): """Print formatted raw_annotations""" print(self._repr_annotations()) def block_count(self): """return number of blocks""" return self.header['nb_block'] def segment_count(self, block_index): """return number of segments for a given block""" return self.header['nb_segment'][block_index] def signal_streams_count(self): """Return the number of signal streams. Same for all Blocks and Segments. """ return len(self.header['signal_streams']) def signal_channels_count(self, stream_index): """Return the number of signal channels for a given stream. This number is the same for all Blocks and Segments. 
""" stream_id = self.header['signal_streams'][stream_index]['id'] channels = self.header['signal_channels'] channels = channels[channels['stream_id'] == stream_id] return len(channels) def spike_channels_count(self): """Return the number of unit (aka spike) channels. Same for all Blocks and Segments. """ return len(self.header['spike_channels']) def event_channels_count(self): """Return the number of event/epoch channels. Same for all Blocks and Segments. """ return len(self.header['event_channels']) def segment_t_start(self, block_index, seg_index): """Global t_start of a Segment in s. Shared by all objects except for AnalogSignal. """ return self._segment_t_start(block_index, seg_index) def segment_t_stop(self, block_index, seg_index): """Global t_start of a Segment in s. Shared by all objects except for AnalogSignal. """ return self._segment_t_stop(block_index, seg_index) ### # signal and channel zone def _check_stream_signal_channel_characteristics(self): """ Check that all channels that belonging to the same stream_id have the same stream id and _common_sig_characteristics. These presently include: * sampling_rate * units * dtype """ signal_streams = self.header['signal_streams'] signal_channels = self.header['signal_channels'] if signal_streams.size > 0: assert signal_channels.size > 0, 'Signal stream but no signal_channels!!!' 
for stream_index in range(signal_streams.size): stream_id = signal_streams[stream_index]['id'] mask = signal_channels['stream_id'] == stream_id characteristics = signal_channels[mask][_common_sig_characteristics] unique_characteristics = np.unique(characteristics) assert unique_characteristics.size == 1, \ f'Some channels in stream_id {stream_id} ' \ f'do not have the same {_common_sig_characteristics}: {unique_characteristics}' # also check that channel_id is unique inside a stream channel_ids = signal_channels[mask]['id'] assert np.unique(channel_ids).size == channel_ids.size, \ f'signal_channels do not have unique ids for stream {stream_index}' self._several_channel_groups = signal_streams.size > 1 def channel_name_to_index(self, stream_index, channel_names): """ Inside a stream, transform channel_names to channel_indexes. Based on self.header['signal_channels']. channel_indexes are zero-based offsets within the stream. """ stream_id = self.header['signal_streams'][stream_index]['id'] mask = self.header['signal_channels']['stream_id'] == stream_id signal_channels = self.header['signal_channels'][mask] chan_names = list(signal_channels['name']) assert signal_channels.size == np.unique(chan_names).size, 'Channel names not unique' channel_indexes = np.array([chan_names.index(name) for name in channel_names]) return channel_indexes def channel_id_to_index(self, stream_index, channel_ids): """ Inside a stream, transform channel_ids to channel_indexes.
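Reduced to its core, the name/id-to-index translation performed by these two methods is a list lookup among the channels of one stream; the ids and names below are invented for illustration.

```python
# Channels of one hypothetical stream, in storage order
channel_ids = ['7', '3', '9']
channel_names = ['frontal', 'parietal', 'occipital']

def id_to_index(ids_wanted):
    # channel_index is the zero-based offset within the stream
    return [channel_ids.index(i) for i in ids_wanted]

def name_to_index(names_wanted):
    # only safe when names are unique within the stream,
    # hence the uniqueness assert in channel_name_to_index
    assert len(set(channel_names)) == len(channel_names)
    return [channel_names.index(n) for n in names_wanted]
```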
Based on self.header['signal_channels']. channel_indexes are zero-based offsets within the stream. """ # unique ids are already checked in _check_stream_signal_channel_characteristics stream_id = self.header['signal_streams'][stream_index]['id'] mask = self.header['signal_channels']['stream_id'] == stream_id signal_channels = self.header['signal_channels'][mask] chan_ids = list(signal_channels['id']) channel_indexes = np.array([chan_ids.index(chan_id) for chan_id in channel_ids]) return channel_indexes def _get_channel_indexes(self, stream_index, channel_indexes, channel_names, channel_ids): """ Select channel_indexes for a stream based on channel_indexes/channel_names/channel_ids, depending on which is not None. """ if channel_indexes is None and channel_names is not None: channel_indexes = self.channel_name_to_index(stream_index, channel_names) elif channel_indexes is None and channel_ids is not None: channel_indexes = self.channel_id_to_index(stream_index, channel_ids) return channel_indexes def _get_stream_index_from_arg(self, stream_index_arg): if stream_index_arg is None: assert self.header['signal_streams'].size == 1 stream_index = 0 else: assert 0 <= stream_index_arg < self.header['signal_streams'].size stream_index = stream_index_arg return stream_index def get_signal_size(self, block_index, seg_index, stream_index=None): """ Retrieve the length of a single section of the channels in a stream. :param block_index: :param seg_index: :param stream_index: :return: number of samples """ stream_index = self._get_stream_index_from_arg(stream_index) return self._get_signal_size(block_index, seg_index, stream_index) def get_signal_t_start(self, block_index, seg_index, stream_index=None): """ Retrieve the t_start of a single section of the channels in a stream.
:param block_index: :param seg_index: :param stream_index: :return: start time of section """ stream_index = self._get_stream_index_from_arg(stream_index) return self._get_signal_t_start(block_index, seg_index, stream_index) def get_signal_sampling_rate(self, stream_index=None): """ Retrieve the sampling rate for a stream and all channels in that stream. :param stream_index: :return: sampling rate """ stream_index = self._get_stream_index_from_arg(stream_index) stream_id = self.header['signal_streams'][stream_index]['id'] mask = self.header['signal_channels']['stream_id'] == stream_id signal_channels = self.header['signal_channels'][mask] sr = signal_channels[0]['sampling_rate'] return float(sr) def get_analogsignal_chunk(self, block_index=0, seg_index=0, i_start=None, i_stop=None, stream_index=None, channel_indexes=None, channel_names=None, channel_ids=None, prefer_slice=False): """ Return a chunk of raw signal as a NumPy array. Rows are samples; each column contains the samples from a section of a single channel of recording. The channels are chosen either by channel_names, if provided, otherwise by channel_ids, if provided, otherwise by channel_indexes, if provided, otherwise all channels are selected. :param block_index: block containing segment with section :param seg_index: segment containing section :param i_start: index of first sample to retrieve within section :param i_stop: index of one past last sample to retrieve within section :param stream_index: index of stream containing channels :param channel_indexes: list of indexes of channels to retrieve.
Can be a list, slice, np.array of int, or None :param channel_names: list of channel names to retrieve, or None :param channel_ids: list of channel ids to retrieve, or None :param prefer_slice: use slicing with lazy read if channel_indexes are provided as an np.ndarray and are contiguous :return: array with raw signal samples """ stream_index = self._get_stream_index_from_arg(stream_index) channel_indexes = self._get_channel_indexes(stream_index, channel_indexes, channel_names, channel_ids) # some checks on channel_indexes if isinstance(channel_indexes, list): channel_indexes = np.asarray(channel_indexes) if isinstance(channel_indexes, np.ndarray): if channel_indexes.dtype == 'bool': assert self.signal_channels_count(stream_index) == channel_indexes.size channel_indexes, = np.nonzero(channel_indexes) if prefer_slice and isinstance(channel_indexes, np.ndarray): # Check if channel_indexes are contiguous and transform to slice argument if possible. # This is useful for memmap or hdf5 where providing a slice causes a lazy read, # rather than a list of indexes that make a copy (like numpy.take()). if np.all(np.diff(channel_indexes) == 1): channel_indexes = slice(channel_indexes[0], channel_indexes[-1] + 1) raw_chunk = self._get_analogsignal_chunk( block_index, seg_index, i_start, i_stop, stream_index, channel_indexes) return raw_chunk def rescale_signal_raw_to_float(self, raw_signal, dtype='float32', stream_index=None, channel_indexes=None, channel_names=None, channel_ids=None): """ Rescale a chunk of raw signals which are provided as a NumPy array. These are normally returned by a call to get_analogsignal_chunk. The channels are specified either by channel_names, if provided, otherwise by channel_ids, if provided, otherwise by channel_indexes, if provided, otherwise all channels are selected. :param raw_signal: NumPy array of samples.
each column contains the samples for a single channel :param dtype: data type for returned scaled samples :param stream_index: index of stream containing channels :param channel_indexes: list of indexes of channels to retrieve or None :param channel_names: list of channel names to retrieve, or None :param channel_ids: list of channel ids to retrieve, or None :return: array of scaled sample values """ stream_index = self._get_stream_index_from_arg(stream_index) channel_indexes = self._get_channel_indexes(stream_index, channel_indexes, channel_names, channel_ids) stream_id = self.header['signal_streams'][stream_index]['id'] mask = self.header['signal_channels']['stream_id'] == stream_id channels = self.header['signal_channels'][mask] if channel_indexes is None: channel_indexes = slice(None) channels = channels[channel_indexes] float_signal = raw_signal.astype(dtype) if np.any(channels['gain'] != 1.): float_signal *= channels['gain'] if np.any(channels['offset'] != 0.): float_signal += channels['offset'] return float_signal # spiketrain and unit zone def spike_count(self, block_index=0, seg_index=0, spike_channel_index=0): return self._spike_count(block_index, seg_index, spike_channel_index) def get_spike_timestamps(self, block_index=0, seg_index=0, spike_channel_index=0, t_start=None, t_stop=None): """ The timestamp datatype is kept as close as possible to the native format. Sometimes float/int32/int64. Sometimes it is the index on the signal, but not always. The conversion to seconds or index_on_signal is done outside this method. t_start/t_stop are limits in seconds. """ timestamp = self._get_spike_timestamps(block_index, seg_index, spike_channel_index, t_start, t_stop) return timestamp def rescale_spike_timestamp(self, spike_timestamps, dtype='float64'): """ Rescale spike timestamps to seconds.
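The rescaling applied by rescale_signal_raw_to_float above is a per-channel affine transform, broadcast over the columns of the chunk; a minimal standalone sketch with made-up gains and offsets:

```python
import numpy as np

# A raw chunk: rows are samples, columns are channels (values invented)
raw = np.array([[100, 200], [300, 400]], dtype=np.int16)
gain = np.array([0.1, 0.5])      # per-channel scale, e.g. raw -> uV
offset = np.array([0.0, -10.0])  # per-channel offset in the same units

# Same steps as rescale_signal_raw_to_float: cast, then broadcast
# gain and offset across rows (one value per column/channel).
float_signal = raw.astype('float32')
float_signal *= gain
float_signal += offset
```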
""" return self._rescale_spike_timestamp(spike_timestamps, dtype) # spiketrain waveform zone def get_spike_raw_waveforms(self, block_index=0, seg_index=0, spike_channel_index=0, t_start=None, t_stop=None): wf = self._get_spike_raw_waveforms(block_index, seg_index, spike_channel_index, t_start, t_stop) return wf def rescale_waveforms_to_float(self, raw_waveforms, dtype='float32', spike_channel_index=0): wf_gain = self.header['spike_channels']['wf_gain'][spike_channel_index] wf_offset = self.header['spike_channels']['wf_offset'][spike_channel_index] float_waveforms = raw_waveforms.astype(dtype) if wf_gain != 1.: float_waveforms *= wf_gain if wf_offset != 0.: float_waveforms += wf_offset return float_waveforms # event and epoch zone def event_count(self, block_index=0, seg_index=0, event_channel_index=0): return self._event_count(block_index, seg_index, event_channel_index) def get_event_timestamps(self, block_index=0, seg_index=0, event_channel_index=0, t_start=None, t_stop=None): """ The timestamp datatype is as close to the format itself. Sometimes float/int32/int64. Sometimes it is the index on the signal but not always. The conversion to second or index_on_signal is done outside this method. t_start/t_sop are limits in seconds. returns timestamp labels durations """ timestamp, durations, labels = self._get_event_timestamps( block_index, seg_index, event_channel_index, t_start, t_stop) return timestamp, durations, labels def rescale_event_timestamp(self, event_timestamps, dtype='float64', event_channel_index=0): """ Rescale event timestamps to seconds. """ return self._rescale_event_timestamp(event_timestamps, dtype, event_channel_index) def rescale_epoch_duration(self, raw_duration, dtype='float64', event_channel_index=0): """ Rescale epoch raw duration to seconds. 
""" return self._rescale_epoch_duration(raw_duration, dtype, event_channel_index) def setup_cache(self, cache_path, **init_kargs): if self.rawmode in ('one-file', 'multi-file'): resource_name = self.filename elif self.rawmode == 'one-dir': resource_name = self.dirname else: raise (NotImplementedError) if cache_path == 'home': if sys.platform.startswith('win'): dirname = os.path.join(os.environ['APPDATA'], 'neo_rawio_cache') elif sys.platform.startswith('darwin'): dirname = '~/Library/Application Support/neo_rawio_cache' else: dirname = os.path.expanduser('~/.config/neo_rawio_cache') dirname = os.path.join(dirname, self.__class__.__name__) if not os.path.exists(dirname): os.makedirs(dirname) elif cache_path == 'same_as_resource': dirname = os.path.dirname(resource_name) else: assert os.path.exists(cache_path), \ 'cache_path do not exists use "home" or "same_as_resource" to make this auto' # the hash of the resource (dir of file) is done with filename+datetime # TODO make something more sophisticated when rawmode='one-dir' that use all # filename and datetime d = dict(ressource_name=resource_name, mtime=os.path.getmtime(resource_name)) hash = joblib.hash(d, hash_name='md5') # name is constructed from the real_n,ame and the hash name = '{}_{}'.format(os.path.basename(resource_name), hash) self.cache_filename = os.path.join(dirname, name) if os.path.exists(self.cache_filename): self.logger.warning('Use existing cache file {}'.format(self.cache_filename)) self._cache = joblib.load(self.cache_filename) else: self.logger.warning('Create cache file {}'.format(self.cache_filename)) self._cache = {} self.dump_cache() def add_in_cache(self, **kargs): assert self.use_cache self._cache.update(kargs) self.dump_cache() def dump_cache(self): assert self.use_cache joblib.dump(self._cache, self.cache_filename) ################## # Functions to be implemented in IO below here def _parse_header(self): raise (NotImplementedError) # must call # self._generate_empty_annotations() def 
_source_name(self): raise NotImplementedError def _segment_t_start(self, block_index, seg_index): raise NotImplementedError def _segment_t_stop(self, block_index, seg_index): raise NotImplementedError ### # signal and channel zone def _get_signal_size(self, block_index, seg_index, stream_index): """ Return the size (number of samples) of the signals of a given stream. All channels in a stream have the same size and t_start. """ raise NotImplementedError def _get_signal_t_start(self, block_index, seg_index, stream_index): """ Return the t_start of the signals of a given stream. All channels in a stream have the same size and t_start. """ raise NotImplementedError def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): """ Return the samples from a set of AnalogSignals indexed by stream_index and channel_indexes (local indexes within the stream). RETURNS ------- array of samples, with each requested channel in a column """ raise NotImplementedError ### # spiketrain and unit zone def _spike_count(self, block_index, seg_index, spike_channel_index): raise NotImplementedError def _get_spike_timestamps(self, block_index, seg_index, spike_channel_index, t_start, t_stop): raise NotImplementedError def _rescale_spike_timestamp(self, spike_timestamps, dtype): raise NotImplementedError ### # spike waveforms zone def _get_spike_raw_waveforms(self, block_index, seg_index, spike_channel_index, t_start, t_stop): raise NotImplementedError ### # event and epoch zone def _event_count(self, block_index, seg_index, event_channel_index): raise NotImplementedError def _get_event_timestamps(self, block_index, seg_index, event_channel_index, t_start, t_stop): raise NotImplementedError def _rescale_event_timestamp(self, event_timestamps, dtype): raise NotImplementedError def _rescale_epoch_duration(self, raw_duration, dtype): raise NotImplementedError def pprint_vector(vector, lim=8): vector =
np.asarray(vector) assert vector.ndim == 1 if len(vector) > lim: part1 = ', '.join(str(e) for e in vector[:lim // 2]) part2 = ', '.join(str(e) for e in vector[-lim // 2:]) txt = f"[{part1} ... {part2}]" else: part1 = ', '.join(str(e) for e in vector) txt = f"[{part1}]" return txt neo-0.10.0/neo/rawio/bci2000rawio.py """ BCI2000RawIO is a class to read BCI2000 .dat files. https://www.bci2000.org/mediawiki/index.php/Technical_Reference:BCI2000_File_Format """ from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype, _spike_channel_dtype, _event_channel_dtype) import numpy as np import re try: from urllib.parse import unquote except ImportError: from urllib import url2pathname as unquote class BCI2000RawIO(BaseRawIO): """ Class for reading data from a BCI2000 .dat file, either version 1.0 or 1.1 """ extensions = ['dat'] rawmode = 'one-file' def __init__(self, filename=''): BaseRawIO.__init__(self) self.filename = filename self._my_events = None def _source_name(self): return self.filename def _parse_header(self): file_info, state_defs, param_defs = parse_bci2000_header(self.filename) self.header = {} self.header['nb_block'] = 1 self.header['nb_segment'] = [1] # one unique stream signal_streams = np.array([('Signals', '0')], dtype=_signal_stream_dtype) self.header['signal_streams'] = signal_streams sig_channels = [] for chan_ix in range(file_info['SourceCh']): if 'ChannelNames' in param_defs and not np.isnan(param_defs['ChannelNames']['value']): ch_name = param_defs['ChannelNames']['value'][chan_ix] else: ch_name = 'ch' + str(chan_ix) chan_id = str(chan_ix + 1) sr = param_defs['SamplingRate']['value'] # Hz dtype = file_info['DataFormat'] units = 'uV' gain = param_defs['SourceChGain']['value'][chan_ix] if isinstance(gain, str): r = re.findall(r'(\d+)(\D+)', gain) # some files have strange units attached to the gain; # in that case they are ignored if len(r) == 1: gain = r[0][0]
gain = float(gain) offset = param_defs['SourceChOffset']['value'][chan_ix] if isinstance(offset, str): offset = float(offset) stream_id = '0' sig_channels.append((ch_name, chan_id, sr, dtype, units, gain, offset, stream_id)) self.header['signal_channels'] = np.array(sig_channels, dtype=_signal_channel_dtype) self.header['spike_channels'] = np.array([], dtype=_spike_channel_dtype) # creating event channel for each state variable event_channels = [] for st_ix, st_tup in enumerate(state_defs): event_channels.append((st_tup[0], 'ev_' + str(st_ix), 'event')) self.header['event_channels'] = np.array(event_channels, dtype=_event_channel_dtype) # Add annotations. # Generates basic annotations in nested dict self.raw_annotations self._generate_minimal_annotations() self.raw_annotations['blocks'][0].update({ 'file_info': file_info, 'param_defs': param_defs }) event_annotations = self.raw_annotations['blocks'][0]['segments'][0]['events'] for ev_ix, ev_dict in enumerate(event_annotations): ev_dict.update({ 'length': state_defs[ev_ix][1], 'startVal': state_defs[ev_ix][2], 'bytePos': state_defs[ev_ix][3], 'bitPos': state_defs[ev_ix][4] }) import time time_formats = ['%a %b %d %H:%M:%S %Y', '%Y-%m-%dT%H:%M:%S'] try: self._global_time = time.mktime(time.strptime(param_defs['StorageTime']['value'], time_formats[0])) except: self._global_time = time.mktime(time.strptime(param_defs['StorageTime']['value'], time_formats[1])) # Save variables to make it easier to load the binary data. self._read_info = { 'header_len': file_info['HeaderLen'], 'n_chans': file_info['SourceCh'], 'sample_dtype': { 'int16': np.int16, 'int32': np.int32, 'float32': np.float32}.get(file_info['DataFormat']), 'state_vec_len': file_info['StatevectorLen'], 'sampling_rate': param_defs['SamplingRate']['value'] } # Calculate the dtype for a single timestamp of data. 
This contains the data + statevector self._read_info['line_dtype'] = [ ('raw_vector', self._read_info['sample_dtype'], self._read_info['n_chans']), ('state_vector', np.uint8, self._read_info['state_vec_len'])] import os self._read_info['n_samps'] = int((os.stat(self.filename).st_size - file_info['HeaderLen']) / np.dtype(self._read_info['line_dtype']).itemsize) # memmap is fast so we can get the data ready for reading now. self._memmap = np.memmap(self.filename, dtype=self._read_info['line_dtype'], offset=self._read_info['header_len'], mode='r') def _segment_t_start(self, block_index, seg_index): return 0. def _segment_t_stop(self, block_index, seg_index): return self._read_info['n_samps'] / self._read_info['sampling_rate'] def _get_signal_size(self, block_index, seg_index, stream_index): assert stream_index == 0 return self._read_info['n_samps'] def _get_signal_t_start(self, block_index, seg_index, channel_indexes): return 0. def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): assert stream_index == 0 if i_start is None: i_start = 0 if i_stop is None: i_stop = self._read_info['n_samps'] assert (0 <= i_start <= self._read_info['n_samps']), "i_start outside data range" assert (0 <= i_stop <= self._read_info['n_samps']), "i_stop outside data range" if channel_indexes is None: channel_indexes = np.arange(self.header['signal_channels'].size) return self._memmap['raw_vector'][i_start:i_stop, channel_indexes] def _spike_count(self, block_index, seg_index, unit_index): return 0 def _get_spike_timestamps(self, block_index, seg_index, unit_index, t_start, t_stop): return None def _rescale_spike_timestamp(self, spike_timestamps, dtype): return None def _get_spike_raw_waveforms(self, block_index, seg_index, unit_index, t_start, t_stop): return None def _event_count(self, block_index, seg_index, event_channel_index): return self._event_arrays_list[event_channel_index][0].shape[0] def _get_event_timestamps(self, block_index, 
seg_index, event_channel_index, t_start, t_stop): # Return 3 numpy arrays: timestamp, durations, labels # durations must be None for 'event' # labels must have dtype='U' ts, dur, labels = self._event_arrays_list[event_channel_index] # seg_t_start = self._segment_t_start(block_index, seg_index) keep = np.ones(ts.shape, dtype=bool) if t_start is not None: keep = np.logical_and(keep, ts >= t_start) if t_stop is not None: keep = np.logical_and(keep, ts <= t_stop) return ts[keep], dur[keep], labels[keep] def _rescale_event_timestamp(self, event_timestamps, dtype, event_channel_index): event_times = (event_timestamps / float(self._read_info['sampling_rate'])).astype(dtype) return event_times def _rescale_epoch_duration(self, raw_duration, dtype, event_channel_index): durations = (raw_duration / float(self._read_info['sampling_rate'])).astype(dtype) return durations @property def _event_arrays_list(self): if self._my_events is None: event_annotations = self.raw_annotations['blocks'][0]['segments'][0]['events'] self._my_events = [] for event_channel_index in range(self.event_channels_count()): sd = event_annotations[event_channel_index] ev_times = durs = vals = np.array([]) # Skip these big but mostly useless (?) states. if sd['name'] not in ['SourceTime', 'StimulusTime']: # Determine which bytes of self._memmap['state_vector'] are needed. nbytes = int(np.ceil((sd['bitPos'] + sd['length']) / 8)) byte_slice = slice(sd['bytePos'], sd['bytePos'] + nbytes) # Then determine how to mask those bytes to get only the needed bits.
                    bit_mask = np.array([255] * nbytes, dtype=np.uint8)
                    bit_mask[0] &= 255 & (255 << sd['bitPos'])  # Fix the mask for the first byte
                    extra_bits = 8 - (sd['bitPos'] + sd['length']) % 8
                    bit_mask[-1] &= 255 & (255 >> extra_bits)  # Fix the mask for the last byte
                    # When converting to an int, we need to know
                    # which integer type it will become.
                    n_max_bytes = 1 << (nbytes - 1).bit_length()
                    view_type = {1: np.int8, 2: np.int16,
                                 4: np.int32, 8: np.int64}.get(n_max_bytes)
                    # Slice and mask the data
                    masked_byte_array = self._memmap['state_vector'][:, byte_slice] & bit_mask
                    # Convert byte array to a vector of ints:
                    # pad to give even columns then view as larger int type
                    state_vec = np.pad(masked_byte_array, (0, n_max_bytes - nbytes),
                                       'constant').view(dtype=view_type)
                    state_vec = np.right_shift(state_vec, sd['bitPos'])[:, 0]

                    # In the state vector, find 'events' whenever the state changes
                    st_ch_ix = np.where(np.hstack((0, np.diff(state_vec))) != 0)[0]  # event inds
                    if len(st_ch_ix) > 0:
                        ev_times = st_ch_ix
                        durs = np.asarray([None] * len(st_ch_ix))
                        # np.hstack((np.diff(st_ch_ix), len(state_vec) - st_ch_ix[-1]))
                        vals = np.char.mod('%d', state_vec[st_ch_ix])  # event val, string'd
                self._my_events.append([ev_times, durs, vals.astype('U')])
        return self._my_events


def parse_bci2000_header(filename):
    # Typically we want parameter values in Hz, seconds, or microvolts.
scales_dict = { 'hz': 1, 'khz': 1000, 'mhz': 1000000, 'uv': 1, 'muv': 1, 'mv': 1000, 'v': 1000000, 's': 1, 'us': 0.000001, 'mus': 0.000001, 'ms': 0.001, 'min': 60, 'sec': 1, 'usec': 0.000001, 'musec': 0.000001, 'msec': 0.001 } def rescale_value(param_val, data_type): unit_str = '' if param_val.lower().startswith('0x'): param_val = int(param_val, 16) elif data_type in ['int', 'float']: matches = re.match(r'(-*\d+)(\w*)', param_val) if matches is not None: # Can be None for % in def, min, max vals param_val, unit_str = matches.group(1), matches.group(2) param_val = int(param_val) if data_type == 'int' else float(param_val) if len(unit_str) > 0: param_val *= scales_dict.get(unit_str.lower(), 1) else: param_val = unquote(param_val) return param_val, unit_str def parse_dimensions(param_list): num_els = param_list.pop(0) # Sometimes the number of elements isn't given, # but the list of element labels is wrapped with {} if num_els == '{': num_els = param_list.index('}') el_labels = [unquote(param_list.pop(0)) for x in range(num_els)] param_list.pop(0) # Remove the '}' else: num_els = int(num_els) el_labels = [str(ix) for ix in range(num_els)] return num_els, el_labels with open(filename, 'rb') as fid: # Parse the file header (plain text) # The first line contains basic information which we store in a dictionary. temp = fid.readline().decode('utf8').split() keys = [k.rstrip('=') for k in temp[::2]] vals = temp[1::2] # Insert default version and format file_info = {'BCI2000V': 1.0, 'DataFormat': 'int16'} file_info.update(**dict(zip(keys, vals))) # From string to float/int file_info['BCI2000V'] = float(file_info['BCI2000V']) for k in ['HeaderLen', 'SourceCh', 'StatevectorLen']: if k in file_info: file_info[k] = int(file_info[k]) # The next lines contain state vector definitions. 
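The `rescale_value` helper above strips a unit suffix from a numeric parameter and multiplies by the matching scale factor. A standalone sketch of that pattern (the `rescale`/`scales` names and the reduced unit table are this example's own, not the module's):

```python
import re

# Reduced version of the scale table: target units are Hz, seconds, microvolts.
scales = {'hz': 1, 'khz': 1000, 's': 1, 'ms': 0.001, 'uv': 1, 'mv': 1000}


def rescale(param_val, data_type):
    """Parse an int/float parameter with an optional unit suffix."""
    unit = ''
    m = re.match(r'(-*\d+)(\w*)', param_val)
    if m is not None:
        param_val, unit = m.group(1), m.group(2)
    param_val = int(param_val) if data_type == 'int' else float(param_val)
    if unit:
        param_val *= scales.get(unit.lower(), 1)
    return param_val, unit
```

So `rescale('4khz', 'int')` yields `(4000, 'khz')` while a bare `'500'` passes through unscaled. Note that, like the original regex, this only captures the integer part of a value, so fractional inputs such as `'2.5ms'` are truncated.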
temp = fid.readline().decode('utf8').strip() assert temp == '[ State Vector Definition ]', \ "State definitions not found in header %s" % filename state_defs = [] state_def_dtype = [('name', 'a64'), ('length', int), ('startVal', int), ('bytePos', int), ('bitPos', int)] while True: temp = fid.readline().decode('utf8').strip() if len(temp) == 0 or temp[0] == '[': # Presence of '[' signifies new section. break temp = temp.split() state_defs.append((temp[0], int(temp[1]), int(temp[2]), int(temp[3]), int(temp[4]))) state_defs = np.array(state_defs, dtype=state_def_dtype) # The next lines contain parameter definitions. # There are many, and their formatting can be complicated. assert temp == '[ Parameter Definition ]', \ "Parameter definitions not found in header %s" % filename param_defs = {} while True: temp = fid.readline().decode('utf8') if fid.tell() >= file_info['HeaderLen']: # End of header. break if len(temp.strip()) == 0: continue # Skip empty lines # Everything after the '//' is a comment. temp = temp.strip().split('//', 1) param_def = {'comment': temp[1].strip() if len(temp) > 1 else ''} # Parse the parameter definition. Generally it is sec:cat:name dtype name param_value+ temp = temp[0].split() param_def.update( {'section_category_name': [unquote(x) for x in temp.pop(0).split(':')]}) dtype = temp.pop(0) param_name = unquote(temp.pop(0).rstrip('=')) # Parse the rest. Parse method depends on the dtype param_value, units = None, None if dtype in ('int', 'float'): param_value = temp.pop(0) if param_value == 'auto': param_value = np.nan units = '' else: param_value, units = rescale_value(param_value, dtype) elif dtype in ('string', 'variant'): param_value = unquote(temp.pop(0)) elif dtype.endswith('list'): # e.g., intlist, stringlist, floatlist, list dtype = dtype[:-4] # The list parameter values will begin with either # an int to specify the number of elements # or a list of labels surrounded by { }. 
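The two dimension-descriptor shapes just described (a plain element count, or a `{ ... }`-wrapped label list) are what `parse_dimensions` consumes. A self-contained sketch, with a hypothetical `parse_dims` standing in for it:

```python
def parse_dims(tokens):
    """Consume one dimension descriptor from the front of a token list."""
    head = tokens.pop(0)
    if head == '{':
        # Labelled form: { Ch1 Ch2 } ...
        n = tokens.index('}')
        labels = [tokens.pop(0) for _ in range(n)]
        tokens.pop(0)  # drop the closing '}'
    else:
        # Plain count form: labels default to stringified indices.
        n = int(head)
        labels = [str(i) for i in range(n)]
    return n, labels
```

Both forms leave the remaining tokens (the actual parameter values) at the front of the list, which is why the caller can keep popping from the same sequence.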
num_elements, element_labels = parse_dimensions(temp) # This will pop off info. param_def.update({'element_labels': element_labels}) pv_un = [rescale_value(pv, dtype) for pv in temp[:num_elements]] if len(pv_un) > 0: param_value, units = zip(*pv_un) else: param_value, units = np.nan, '' temp = temp[num_elements:] # Sometimes an element list will be a list of ints even though # the element_type is '' (str)... # This usually happens for known parameters, such as SourceChOffset, # that can be dealt with explicitly later. elif dtype.endswith('matrix'): dtype = dtype[:-6] # The parameter values will be preceded by two dimension descriptors, # first rows then columns. Each dimension might be described by an # int or a list of labels surrounded by {} n_rows, row_labels = parse_dimensions(temp) n_cols, col_labels = parse_dimensions(temp) param_def.update({'row_labels': row_labels, 'col_labels': col_labels}) param_value = [] units = [] for row_ix in range(n_rows): cols = [] for col_ix in range(n_cols): col_val, _units = rescale_value(temp[row_ix * n_cols + col_ix], dtype) cols.append(col_val) units.append(_units) param_value.append(cols) temp = temp[n_rows * n_cols:] param_def.update({ 'value': param_value, 'units': units, 'dtype': dtype }) # At the end of the parameter definition, we might get # default, min, max values for the parameter. temp.reverse() if len(temp): param_def.update({'max_val': rescale_value(temp.pop(0), dtype)}) if len(temp): param_def.update({'min_val': rescale_value(temp.pop(0), dtype)}) if len(temp): param_def.update({'default_val': rescale_value(temp.pop(0), dtype)}) param_defs.update({param_name: param_def}) # End parameter block # Outdent to close file return file_info, state_defs, param_defs neo-0.10.0/neo/rawio/blackrockrawio.py0000644000076700000240000023573514077554132020256 0ustar andrewstaff00000000000000""" Module for reading data from files in the Blackrock in raw format. 
This work is based on:
  * Chris Rodgers - first version
  * Michael Denker, Lyuba Zehl - second version
  * Samuel Garcia - third version
  * Lyuba Zehl, Michael Denker - fourth version
  * Samuel Garcia, Julia Sprenger - fifth version

This IO supports reading only.

This IO is able to read:
  * the nev file which contains spikes
  * ns1, ns2, .., ns6 files that contain signals at different sampling rates

This IO can handle the following Blackrock file specifications:
  * 2.1
  * 2.2
  * 2.3

The neural data channels are 1 - 128.
The analog inputs are 129 - 144 (129 - 137 AC coupled, 138 - 144 DC coupled).

The possible file extensions of the Cerebus system and their content:
    ns1: contains analog data; sampled at 500 Hz (+ digital filters)
    ns2: contains analog data; sampled at 1000 Hz (+ digital filters)
    ns3: contains analog data; sampled at 2000 Hz (+ digital filters)
    ns4: contains analog data; sampled at 10000 Hz (+ digital filters)
    ns5: contains analog data; sampled at 30000 Hz (+ digital filters)
    ns6: contains analog data; sampled at 30000 Hz (no digital filters)
    nev: contains spike- and event-data; sampled at 30000 Hz
    sif: contains institution and patient info (XML)
    ccf: contains Cerebus configurations

TODO:
  * videosync events (file spec 2.3)
  * tracking events (file spec 2.3)
  * buttontrigger events (file spec 2.3)
  * config events (file spec 2.3)
  * check left sweep settings of Blackrock
  * check nsx offsets (file spec 2.1)
  * add info of nev ext header (NSASEXEX) to non-neural events
    (file spec 2.1 and 2.2)
  * read sif file information
  * read ccf file information
  * fix reading of periodic sampling events (non-neural event type)
    (file spec 2.1 and 2.2)
"""

import datetime
import os
import re
import warnings

import numpy as np
import
quantities as pq

from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype,
                        _spike_channel_dtype, _event_channel_dtype)


class BlackrockRawIO(BaseRawIO):
    """
    Class for reading data in from a file set recorded by the Blackrock
    (Cerebus) recording system.

    Upon initialization, the class is linked to the available set of Blackrock
    files.

    Note: This routine will handle files according to specification 2.1, 2.2,
    and 2.3. Recording pauses that may occur in file specifications 2.2 and
    2.3 are automatically extracted and the data set is split into different
    segments.

    The Blackrock data format consists not of a single file, but a set of
    different files. This constructor associates itself with a set of files
    that constitute a common data set. By default, all files belonging to the
    file set have the same base name, but different extensions. However, by
    using the override parameters, individual filenames can be set.

    Args:
        filename (string):
            File name (without extension) of the set of Blackrock files to
            associate with. Any .nsX, .nev, .sif, or .ccf extensions are
            ignored when parsing this parameter.
        nsx_override (string):
            File name of the .nsX files (without extension). If None,
            filename is used.
            Default: None.
        nev_override (string):
            File name of the .nev file (without extension). If None,
            filename is used.
            Default: None.
        nsx_to_load (int, list, 'max', 'all' (=None)) default None:
            IDs of nsX file from which to load data, e.g., if set to 5 only
            data from the ns5 file are loaded. If 'all', then all nsX will be
            loaded. Contrary to previous versions of the IO (<0.7),
            nsx_to_load must be set at init, before parse_header().

    Examples:
        >>> reader = BlackrockRawIO(filename='FileSpec2.3001', nsx_to_load=5)
        >>> reader.parse_header()

            Inspect a file set consisting of FileSpec2.3001.ns5 and
            FileSpec2.3001.nev

        >>> print(reader)

            Display all information about signal channels, units, segment
            size....
""" extensions = ['ns' + str(_) for _ in range(1, 7)] extensions.extend(['nev', ]) # 'sif', 'ccf' not yet supported rawmode = 'multi-file' def __init__(self, filename=None, nsx_override=None, nev_override=None, nsx_to_load=None, verbose=False): """ Initialize the BlackrockIO class. """ BaseRawIO.__init__(self) self.filename = str(filename) # remove extension from base _filenames for ext in self.extensions: if self.filename.endswith(os.path.extsep + ext): self.filename = self.filename.replace(os.path.extsep + ext, '') self.nsx_to_load = nsx_to_load # remove extensions from overrides self._filenames = {} if nsx_override: self._filenames['nsx'] = re.sub( os.path.extsep + 'ns[1,2,3,4,5,6]$', '', nsx_override) else: self._filenames['nsx'] = self.filename if nev_override: self._filenames['nev'] = re.sub( os.path.extsep + 'nev$', '', nev_override) else: self._filenames['nev'] = self.filename # check which files are available self._avail_files = dict.fromkeys(self.extensions, False) self._avail_nsx = [] for ext in self.extensions: if ext.startswith('ns'): file2check = ''.join( [self._filenames['nsx'], os.path.extsep, ext]) else: file2check = ''.join( [self._filenames[ext], os.path.extsep, ext]) if os.path.exists(file2check): self._avail_files[ext] = True if ext.startswith('ns'): self._avail_nsx.append(int(ext[-1])) if not self._avail_files['nev'] and not self._avail_nsx: raise IOError("No Blackrock files found in specified path") # These dictionaries are used internally to map the file specification # revision of the nsx and nev files to one of the reading routines # NSX self.__nsx_header_reader = { '2.1': self.__read_nsx_header_variant_a, '2.2': self.__read_nsx_header_variant_b, '2.3': self.__read_nsx_header_variant_b} self.__nsx_dataheader_reader = { '2.1': self.__read_nsx_dataheader_variant_a, '2.2': self.__read_nsx_dataheader_variant_b, '2.3': self.__read_nsx_dataheader_variant_b} self.__nsx_data_reader = { '2.1': self.__read_nsx_data_variant_a, '2.2': 
self.__read_nsx_data_variant_b, '2.3': self.__read_nsx_data_variant_b} self.__nsx_params = { '2.1': self.__get_nsx_param_variant_a, '2.2': self.__get_nsx_param_variant_b, '2.3': self.__get_nsx_param_variant_b} # NEV self.__nev_header_reader = { '2.1': self.__read_nev_header_variant_a, '2.2': self.__read_nev_header_variant_b, '2.3': self.__read_nev_header_variant_c} self.__nev_data_reader = { '2.1': self.__read_nev_data_variant_a, '2.2': self.__read_nev_data_variant_a, '2.3': self.__read_nev_data_variant_b} self.__waveform_size = { '2.1': self.__get_waveform_size_variant_a, '2.2': self.__get_waveform_size_variant_a, '2.3': self.__get_waveform_size_variant_b} self.__channel_labels = { '2.1': self.__get_channel_labels_variant_a, '2.2': self.__get_channel_labels_variant_b, '2.3': self.__get_channel_labels_variant_b} self.__nonneural_evdicts = { '2.1': self.__get_nonneural_evdicts_variant_a, '2.2': self.__get_nonneural_evdicts_variant_a, '2.3': self.__get_nonneural_evdicts_variant_b} self.__comment_evdict = { '2.1': self.__get_comment_evdict_variant_a, '2.2': self.__get_comment_evdict_variant_a, '2.3': self.__get_comment_evdict_variant_a} def _parse_header(self): main_sampling_rate = 30000. 
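The `__nsx_*`/`__nev_*` tables above route each file-specification revision to a dedicated reader method, with revisions 2.2 and 2.3 sharing one implementation. A minimal sketch of that dispatch pattern (the `Reader` class and its method names are hypothetical):

```python
class Reader:
    def _read_variant_a(self):
        return 'variant a'   # layout used by spec 2.1

    def _read_variant_b(self):
        return 'variant b'   # layout shared by specs 2.2 and 2.3

    def __init__(self, spec):
        self.spec = spec
        # One dict per file kind maps the spec string to a bound method.
        self._header_reader = {
            '2.1': self._read_variant_a,
            '2.2': self._read_variant_b,
            '2.3': self._read_variant_b,
        }

    def read_header(self):
        return self._header_reader[self.spec]()
```

Keeping the mapping in a dict of bound methods avoids long if/elif chains and makes adding a future spec a one-line change.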
event_channels = [] spike_channels = [] signal_streams = [] signal_channels = [] # Step1 NEV file if self._avail_files['nev']: # Load file spec and headers of available # read nev file specification self.__nev_spec = self.__extract_nev_file_spec() # read nev headers self.__nev_basic_header, self.__nev_ext_header = \ self.__nev_header_reader[self.__nev_spec]() self.nev_data = self.__nev_data_reader[self.__nev_spec]() spikes, spike_segment_ids = self.nev_data['Spikes'] # scan all channel to get number of Unit spike_channels = [] self.internal_unit_ids = [] # pair of chan['packet_id'], spikes['unit_class_nb'] for i in range(len(self.__nev_ext_header[b'NEUEVWAV'])): channel_id = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'][i] chan_mask = (spikes['packet_id'] == channel_id) chan_spikes = spikes[chan_mask] all_unit_id = np.unique(chan_spikes['unit_class_nb']) for u, unit_id in enumerate(all_unit_id): self.internal_unit_ids.append((channel_id, unit_id)) name = "ch{}#{}".format(channel_id, unit_id) _id = "Unit {}".format(1000 * channel_id + unit_id) wf_gain = self.__nev_params('digitization_factor')[channel_id] / 1000. wf_offset = 0. 
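The `"Unit {}".format(1000 * channel_id + unit_id)` convention used above packs a channel/unit pair into a single integer id. A round-trip sketch (helper names are this example's own):

```python
def unit_label(channel_id, unit_id):
    # Same convention as the spike-channel header: 1000 * channel + unit.
    return 1000 * channel_id + unit_id


def split_unit_label(label):
    # Inverse mapping, valid while unit ids stay below 1000.
    return divmod(label, 1000)
```

This scheme works because Blackrock unit classification numbers fit comfortably below 1000 (0 is unclassified, 255 is noise).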
wf_units = 'uV' # TODO: Double check if this is the correct assumption (10 samples) # default value: threshold crossing after 10 samples of waveform wf_left_sweep = 10 wf_sampling_rate = main_sampling_rate spike_channels.append((name, _id, wf_units, wf_gain, wf_offset, wf_left_sweep, wf_sampling_rate)) # scan events # NonNeural: serial and digital input events_data, event_segment_ids = self.nev_data['NonNeural'] ev_dict = self.__nonneural_evdicts[self.__nev_spec](events_data) if 'Comments' in self.nev_data: comments_data, comments_segment_ids = self.nev_data['Comments'] ev_dict.update(self.__comment_evdict[self.__nev_spec](comments_data)) for ev_name in ev_dict: event_channels.append((ev_name, '', 'event')) # TODO: TrackingEvents # TODO: ButtonTrigger # TODO: VideoSync # Step2 NSX file # Load file spec and headers of available nsx files self.__nsx_spec = {} self.__nsx_basic_header = {} self.__nsx_ext_header = {} self.__nsx_data_header = {} for nsx_nb in self._avail_nsx: spec = self.__nsx_spec[nsx_nb] = self.__extract_nsx_file_spec(nsx_nb) # read nsx headers self.__nsx_basic_header[nsx_nb], self.__nsx_ext_header[nsx_nb] = \ self.__nsx_header_reader[spec](nsx_nb) # Read nsx data header(s) # for nsxdef get_analogsignal_shape(self, block_index, seg_index): self.__nsx_data_header[nsx_nb] = self.__nsx_dataheader_reader[spec](nsx_nb) # nsx_to_load can be either int, list, 'max', all' (aka None) # here make a list only if self.nsx_to_load is None or self.nsx_to_load == 'all': self.nsx_to_load = list(self._avail_nsx) elif self.nsx_to_load == 'max': if len(self._avail_nsx): self.nsx_to_load = [max(self._avail_nsx)] else: self.nsx_to_load = [] elif isinstance(self.nsx_to_load, int): self.nsx_to_load = [self.nsx_to_load] elif isinstance(self.nsx_to_load, list): pass else: raise(ValueError('nsx_to_load is wrong')) assert all(nsx_nb in self._avail_nsx for nsx_nb in self.nsx_to_load),\ 'nsx_to_load do not match available nsx list' # check that all files come from the same 
specification all_spec = [self.__nsx_spec[nsx_nb] for nsx_nb in self.nsx_to_load] if self._avail_files['nev']: all_spec.append(self.__nev_spec) assert all(all_spec[0] == spec for spec in all_spec), \ "Files don't have the same internal version" if len(self.nsx_to_load) > 0 and \ self.__nsx_spec[self.nsx_to_load[0]] == '2.1' and \ not self._avail_files['nev']: pass # Because rescaling to volts requires information from nev file (dig_factor) # Remove if raw loading becomes possible # raise IOError("For loading Blackrock file version 2.1 .nev files are required!") # This requires nsX to be parsed already # Needs to be called when no nsX are available as well in order to warn the user if self._avail_files['nev']: for nsx_nb in self.nsx_to_load: self.__match_nsx_and_nev_segment_ids(nsx_nb) self.nsx_datas = {} self.sig_sampling_rates = {} if len(self.nsx_to_load) > 0: for nsx_nb in self.nsx_to_load: spec = self.__nsx_spec[nsx_nb] self.nsx_datas[nsx_nb] = self.__nsx_data_reader[spec](nsx_nb) sr = float(main_sampling_rate / self.__nsx_basic_header[nsx_nb]['period']) self.sig_sampling_rates[nsx_nb] = sr if spec in ['2.2', '2.3']: ext_header = self.__nsx_ext_header[nsx_nb] elif spec == '2.1': ext_header = [] keys = ['labels', 'units', 'min_analog_val', 'max_analog_val', 'min_digital_val', 'max_digital_val'] params = self.__nsx_params[spec](nsx_nb) for i in range(len(params['labels'])): d = {} for key in keys: d[key] = params[key][i] ext_header.append(d) if len(ext_header) > 0: signal_streams.append((f'nsx{nsx_nb}', str(nsx_nb))) for i, chan in enumerate(ext_header): if spec in ['2.2', '2.3']: ch_name = chan['electrode_label'].decode() ch_id = str(chan['electrode_id']) units = chan['units'].decode() elif spec == '2.1': ch_name = chan['labels'] ch_id = str(self.__nsx_ext_header[nsx_nb][i]['electrode_id']) units = chan['units'] sig_dtype = 'int16' # max_analog_val/min_analog_val/max_digital_val/min_analog_val are int16!!!!! 
                    # dangerous situation, so cast everything to float
                    if np.isnan(float(chan['min_analog_val'])):
                        gain = 1
                        offset = 0
                    else:
                        gain = (float(chan['max_analog_val']) - float(chan['min_analog_val'])) / \
                               (float(chan['max_digital_val']) - float(chan['min_digital_val']))
                        offset = -float(chan['min_digital_val']) \
                            * gain + float(chan['min_analog_val'])
                    stream_id = str(nsx_nb)
                    signal_channels.append((ch_name, ch_id, sr, sig_dtype,
                                            units, gain, offset, stream_id))

        # check nb segment per nsx
        nb_segments_for_nsx = [len(self.nsx_datas[nsx_nb]) for nsx_nb in self.nsx_to_load]
        assert all(nb == nb_segments_for_nsx[0] for nb in nb_segments_for_nsx),\
            'Segment nb not consistent across nsX files'
        self._nb_segment = nb_segments_for_nsx[0]

        self.__delete_empty_segments()

        # t_start/t_stop for segment are given by nsx limits or nev limits
        self._sigs_t_starts = {nsx_nb: [] for nsx_nb in self.nsx_to_load}
        self._seg_t_starts, self._seg_t_stops = [], []
        for data_bl in range(self._nb_segment):
            t_stop = 0.
            for nsx_nb in self.nsx_to_load:
                length = self.nsx_datas[nsx_nb][data_bl].shape[0]
                if self.__nsx_data_header[nsx_nb] is None:
                    t_start = 0.
else: t_start = self.__nsx_data_header[nsx_nb][data_bl]['timestamp'] / \ self.__nsx_basic_header[nsx_nb]['timestamp_resolution'] t_stop = max(t_stop, t_start + length / self.sig_sampling_rates[nsx_nb]) self._sigs_t_starts[nsx_nb].append(t_start) if self._avail_files['nev']: max_nev_time = 0 for k, (data, ev_ids) in self.nev_data.items(): segment_mask = ev_ids == data_bl if data[segment_mask].size > 0: t = data[segment_mask][-1]['timestamp'] / self.__nev_basic_header[ 'timestamp_resolution'] max_nev_time = max(max_nev_time, t) if max_nev_time > t_stop: t_stop = max_nev_time min_nev_time = max_nev_time for k, (data, ev_ids) in self.nev_data.items(): segment_mask = ev_ids == data_bl if data[segment_mask].size > 0: t = data[segment_mask][0]['timestamp'] / self.__nev_basic_header[ 'timestamp_resolution'] min_nev_time = min(min_nev_time, t) if min_nev_time < t_start: t_start = min_nev_time self._seg_t_starts.append(t_start) self._seg_t_stops.append(float(t_stop)) else: # When only nev is available, only segments that are documented in nev can be detected max_nev_times = {} min_nev_times = {} # Find maximal and minimal time for each nev segment for k, (data, ev_ids) in self.nev_data.items(): for i in np.unique(ev_ids): mask = [ev_ids == i] curr_data = data[mask] if curr_data.size > 0: if max(curr_data['timestamp']) >= max_nev_times.get(i, 0): max_nev_times[i] = max(curr_data['timestamp']) if min(curr_data['timestamp']) <= min_nev_times.get(i, max_nev_times[i]): min_nev_times[i] = min(curr_data['timestamp']) # Calculate t_start and t_stop for each segment in seconds resolution = self.__nev_basic_header['timestamp_resolution'] self._seg_t_starts = [v / float(resolution) for k, v in sorted(min_nev_times.items())] self._seg_t_stops = [v / float(resolution) for k, v in sorted(max_nev_times.items())] self._nb_segment = len(self._seg_t_starts) self._sigs_t_starts = [None] * self._nb_segment # finalize header spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) 
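The gain/offset computation for nsX channels above is a linear map from digital int16 counts to analog units, `analog = digital * gain + offset`, pinned so the digital extremes land on the analog extremes. A quick check with made-up ranges:

```python
# Hypothetical nsX channel ranges (the analog span in some physical unit).
min_dig, max_dig = -32768, 32767
min_an, max_an = -8191.0, 8191.0

# Same formulas as the header-parsing code above.
gain = (max_an - min_an) / (max_dig - min_dig)
offset = -min_dig * gain + min_an

# The map sends min_dig -> min_an and max_dig -> max_an by construction.
lo = min_dig * gain + offset
hi = max_dig * gain + offset
```

Substituting `min_dig` gives `min_dig * gain - min_dig * gain + min_an = min_an`, which is why the offset is written that way.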
        event_channels = np.array(event_channels, dtype=_event_channel_dtype)
        signal_channels = np.array(signal_channels, dtype=_signal_channel_dtype)
        signal_streams = np.array(signal_streams, dtype=_signal_stream_dtype)

        self.header = {}
        self.header['nb_block'] = 1
        self.header['nb_segment'] = [self._nb_segment]
        self.header['signal_streams'] = signal_streams
        self.header['signal_channels'] = signal_channels
        self.header['spike_channels'] = spike_channels
        self.header['event_channels'] = event_channels

        rec_datetime = self.__nev_params('rec_datetime') if self._avail_files['nev'] else None

        # Put annotations at some places for compatibility
        # with previous BlackrockIO version
        self._generate_minimal_annotations()
        block_ann = self.raw_annotations['blocks'][0]
        block_ann['description'] = 'Block of data from Blackrock file set.'
        block_ann['file_origin'] = self.filename
        block_ann['name'] = "Blackrock Data Block"
        block_ann['rec_datetime'] = rec_datetime
        block_ann['avail_file_set'] = [k for k, v in self._avail_files.items() if v]
        block_ann['avail_nsx'] = self._avail_nsx
        block_ann['avail_nev'] = self._avail_files['nev']
        # 'sif' and 'ccf' files not yet supported
        # block_ann['avail_sif'] = self._avail_files['sif']
        # block_ann['avail_ccf'] = self._avail_files['ccf']
        block_ann['rec_pauses'] = False

        for seg_index in range(self._nb_segment):
            seg_ann = block_ann['segments'][seg_index]
            seg_ann['file_origin'] = self.filename
            seg_ann['name'] = "Segment {}".format(seg_index)
            seg_ann['description'] = "Segment containing data from t_start to t_stop"
            if seg_index == 0:
                # if more than 1 segment means pause
                # so datetime is valid only for seg_index=0
                seg_ann['rec_datetime'] = rec_datetime

            for c in range(signal_streams.size):
                sig_ann = seg_ann['signals'][c]
                stream_id = signal_streams['id'][c]
                nsx_nb = int(stream_id)
                sig_ann['description'] = f'AnalogSignal from nsx{nsx_nb}'
                sig_ann['file_origin'] = self._filenames['nsx'] + '.ns' + str(nsx_nb)
                sig_ann['nsx'] = nsx_nb
                # handle signal array annotations from
nsx header if self.__nsx_spec[nsx_nb] in ['2.2', '2.3'] and nsx_nb in self.__nsx_ext_header: mask = signal_channels['stream_id'] == stream_id channels = signal_channels[mask] nsx_header = self.__nsx_ext_header[nsx_nb] for key in ('physical_connector', 'connector_pin', 'hi_freq_corner', 'lo_freq_corner', 'hi_freq_order', 'lo_freq_order', 'hi_freq_type', 'lo_freq_type'): values = [] for chan_id in channels['id']: chan_id = int(chan_id) idx = list(nsx_header['electrode_id']).index(chan_id) values.append(nsx_header[key][idx]) values = np.array(values) sig_ann['__array_annotations__'][key] = values for c in range(spike_channels.size): st_ann = seg_ann['spikes'][c] channel_id, unit_id = self.internal_unit_ids[c] unit_tag = {0: 'unclassified', 255: 'noise'}.get(unit_id, str(unit_id)) st_ann['channel_id'] = channel_id st_ann['unit_id'] = unit_id st_ann['unit_tag'] = unit_tag st_ann['description'] = f'SpikeTrain channel_id: {channel_id}, unit_id: {unit_id}' st_ann['file_origin'] = self._filenames['nev'] + '.nev' if self._avail_files['nev']: ev_dict = self.__nonneural_evdicts[self.__nev_spec](events_data) if 'Comments' in self.nev_data: ev_dict.update(self.__comment_evdict[self.__nev_spec](comments_data)) color_codes = ["#{:08X}".format(code) for code in comments_data['color']] color_codes = np.array(color_codes, dtype='S9') for c in range(event_channels.size): # Next line makes ev_ann a reference to seg_ann['events'][c] ev_ann = seg_ann['events'][c] name = event_channels['name'][c] ev_ann['description'] = ev_dict[name]['desc'] ev_ann['file_origin'] = self._filenames['nev'] + '.nev' if name == 'comments': ev_ann['color_codes'] = color_codes def _source_name(self): return self.filename def _segment_t_start(self, block_index, seg_index): return self._seg_t_starts[seg_index] def _segment_t_stop(self, block_index, seg_index): return self._seg_t_stops[seg_index] def _get_signal_size(self, block_index, seg_index, stream_index): stream_id = 
self.header['signal_streams'][stream_index]['id'] nsx_nb = int(stream_id) memmap_data = self.nsx_datas[nsx_nb][seg_index] return memmap_data.shape[0] def _get_signal_t_start(self, block_index, seg_index, stream_index): stream_id = self.header['signal_streams'][stream_index]['id'] nsx_nb = int(stream_id) return self._sigs_t_starts[nsx_nb][seg_index] def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): stream_id = self.header['signal_streams'][stream_index]['id'] nsx_nb = int(stream_id) memmap_data = self.nsx_datas[nsx_nb][seg_index] if channel_indexes is None: channel_indexes = slice(None) sig_chunk = memmap_data[i_start:i_stop, channel_indexes] return sig_chunk def _spike_count(self, block_index, seg_index, unit_index): channel_id, unit_id = self.internal_unit_ids[unit_index] all_spikes = self.nev_data['Spikes'][0] mask = (all_spikes['packet_id'] == channel_id) & (all_spikes['unit_class_nb'] == unit_id) if self._nb_segment == 1: # very fast nb = int(np.sum(mask)) else: # must clip in time time range timestamp = all_spikes[mask]['timestamp'] sl = self._get_timestamp_slice(timestamp, seg_index, None, None) timestamp = timestamp[sl] nb = timestamp.size return nb def _get_spike_timestamps(self, block_index, seg_index, unit_index, t_start, t_stop): channel_id, unit_id = self.internal_unit_ids[unit_index] all_spikes, event_segment_ids = self.nev_data['Spikes'] # select by channel_id and unit_id mask = ((all_spikes['packet_id'] == channel_id) & (all_spikes['unit_class_nb'] == unit_id) & (event_segment_ids == seg_index)) unit_spikes = all_spikes[mask] timestamp = unit_spikes['timestamp'] sl = self._get_timestamp_slice(timestamp, seg_index, t_start, t_stop) timestamp = timestamp[sl] return timestamp def _get_timestamp_slice(self, timestamp, seg_index, t_start, t_stop): if self._nb_segment > 1: # we must clip event in seg time limits if t_start is None: t_start = self._seg_t_starts[seg_index] if t_stop is None: t_stop = 
self._seg_t_stops[seg_index] if t_start is None: ind_start = None else: ts = np.math.ceil(t_start * self.__nev_basic_header['timestamp_resolution']) ind_start = np.searchsorted(timestamp, ts) if t_stop is None: ind_stop = None else: ts = int(t_stop * self.__nev_basic_header['timestamp_resolution']) ind_stop = np.searchsorted(timestamp, ts) # +1 sl = slice(ind_start, ind_stop) return sl def _rescale_spike_timestamp(self, spike_timestamps, dtype): spike_times = spike_timestamps.astype(dtype) spike_times /= self.__nev_basic_header['timestamp_resolution'] return spike_times def _get_spike_raw_waveforms(self, block_index, seg_index, unit_index, t_start, t_stop): channel_id, unit_id = self.internal_unit_ids[unit_index] all_spikes, event_segment_ids = self.nev_data['Spikes'] mask = ((all_spikes['packet_id'] == channel_id) & (all_spikes['unit_class_nb'] == unit_id) & (event_segment_ids == seg_index)) unit_spikes = all_spikes[mask] wf_dtype = self.__nev_params('waveform_dtypes')[channel_id] wf_size = self.__nev_params('waveform_size')[channel_id] waveforms = unit_spikes['waveform'].flatten().view(wf_dtype) waveforms = waveforms.reshape(int(unit_spikes.size), 1, int(wf_size)) timestamp = unit_spikes['timestamp'] sl = self._get_timestamp_slice(timestamp, seg_index, t_start, t_stop) waveforms = waveforms[sl] return waveforms def _event_count(self, block_index, seg_index, event_channel_index): name = self.header['event_channels']['name'][event_channel_index] if name == 'comments': events_data, event_segment_ids = self.nev_data['Comments'] ev_dict = self.__comment_evdict[self.__nev_spec](events_data)[name] else: events_data, event_segment_ids = self.nev_data['NonNeural'] ev_dict = self.__nonneural_evdicts[self.__nev_spec](events_data)[name] mask = ev_dict['mask'] & (event_segment_ids == seg_index) if self._nb_segment == 1: # very fast nb = int(np.sum(mask)) else: # must clip in time time range timestamp = events_data[ev_dict['mask']]['timestamp'] sl = 
self._get_timestamp_slice(timestamp, seg_index, None, None) timestamp = timestamp[sl] nb = timestamp.size return nb def _get_event_timestamps(self, block_index, seg_index, event_channel_index, t_start, t_stop): name = self.header['event_channels']['name'][event_channel_index] if name == 'comments': events_data, event_segment_ids = self.nev_data['Comments'] ev_dict = self.__comment_evdict[self.__nev_spec](events_data)[name] # If immediate decoding is desired: encoding = {0: 'latin_1', 1: 'utf_16', 255: 'latin_1'} labels = [data[ev_dict['field']].decode( encoding[data['char_set']]) for data in events_data] labels = np.array(labels, dtype='U') else: events_data, event_segment_ids = self.nev_data['NonNeural'] ev_dict = self.__nonneural_evdicts[self.__nev_spec](events_data)[name] labels = events_data[ev_dict['field']].astype('U') mask = ev_dict['mask'] & (event_segment_ids == seg_index) timestamp = events_data[mask]['timestamp'] labels = labels[mask] # time clip sl = self._get_timestamp_slice(timestamp, seg_index, t_start, t_stop) timestamp = timestamp[sl] labels = labels[sl] durations = None return timestamp, durations, labels def _rescale_event_timestamp(self, event_timestamps, dtype, event_channel_index): ev_times = event_timestamps.astype(dtype) ev_times /= self.__nev_basic_header['timestamp_resolution'] return ev_times ################################################### ################################################### # Above here code from Lyuba Zehl, Michael Denker # coming from previous BlackrockIO def __extract_nsx_file_spec(self, nsx_nb): """ Extract file specification from an .nsx file. """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # Header structure of files specification 2.2 and higher. For files 2.1 # and lower, the entries ver_major and ver_minor are not supported. 
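The `__extract_nsx_file_spec`/`__extract_nev_file_spec` routines that follow detect the file-spec revision from an 8-byte magic string plus two version bytes. A self-contained sketch against an in-memory buffer standing in for the first bytes of an .nsx file (the byte content here is made up):

```python
import io
import numpy as np

dt0 = [('file_id', 'S8'), ('ver_major', 'uint8'), ('ver_minor', 'uint8')]

# Hypothetical first 10 bytes of a spec-2.3 .nsx file.
buf = io.BytesIO(b'NEURALCD' + bytes([2, 3]))
file_id = np.frombuffer(buf.read(), count=1, dtype=dt0)[0]

if file_id['file_id'] == b'NEURALSG':
    # 2.1 files have no version bytes, so the magic string alone decides.
    spec = '2.1'
elif file_id['file_id'] == b'NEURALCD':
    spec = '{}.{}'.format(file_id['ver_major'], file_id['ver_minor'])
else:
    raise IOError('Unsupported NSX file type.')
```

The actual reader uses `np.fromfile` on the file path; `np.frombuffer` is used here only to keep the sketch self-contained.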
dt0 = [ ('file_id', 'S8'), ('ver_major', 'uint8'), ('ver_minor', 'uint8')] nsx_file_id = np.fromfile(filename, count=1, dtype=dt0)[0] if nsx_file_id['file_id'].decode() == 'NEURALSG': spec = '2.1' elif nsx_file_id['file_id'].decode() == 'NEURALCD': spec = '{}.{}'.format( nsx_file_id['ver_major'], nsx_file_id['ver_minor']) else: raise IOError('Unsupported NSX file type.') return spec def __extract_nev_file_spec(self): """ Extract file specification from an .nev file """ filename = '.'.join([self._filenames['nev'], 'nev']) # Header structure of file specifications 2.2 and higher. For files 2.1 # and lower, the entries ver_major and ver_minor are not supported. dt0 = [ ('file_id', 'S8'), ('ver_major', 'uint8'), ('ver_minor', 'uint8')] nev_file_id = np.fromfile(filename, count=1, dtype=dt0)[0] if nev_file_id['file_id'].decode() == 'NEURALEV': spec = '{}.{}'.format( nev_file_id['ver_major'], nev_file_id['ver_minor']) else: raise IOError('NEV file type {} is not supported'.format( nev_file_id['file_id'])) return spec def __read_nsx_header_variant_a(self, nsx_nb): """ Extract nsx header information from a 2.1 .nsx file """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # basic header (file_id: NEURALSG) dt0 = [ ('file_id', 'S8'), # label of the sampling group (e.g.
"1kS/s" or "LFP Low") ('label', 'S16'), # number of 1/30000 seconds between data points # (e.g., if sampling rate "1 kS/s", period equals "30") ('period', 'uint32'), ('channel_count', 'uint32')] nsx_basic_header = np.fromfile(filename, count=1, dtype=dt0)[0] # "extended" header (last field of file_id: NEURALCD) # (to facilitate compatibility with higher file specs) offset_dt0 = np.dtype(dt0).itemsize shape = nsx_basic_header['channel_count'] # originally called channel_id in Blackrock user manual # (to facilitate compatibility with higher file specs) dt1 = [('electrode_id', 'uint32')] nsx_ext_header = np.memmap( filename, shape=shape, offset=offset_dt0, dtype=dt1, mode='r') return nsx_basic_header, nsx_ext_header def __read_nsx_header_variant_b(self, nsx_nb): """ Extract nsx header information from a 2.2 or 2.3 .nsx file """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # basic header (file_id: NEURALCD) dt0 = [ ('file_id', 'S8'), # file specification split into major and minor version number ('ver_major', 'uint8'), ('ver_minor', 'uint8'), # bytes of basic & extended header ('bytes_in_headers', 'uint32'), # label of the sampling group (e.g., "1 kS/s" or "LFP low") ('label', 'S16'), ('comment', 'S256'), ('period', 'uint32'), ('timestamp_resolution', 'uint32'), # time origin: 2byte uint16 values for ... 
('year', 'uint16'), ('month', 'uint16'), ('weekday', 'uint16'), ('day', 'uint16'), ('hour', 'uint16'), ('minute', 'uint16'), ('second', 'uint16'), ('millisecond', 'uint16'), # channel_count matches the number of extended headers ('channel_count', 'uint32')] nsx_basic_header = np.fromfile(filename, count=1, dtype=dt0)[0] # extended header (type: CC) offset_dt0 = np.dtype(dt0).itemsize shape = nsx_basic_header['channel_count'] dt1 = [ ('type', 'S2'), ('electrode_id', 'uint16'), ('electrode_label', 'S16'), # used front-end amplifier bank (e.g., A, B, C, D) ('physical_connector', 'uint8'), # used connector pin (e.g., 1-37 on bank A, B, C or D) ('connector_pin', 'uint8'), # digital and analog value ranges of the signal ('min_digital_val', 'int16'), ('max_digital_val', 'int16'), ('min_analog_val', 'int16'), ('max_analog_val', 'int16'), # units of the analog range values ("mV" or "uV") ('units', 'S16'), # filter settings used to create nsx from source signal ('hi_freq_corner', 'uint32'), ('hi_freq_order', 'uint32'), ('hi_freq_type', 'uint16'), # 0=None, 1=Butterworth ('lo_freq_corner', 'uint32'), ('lo_freq_order', 'uint32'), ('lo_freq_type', 'uint16')] # 0=None, 1=Butterworth nsx_ext_header = np.memmap( filename, shape=shape, offset=offset_dt0, dtype=dt1, mode='r') return nsx_basic_header, nsx_ext_header def __read_nsx_dataheader(self, nsx_nb, offset): """ Reads the data header following the given offset of an nsx file. """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # dtypes data header dt2 = [ ('header', 'uint8'), ('timestamp', 'uint32'), ('nb_data_points', 'uint32')] return np.memmap(filename, dtype=dt2, shape=1, offset=offset, mode='r')[0] def __read_nsx_dataheader_variant_a( self, nsx_nb, filesize=None, offset=None): """ Returns None for the nsx data header of file spec 2.1. Introduced to facilitate compatibility with higher file specs.
""" return None def __read_nsx_dataheader_variant_b( self, nsx_nb, filesize=None, offset=None, ): """ Reads the nsx data header for each data block following the offset of file spec 2.2 and 2.3. """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) filesize = self.__get_file_size(filename) data_header = {} index = 0 if offset is None: offset = self.__nsx_basic_header[nsx_nb]['bytes_in_headers'] while offset < filesize: dh = self.__read_nsx_dataheader(nsx_nb, offset) data_header[index] = { 'header': dh['header'], 'timestamp': dh['timestamp'], 'nb_data_points': dh['nb_data_points'], 'offset_to_data_block': offset + dh.dtype.itemsize} # data size = number of data points * (2bytes * number of channels) # use of `int` avoids overflow problem data_size = int(dh['nb_data_points']) *\ int(self.__nsx_basic_header[nsx_nb]['channel_count']) * 2 # define new offset (to possible next data block) offset = data_header[index]['offset_to_data_block'] + data_size index += 1 return data_header def __read_nsx_data_variant_a(self, nsx_nb): """ Extract nsx data from a 2.1 .nsx file """ filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) # get shape of data shape = ( self.__nsx_params['2.1'](nsx_nb)['nb_data_points'], self.__nsx_basic_header[nsx_nb]['channel_count']) offset = self.__nsx_params['2.1'](nsx_nb)['bytes_in_headers'] # read nsx data # store as dict for compatibility with higher file specs data = {0: np.memmap( filename, dtype='int16', shape=shape, offset=offset, mode='r')} return data def __read_nsx_data_variant_b(self, nsx_nb): """ Extract nsx data (blocks) from a 2.2 or 2.3 .nsx file. Blocks can arise if the recording was paused by the user. 
""" filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) data = {} for data_bl in self.__nsx_data_header[nsx_nb].keys(): # get shape and offset of data shape = ( self.__nsx_data_header[nsx_nb][data_bl]['nb_data_points'], self.__nsx_basic_header[nsx_nb]['channel_count']) offset = \ self.__nsx_data_header[nsx_nb][data_bl]['offset_to_data_block'] # read data data[data_bl] = np.memmap( filename, dtype='int16', shape=shape, offset=offset, mode='r') return data def __read_nev_header(self, ext_header_variants): """ Extract nev header information from a of specific .nsx header variant """ filename = '.'.join([self._filenames['nev'], 'nev']) # basic header dt0 = [ # Set to "NEURALEV" ('file_type_id', 'S8'), ('ver_major', 'uint8'), ('ver_minor', 'uint8'), # Flags ('additionnal_flags', 'uint16'), # File index of first data sample ('bytes_in_headers', 'uint32'), # Number of bytes per data packet (sample) ('bytes_in_data_packets', 'uint32'), # Time resolution of time stamps in Hz ('timestamp_resolution', 'uint32'), # Sampling frequency of waveforms in Hz ('sample_resolution', 'uint32'), ('year', 'uint16'), ('month', 'uint16'), ('weekday', 'uint16'), ('day', 'uint16'), ('hour', 'uint16'), ('minute', 'uint16'), ('second', 'uint16'), ('millisecond', 'uint16'), ('application_to_create_file', 'S32'), ('comment_field', 'S256'), # Number of extended headers ('nb_ext_headers', 'uint32')] nev_basic_header = np.fromfile(filename, count=1, dtype=dt0)[0] # extended header # this consist in N block with code 8bytes + 24 data bytes # the data bytes depend on the code and need to be converted # cafilename_nsx, segse by case shape = nev_basic_header['nb_ext_headers'] offset_dt0 = np.dtype(dt0).itemsize # This is the common structure of the beginning of extended headers dt1 = [ ('packet_id', 'S8'), ('info_field', 'S24')] raw_ext_header = np.memmap( filename, offset=offset_dt0, dtype=dt1, shape=shape, mode='r') nev_ext_header = {} for packet_id in ext_header_variants.keys(): mask = 
(raw_ext_header['packet_id'] == packet_id) dt2 = self.__nev_ext_header_types()[packet_id][ ext_header_variants[packet_id]] nev_ext_header[packet_id] = raw_ext_header.view(dt2)[mask] return nev_basic_header, nev_ext_header def __read_nev_header_variant_a(self): """ Extract nev header information from a 2.1 .nev file """ ext_header_variants = { b'NEUEVWAV': 'a', b'ARRAYNME': 'a', b'ECOMMENT': 'a', b'CCOMMENT': 'a', b'MAPFILE': 'a', b'NSASEXEV': 'a'} return self.__read_nev_header(ext_header_variants) def __read_nev_header_variant_b(self): """ Extract nev header information from a 2.2 .nev file """ ext_header_variants = { b'NEUEVWAV': 'b', b'ARRAYNME': 'a', b'ECOMMENT': 'a', b'CCOMMENT': 'a', b'MAPFILE': 'a', b'NEUEVLBL': 'a', b'NEUEVFLT': 'a', b'DIGLABEL': 'a', b'NSASEXEV': 'a'} return self.__read_nev_header(ext_header_variants) def __read_nev_header_variant_c(self): """ Extract nev header information from a 2.3 .nev file """ ext_header_variants = { b'NEUEVWAV': 'b', b'ARRAYNME': 'a', b'ECOMMENT': 'a', b'CCOMMENT': 'a', b'MAPFILE': 'a', b'NEUEVLBL': 'a', b'NEUEVFLT': 'a', b'DIGLABEL': 'a', b'VIDEOSYN': 'a', b'TRACKOBJ': 'a'} return self.__read_nev_header(ext_header_variants) def __read_nev_data(self, nev_data_masks, nev_data_types): """ Extract nev data from a 2.1 or 2.2 .nev file """ filename = '.'.join([self._filenames['nev'], 'nev']) data_size = self.__nev_basic_header['bytes_in_data_packets'] header_size = self.__nev_basic_header['bytes_in_headers'] # read all raw data packets and markers dt0 = [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('value', 'S{}'.format(data_size - 6))] raw_data = np.memmap(filename, offset=header_size, dtype=dt0, mode='r') masks = self.__nev_data_masks(raw_data['packet_id']) types = self.__nev_data_types(data_size) event_segment_ids = self.__get_event_segment_ids(raw_data, masks, nev_data_masks) data = {} for k, v in nev_data_masks.items(): mask = masks[k][v] data[k] = (raw_data.view(types[k][nev_data_types[k]])[mask], 
event_segment_ids[mask]) return data def __get_reset_event_mask(self, raw_event_data, masks, nev_data_masks): """ Extract mask for reset comment events in 2.3 .nev file """ restart_mask = np.logical_and(masks['Comments'][nev_data_masks['Comments']], raw_event_data['value'] == b'\x00\x00\x00\x00\x00\x00critical load restart') # TODO: Fix hardcoded number of bytes return restart_mask def __get_event_segment_ids(self, raw_event_data, masks, nev_data_masks): """ Construct array of corresponding segment ids for each event for nev version 2.3 """ if self.__nev_spec in ['2.1', '2.2']: # No pause or reset mechanism present for file version 2.1 and 2.2 return np.zeros(len(raw_event_data), dtype=int) elif self.__nev_spec == '2.3': reset_ev_mask = self.__get_reset_event_mask(raw_event_data, masks, nev_data_masks) reset_ev_ids = np.where(reset_ev_mask)[0] # consistency check for monotonically increasing timestamps # explicitly converting to int to allow for negative diff values jump_ids = \ np.where(np.diff(np.asarray(raw_event_data['timestamp'], dtype=int)) < 0)[0] + 1 overlap = np.in1d(jump_ids, reset_ev_ids) if not all(overlap): # additional resets occurred without a reset event being stored additional_ids = jump_ids[np.invert(overlap)] warnings.warn('Detected {} undocumented segments within ' 'nev data after timestamps {}.' ''.format(len(additional_ids), additional_ids)) reset_ev_ids = sorted(np.unique(np.concatenate((reset_ev_ids, jump_ids)))) event_segment_ids = np.zeros(len(raw_event_data), dtype=int) for reset_event_id in reset_ev_ids: event_segment_ids[reset_event_id:] += 1 self._nb_segment_nev = len(reset_ev_ids) + 1 return event_segment_ids def __match_nsx_and_nev_segment_ids(self, nsx_nb): """ Ensure matching ids of segments detected in nsx and nev file for version 2.3 """ # NSX required for matching, if not available, warn the user if not self._avail_nsx: warnings.warn("No nsX available so it cannot be checked whether " "the segments in nev are all correct. 
Most importantly, " "recording pauses will not be detected", UserWarning) return # Only needs to be done for nev version 2.3 if self.__nev_spec == '2.3': nsx_offset = self.__nsx_data_header[nsx_nb][0]['timestamp'] # Multiples of 1/30.000s that pass between two nsX samples nsx_period = self.__nsx_basic_header[nsx_nb]['period'] # NSX segments needed as dict and list nonempty_nsx_segments = {} list_nonempty_nsx_segments = [] # Counts how many segments CAN be created from nev nb_possible_nev_segments = self._nb_segment_nev # Nonempty segments are those containing at least 2 samples # These have to be able to be mapped to nev for k, v in sorted(self.__nsx_data_header[nsx_nb].items()): if v['nb_data_points'] > 1: nonempty_nsx_segments[k] = v list_nonempty_nsx_segments.append(v) # Account for paused segments # This increases nev event segment ids if from the nsx an additional segment is found # If one new segment, i.e. that could not be determined from the nev was found, # all following ids need to be increased to account for the additional segment before for k, (data, ev_ids) in self.nev_data.items(): # Check all nonempty nsX segments for i, seg in enumerate(list_nonempty_nsx_segments[:]): # Last timestamp in this nsX segment # Not subtracting nsX offset from end because spike extraction might continue end_of_current_nsx_seg = seg['timestamp'] + \ seg['nb_data_points'] * self.__nsx_basic_header[nsx_nb]['period'] mask_after_seg = (ev_ids == i) & \ (data['timestamp'] > end_of_current_nsx_seg + nsx_period) # Show warning if spikes do not fit any segment (+- 1 sampling 'tick') # Spike should belong to segment before mask_outside = (ev_ids == i) & \ (data['timestamp'] < int(seg['timestamp']) - nsx_offset - nsx_period) if len(data[mask_outside]) > 0: warnings.warn("Spikes outside any segment. Detected on segment #{}". format(i)) ev_ids[mask_outside] -= 1 # If some nev data are outside of this nsX segment, increase their segment ids # and the ids of all following segments. 
They are checked for the next nsX # segment then. If they do not fit any of them, # a warning will be shown, indicating how far outside the segment spikes are # If they fit the next segment, more segments are possible in nev, # because a new one has been discovered if len(data[mask_after_seg]) > 0: # Warning if spikes are after last segment if i == len(list_nonempty_nsx_segments) - 1: timestamp_resolution = self.__nsx_params[self.__nsx_spec[ nsx_nb]]('timestamp_resolution', nsx_nb) time_after_seg = (data[mask_after_seg]['timestamp'][-1] - end_of_current_nsx_seg) / timestamp_resolution warnings.warn("Spikes {}s after last segment.".format(time_after_seg)) # Break out of loop because it's the last iteration # and the spikes should stay connected to last segment break # If reset and no segment detected in nev, then these segments cannot be # distinguished in nev, which is a big problem # XXX 96 is an arbitrary number based on observations in available files elif list_nonempty_nsx_segments[i + 1]['timestamp'] - nsx_offset <= 96: # If not all definitely belong to the next segment, # then it cannot be distinguished where some belong if len(data[ev_ids == i]) != len(data[mask_after_seg]): raise ValueError("Some segments in nsX cannot be detected in nev") # Actual processing if no problem has occurred nb_possible_nev_segments += 1 ev_ids[ev_ids > i] += 1 ev_ids[mask_after_seg] += 1 # consistency check: same number of segments for nsx and nev data assert nb_possible_nev_segments == len(nonempty_nsx_segments), \ ('Inconsistent ns{0} and nev file. 
{1} segments present in .nev file, but {2} in ' 'ns{0} file.'.format(nsx_nb, nb_possible_nev_segments, len(nonempty_nsx_segments))) new_nev_segment_id_mapping = dict(zip(range(nb_possible_nev_segments), sorted(list(nonempty_nsx_segments)))) # replacing event ids by matched event ids in place for k, (data, ev_ids) in self.nev_data.items(): if len(ev_ids): ev_ids[:] = np.vectorize(new_nev_segment_id_mapping.__getitem__)(ev_ids) def __read_nev_data_variant_a(self): """ Extract nev data from a 2.1 & 2.2 .nev file """ nev_data_masks = { 'NonNeural': 'a', 'Spikes': 'a'} nev_data_types = { 'NonNeural': 'a', 'Spikes': 'a'} return self.__read_nev_data(nev_data_masks, nev_data_types) def __read_nev_data_variant_b(self): """ Extract nev data from a 2.3 .nev file """ nev_data_masks = { 'NonNeural': 'a', 'Spikes': 'b', 'Comments': 'a', 'VideoSync': 'a', 'TrackingEvents': 'a', 'ButtonTrigger': 'a', 'ConfigEvent': 'a'} nev_data_types = { 'NonNeural': 'b', 'Spikes': 'a', 'Comments': 'a', 'VideoSync': 'a', 'TrackingEvents': 'a', 'ButtonTrigger': 'a', 'ConfigEvent': 'a'} return self.__read_nev_data(nev_data_masks, nev_data_types) def __nev_ext_header_types(self): """ Defines extended header types for different .nev file specifications. 
""" nev_ext_header_types = { b'NEUEVWAV': { # Version>=2.1 'a': [ ('packet_id', 'S8'), ('electrode_id', 'uint16'), ('physical_connector', 'uint8'), ('connector_pin', 'uint8'), ('digitization_factor', 'uint16'), ('energy_threshold', 'uint16'), ('hi_threshold', 'int16'), ('lo_threshold', 'int16'), ('nb_sorted_units', 'uint8'), # number of bytes per waveform sample ('bytes_per_waveform', 'uint8'), ('unused', 'S10')], # Version>=2.3 'b': [ ('packet_id', 'S8'), ('electrode_id', 'uint16'), ('physical_connector', 'uint8'), ('connector_pin', 'uint8'), ('digitization_factor', 'uint16'), ('energy_threshold', 'uint16'), ('hi_threshold', 'int16'), ('lo_threshold', 'int16'), ('nb_sorted_units', 'uint8'), # number of bytes per waveform sample ('bytes_per_waveform', 'uint8'), # number of samples for each waveform ('spike_width', 'uint16'), ('unused', 'S8')]}, b'ARRAYNME': { 'a': [ ('packet_id', 'S8'), ('electrode_array_name', 'S24')]}, b'ECOMMENT': { 'a': [ ('packet_id', 'S8'), ('extra_comment', 'S24')]}, b'CCOMMENT': { 'a': [ ('packet_id', 'S8'), ('continued_comment', 'S24')]}, b'MAPFILE': { 'a': [ ('packet_id', 'S8'), ('mapFile', 'S24')]}, b'NEUEVLBL': { 'a': [ ('packet_id', 'S8'), ('electrode_id', 'uint16'), # label of this electrode ('label', 'S16'), ('unused', 'S6')]}, b'NEUEVFLT': { 'a': [ ('packet_id', 'S8'), ('electrode_id', 'uint16'), ('hi_freq_corner', 'uint32'), ('hi_freq_order', 'uint32'), # 0=None 1=Butterworth ('hi_freq_type', 'uint16'), ('lo_freq_corner', 'uint32'), ('lo_freq_order', 'uint32'), # 0=None 1=Butterworth ('lo_freq_type', 'uint16'), ('unused', 'S2')]}, b'DIGLABEL': { 'a': [ ('packet_id', 'S8'), # Read name of digital ('label', 'S16'), # 0=serial, 1=parallel ('mode', 'uint8'), ('unused', 'S7')]}, b'NSASEXEV': { 'a': [ ('packet_id', 'S8'), # Read frequency of periodic packet generation ('frequency', 'uint16'), # Read if digital input triggers events ('digital_input_config', 'uint8'), # Read if analog input triggers events ('analog_channel_1_config', 
'uint8'), ('analog_channel_1_edge_detec_val', 'uint16'), ('analog_channel_2_config', 'uint8'), ('analog_channel_2_edge_detec_val', 'uint16'), ('analog_channel_3_config', 'uint8'), ('analog_channel_3_edge_detec_val', 'uint16'), ('analog_channel_4_config', 'uint8'), ('analog_channel_4_edge_detec_val', 'uint16'), ('analog_channel_5_config', 'uint8'), ('analog_channel_5_edge_detec_val', 'uint16'), ('unused', 'S6')]}, b'VIDEOSYN': { 'a': [ ('packet_id', 'S8'), ('video_source_id', 'uint16'), ('video_source', 'S16'), ('frame_rate', 'float32'), ('unused', 'S2')]}, b'TRACKOBJ': { 'a': [ ('packet_id', 'S8'), ('trackable_type', 'uint16'), ('trackable_id', 'uint16'), ('point_count', 'uint16'), ('video_source', 'S16'), ('unused', 'S2')]}} return nev_ext_header_types def __nev_data_masks(self, packet_ids): """ Defines data masks for different .nev file specifications depending on the given packet identifiers. """ __nev_data_masks = { 'NonNeural': { 'a': (packet_ids == 0)}, 'Spikes': { # Version 2.1 & 2.2 'a': (0 < packet_ids) & (packet_ids <= 255), # Version>=2.3 'b': (0 < packet_ids) & (packet_ids <= 2048)}, 'Comments': { 'a': (packet_ids == 0xFFFF)}, 'VideoSync': { 'a': (packet_ids == 0xFFFE)}, 'TrackingEvents': { 'a': (packet_ids == 0xFFFD)}, 'ButtonTrigger': { 'a': (packet_ids == 0xFFFC)}, 'ConfigEvent': { 'a': (packet_ids == 0xFFFB)}} return __nev_data_masks def __nev_data_types(self, data_size): """ Defines data types for different .nev file specifications depending on the given packet identifiers. 
""" __nev_data_types = { 'NonNeural': { # Version 2.1 & 2.2 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('packet_insertion_reason', 'uint8'), ('reserved', 'uint8'), ('digital_input', 'uint16'), ('analog_input_channel_1', 'int16'), ('analog_input_channel_2', 'int16'), ('analog_input_channel_3', 'int16'), ('analog_input_channel_4', 'int16'), ('analog_input_channel_5', 'int16'), ('unused', 'S{}'.format(data_size - 20))], # Version>=2.3 'b': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('packet_insertion_reason', 'uint8'), ('reserved', 'uint8'), ('digital_input', 'uint16'), ('unused', 'S{}'.format(data_size - 10))]}, 'Spikes': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('unit_class_nb', 'uint8'), ('reserved', 'uint8'), ('waveform', 'S{}'.format(data_size - 8))]}, 'Comments': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('char_set', 'uint8'), ('flag', 'uint8'), ('color', 'uint32'), ('comment', 'S{}'.format(data_size - 12))]}, 'VideoSync': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('video_file_nb', 'uint16'), ('video_frame_nb', 'uint32'), ('video_elapsed_time', 'uint32'), ('video_source_id', 'uint32'), ('unused', 'int8', (data_size - 20,))]}, 'TrackingEvents': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('parent_id', 'uint16'), ('node_id', 'uint16'), ('node_count', 'uint16'), ('point_count', 'uint16'), ('tracking_points', 'uint16', ((data_size - 14) // 2,))]}, 'ButtonTrigger': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('trigger_type', 'uint16'), ('unused', 'int8', (data_size - 8,))]}, 'ConfigEvent': { 'a': [ ('timestamp', 'uint32'), ('packet_id', 'uint16'), ('config_change_type', 'uint16'), ('config_changed', 'S{}'.format(data_size - 8))]}} return __nev_data_types def __nev_params(self, param_name): """ Returns wanted nev parameter. 
""" nev_parameters = { 'bytes_in_data_packets': self.__nev_basic_header['bytes_in_data_packets'], 'rec_datetime': datetime.datetime( year=self.__nev_basic_header['year'], month=self.__nev_basic_header['month'], day=self.__nev_basic_header['day'], hour=self.__nev_basic_header['hour'], minute=self.__nev_basic_header['minute'], second=self.__nev_basic_header['second'], microsecond=self.__nev_basic_header['millisecond']), 'max_res': self.__nev_basic_header['timestamp_resolution'], 'channel_ids': self.__nev_ext_header[b'NEUEVWAV']['electrode_id'], 'channel_labels': self.__channel_labels[self.__nev_spec](), 'event_unit': pq.CompoundUnit("1.0/{} * s".format( self.__nev_basic_header['timestamp_resolution'])), 'nb_units': dict(zip( self.__nev_ext_header[b'NEUEVWAV']['electrode_id'], self.__nev_ext_header[b'NEUEVWAV']['nb_sorted_units'])), 'digitization_factor': dict(zip( self.__nev_ext_header[b'NEUEVWAV']['electrode_id'], self.__nev_ext_header[b'NEUEVWAV']['digitization_factor'])), 'data_size': self.__nev_basic_header['bytes_in_data_packets'], 'waveform_size': self.__waveform_size[self.__nev_spec](), 'waveform_dtypes': self.__get_waveforms_dtype(), 'waveform_sampling_rate': self.__nev_basic_header['sample_resolution'] * pq.Hz, 'waveform_time_unit': pq.CompoundUnit("1.0/{} * s".format( self.__nev_basic_header['sample_resolution'])), 'waveform_unit': pq.uV} return nev_parameters[param_name] def __get_file_size(self, filename): """ Returns the file size in bytes for the given file. """ filebuf = open(filename, 'rb') filebuf.seek(0, os.SEEK_END) file_size = filebuf.tell() filebuf.close() return file_size def __get_min_time(self): """ Returns the smallest time that can be determined from the recording for use as the lower bound n in an interval [n,m). 
""" tp = [] if self._avail_files['nev']: tp.extend(self.__get_nev_rec_times()[0]) for nsx_i in self._avail_nsx: tp.extend(self.__nsx_rec_times[self.__nsx_spec[nsx_i]](nsx_i)[0]) return min(tp) def __get_max_time(self): """ Returns the largest time that can be determined from the recording for use as the upper bound m in an interval [n,m). """ tp = [] if self._avail_files['nev']: tp.extend(self.__get_nev_rec_times()[1]) for nsx_i in self._avail_nsx: tp.extend(self.__nsx_rec_times[self.__nsx_spec[nsx_i]](nsx_i)[1]) return max(tp) def __get_nev_rec_times(self): """ Extracts minimum and maximum time points from a nev file. """ filename = '.'.join([self._filenames['nev'], 'nev']) dt = [('timestamp', 'uint32')] offset = \ self.__get_file_size(filename) - \ self.__nev_params('bytes_in_data_packets') last_data_packet = np.memmap(filename, offset=offset, dtype=dt, mode='r')[0] n_starts = [0 * self.__nev_params('event_unit')] n_stops = [ last_data_packet['timestamp'] * self.__nev_params('event_unit')] return n_starts, n_stops def __get_waveforms_dtype(self): """ Extracts the actual waveform dtype set for each channel. 
""" # Blackrock code giving the approiate dtype conv = {0: 'int8', 1: 'int8', 2: 'int16', 4: 'int32'} # get all electrode ids from nev ext header all_el_ids = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] # get the dtype of waveform (this is stupidly complicated) if self.__is_set( np.array(self.__nev_basic_header['additionnal_flags']), 0): dtype_waveforms = {k: 'int16' for k in all_el_ids} else: # extract bytes per waveform waveform_bytes = \ self.__nev_ext_header[b'NEUEVWAV']['bytes_per_waveform'] # extract dtype for waveforms fro each electrode dtype_waveforms = dict(zip(all_el_ids, conv[waveform_bytes])) return dtype_waveforms def __get_channel_labels_variant_a(self): """ Returns labels for all channels for file spec 2.1 """ elids = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] labels = [] for elid in elids: if elid < 129: labels.append('chan%i' % elid) else: labels.append('ainp%i' % (elid - 129 + 1)) return dict(zip(elids, labels)) def __get_channel_labels_variant_b(self): """ Returns labels for all channels for file spec 2.2 and 2.3 """ elids = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] labels = self.__nev_ext_header[b'NEUEVLBL']['label'] return dict(zip(elids, labels)) if len(labels) > 0 else None def __get_waveform_size_variant_a(self): """ Returns wavform sizes for all channels for file spec 2.1 and 2.2 """ wf_dtypes = self.__get_waveforms_dtype() nb_bytes_wf = self.__nev_basic_header['bytes_in_data_packets'] - 8 wf_sizes = { ch: int(nb_bytes_wf / np.dtype(dt).itemsize) for ch, dt in wf_dtypes.items()} return wf_sizes def __get_waveform_size_variant_b(self): """ Returns wavform sizes for all channels for file spec 2.3 """ elids = self.__nev_ext_header[b'NEUEVWAV']['electrode_id'] spike_widths = self.__nev_ext_header[b'NEUEVWAV']['spike_width'] return dict(zip(elids, spike_widths)) def __get_left_sweep_waveforms(self): """ Returns left sweep of waveforms for each channel. 
Left sweep is defined as the time from the beginning of the waveform to the trigger time of the corresponding spike. """ # TODO: Double check if this is the actual setting for Blackrock wf_t_unit = self.__nev_params('waveform_time_unit') all_ch = self.__nev_params('channel_ids') # TODO: Double check if this is the correct assumption (10 samples) # default value: threshold crossing after 10 samples of waveform wf_left_sweep = {ch: 10 * wf_t_unit for ch in all_ch} # non-default: threshold crossing at center of waveform # wf_size = self.__nev_params('waveform_size') # wf_left_sweep = dict( # [(ch, (wf_size[ch] / 2) * wf_t_unit) for ch in all_ch]) return wf_left_sweep def __get_nsx_param_variant_a(self, nsx_nb): """ Returns parameter (param_name) for a given nsx (nsx_nb) for file spec 2.1. """ # Here, min/max_analog_val and min/max_digital_val are not available in # the nsx, so that we must estimate these parameters from the # digitization factor of the nev (information by Kian Torab, Blackrock # Microsystems). Here dig_factor=max_analog_val/max_digital_val. We set # max_digital_val to 1000, and max_analog_val=dig_factor. dig_factor is # given in nV by definition, so the units turn out to be uV. labels = [] dig_factor = [] for elid in self.__nsx_ext_header[nsx_nb]['electrode_id']: if self._avail_files['nev']: # This is a workaround for the DigitalFactor overflow in NEV # files recorded with buggy Cerebus system. # Fix taken from: NMPK toolbox by Blackrock, # file openNEV, line 464, # git rev. 
d0a25eac902704a3a29fa5dfd3aed0744f4733ed df = self.__nev_params('digitization_factor')[elid] if df == 21516: df = 152592.547 dig_factor.append(df) else: dig_factor.append(float('nan')) if elid < 129: labels.append('chan%i' % elid) else: labels.append('ainp%i' % (elid - 129 + 1)) filename = '.'.join([self._filenames['nsx'], 'ns%i' % nsx_nb]) bytes_in_headers = self.__nsx_basic_header[nsx_nb].dtype.itemsize + \ self.__nsx_ext_header[nsx_nb].dtype.itemsize * \ self.__nsx_basic_header[nsx_nb]['channel_count'] if np.isnan(dig_factor[0]): units = '' warnings.warn("Cannot rescale to voltage, raw data will be returned.", UserWarning) else: units = 'uV' nsx_parameters = { 'nb_data_points': int( (self.__get_file_size(filename) - bytes_in_headers) / (2 * self.__nsx_basic_header[nsx_nb]['channel_count']) - 1), 'labels': labels, 'units': np.array([units] * self.__nsx_basic_header[nsx_nb]['channel_count']), 'min_analog_val': -1 * np.array(dig_factor), 'max_analog_val': np.array(dig_factor), 'min_digital_val': np.array( [-1000] * self.__nsx_basic_header[nsx_nb]['channel_count']), 'max_digital_val': np.array([1000] * self.__nsx_basic_header[nsx_nb]['channel_count']), 'timestamp_resolution': 30000, 'bytes_in_headers': bytes_in_headers, 'sampling_rate': 30000 / self.__nsx_basic_header[nsx_nb]['period'] * pq.Hz, 'time_unit': pq.CompoundUnit("1.0/{}*s".format( 30000 / self.__nsx_basic_header[nsx_nb]['period']))} # Returns complete dictionary because then it does not need to be called so often return nsx_parameters def __get_nsx_param_variant_b(self, param_name, nsx_nb): """ Returns parameter (param_name) for a given nsx (nsx_nb) for file spec 2.2 and 2.3. 
""" nsx_parameters = { 'labels': self.__nsx_ext_header[nsx_nb]['electrode_label'], 'units': self.__nsx_ext_header[nsx_nb]['units'], 'min_analog_val': self.__nsx_ext_header[nsx_nb]['min_analog_val'], 'max_analog_val': self.__nsx_ext_header[nsx_nb]['max_analog_val'], 'min_digital_val': self.__nsx_ext_header[nsx_nb]['min_digital_val'], 'max_digital_val': self.__nsx_ext_header[nsx_nb]['max_digital_val'], 'timestamp_resolution': self.__nsx_basic_header[nsx_nb]['timestamp_resolution'], 'bytes_in_headers': self.__nsx_basic_header[nsx_nb]['bytes_in_headers'], 'sampling_rate': self.__nsx_basic_header[nsx_nb]['timestamp_resolution'] / self.__nsx_basic_header[nsx_nb]['period'] * pq.Hz, 'time_unit': pq.CompoundUnit("1.0/{}*s".format( self.__nsx_basic_header[nsx_nb]['timestamp_resolution'] / self.__nsx_basic_header[nsx_nb]['period']))} return nsx_parameters[param_name] def __get_nonneural_evdicts_variant_a(self, data): """ Defines event types and the necessary parameters to extract them from a 2.1 and 2.2 nev file. 
""" # TODO: add annotations of nev ext header (NSASEXEX) to event types # digital events event_types = { 'digital_input_port': { 'name': 'digital_input_port', 'field': 'digital_input', 'mask': data['packet_insertion_reason'] == 1, 'desc': "Events of the digital input port"}, 'serial_input_port': { 'name': 'serial_input_port', 'field': 'digital_input', 'mask': data['packet_insertion_reason'] == 129, 'desc': "Events of the serial input port"}} # analog input events via threshold crossings for ch in range(5): event_types.update({ 'analog_input_channel_{}'.format(ch + 1): { 'name': 'analog_input_channel_{}'.format(ch + 1), 'field': 'analog_input_channel_{}'.format(ch + 1), 'mask': self.__is_set( data['packet_insertion_reason'], ch + 1), 'desc': "Values of analog input channel {} in mV " "(+/- 5000)".format(ch + 1)}}) # TODO: define field and desc event_types.update({ 'periodic_sampling_events': { 'name': 'periodic_sampling_events', 'field': 'digital_input', 'mask': self.__is_set(data['packet_insertion_reason'], 6), 'desc': 'Periodic sampling event of a certain frequency'}}) return event_types def __delete_empty_segments(self): """ If there are empty segments (e.g. due to a reset or clock synchronization across two systems), these can be discarded. If this is done, all the data and data_headers need to be remapped to become a range starting from 0 again. Nev data are mapped accordingly to stay with their corresponding segment in the nsX data. 
""" # Discard empty segments removed_seg = [] for data_bl in range(self._nb_segment): keep_seg = True for nsx_nb in self.nsx_to_load: length = self.nsx_datas[nsx_nb][data_bl].shape[0] keep_seg = keep_seg and (length >= 2) if not keep_seg: removed_seg.append(data_bl) for nsx_nb in self.nsx_to_load: self.nsx_datas[nsx_nb].pop(data_bl) self.__nsx_data_header[nsx_nb].pop(data_bl) # Keys need to be increasing from 0 to maximum in steps of 1 # To ensure this after removing empty segments, some keys need to be re mapped for i in removed_seg[::-1]: for j in range(i + 1, self._nb_segment): # remap nsx seg index for nsx_nb in self.nsx_to_load: data = self.nsx_datas[nsx_nb].pop(j) self.nsx_datas[nsx_nb][j - 1] = data data_header = self.__nsx_data_header[nsx_nb].pop(j) self.__nsx_data_header[nsx_nb][j - 1] = data_header # Also remap nev data, ev_ids are the equivalent to keys above if self._avail_files['nev']: for k, (data, ev_ids) in self.nev_data.items(): ev_ids[ev_ids == j] -= 1 self._nb_segment -= 1 def __get_nonneural_evdicts_variant_b(self, data): """ Defines event types and the necessary parameters to extract them from a 2.3 nev file. 
""" # digital events if not np.all(np.in1d(data['packet_insertion_reason'], [1, 129])): # Blackrock spec gives reason==64 means PERIODIC, but never seen this live warnings.warn("Unknown event codes found", RuntimeWarning) event_types = { 'digital_input_port': { 'name': 'digital_input_port', 'field': 'digital_input', 'mask': self.__is_set(data['packet_insertion_reason'], 0) & ~self.__is_set(data['packet_insertion_reason'], 7), 'desc': "Events of the digital input port"}, 'serial_input_port': { 'name': 'serial_input_port', 'field': 'digital_input', 'mask': self.__is_set(data['packet_insertion_reason'], 0) & self.__is_set(data['packet_insertion_reason'], 7), 'desc': "Events of the serial input port"}} return event_types def __get_comment_evdict_variant_a(self, data): return { 'comments': { 'name': 'comments', 'field': 'comment', 'mask': data['packet_id'] == 65535, 'desc': 'Comments' } } def __is_set(self, flag, pos): """ Checks if bit is set at the given position for flag. If flag is an array, an array will be returned. """ return flag & (1 << pos) > 0 neo-0.10.0/neo/rawio/brainvisionrawio.py0000644000076700000240000001632214066374330020631 0ustar andrewstaff00000000000000""" Class for reading data from BrainVision product. This code was originally made by L. Pezard (2010), modified B. Burle and S. More. 
Author: Samuel Garcia """ from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype, _spike_channel_dtype, _event_channel_dtype) import numpy as np import datetime import os import re class BrainVisionRawIO(BaseRawIO): """ """ extensions = ['vhdr'] rawmode = 'one-file' def __init__(self, filename=''): BaseRawIO.__init__(self) self.filename = filename def _parse_header(self): # Read header file (vhdr) vhdr_header = read_brainvsion_soup(self.filename) bname = os.path.basename(self.filename) marker_filename = self.filename.replace(bname, vhdr_header['Common Infos']['MarkerFile']) binary_filename = self.filename.replace(bname, vhdr_header['Common Infos']['DataFile']) assert vhdr_header['Common Infos'][ 'DataFormat'] == 'BINARY', NotImplementedError assert vhdr_header['Common Infos'][ 'DataOrientation'] == 'MULTIPLEXED', NotImplementedError nb_channel = int(vhdr_header['Common Infos']['NumberOfChannels']) sr = 1.e6 / float(vhdr_header['Common Infos']['SamplingInterval']) self._sampling_rate = sr fmt = vhdr_header['Binary Infos']['BinaryFormat'] fmts = {'INT_16': np.int16, 'INT_32': np.int32, 'IEEE_FLOAT_32': np.float32, } assert fmt in fmts, NotImplementedError sig_dtype = fmts[fmt] # raw signals memmap sigs = np.memmap(binary_filename, dtype=sig_dtype, mode='r', offset=0) if sigs.size % nb_channel != 0: sigs = sigs[:-sigs.size % nb_channel] self._raw_signals = sigs.reshape(-1, nb_channel) signal_streams = np.array([('Signals', '0')], dtype=_signal_stream_dtype) sig_channels = [] channel_infos = vhdr_header['Channel Infos'] for c in range(nb_channel): try: channel_desc = channel_infos['Ch%d' % (c + 1,)] except KeyError: channel_desc = channel_infos['ch%d' % (c + 1,)] name, ref, res, units = channel_desc.split(',') units = units.replace('µ', 'u') chan_id = str(c + 1) if sig_dtype == np.int16 or sig_dtype == np.int32: gain = float(res) else: gain = 1 offset = 0 stream_id = '0' sig_channels.append((name, chan_id, self._sampling_rate, sig_dtype, units, 
gain, offset, stream_id)) sig_channels = np.array(sig_channels, dtype=_signal_channel_dtype) # No spikes spike_channels = [] spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) # read all markers in memory all_info = read_brainvsion_soup(marker_filename)['Marker Infos'] ev_types = [] ev_timestamps = [] ev_labels = [] for i in range(len(all_info)): ev_type, ev_label, pos, size, channel = all_info[ 'Mk%d' % (i + 1,)].split(',')[:5] ev_types.append(ev_type) ev_timestamps.append(int(pos)) ev_labels.append(ev_label) ev_types = np.array(ev_types) ev_timestamps = np.array(ev_timestamps) ev_labels = np.array(ev_labels, dtype='U') # group them by type self._raw_events = [] event_channels = [] for c, ev_type in enumerate(np.unique(ev_types)): ind = (ev_types == ev_type) event_channels.append((ev_type, '', 'event')) self._raw_events.append((ev_timestamps[ind], ev_labels[ind])) event_channels = np.array(event_channels, dtype=_event_channel_dtype) # fill into header dict self.header = {} self.header['nb_block'] = 1 self.header['nb_segment'] = [1] self.header['signal_streams'] = signal_streams self.header['signal_channels'] = sig_channels self.header['spike_channels'] = spike_channels self.header['event_channels'] = event_channels self._generate_minimal_annotations() if 'Coordinates' in vhdr_header: sig_annotations = self.raw_annotations['blocks'][0]['segments'][0]['signals'][0] all_coords = [] for c in range(sig_channels.size): coords = vhdr_header['Coordinates']['Ch{}'.format(c + 1)] all_coords.append([float(v) for v in coords.split(',')]) all_coords = np.array(all_coords) for dim in range(all_coords.shape[1]): sig_annotations['__array_annotations__'][f'coordinates_{dim}'] = all_coords[:, dim] def _source_name(self): return self.filename def _segment_t_start(self, block_index, seg_index): return 0.
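The marker handling in `_parse_header` above collects all BrainVision markers into flat arrays, then groups them by marker type with `np.unique` and a boolean mask, producing one event channel per type. A minimal standalone sketch of that grouping step (the function name and sample values are illustrative, not part of the neo API):

```python
import numpy as np

def group_events_by_type(ev_types, ev_timestamps, ev_labels):
    """Group parallel event arrays into one (timestamps, labels) pair per type."""
    ev_types = np.asarray(ev_types)
    ev_timestamps = np.asarray(ev_timestamps)
    ev_labels = np.asarray(ev_labels, dtype='U')
    grouped = {}
    for ev_type in np.unique(ev_types):
        mask = ev_types == ev_type
        grouped[ev_type] = (ev_timestamps[mask], ev_labels[mask])
    return grouped

# Illustrative marker data, in the shape produced by the .vmrk parsing above.
types = ['Stimulus', 'Response', 'Stimulus']
stamps = [100, 150, 300]
labels = ['S  1', 'R  8', 'S  2']
grouped = group_events_by_type(types, stamps, labels)
```

Each dict entry here corresponds to one `('ev_type', '', 'event')` row of `event_channels` and one `(timestamps, labels)` tuple in `self._raw_events`.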
def _segment_t_stop(self, block_index, seg_index): t_stop = self._raw_signals.shape[0] / self._sampling_rate return t_stop ### def _get_signal_size(self, block_index, seg_index, stream_index): assert stream_index == 0 return self._raw_signals.shape[0] def _get_signal_t_start(self, block_index, seg_index, stream_index): return 0. def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): if channel_indexes is None: channel_indexes = slice(None) raw_signals = self._raw_signals[slice(i_start, i_stop), channel_indexes] return raw_signals ### def _spike_count(self, block_index, seg_index, unit_index): return 0 ### # event and epoch zone def _event_count(self, block_index, seg_index, event_channel_index): all_timestamps, all_label = self._raw_events[event_channel_index] return all_timestamps.size def _get_event_timestamps(self, block_index, seg_index, event_channel_index, t_start, t_stop): timestamps, labels = self._raw_events[event_channel_index] if t_start is not None: keep = timestamps >= int(t_start * self._sampling_rate) timestamps = timestamps[keep] labels = labels[keep] if t_stop is not None: keep = timestamps <= int(t_stop * self._sampling_rate) timestamps = timestamps[keep] labels = labels[keep] durations = None return timestamps, durations, labels raise (NotImplementedError) def _rescale_event_timestamp(self, event_timestamps, dtype, event_channel_index): event_times = event_timestamps.astype(dtype) / self._sampling_rate return event_times def read_brainvsion_soup(filename): with open(filename, 'r', encoding='utf8') as f: section = None all_info = {} for line in f: line = line.strip('\n').strip('\r') if line.startswith('['): section = re.findall(r'\[([\S ]+)\]', line)[0] all_info[section] = {} continue if line.startswith(';'): continue if '=' in line and len(line.split('=')) == 2: k, v = line.split('=') all_info[section][k] = v return all_info 
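`read_brainvsion_soup` above treats the `.vhdr`/`.vmrk` files as INI-style text: a `[Section]` line opens a new sub-dict, lines starting with `;` are comments, and `key=value` lines fill the current section. The same logic can be exercised on an in-memory string (a sketch mirroring the function above, not the neo code itself):

```python
import re

def parse_vhdr_text(text):
    """Minimal INI-style parser using the same section/comment/key=value rules."""
    section = None
    all_info = {}
    for line in text.splitlines():
        line = line.strip('\n').strip('\r')
        if line.startswith('['):
            # a new [Section] opens a fresh sub-dict
            section = re.findall(r'\[([\S ]+)\]', line)[0]
            all_info[section] = {}
            continue
        if line.startswith(';'):
            # comment line
            continue
        if '=' in line and len(line.split('=')) == 2:
            k, v = line.split('=')
            all_info[section][k] = v
    return all_info

header = parse_vhdr_text(
    "[Common Infos]\n; a comment\nNumberOfChannels=2\nSamplingInterval=2000\n"
)
```

Since `SamplingInterval` is given in microseconds, `_parse_header` would turn the value above into a sampling rate of 1e6 / 2000 = 500 Hz.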
neo-0.10.0/neo/rawio/cedrawio.py0000644000076700000240000002154014077554132017041 0ustar andrewstaff00000000000000""" Class for reading data from CED (Cambridge Electronic Design) http://ced.co.uk/ This read *.smrx (and *.smr) from spike2 and signal software. Note Spike2RawIO/Spike2IO is the old implementation in neo. It still works without any dependency and should be faster. Spike2IO only works for smr (32 bit) and not for smrx (64 bit) files. This implementation depends on the SONPY package: https://pypi.org/project/sonpy/ Please note that the SONPY package: * is NOT open source * internally uses a list instead of numpy.ndarray, potentially causing slow data reading * is maintained by CED Author : Samuel Garcia """ from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype, _spike_channel_dtype, _event_channel_dtype) import numpy as np from copy import deepcopy try: import sonpy HAVE_SONPY = True except ImportError: HAVE_SONPY = False class CedRawIO(BaseRawIO): """ Class for reading data from CED (Cambridge Electronic Design) spike2. This internally uses the sonpy package which is closed source. 
This IO reads smr and smrx files """ extensions = ['smr', 'smrx'] rawmode = 'one-file' def __init__(self, filename='', take_ideal_sampling_rate=False, ): BaseRawIO.__init__(self) self.filename = filename self.take_ideal_sampling_rate = take_ideal_sampling_rate def _source_name(self): return self.filename def _parse_header(self): assert HAVE_SONPY, 'sonpy must be installed' self.smrx_file = sonpy.lib.SonFile(sName=str(self.filename), bReadOnly=True) smrx = self.smrx_file self._time_base = smrx.GetTimeBase() channel_infos = [] signal_channels = [] spike_channels = [] self._all_spike_ticks = {} for chan_ind in range(smrx.MaxChannels()): chan_type = smrx.ChannelType(chan_ind) chan_id = str(chan_ind) if chan_type == sonpy.lib.DataType.Adc: physical_chan = smrx.PhysicalChannel(chan_ind) divide = smrx.ChannelDivide(chan_ind) if self.take_ideal_sampling_rate: sr = smrx.GetIdealRate(chan_ind) else: sr = 1. / (smrx.GetTimeBase() * divide) max_time = smrx.ChannelMaxTime(chan_ind) first_time = smrx.FirstTime(chan_ind, 0, max_time) # max_times is included so +1 time_size = (max_time - first_time) / divide + 1 channel_infos.append((first_time, max_time, divide, time_size, sr)) gain = smrx.GetChannelScale(chan_ind) / 6553.6 offset = smrx.GetChannelOffset(chan_ind) units = smrx.GetChannelUnits(chan_ind) ch_name = smrx.GetChannelTitle(chan_ind) dtype = 'int16' # set later after grouping stream_id = '0' signal_channels.append((ch_name, chan_id, sr, dtype, units, gain, offset, stream_id)) elif chan_type == sonpy.lib.DataType.AdcMark: # spike and waveforms : only spike times is used here ch_name = smrx.GetChannelTitle(chan_ind) first_time = smrx.FirstTime(chan_ind, 0, max_time) max_time = smrx.ChannelMaxTime(chan_ind) divide = smrx.ChannelDivide(chan_ind) # here we don't use filter (sonpy.lib.MarkerFilter()) so we get all marker wave_marks = smrx.ReadWaveMarks(chan_ind, int(max_time / divide), 0, max_time) # here we load in memory all spike once because the access is really slow # 
with the ReadWaveMarks spike_ticks = np.array([t.Tick for t in wave_marks]) spike_codes = np.array([t.Code1 for t in wave_marks]) unit_ids = np.unique(spike_codes) for unit_id in unit_ids: name = f'{ch_name}#{unit_id}' spike_chan_id = f'ch{chan_id}#{unit_id}' spike_channels.append((name, spike_chan_id, '', 1, 0, 0, 0)) mask = spike_codes == unit_id self._all_spike_ticks[spike_chan_id] = spike_ticks[mask] signal_channels = np.array(signal_channels, dtype=_signal_channel_dtype) # channels are grouped into stream if they have a common start, stop, size, divide and sampling_rate channel_infos = np.array(channel_infos, dtype=[('first_time', 'i8'), ('max_time', 'i8'), ('divide', 'i8'), ('size', 'i8'), ('sampling_rate', 'f8')]) unique_info = np.unique(channel_infos) self.stream_info = unique_info signal_streams = [] for i, info in enumerate(unique_info): stream_id = str(i) mask = channel_infos == info signal_channels['stream_id'][mask] = stream_id num_chans = np.sum(mask) stream_name = f'{stream_id} {num_chans}chans' signal_streams.append((stream_name, stream_id)) signal_streams = np.array(signal_streams, dtype=_signal_stream_dtype) # spike channels not handled spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) # event channels not handled event_channels = [] event_channels = np.array(event_channels, dtype=_event_channel_dtype) self._seg_t_start = np.inf self._seg_t_stop = -np.inf for info in self.stream_info: self._seg_t_start = min(self._seg_t_start, info['first_time'] * self._time_base) self._seg_t_stop = max(self._seg_t_stop, info['max_time'] * self._time_base) self.header = {} self.header['nb_block'] = 1 self.header['nb_segment'] = [1] self.header['signal_streams'] = signal_streams self.header['signal_channels'] = signal_channels self.header['spike_channels'] = spike_channels self.header['event_channels'] = event_channels self._generate_minimal_annotations() def _segment_t_start(self, block_index, seg_index): return self._seg_t_start def 
_segment_t_stop(self, block_index, seg_index): return self._seg_t_stop def _get_signal_size(self, block_index, seg_index, stream_index): size = self.stream_info[stream_index]['size'] return size def _get_signal_t_start(self, block_index, seg_index, stream_index): info = self.stream_info[stream_index] t_start = info['first_time'] * self._time_base return t_start def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): if i_start is None: i_start = 0 if i_stop is None: i_stop = self.stream_info[stream_index]['size'] stream_id = self.header['signal_streams']['id'][stream_index] signal_channels = self.header['signal_channels'] mask = signal_channels['stream_id'] == stream_id signal_channels = signal_channels[mask] if channel_indexes is not None: signal_channels = signal_channels[channel_indexes] num_chans = len(signal_channels) size = i_stop - i_start sigs = np.zeros((size, num_chans), dtype='int16') info = self.stream_info[stream_index] t_from = info['first_time'] + info['divide'] * i_start t_upto = info['first_time'] + info['divide'] * i_stop for i, chan_id in enumerate(signal_channels['id']): chan_ind = int(chan_id) sig = self.smrx_file.ReadInts(chan=chan_ind, nMax=size, tFrom=t_from, tUpto=t_upto) sigs[:, i] = sig return sigs def _spike_count(self, block_index, seg_index, unit_index): unit_id = self.header['spike_channels'][unit_index]['id'] spike_ticks = self._all_spike_ticks[unit_id] return spike_ticks.size def _get_spike_timestamps(self, block_index, seg_index, unit_index, t_start, t_stop): unit_id = self.header['spike_channels'][unit_index]['id'] spike_ticks = self._all_spike_ticks[unit_id] if t_start is not None: tick_start = int(t_start / self._time_base) spike_ticks = spike_ticks[spike_ticks >= tick_start] if t_stop is not None: tick_stop = int(t_stop / self._time_base) spike_ticks = spike_ticks[spike_ticks <= tick_stop] return spike_ticks def _rescale_spike_timestamp(self, spike_timestamps, dtype): 
spike_times = spike_timestamps.astype(dtype) spike_times *= self._time_base return spike_times def _get_spike_raw_waveforms(self, block_index, seg_index, spike_channel_index, t_start, t_stop): return None neo-0.10.0/neo/rawio/elanrawio.py0000644000076700000240000002253214066375716017236 0ustar andrewstaff00000000000000""" Class for reading data from Elan. Elan is software for studying time-frequency maps of EEG data. Elan is developed in Lyon, France, at INSERM U821 https://elan.lyon.inserm.fr An Elan dataset is separated into 3 files: - .eeg raw data file - .eeg.ent header file - .eeg.pos event file Author: Samuel Garcia """ from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype, _spike_channel_dtype, _event_channel_dtype) import numpy as np import datetime import re import pathlib class ElanRawIO(BaseRawIO): extensions = ['eeg'] rawmode = 'one-file' def __init__(self, filename=None, entfile=None, posfile=None): BaseRawIO.__init__(self) self.filename = pathlib.Path(filename) # check whether ent and pos files are defined # keep existing suffixes in the process of ent and pos filename # generation if entfile is None: entfile = self.filename.with_suffix(self.filename.suffix + '.ent') if posfile is None: posfile = self.filename.with_suffix(self.filename.suffix + '.pos') self.entfile = entfile self.posfile = posfile def _parse_header(self): with open(self.entfile, mode='rt', encoding='ascii', newline=None) as f: # version version = f.readline()[:-1] assert version in ['V2', 'V3'], 'Read only V2 or V3 .eeg.ent files. 
%s given' % version # info info1 = f.readline()[:-1] info2 = f.readline()[:-1] # strange 2 line for datetime # line1 line = f.readline() r1 = re.findall(r'(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)', line) r2 = re.findall(r'(\d+):(\d+):(\d+)', line) r3 = re.findall(r'(\d+)-(\d+)-(\d+)', line) YY, MM, DD, hh, mm, ss = (None,) * 6 if len(r1) != 0: DD, MM, YY, hh, mm, ss = r1[0] elif len(r2) != 0: hh, mm, ss = r2[0] elif len(r3) != 0: DD, MM, YY = r3[0] # line2 line = f.readline() r1 = re.findall(r'(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)', line) r2 = re.findall(r'(\d+):(\d+):(\d+)', line) r3 = re.findall(r'(\d+)-(\d+)-(\d+)', line) if len(r1) != 0: DD, MM, YY, hh, mm, ss = r1[0] elif len(r2) != 0: hh, mm, ss = r2[0] elif len(r3) != 0: DD, MM, YY = r3[0] try: fulldatetime = datetime.datetime(int(YY), int(MM), int(DD), int(hh), int(mm), int(ss)) except: fulldatetime = None line = f.readline() line = f.readline() line = f.readline() # sampling rate sample line = f.readline() self._sampling_rate = 1. / float(line) # nb channel line = f.readline() nb_channel = int(line) - 2 channel_infos = [{} for c in range(nb_channel + 2)] # channel label for c in range(nb_channel + 2): channel_infos[c]['label'] = f.readline()[:-1] # channel kind for c in range(nb_channel + 2): channel_infos[c]['kind'] = f.readline()[:-1] # channel unit for c in range(nb_channel + 2): channel_infos[c]['units'] = f.readline()[:-1] # range for gain and offset for c in range(nb_channel + 2): channel_infos[c]['min_physic'] = float(f.readline()[:-1]) for c in range(nb_channel + 2): channel_infos[c]['max_physic'] = float(f.readline()[:-1]) for c in range(nb_channel + 2): channel_infos[c]['min_logic'] = float(f.readline()[:-1]) for c in range(nb_channel + 2): channel_infos[c]['max_logic'] = float(f.readline()[:-1]) # info filter info_filter = [] for c in range(nb_channel + 2): channel_infos[c]['info_filter'] = f.readline()[:-1] n = int(round(np.log(channel_infos[0]['max_logic'] - channel_infos[0]['min_logic']) / 
np.log(2)) / 8) sig_dtype = np.dtype('>i' + str(n)) signal_streams = np.array([('Signals', '0')], dtype=_signal_stream_dtype) sig_channels = [] for c, chan_info in enumerate(channel_infos[:-2]): chan_name = chan_info['label'] chan_id = str(c) gain = (chan_info['max_physic'] - chan_info['min_physic']) / \ (chan_info['max_logic'] - chan_info['min_logic']) offset = - chan_info['min_logic'] * gain + chan_info['min_physic'] stream_id = '0' sig_channels.append((chan_name, chan_id, self._sampling_rate, sig_dtype, chan_info['units'], gain, offset, stream_id)) sig_channels = np.array(sig_channels, dtype=_signal_channel_dtype) # raw data self._raw_signals = np.memmap(self.filename, dtype=sig_dtype, mode='r', offset=0).reshape(-1, nb_channel + 2) self._raw_signals = self._raw_signals[:, :-2] # triggers with open(self.posfile, mode='rt', encoding='ascii', newline=None) as f: self._raw_event_timestamps = [] self._event_labels = [] self._reject_codes = [] for line in f.readlines(): r = re.findall(r' *(\d+)\s* *(\d+)\s* *(\d+) *', line) self._raw_event_timestamps.append(int(r[0][0])) self._event_labels.append(str(r[0][1])) self._reject_codes.append(str(r[0][2])) self._raw_event_timestamps = np.array(self._raw_event_timestamps, dtype='int64') self._event_labels = np.array(self._event_labels, dtype='U') self._reject_codes = np.array(self._reject_codes, dtype='U') event_channels = [] event_channels.append(('Trigger', '', 'event')) event_channels = np.array(event_channels, dtype=_event_channel_dtype) # No spikes spike_channels = [] spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) # fille into header dict self.header = {} self.header['nb_block'] = 1 self.header['nb_segment'] = [1] self.header['signal_streams'] = signal_streams self.header['signal_channels'] = sig_channels self.header['spike_channels'] = spike_channels self.header['event_channels'] = event_channels # insert some annotation at some place self._generate_minimal_annotations() extra_info = 
dict(rec_datetime=fulldatetime, elan_version=version, info1=info1, info2=info2) block_annotations = self.raw_annotations['blocks'][0] block_annotations.update(extra_info) seg_annotations = self.raw_annotations['blocks'][0]['segments'][0] seg_annotations.update(extra_info) sig_annotations = self.raw_annotations['blocks'][0]['segments'][0]['signals'][0] for key in ('info_filter', 'kind'): values = [channel_infos[c][key] for c in range(nb_channel)] sig_annotations['__array_annotations__'][key] = np.array(values) event_annotations = self.raw_annotations['blocks'][0]['segments'][0]['events'][0] event_annotations['__array_annotations__']['reject_codes'] = self._reject_codes def _source_name(self): return self.filename def _segment_t_start(self, block_index, seg_index): return 0. def _segment_t_stop(self, block_index, seg_index): t_stop = self._raw_signals.shape[0] / self._sampling_rate return t_stop def _get_signal_size(self, block_index, seg_index, stream_index): assert stream_index == 0 return self._raw_signals.shape[0] def _get_signal_t_start(self, block_index, seg_index, stream_index): assert stream_index == 0 return 0. 
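ElanRawIO derives each channel's scaling in `_parse_header` from the physical and logical ranges read in the `.eeg.ent` file: `gain = (max_physic - min_physic) / (max_logic - min_logic)` and `offset = -min_logic * gain + min_physic`, so that `raw * gain + offset` maps the logical extremes exactly onto the physical extremes. A quick standalone check of that affine map (the helper name and range values are illustrative):

```python
def elan_gain_offset(min_physic, max_physic, min_logic, max_logic):
    # Same formulas as in ElanRawIO._parse_header above.
    gain = (max_physic - min_physic) / (max_logic - min_logic)
    offset = -min_logic * gain + min_physic
    return gain, offset

# Example: a 16-bit logical range mapped onto +/- 1000 uV.
gain, offset = elan_gain_offset(-1000.0, 1000.0, -32768.0, 32767.0)
lo = -32768.0 * gain + offset  # should land on min_physic
hi = 32767.0 * gain + offset   # should land on max_physic
```

The check confirms the affine map is anchored at both range endpoints, which is why the reader can return raw integers and defer scaling to the `gain`/`offset` columns of `signal_channels`.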
def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): if channel_indexes is None: channel_indexes = slice(None) raw_signals = self._raw_signals[slice(i_start, i_stop), channel_indexes] return raw_signals def _spike_count(self, block_index, seg_index, unit_index): return 0 def _event_count(self, block_index, seg_index, event_channel_index): return self._raw_event_timestamps.size def _get_event_timestamps(self, block_index, seg_index, event_channel_index, t_start, t_stop): timestamp = self._raw_event_timestamps labels = self._event_labels durations = None if t_start is not None: keep = timestamp >= int(t_start * self._sampling_rate) timestamp = timestamp[keep] labels = labels[keep] if t_stop is not None: keep = timestamp <= int(t_stop * self._sampling_rate) timestamp = timestamp[keep] labels = labels[keep] return timestamp, durations, labels def _rescale_event_timestamp(self, event_timestamps, dtype, event_channel_index): event_times = event_timestamps.astype(dtype) / self._sampling_rate return event_times neo-0.10.0/neo/rawio/examplerawio.py0000644000076700000240000004376714066374330017756 0ustar andrewstaff00000000000000""" ExampleRawIO is a fake example class. This is to be used when coding a new RawIO. Rules for creating a new class: 1. Step 1: Create the main class * Create a file in **neo/rawio/** that ends with "rawio.py" * Create the class that inherits from BaseRawIO * copy/paste all methods that need to be implemented. * code hard! The main difficulty is `_parse_header()`. In short, you have to create a mandatory dict that contains channel information:: self.header = {} self.header['nb_block'] = 2 self.header['nb_segment'] = [2, 3] self.header['signal_streams'] = signal_streams self.header['signal_channels'] = signal_channels self.header['spike_channels'] = spike_channels self.header['event_channels'] = event_channels 2. 
Step 2: RawIO test: * create a file in neo/rawio/tests with the same name with "test_" prefix * copy paste neo/rawio/tests/test_examplerawio.py and do the same 3. Step 3 : Create the neo.io class with the wrapper * Create a file in neo/io/ that ends with "io.py" * Create a class that inherits both your RawIO class and BaseFromRaw class * copy/paste from neo/io/exampleio.py 4.Step 4 : IO test * create a file in neo/test/iotest with the same previous name with "test_" prefix * copy/paste from neo/test/iotest/test_exampleio.py """ from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype, _spike_channel_dtype, _event_channel_dtype) import numpy as np class ExampleRawIO(BaseRawIO): """ Class for "reading" fake data from an imaginary file. For the user, it gives access to raw data (signals, event, spikes) as they are in the (fake) file int16 and int64. For a developer, it is just an example showing guidelines for someone who wants to develop a new IO module. Two rules for developers: * Respect the :ref:`neo_rawio_API` * Follow the :ref:`io_guiline` This fake IO: * has 2 blocks * blocks have 2 and 3 segments * has 2 signals streams of 8 channel each (sample_rate = 10000) so 16 channels in total * has 3 spike_channels * has 2 event channels: one has *type=event*, the other has *type=epoch* Usage: >>> import neo.rawio >>> r = neo.rawio.ExampleRawIO(filename='itisafake.nof') >>> r.parse_header() >>> print(r) >>> raw_chunk = r.get_analogsignal_chunk(block_index=0, seg_index=0, i_start=0, i_stop=1024, channel_names=channel_names) >>> float_chunk = reader.rescale_signal_raw_to_float(raw_chunk, dtype='float64', channel_indexes=[0, 3, 6]) >>> spike_timestamp = reader.spike_timestamps(spike_channel_index=0, t_start=None, t_stop=None) >>> spike_times = reader.rescale_spike_timestamp(spike_timestamp, 'float64') >>> ev_timestamps, _, ev_labels = reader.event_timestamps(event_channel_index=0) """ extensions = ['fake'] rawmode = 'one-file' def __init__(self, 
filename=''): BaseRawIO.__init__(self) # note that this filename is ued in self._source_name self.filename = filename def _source_name(self): # this function is used by __repr__ # for general cases self.filename is good # But for URL you could mask some part of the URL to keep # the main part. return self.filename def _parse_header(self): # This is the central part of a RawIO # we need to collect from the original format all # information required for fast access # at any place in the file # In short `_parse_header()` can be slow but # `_get_analogsignal_chunk()` need to be as fast as possible # create fake signals stream information signal_streams = [] for c in range(2): name = f'stream {c}' stream_id = c signal_streams.append((name, stream_id)) signal_streams = np.array(signal_streams, dtype=_signal_stream_dtype) # create fake signals channels information # This is mandatory!!!! # gain/offset/units are really important because # the scaling to real value will be done with that # The real signal will be evaluated as `(raw_signal * gain + offset) * pq.Quantity(units)` signal_channels = [] for c in range(16): ch_name = 'ch{}'.format(c) # our channel id is c+1 just for fun # Note that chan_id should be related to # original channel id in the file format # so that the end user should not be lost when reading datasets chan_id = c + 1 sr = 10000. # Hz dtype = 'int16' units = 'uV' gain = 1000. / 2 ** 16 offset = 0. # stream_id indicates how to group channels # channels inside a "stream" share same characteristics # (sampling rate/dtype/t_start/units/...) stream_id = str(c // 8) signal_channels.append((ch_name, chan_id, sr, dtype, units, gain, offset, stream_id)) signal_channels = np.array(signal_channels, dtype=_signal_channel_dtype) # A stream can contain signals with different physical units. 
# Here, the two last channels will have different units (pA) # Since AnalogSignals must have consistent units across channels, # this stream will be split in 2 parts on the neo.io level and finally 3 AnalogSignals # will be generated per Segment. signal_channels[-2:]['units'] = 'pA' # create fake units channels # This is mandatory!!!! # Note that if there is no waveform at all in the file # then wf_units/wf_gain/wf_offset/wf_left_sweep/wf_sampling_rate # can be set to any value because _spike_raw_waveforms # will return None spike_channels = [] for c in range(3): unit_name = 'unit{}'.format(c) unit_id = '#{}'.format(c) wf_units = 'uV' wf_gain = 1000. / 2 ** 16 wf_offset = 0. wf_left_sweep = 20 wf_sampling_rate = 10000. spike_channels.append((unit_name, unit_id, wf_units, wf_gain, wf_offset, wf_left_sweep, wf_sampling_rate)) spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) # creating event/epoch channel # This is mandatory!!!! # In RawIO, epochs and events are dealt with the same way. event_channels = [] event_channels.append(('Some events', 'ev_0', 'event')) event_channels.append(('Some epochs', 'ep_1', 'epoch')) event_channels = np.array(event_channels, dtype=_event_channel_dtype) # fill into header dict # This is mandatory!!!!! self.header = {} self.header['nb_block'] = 2 self.header['nb_segment'] = [2, 3] self.header['signal_streams'] = signal_streams self.header['signal_channels'] = signal_channels self.header['spike_channels'] = spike_channels self.header['event_channels'] = event_channels # insert some annotations/array_annotations at some place # at neo.io level. IOs can add annotations # to any object. To keep this functionality with the wrapper # BaseFromRaw you can add annotations in a nested dict. 
# `_generate_minimal_annotations()` must be called to generate the nested # dict of annotations/array_annotations self._generate_minimal_annotations() # this pprint lines really help for understand the nested (and complicated sometimes) dict # from pprint import pprint # pprint(self.raw_annotations) # Until here all mandatory operations for setting up a rawio are implemented. # The following lines provide additional, recommended annotations for the # final neo objects. for block_index in range(2): bl_ann = self.raw_annotations['blocks'][block_index] bl_ann['name'] = 'Block #{}'.format(block_index) bl_ann['block_extra_info'] = 'This is the block {}'.format(block_index) for seg_index in range([2, 3][block_index]): seg_ann = bl_ann['segments'][seg_index] seg_ann['name'] = 'Seg #{} Block #{}'.format( seg_index, block_index) seg_ann['seg_extra_info'] = 'This is the seg {} of block {}'.format( seg_index, block_index) for c in range(2): sig_an = seg_ann['signals'][c]['nickname'] = \ f'This stream {c} is from a subdevice' # add some array annotations (8 channels) sig_an = seg_ann['signals'][c]['__array_annotations__']['impedance'] = \ np.random.rand(8) * 10000 for c in range(3): spiketrain_an = seg_ann['spikes'][c] spiketrain_an['quality'] = 'Good!!' 
# add some array annotations num_spikes = self.spike_count(block_index, seg_index, c) spiketrain_an['__array_annotations__']['amplitudes'] = \ np.random.randn(num_spikes) for c in range(2): event_an = seg_ann['events'][c] if c == 0: event_an['nickname'] = 'Miss Event 0' # add some array annotations num_ev = self.event_count(block_index, seg_index, c) event_an['__array_annotations__']['button'] = ['A'] * num_ev elif c == 1: event_an['nickname'] = 'MrEpoch 1' def _segment_t_start(self, block_index, seg_index): # this must return an float scale in second # this t_start will be shared by all object in the segment # except AnalogSignal all_starts = [[0., 15.], [0., 20., 60.]] return all_starts[block_index][seg_index] def _segment_t_stop(self, block_index, seg_index): # this must return an float scale in second all_stops = [[10., 25.], [10., 30., 70.]] return all_stops[block_index][seg_index] def _get_signal_size(self, block_index, seg_index, stream_index): # We generate fake data in which the two stream signals have the same shape # across all segments (10.0 seconds) # This is not the case for real data, instead you should return the signal # size depending on the block_index and segment_index # this must return an int = the number of sample # Note that channel_indexes can be ignored for most cases # except for several sampling rate. return 100000 def _get_signal_t_start(self, block_index, seg_index, stream_index): # This give the t_start of signals. # Very often this equal to _segment_t_start but not # always. # this must return an float scale in second # Note that channel_indexes can be ignored for most cases # except for several sampling rate. # Here this is the same. 
        # this is not always the case
        return self._segment_t_start(block_index, seg_index)

    def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop,
                                stream_index, channel_indexes):
        # this must return a chunk of signals from a signal stream,
        # limited by i_start/i_stop (both can be None)
        # channel_indexes can be None (= all channels in the stream), a list
        # or a numpy.array
        # This must return a 2D numpy array (even with one channel).
        # This must return the original dtype. No conversion here.
        # This must be as fast as possible.
        # To speed up this call, all preparatory calculations should be
        # implemented in _parse_header().

        # Here we are lucky: our signals are always zeros!!
        # it is not always the case :)
        # internally signals are int16
        # conversion to real units is done with self.header['signal_channels']

        if i_start is None:
            i_start = 0
        if i_stop is None:
            i_stop = 100000

        if i_start < 0 or i_stop > 100000:
            # some checks
            raise IndexError("I don't like your jokes")

        if channel_indexes is None:
            nb_chan = 8
        elif isinstance(channel_indexes, slice):
            channel_indexes = np.arange(8, dtype='int')[channel_indexes]
            nb_chan = len(channel_indexes)
        else:
            channel_indexes = np.asarray(channel_indexes)
            if any(channel_indexes < 0):
                raise IndexError('bad boy')
            if any(channel_indexes >= 8):
                raise IndexError('big bad wolf')
            nb_chan = len(channel_indexes)

        raw_signals = np.zeros((i_stop - i_start, nb_chan), dtype='int16')
        return raw_signals

    def _spike_count(self, block_index, seg_index, spike_channel_index):
        # Must return the number of spikes for a given
        # (block_index, seg_index, spike_channel_index)
        # we are lucky: our units all have the same number of spikes!!
        # it is not always the case
        nb_spikes = 20
        return nb_spikes

    def _get_spike_timestamps(self, block_index, seg_index, spike_channel_index,
                              t_start, t_stop):
        # In our IO, timestamps are internally coded as 'int64'; they
        # represent sample indexes into the 10 kHz signal
        # we are lucky: spikes have the same discharge pattern in all segments!!
        # an incredible neuron!!
        # This is not always the case.
        # The same t_start/t_stop clipping must be used in _get_spike_raw_waveforms()
        ts_start = (self._segment_t_start(block_index, seg_index) * 10000)
        spike_timestamps = np.arange(0, 10000, 500) + ts_start

        if t_start is not None or t_stop is not None:
            # restrict spikes to the given limits (in seconds)
            lim0 = int(t_start * 10000)
            lim1 = int(t_stop * 10000)
            mask = (spike_timestamps >= lim0) & (spike_timestamps <= lim1)
            spike_timestamps = spike_timestamps[mask]

        return spike_timestamps

    def _rescale_spike_timestamp(self, spike_timestamps, dtype):
        # must rescale a set of spike_timestamps to seconds,
        # with a fixed dtype so the user can choose the precision they want
        spike_times = spike_timestamps.astype(dtype)
        spike_times /= 10000.  # because 10kHz
        return spike_times

    def _get_spike_raw_waveforms(self, block_index, seg_index, spike_channel_index,
                                 t_start, t_stop):
        # this must return a 3D numpy array (nb_spike, nb_channel, nb_sample)
        # in the original dtype
        # this must be as fast as possible
        # the same t_start/t_stop clipping must be used as in _get_spike_timestamps()
        # If waveforms are not supported by the IO,
        # then _get_spike_raw_waveforms must return None
        # In our IO, waveforms come from all channels
        # they are int16
        # conversion to real units is done with self.header['spike_channels']
        # Here we have a realistic case: all waveforms are only noise.
        # it is not always the case
        # we have 20 spikes with a sweep of 50 samples (5 ms)

        # trick to get how many spikes are in the slice
        ts = self._get_spike_timestamps(block_index, seg_index,
                                        spike_channel_index, t_start, t_stop)
        nb_spike = ts.size

        np.random.seed(2205)  # a magic number (my birthday)
        waveforms = np.random.randint(low=-2**4, high=2**4,
                                      size=nb_spike * 50, dtype='int16')
        waveforms = waveforms.reshape(nb_spike, 1, 50)
        return waveforms

    def _event_count(self, block_index, seg_index, event_channel_index):
        # events and spikes are very similar
        # we have 2 event channels
        if event_channel_index == 0:
            # event channel
            return 6
        elif event_channel_index == 1:
            # epoch channel
            return 10

    def _get_event_timestamps(self, block_index, seg_index,
                              event_channel_index, t_start, t_stop):
        # the main difference between a spike channel and an event channel
        # is that here we return 3 numpy arrays: timestamp, durations, labels
        # durations must be None for an 'event'
        # labels must have a dtype of 'U'
        # in our IO, events are directly coded in seconds
        seg_t_start = self._segment_t_start(block_index, seg_index)
        if event_channel_index == 0:
            timestamp = np.arange(0, 6, dtype='float64') + seg_t_start
            durations = None
            labels = np.array(['trigger_a', 'trigger_b'] * 3, dtype='U12')
        elif event_channel_index == 1:
            timestamp = np.arange(0, 10, dtype='float64') + .5 + seg_t_start
            durations = np.ones((10), dtype='float64') * .25
            labels = np.array(['zoneX'] * 5 + ['zoneZ'] * 5, dtype='U12')

        if t_start is not None:
            keep = timestamp >= t_start
            timestamp, labels = timestamp[keep], labels[keep]
            if durations is not None:
                durations = durations[keep]

        if t_stop is not None:
            keep = timestamp <= t_stop
            timestamp, labels = timestamp[keep], labels[keep]
            if durations is not None:
                durations = durations[keep]

        return timestamp, durations, labels

    def _rescale_event_timestamp(self, event_timestamps, dtype, event_channel_index):
        # must rescale a set of event_timestamps to seconds,
        # with a fixed dtype so the user can choose the precision they want
        # really easy here because in our case they are already in seconds
        event_times = event_timestamps.astype(dtype)
        return event_times

    def _rescale_epoch_duration(self, raw_duration, dtype, event_channel_index):
        # really easy here because in our case they are already in seconds
        durations = raw_duration.astype(dtype)
        return durations


# ---- neo-0.10.0/neo/rawio/intanrawio.py ----
"""
Support for Intan Technologies RHD and RHS files.

These two formats are nearly the same, but:
  * there is some variance in the headers.
  * the RHS amplifier is more complex because of the optional DC channel.

RHS supported version: 1.0
RHD supported versions: 1.0 1.1 1.2 1.3 2.0

See:
  * http://intantech.com/files/Intan_RHD2000_data_file_formats.pdf
  * http://intantech.com/files/Intan_RHS2000_data_file_formats.pdf

Author: Samuel Garcia
"""
from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype,
                        _spike_channel_dtype, _event_channel_dtype,
                        _common_sig_characteristics)

import numpy as np
from collections import OrderedDict
from distutils.version import LooseVersion as V


class IntanRawIO(BaseRawIO):
    """
    """
    extensions = ['rhd', 'rhs']
    rawmode = 'one-file'

    def __init__(self, filename=''):
        BaseRawIO.__init__(self)
        self.filename = filename

    def _source_name(self):
        return self.filename

    def _parse_header(self):
        if self.filename.endswith('.rhs'):
            self._global_info, self._ordered_channels, data_dtype, \
                header_size, self._block_size = read_rhs(self.filename)
        elif self.filename.endswith('.rhd'):
            self._global_info, self._ordered_channels, data_dtype, \
                header_size, self._block_size = read_rhd(self.filename)

        # memmap the raw data with the complicated structured dtype
        self._raw_data = np.memmap(self.filename, dtype=data_dtype, mode='r',
                                   offset=header_size)

        # check timestamp continuity
        timestamp = self._raw_data['timestamp'].flatten()
        assert np.all(np.diff(timestamp) == 1), 'timestamps have gaps'

        # signals
        signal_channels = []
        for c, chan_info in
enumerate(self._ordered_channels): name = chan_info['native_channel_name'] chan_id = str(c) # the chan_id have no meaning in intan if chan_info['signal_type'] == 20: # exception for temperature sig_dtype = 'int16' else: sig_dtype = 'uint16' stream_id = str(chan_info['signal_type']) signal_channels.append((name, chan_id, chan_info['sampling_rate'], sig_dtype, chan_info['units'], chan_info['gain'], chan_info['offset'], stream_id)) signal_channels = np.array(signal_channels, dtype=_signal_channel_dtype) stream_ids = np.unique(signal_channels['stream_id']) signal_streams = np.zeros(stream_ids.size, dtype=_signal_stream_dtype) signal_streams['id'] = stream_ids for stream_index, stream_id in enumerate(stream_ids): signal_streams['name'][stream_index] = stream_type_to_name.get(int(stream_id), '') self._max_sampling_rate = np.max(signal_channels['sampling_rate']) self._max_sigs_length = self._raw_data.size * self._block_size # No events event_channels = [] event_channels = np.array(event_channels, dtype=_event_channel_dtype) # No spikes spike_channels = [] spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) # fille into header dict self.header = {} self.header['nb_block'] = 1 self.header['nb_segment'] = [1] self.header['signal_streams'] = signal_streams self.header['signal_channels'] = signal_channels self.header['spike_channels'] = spike_channels self.header['event_channels'] = event_channels self._generate_minimal_annotations() def _segment_t_start(self, block_index, seg_index): return 0. 
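The per-channel `gain`/`offset` pair stored in `header['signal_channels']` above is what turns raw integer samples into physical units. A minimal sketch of that rescaling, assuming the amplifier constants (0.195 µV/bit with a -32768 * 0.195 offset) set in `read_rhs()`/`read_rhd()` below; the helper name is ours, not neo's:

```python
import numpy as np

def rescale_raw_to_uV(raw, gain=0.195, offset=-32768 * 0.195):
    # float_value = raw * gain + offset, the same linear scaling that
    # BaseRawIO.rescale_signal_raw_to_float() applies channel by channel
    return raw.astype('float64') * gain + offset

# mid-scale of the unsigned 16-bit range maps to 0 uV
print(rescale_raw_to_uV(np.array([32768], dtype='uint16')))  # [0.]
```

This is why the raw chunk returned by `_get_analogsignal_chunk` can stay in the original `uint16` dtype: the conversion is deferred until the user asks for floats.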
def _segment_t_stop(self, block_index, seg_index): t_stop = self._max_sigs_length / self._max_sampling_rate return t_stop def _get_signal_size(self, block_index, seg_index, stream_index): stream_id = self.header['signal_streams'][stream_index]['id'] mask = self.header['signal_channels']['stream_id'] == stream_id signal_channels = self.header['signal_channels'][mask] channel_names = signal_channels['name'] chan_name0 = channel_names[0] size = self._raw_data[chan_name0].size return size def _get_signal_t_start(self, block_index, seg_index, stream_index): return 0. def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): if i_start is None: i_start = 0 if i_stop is None: i_stop = self._get_signal_size(block_index, seg_index, stream_index) stream_id = self.header['signal_streams'][stream_index]['id'] mask = self.header['signal_channels']['stream_id'] == stream_id signal_channels = self.header['signal_channels'][mask] if channel_indexes is None: channel_indexes = slice(None) channel_names = signal_channels['name'][channel_indexes] shape = self._raw_data[channel_names[0]].shape # some channel (temperature) have 1D field so shape 1D # because 1 sample per block if len(shape) == 2: # this is the general case with 2D block_size = shape[1] block_start = i_start // block_size block_stop = i_stop // block_size + 1 sl0 = i_start % block_size sl1 = sl0 + (i_stop - i_start) sigs_chunk = np.zeros((i_stop - i_start, len(channel_names)), dtype='uint16') for i, chan_name in enumerate(channel_names): data_chan = self._raw_data[chan_name] if len(shape) == 1: sigs_chunk[:, i] = data_chan[i_start:i_stop] else: sigs_chunk[:, i] = data_chan[block_start:block_stop].flatten()[sl0:sl1] return sigs_chunk def read_qstring(f): length = np.fromfile(f, dtype='uint32', count=1)[0] if length == 0xFFFFFFFF or length == 0: return '' txt = f.read(length).decode('utf-16') return txt def read_variable_header(f, header): info = {} for field_name, 
field_type in header: if field_type == 'QString': field_value = read_qstring(f) else: field_value = np.fromfile(f, dtype=field_type, count=1)[0] info[field_name] = field_value return info ############### # RHS ZONE rhs_global_header = [ ('magic_number', 'uint32'), # 0xD69127AC ('major_version', 'int16'), ('minor_version', 'int16'), ('sampling_rate', 'float32'), ('dsp_enabled', 'int16'), ('actual_dsp_cutoff_frequency', 'float32'), ('actual_lower_bandwidth', 'float32'), ('actual_lower_settle_bandwidth', 'float32'), ('actual_upper_bandwidth', 'float32'), ('desired_dsp_cutoff_frequency', 'float32'), ('desired_lower_bandwidth', 'float32'), ('desired_lower_settle_bandwidth', 'float32'), ('desired_upper_bandwidth', 'float32'), ('notch_filter_mode', 'int16'), ('desired_impedance_test_frequency', 'float32'), ('actual_impedance_test_frequency', 'float32'), ('amp_settle_mode', 'int16'), ('charge_recovery_mode', 'int16'), ('stim_step_size', 'float32'), ('recovery_current_limit', 'float32'), ('recovery_target_voltage', 'float32'), ('note1', 'QString'), ('note2', 'QString'), ('note3', 'QString'), ('dc_amplifier_data_saved', 'int16'), ('board_mode', 'int16'), ('ref_channel_name', 'QString'), ('nb_signal_group', 'int16'), ] rhs_signal_group_header = [ ('signal_group_name', 'QString'), ('signal_group_prefix', 'QString'), ('signal_group_enabled', 'int16'), ('channel_num', 'int16'), ('amplified_channel_num', 'int16'), ] rhs_signal_channel_header = [ ('native_channel_name', 'QString'), ('custom_channel_name', 'QString'), ('native_order', 'int16'), ('custom_order', 'int16'), ('signal_type', 'int16'), ('channel_enabled', 'int16'), ('chip_channel_num', 'int16'), ('command_stream', 'int16'), ('board_stream_num', 'int16'), ('spike_scope_trigger_mode', 'int16'), ('spike_scope_voltage_thresh', 'int16'), ('spike_scope_digital_trigger_channel', 'int16'), ('spike_scope_digital_edge_polarity', 'int16'), ('electrode_impedance_magnitude', 'float32'), ('electrode_impedance_phase', 'float32'), ] def 
read_rhs(filename): BLOCK_SIZE = 128 # sample per block with open(filename, mode='rb') as f: global_info = read_variable_header(f, rhs_global_header) channels_by_type = {k: [] for k in [0, 3, 4, 5, 6]} for g in range(global_info['nb_signal_group']): group_info = read_variable_header(f, rhs_signal_group_header) if bool(group_info['signal_group_enabled']): for c in range(group_info['channel_num']): chan_info = read_variable_header(f, rhs_signal_channel_header) assert chan_info['signal_type'] not in (1, 2) if bool(chan_info['channel_enabled']): channels_by_type[chan_info['signal_type']].append(chan_info) header_size = f.tell() sr = global_info['sampling_rate'] # construct dtype by re-ordering channels by types ordered_channels = [] data_dtype = [('timestamp', 'int32', BLOCK_SIZE)] # 0: RHS2000 amplifier channel. for chan_info in channels_by_type[0]: name = chan_info['native_channel_name'] chan_info['sampling_rate'] = sr chan_info['units'] = 'uV' chan_info['gain'] = 0.195 chan_info['offset'] = -32768 * 0.195 ordered_channels.append(chan_info) data_dtype += [(name, 'uint16', BLOCK_SIZE)] if bool(global_info['dc_amplifier_data_saved']): for chan_info in channels_by_type[0]: name = chan_info['native_channel_name'] chan_info_dc = dict(chan_info) chan_info_dc['native_channel_name'] = name + '_DC' chan_info_dc['sampling_rate'] = sr chan_info_dc['units'] = 'mV' chan_info_dc['gain'] = 19.23 chan_info_dc['offset'] = -512 * 19.23 chan_info_dc['signal_type'] = 10 # put it in another group ordered_channels.append(chan_info_dc) data_dtype += [(name + '_DC', 'uint16', BLOCK_SIZE)] for chan_info in channels_by_type[0]: name = chan_info['native_channel_name'] chan_info_stim = dict(chan_info) chan_info_stim['native_channel_name'] = name + '_STIM' chan_info_stim['sampling_rate'] = sr # stim channel are coplicated because they are coded # with bits, they do not fit the gain/offset rawio strategy chan_info_stim['units'] = '' chan_info_stim['gain'] = 1. chan_info_stim['offset'] = 0. 
chan_info_stim['signal_type'] = 11 # put it in another group ordered_channels.append(chan_info_stim) data_dtype += [(name + '_STIM', 'uint16', BLOCK_SIZE)] # 3: Analog input channel. # 4: Analog output channel. for sig_type in [3, 4, ]: for chan_info in channels_by_type[sig_type]: name = chan_info['native_channel_name'] chan_info['sampling_rate'] = sr chan_info['units'] = 'V' chan_info['gain'] = 0.0003125 chan_info['offset'] = -32768 * 0.0003125 ordered_channels.append(chan_info) data_dtype += [(name, 'uint16', BLOCK_SIZE)] # 5: Digital input channel. # 6: Digital output channel. for sig_type in [5, 6]: # at the moment theses channel are not in sig channel list # but they are in the raw memamp if len(channels_by_type[sig_type]) > 0: name = {5: 'DIGITAL-IN', 6: 'DIGITAL-OUT'}[sig_type] data_dtype += [(name, 'uint16', BLOCK_SIZE)] return global_info, ordered_channels, data_dtype, header_size, BLOCK_SIZE ############### # RHD ZONE rhd_global_header_base = [ ('magic_number', 'uint32'), # 0xC6912702 ('major_version', 'int16'), ('minor_version', 'int16'), ] rhd_global_header_part1 = [ ('sampling_rate', 'float32'), ('dsp_enabled', 'int16'), ('actual_dsp_cutoff_frequency', 'float32'), ('actual_lower_bandwidth', 'float32'), ('actual_upper_bandwidth', 'float32'), ('desired_dsp_cutoff_frequency', 'float32'), ('desired_lower_bandwidth', 'float32'), ('desired_upper_bandwidth', 'float32'), ('notch_filter_mode', 'int16'), ('desired_impedance_test_frequency', 'float32'), ('actual_impedance_test_frequency', 'float32'), ('note1', 'QString'), ('note2', 'QString'), ('note3', 'QString'), ] rhd_global_header_v11 = [ ('num_temp_sensor_channels', 'int16'), ] rhd_global_header_v13 = [ ('eval_board_mode', 'int16'), ] rhd_global_header_v20 = [ ('reference_channel', 'QString'), ] rhd_global_header_final = [ ('nb_signal_group', 'int16'), ] rhd_signal_group_header = [ ('signal_group_name', 'QString'), ('signal_group_prefix', 'QString'), ('signal_group_enabled', 'int16'), ('channel_num', 
'int16'), ('amplified_channel_num', 'int16'), ] rhd_signal_channel_header = [ ('native_channel_name', 'QString'), ('custom_channel_name', 'QString'), ('native_order', 'int16'), ('custom_order', 'int16'), ('signal_type', 'int16'), ('channel_enabled', 'int16'), ('chip_channel_num', 'int16'), ('board_stream_num', 'int16'), ('spike_scope_trigger_mode', 'int16'), ('spike_scope_voltage_thresh', 'int16'), ('spike_scope_digital_trigger_channel', 'int16'), ('spike_scope_digital_edge_polarity', 'int16'), ('electrode_impedance_magnitude', 'float32'), ('electrode_impedance_phase', 'float32'), ] stream_type_to_name = { 0: 'RHD2000 amplifier channel', 1: 'RHD2000 auxiliary input channel', 2: 'RHD2000 supply voltage channel', 3: 'USB board ADC input channel', 4: 'USB board digital input channel', 5: 'USB board digital output channel', } def read_rhd(filename): with open(filename, mode='rb') as f: global_info = read_variable_header(f, rhd_global_header_base) version = V('{major_version}.{minor_version}'.format(**global_info)) # the header size depend on the version :-( header = list(rhd_global_header_part1) # make a copy if version >= '1.1': header = header + rhd_global_header_v11 else: global_info['num_temp_sensor_channels'] = 0 if version >= '1.3': header = header + rhd_global_header_v13 else: global_info['eval_board_mode'] = 0 if version >= '2.0': header = header + rhd_global_header_v20 else: global_info['reference_channel'] = '' header = header + rhd_global_header_final global_info.update(read_variable_header(f, header)) # read channel group and channel header channels_by_type = {k: [] for k in [0, 1, 2, 3, 4, 5]} for g in range(global_info['nb_signal_group']): group_info = read_variable_header(f, rhd_signal_group_header) if bool(group_info['signal_group_enabled']): for c in range(group_info['channel_num']): chan_info = read_variable_header(f, rhd_signal_channel_header) if bool(chan_info['channel_enabled']): channels_by_type[chan_info['signal_type']].append(chan_info) 
header_size = f.tell() sr = global_info['sampling_rate'] # construct the data block dtype and reorder channels if version >= '2.0': BLOCK_SIZE = 128 else: BLOCK_SIZE = 60 # 256 channels ordered_channels = [] if version >= '1.2': data_dtype = [('timestamp', 'int32', BLOCK_SIZE)] else: data_dtype = [('timestamp', 'uint32', BLOCK_SIZE)] # 0: RHD2000 amplifier channel for chan_info in channels_by_type[0]: name = chan_info['native_channel_name'] chan_info['sampling_rate'] = sr chan_info['units'] = 'uV' chan_info['gain'] = 0.195 chan_info['offset'] = -32768 * 0.195 ordered_channels.append(chan_info) data_dtype += [(name, 'uint16', BLOCK_SIZE)] # 1: RHD2000 auxiliary input channel for chan_info in channels_by_type[1]: name = chan_info['native_channel_name'] chan_info['sampling_rate'] = sr / 4. chan_info['units'] = 'V' chan_info['gain'] = 0.0000374 chan_info['offset'] = 0. ordered_channels.append(chan_info) data_dtype += [(name, 'uint16', BLOCK_SIZE // 4)] # 2: RHD2000 supply voltage channel for chan_info in channels_by_type[2]: name = chan_info['native_channel_name'] chan_info['sampling_rate'] = sr / BLOCK_SIZE chan_info['units'] = 'V' chan_info['gain'] = 0.0000748 chan_info['offset'] = 0. ordered_channels.append(chan_info) data_dtype += [(name, 'uint16')] # temperature is not an official channel in the header for i in range(global_info['num_temp_sensor_channels']): name = 'temperature_{}'.format(i) chan_info = {'native_channel_name': name, 'signal_type': 20} chan_info['sampling_rate'] = sr / BLOCK_SIZE chan_info['units'] = 'Celsius' chan_info['gain'] = 0.001 chan_info['offset'] = 0. ordered_channels.append(chan_info) data_dtype += [(name, 'int16')] # 3: USB board ADC input channel for chan_info in channels_by_type[3]: name = chan_info['native_channel_name'] chan_info['sampling_rate'] = sr chan_info['units'] = 'V' if global_info['eval_board_mode'] == 0: chan_info['gain'] = 0.000050354 chan_info['offset'] = 0. 
elif global_info['eval_board_mode'] == 1: chan_info['gain'] = 0.00015259 chan_info['offset'] = -32768 * 0.00015259 elif global_info['eval_board_mode'] == 13: chan_info['gain'] = 0.0003125 chan_info['offset'] = -32768 * 0.0003125 ordered_channels.append(chan_info) data_dtype += [(name, 'uint16', BLOCK_SIZE)] # 4: USB board digital input channel # 5: USB board digital output channel for sig_type in [4, 5]: # at the moment theses channel are not in sig channel list # but they are in the raw memamp if len(channels_by_type[sig_type]) > 0: name = {4: 'DIGITAL-IN', 5: 'DIGITAL-OUT'}[sig_type] data_dtype += [(name, 'uint16', BLOCK_SIZE)] return global_info, ordered_channels, data_dtype, header_size, BLOCK_SIZE neo-0.10.0/neo/rawio/maxwellrawio.py0000644000076700000240000002204514077554132017760 0ustar andrewstaff00000000000000""" Class for reading data from maxwell biosystem device: * MaxOne * MaxTwo https://www.mxwbio.com/resources/mea/ The implementation is a mix between: * the implementation in spikeextractors https://github.com/SpikeInterface/spikeextractors/blob/master/spikeextractors/extractors/maxwellextractors/maxwellextractors.py * the implementation in spyking-circus https://github.com/spyking-circus/spyking-circus/blob/master/circus/files/maxwell.py The implementation do not handle spike at the moment. For maxtwo device, each well will be a different signal stream. Author : Samuel Garcia, Alessio Buccino, Pierre Yger """ import os from pathlib import Path import platform from urllib.request import urlopen from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype, _spike_channel_dtype, _event_channel_dtype) import numpy as np try: import h5py HAVE_H5 = True except ImportError: HAVE_H5 = False class MaxwellRawIO(BaseRawIO): """ Class for reading MaxOne or MaxTwo files. 
""" extensions = ['h5'] rawmode = 'one-file' def __init__(self, filename='', rec_name=None): BaseRawIO.__init__(self) self.filename = filename self.rec_name = rec_name def _source_name(self): return self.filename def _parse_header(self): try: import MEArec as mr HAVE_MEAREC = True except ImportError: HAVE_MEAREC = False assert HAVE_H5, 'h5py is not installed' h5 = h5py.File(self.filename, mode='r') self.h5_file = h5 version = h5['version'][0].decode() # create signal stream # one stream per well signal_streams = [] if int(version) == 20160704: self._old_format = True signal_streams.append(('well000', 'well000')) elif int(version) > 20160704: # multi stream stream (one well is one stream) self._old_format = False stream_ids = list(h5['wells'].keys()) for stream_id in stream_ids: rec_names = list(h5['wells'][stream_id].keys()) if len(rec_names) > 1: if self.rec_name is None: raise ValueError("Detected multiple recordings. Please select a single recording using" f' the `rec_name` paramter.\nPossible rec_name {rec_names}') else: self.rec_name = rec_names[0] signal_streams.append((stream_id, stream_id)) else: raise NotImplementedError(f'This version {version} is not supported') signal_streams = np.array(signal_streams, dtype=_signal_stream_dtype) # create signal channels max_sig_length = 0 self._signals = {} sig_channels = [] for stream_id in signal_streams['id']: if int(version) == 20160704: sr = 20000. settings = h5["settings"] if 'lsb' in settings: gain_uV = settings['lsb'][0] * 1e6 else: if "gain" not in settings: print("'gain' amd 'lsb' not found in settings. 
" "Setting gain to 512 (default)") gain = 512 else: gain = settings['gain'][0] gain_uV = 3.3 / (1024 * gain) * 1e6 sigs = h5['sig'] mapping = h5["mapping"] ids = np.array(mapping['channel']) ids = ids[ids >= 0] self._channel_slice = ids elif int(version) > 20160704: settings = h5['wells'][stream_id][self.rec_name]['settings'] sr = settings['sampling'][0] if 'lsb' in settings: gain_uV = settings['lsb'][0] * 1e6 else: if "gain" not in settings: print("'gain' amd 'lsb' not found in settings. " "Setting gain to 512 (default)") gain = 512 else: gain = settings['gain'][0] gain_uV = 3.3 / (1024 * gain) * 1e6 mapping = settings['mapping'] sigs = h5['wells'][stream_id][self.rec_name]['groups']['routed']['raw'] channel_ids = np.array(mapping['channel']) electrode_ids = np.array(mapping['electrode']) mask = channel_ids >= 0 channel_ids = channel_ids[mask] electrode_ids = electrode_ids[mask] for i, chan_id in enumerate(channel_ids): elec_id = electrode_ids[i] ch_name = f'ch{chan_id} elec{elec_id}' offset_uV = 0 sig_channels.append((ch_name, str(chan_id), sr, 'uint16', 'uV', gain_uV, offset_uV, stream_id)) self._signals[stream_id] = sigs max_sig_length = max(max_sig_length, sigs.shape[1]) self._t_stop = max_sig_length / sr sig_channels = np.array(sig_channels, dtype=_signal_channel_dtype) spike_channels = [] spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) event_channels = [] event_channels = np.array(event_channels, dtype=_event_channel_dtype) self.header = {} self.header['nb_block'] = 1 self.header['nb_segment'] = [1] self.header['signal_streams'] = signal_streams self.header['signal_channels'] = sig_channels self.header['spike_channels'] = spike_channels self.header['event_channels'] = event_channels self._generate_minimal_annotations() bl_ann = self.raw_annotations['blocks'][0] bl_ann['maxwell_version'] = version def _segment_t_start(self, block_index, seg_index): return 0. 
    def _segment_t_stop(self, block_index, seg_index):
        return self._t_stop

    def _get_signal_size(self, block_index, seg_index, stream_index):
        stream_id = self.header['signal_streams'][stream_index]['id']
        sigs = self._signals[stream_id]
        return sigs.shape[1]

    def _get_signal_t_start(self, block_index, seg_index, stream_index):
        return 0.

    def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop,
                                stream_index, channel_indexes):
        stream_id = self.header['signal_streams'][stream_index]['id']
        sigs = self._signals[stream_id]

        if i_start is None:
            i_start = 0
        if i_stop is None:
            i_stop = sigs.shape[1]

        if channel_indexes is None:
            channel_indexes = slice(None)

        try:
            if self._old_format:
                sigs = sigs[self._channel_slice, i_start:i_stop]
                sigs = sigs[channel_indexes]
            else:
                sigs = sigs[channel_indexes, i_start:i_stop]
        except OSError as e:
            print('*' * 10)
            print(_hdf_maxwell_error)
            print('*' * 10)
            raise(e)
        sigs = sigs.T
        return sigs


_hdf_maxwell_error = """Maxwell file format is based on HDF5.
The internal compression requires a custom plugin!!!
This is a big pain for the end user.
You, as an end user, should ask Maxwell to change this.
Please visit this page and install the missing decompression libraries:
https://share.mxwbio.com/d/4742248b2e674a85be97/

Then link the decompression library by setting `HDF5_PLUGIN_PATH` to your
installation location, e.g. via
os.environ['HDF5_PLUGIN_PATH'] = '/path/to/custom/hdf5/plugin/'

Alternatively, you can use the auto_install_maxwell_hdf5_compression_plugin()
function below, which does this automatically.
""" def auto_install_maxwell_hdf5_compression_plugin(hdf5_plugin_path=None, force_download=True): if hdf5_plugin_path is None: hdf5_plugin_path = os.getenv('HDF5_PLUGIN_PATH', None) if hdf5_plugin_path is None: hdf5_plugin_path = Path.home() / 'hdf5_plugin_path_maxwell' os.environ['HDF5_PLUGIN_PATH'] = str(hdf5_plugin_path) hdf5_plugin_path = Path(hdf5_plugin_path) hdf5_plugin_path.mkdir(exist_ok=True) if platform.system() == 'Linux': remote_lib = 'https://share.mxwbio.com/d/4742248b2e674a85be97/files/?p=%2FLinux%2Flibcompression.so&dl=1' local_lib = hdf5_plugin_path / 'libcompression.so' elif platform.system() == 'Darwin': remote_lib = 'https://share.mxwbio.com/d/4742248b2e674a85be97/files/?p=%2FMacOS%2Flibcompression.dylib&dl=1' local_lib = hdf5_plugin_path / 'libcompression.dylib' elif platform.system() == 'Windows': remote_lib = 'https://share.mxwbio.com/d/4742248b2e674a85be97/files/?p=%2FWindows%2Fcompression.dll&dl=1' local_lib = hdf5_plugin_path / 'compression.dll' if not force_download and local_lib.is_file(): print(f'lib h5 compression for maxwell already already in {local_lib}') return dist = urlopen(remote_lib) with open(local_lib, 'wb') as f: f.write(dist.read()) neo-0.10.0/neo/rawio/mearecrawio.py0000644000076700000240000001473514066375716017561 0ustar andrewstaff00000000000000""" Class for reading data from a MEArec simulated data. See: https://mearec.readthedocs.io/en/latest/ https://github.com/alejoe91/MEArec https://link.springer.com/article/10.1007/s12021-020-09467-7 Author : Alessio Buccino """ from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype, _spike_channel_dtype, _event_channel_dtype) import numpy as np from copy import deepcopy class MEArecRawIO(BaseRawIO): """ Class for "reading" fake data from a MEArec file. 
    Usage:
        >>> import neo.rawio
        >>> r = neo.rawio.MEArecRawIO(filename='mearec.h5')
        >>> r.parse_header()
        >>> print(r)
        >>> raw_chunk = r.get_analogsignal_chunk(block_index=0, seg_index=0,
        ...                                      i_start=0, i_stop=1024,
        ...                                      channel_indexes=None)
        >>> float_chunk = r.rescale_signal_raw_to_float(raw_chunk, dtype='float64',
        ...                                             channel_indexes=[0, 3, 6])
        >>> spike_timestamp = r.get_spike_timestamps(spike_channel_index=0,
        ...                                          t_start=None, t_stop=None)
        >>> spike_times = r.rescale_spike_timestamp(spike_timestamp, 'float64')
    """
    extensions = ['h5']
    rawmode = 'one-file'

    def __init__(self, filename=''):
        BaseRawIO.__init__(self)
        self.filename = filename

    def _source_name(self):
        return self.filename

    def _parse_header(self):
        try:
            import MEArec as mr
            HAVE_MEAREC = True
        except ImportError:
            HAVE_MEAREC = False
        assert HAVE_MEAREC, 'MEArec is not installed'

        self._recgen = mr.load_recordings(recordings=self.filename,
                                          return_h5_objects=True,
                                          check_suffix=False,
                                          load=['recordings', 'spiketrains',
                                                'channel_positions'],
                                          load_waveforms=False)
        self._sampling_rate = self._recgen.info['recordings']['fs']
        self._recordings = self._recgen.recordings
        self._num_frames, self._num_channels = self._recordings.shape

        signal_streams = np.array([('Signals', '0')], dtype=_signal_stream_dtype)

        sig_channels = []
        for c in range(self._num_channels):
            ch_name = 'ch{}'.format(c)
            chan_id = str(c + 1)
            sr = self._sampling_rate  # Hz
            dtype = self._recordings.dtype
            units = 'uV'
            gain = 1.
            offset = 0.
            stream_id = '0'
            sig_channels.append((ch_name, chan_id, sr, dtype, units,
                                 gain, offset, stream_id))
        sig_channels = np.array(sig_channels, dtype=_signal_channel_dtype)

        # creating units channels
        spike_channels = []
        self._spiketrains = self._recgen.spiketrains
        for c in range(len(self._spiketrains)):
            unit_name = 'unit{}'.format(c)
            unit_id = '#{}'.format(c)
            # if spiketrains[c].waveforms is not None:
            wf_units = ''
            wf_gain = 1.
            wf_offset = 0.
wf_left_sweep = 0 wf_sampling_rate = self._sampling_rate spike_channels.append((unit_name, unit_id, wf_units, wf_gain, wf_offset, wf_left_sweep, wf_sampling_rate)) spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) event_channels = [] event_channels = np.array(event_channels, dtype=_event_channel_dtype) self.header = {} self.header['nb_block'] = 1 self.header['nb_segment'] = [1] self.header['signal_streams'] = signal_streams self.header['signal_channels'] = sig_channels self.header['spike_channels'] = spike_channels self.header['event_channels'] = event_channels self._generate_minimal_annotations() for block_index in range(1): bl_ann = self.raw_annotations['blocks'][block_index] bl_ann['mearec_info'] = deepcopy(self._recgen.info) def _segment_t_start(self, block_index, seg_index): all_starts = [[0.]] return all_starts[block_index][seg_index] def _segment_t_stop(self, block_index, seg_index): t_stop = self._num_frames / self._sampling_rate all_stops = [[t_stop]] return all_stops[block_index][seg_index] def _get_signal_size(self, block_index, seg_index, stream_index): assert stream_index == 0 return self._num_frames def _get_signal_t_start(self, block_index, seg_index, stream_index): assert stream_index == 0 return self._segment_t_start(block_index, seg_index) def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): if i_start is None: i_start = 0 if i_stop is None: i_stop = self._num_frames if channel_indexes is None: channel_indexes = slice(self._num_channels) if isinstance(channel_indexes, slice): raw_signals = self._recgen.recordings[i_start:i_stop, channel_indexes] else: # sort channels because h5py neeeds sorted indexes if np.any(np.diff(channel_indexes) < 0): sorted_channel_indexes = np.sort(channel_indexes) sorted_idx = np.array([list(sorted_channel_indexes).index(ch) for ch in channel_indexes]) raw_signals = self._recgen.recordings[i_start:i_stop, sorted_channel_indexes] raw_signals = 
raw_signals[:, sorted_idx] else: raw_signals = self._recgen.recordings[i_start:i_stop, channel_indexes] return raw_signals def _spike_count(self, block_index, seg_index, unit_index): return len(self._spiketrains[unit_index]) def _get_spike_timestamps(self, block_index, seg_index, unit_index, t_start, t_stop): spike_timestamps = self._spiketrains[unit_index].times.magnitude if t_start is None: t_start = self._segment_t_start(block_index, seg_index) if t_stop is None: t_stop = self._segment_t_stop(block_index, seg_index) timestamp_idxs = np.where((spike_timestamps >= t_start) & (spike_timestamps < t_stop)) return spike_timestamps[timestamp_idxs] def _rescale_spike_timestamp(self, spike_timestamps, dtype): return spike_timestamps.astype(dtype) def _get_spike_raw_waveforms(self, block_index, seg_index, spike_channel_index, t_start, t_stop): return None neo-0.10.0/neo/rawio/micromedrawio.py0000644000076700000240000002167414077527451020121 0ustar andrewstaff00000000000000""" Class for reading/writing data from micromed (.trc). Inspired by the Matlab code for EEGLAB from Rami K. Niazy. Completed with matlab Guillaume BECQ code. Author: Samuel Garcia """ from .baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype, _spike_channel_dtype, _event_channel_dtype) import numpy as np import datetime import os import struct import io class StructFile(io.BufferedReader): def read_f(self, fmt, offset=None): if offset is not None: self.seek(offset) return struct.unpack(fmt, self.read(struct.calcsize(fmt))) class MicromedRawIO(BaseRawIO): """ Class for reading data from micromed (.trc). 
""" extensions = ['trc', 'TRC'] rawmode = 'one-file' def __init__(self, filename=''): BaseRawIO.__init__(self) self.filename = filename def _parse_header(self): with open(self.filename, 'rb') as fid: f = StructFile(fid) # Name f.seek(64) surname = f.read(22).strip(b' ') firstname = f.read(20).strip(b' ') # Date day, month, year, hour, minute, sec = f.read_f('bbbbbb', offset=128) rec_datetime = datetime.datetime(year + 1900, month, day, hour, minute, sec) Data_Start_Offset, Num_Chan, Multiplexer, Rate_Min, Bytes = f.read_f( 'IHHHH', offset=138) # header version header_version, = f.read_f('b', offset=175) assert header_version == 4 # area f.seek(176) zone_names = ['ORDER', 'LABCOD', 'NOTE', 'FLAGS', 'TRONCA', 'IMPED_B', 'IMPED_E', 'MONTAGE', 'COMPRESS', 'AVERAGE', 'HISTORY', 'DVIDEO', 'EVENT A', 'EVENT B', 'TRIGGER'] zones = {} for zname in zone_names: zname2, pos, length = f.read_f('8sII') zones[zname] = zname2, pos, length assert zname == zname2.decode('ascii').strip(' ') # raw signals memmap sig_dtype = 'u' + str(Bytes) self._raw_signals = np.memmap(self.filename, dtype=sig_dtype, mode='r', offset=Data_Start_Offset).reshape(-1, Num_Chan) # Reading Code Info zname2, pos, length = zones['ORDER'] f.seek(pos) code = np.frombuffer(f.read(Num_Chan * 2), dtype='u2') units_code = {-1: 'nV', 0: 'uV', 1: 'mV', 2: 1, 100: 'percent', 101: 'dimensionless', 102: 'dimensionless'} signal_channels = [] sig_grounds = [] for c in range(Num_Chan): zname2, pos, length = zones['LABCOD'] f.seek(pos + code[c] * 128 + 2, 0) chan_name = f.read(6).strip(b"\x00").decode('ascii') ground = f.read(6).strip(b"\x00").decode('ascii') sig_grounds.append(ground) logical_min, logical_max, logical_ground, physical_min, physical_max = f.read_f( 'iiiii') k, = f.read_f('h') units = units_code.get(k, 'uV') factor = float(physical_max - physical_min) / float( logical_max - logical_min + 1) gain = factor offset = -logical_ground * factor f.seek(8, 1) sampling_rate, = f.read_f('H') sampling_rate *= Rate_Min 
chan_id = str(c) stream_id = '0' signal_channels.append((chan_name, chan_id, sampling_rate, sig_dtype, units, gain, offset, stream_id)) signal_channels = np.array(signal_channels, dtype=_signal_channel_dtype) signal_streams = np.array([('Signals', '0')], dtype=_signal_stream_dtype) assert np.unique(signal_channels['sampling_rate']).size == 1 self._sampling_rate = float(np.unique(signal_channels['sampling_rate'])[0]) # Event channels event_channels = [] event_channels.append(('Trigger', '', 'event')) event_channels.append(('Note', '', 'event')) event_channels.append(('Event A', '', 'epoch')) event_channels.append(('Event B', '', 'epoch')) event_channels = np.array(event_channels, dtype=_event_channel_dtype) # Read trigger and notes self._raw_events = [] ev_dtypes = [('TRIGGER', [('start', 'u4'), ('label', 'u2')]), ('NOTE', [('start', 'u4'), ('label', 'S40')]), ('EVENT A', [('label', 'u4'), ('start', 'u4'), ('stop', 'u4')]), ('EVENT B', [('label', 'u4'), ('start', 'u4'), ('stop', 'u4')]), ] for zname, ev_dtype in ev_dtypes: zname2, pos, length = zones[zname] dtype = np.dtype(ev_dtype) rawevent = np.memmap(self.filename, dtype=dtype, mode='r', offset=pos, shape=length // dtype.itemsize) keep = (rawevent['start'] >= rawevent['start'][0]) & ( rawevent['start'] < self._raw_signals.shape[0]) & ( rawevent['start'] != 0) rawevent = rawevent[keep] self._raw_events.append(rawevent) # No spikes spike_channels = [] spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) # fille into header dict self.header = {} self.header['nb_block'] = 1 self.header['nb_segment'] = [1] self.header['signal_streams'] = signal_streams self.header['signal_channels'] = signal_channels self.header['spike_channels'] = spike_channels self.header['event_channels'] = event_channels # insert some annotation at some place self._generate_minimal_annotations() bl_annotations = self.raw_annotations['blocks'][0] seg_annotations = bl_annotations['segments'][0] for d in (bl_annotations, 
seg_annotations): d['rec_datetime'] = rec_datetime d['firstname'] = firstname d['surname'] = surname d['header_version'] = header_version sig_annotations = self.raw_annotations['blocks'][0]['segments'][0]['signals'][0] sig_annotations['__array_annotations__']['ground'] = np.array(sig_grounds) def _source_name(self): return self.filename def _segment_t_start(self, block_index, seg_index): return 0. def _segment_t_stop(self, block_index, seg_index): t_stop = self._raw_signals.shape[0] / self._sampling_rate return t_stop def _get_signal_size(self, block_index, seg_index, stream_index): assert stream_index == 0 return self._raw_signals.shape[0] def _get_signal_t_start(self, block_index, seg_index, stream_index): assert stream_index == 0 return 0. def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop, stream_index, channel_indexes): if channel_indexes is None: channel_indexes = slice(channel_indexes) raw_signals = self._raw_signals[slice(i_start, i_stop), channel_indexes] return raw_signals def _spike_count(self, block_index, seg_index, unit_index): return 0 def _event_count(self, block_index, seg_index, event_channel_index): n = self._raw_events[event_channel_index].size return n def _get_event_timestamps(self, block_index, seg_index, event_channel_index, t_start, t_stop): raw_event = self._raw_events[event_channel_index] if t_start is not None: keep = raw_event['start'] >= int(t_start * self._sampling_rate) raw_event = raw_event[keep] if t_stop is not None: keep = raw_event['start'] <= int(t_stop * self._sampling_rate) raw_event = raw_event[keep] timestamp = raw_event['start'] if event_channel_index < 2: durations = None else: durations = raw_event['stop'] - raw_event['start'] try: labels = raw_event['label'].astype('U') except UnicodeDecodeError: # sometimes the conversion do not work : here a simple fix labels = np.array([e.decode('cp1252') for e in raw_event['label']], dtype='U') return timestamp, durations, labels def 
_rescale_event_timestamp(self, event_timestamps, dtype, event_channel_index): event_times = event_timestamps.astype(dtype) / self._sampling_rate return event_times def _rescale_epoch_duration(self, raw_duration, dtype, event_channel_index): durations = raw_duration.astype(dtype) / self._sampling_rate return durations neo-0.10.0/neo/rawio/neuralynxrawio/0000755000076700000240000000000014077757142017765 5ustar andrewstaff00000000000000neo-0.10.0/neo/rawio/neuralynxrawio/__init__.py0000644000076700000240000000053114066374330022065 0ustar andrewstaff00000000000000""" RawIO for reading data from Neuralynx files. This IO supports NCS, NEV, NSE and NTT file formats. NCS contains the sampled signal for one channel NEV contains events NSE contains spikes and waveforms for mono electrodes NTT contains spikes and waveforms for tetrodes """ from neo.rawio.neuralynxrawio.neuralynxrawio import NeuralynxRawIO neo-0.10.0/neo/rawio/neuralynxrawio/ncssections.py0000644000076700000240000004116014066375716022675 0ustar andrewstaff00000000000000import math class NcsSections: """ Contains information regarding the contiguous sections of records in an Ncs file. Methods of NcsSectionsFactory perform parsing of this information from an Ncs file and produce these where the sections are discontiguous in time and in temporal order. TODO: This class will likely need __eq__, __ne__, and __hash__ to be useful in more sophisticated segment construction algorithms. """ def __init__(self): self.sects = [] self.sampFreqUsed = 0 # actual sampling frequency of samples self.microsPerSampUsed = 0 # microseconds per sample class NcsSection: """ Information regarding a single contiguous section or group of records in an Ncs file. Model is that times are closed on the left and open on the right. Record numbers are closed on both left and right, that is, inclusive of the last record. endTime should never be set less than startTime for comparison functions to work properly, though this is not enforced. 
""" _RECORD_SIZE = 512 # nb sample per signal record def __init__(self): self.startRec = -1 # index of starting record self.startTime = -1 # starttime of first record self.endRec = -1 # index of last record (inclusive) self.endTime = -1 # end time of last record, that is, the end time of the last # sampling period contained in the last record of the section def __init__(self, sb, st, eb, et): self.startRec = sb self.startTime = st self.endRec = eb self.endTime = et def before_time(self, rhb): """ Determine if this section is completely before another section in time. """ return self.endTime < rhb.startTime def overlaps_time(self, rhb): """ Determine if this section overlaps another in time. """ return self.startTime <= rhb.endTime and self.endTime >= rhb.startTime def after_time(self, rhb): """ Determine if this section is completely after another section in time. """ return self.startTime >= rhb.endTime class NcsSectionsFactory: """ Class for factory methods which perform parsing of contiguous sections of records in Ncs files. Model for times is that times are rounded to nearest microsecond. Times from start of a sample until just before the next sample are included, that is, closed lower bound and open upper bound on intervals. A channel with no samples is empty and contains no time intervals. Moved here since algorithm covering all 3 header styles and types used is more complicated. """ _maxGapSampFrac = 0.2 # maximum fraction of a sampling interval between predicted # and actual record timestamps still considered within one section @staticmethod def get_freq_for_micros_per_samp(micros): """ Compute fractional sampling frequency, given microseconds per sample. """ return 1e6 / micros @staticmethod def get_micros_per_samp_for_freq(sampFr): """ Calculate fractional microseconds per sample, given the sampling frequency (Hz). 
""" return 1e6 / sampFr @staticmethod def calc_sample_time(sampFr, startTime, posn): """ Calculate time rounded to microseconds for sample given frequency, start time, and sample position. """ return round(startTime + NcsSectionsFactory.get_micros_per_samp_for_freq(sampFr) * posn) @staticmethod def _parseGivenActualFrequency(ncsMemMap, ncsSects, chanNum, reqFreq, blkOnePredTime): """ Parse sections in memory mapped file when microsPerSampUsed and sampFreqUsed are known, filling in an NcsSections object. PARAMETERS ncsMemMap: memmap of Ncs file ncsSections: NcsSections with actual sampFreqUsed correct, first NcsSection with proper startSect and startTime already added. chanNum: channel number that should be present in all records reqFreq: rounded frequency that all records should contain blkOnePredTime: predicted starting time of second record in block RETURN NcsSections object with block locations marked """ startBlockPredTime = blkOnePredTime blkLen = 0 curBlock = ncsSects.sects[0] for recn in range(1, ncsMemMap.shape[0]): if ncsMemMap['channel_id'][recn] != chanNum or \ ncsMemMap['sample_rate'][recn] != reqFreq: raise IOError('Channel number or sampling frequency changed in ' + 'records within file') predTime = NcsSectionsFactory.calc_sample_time(ncsSects.sampFreqUsed, startBlockPredTime, blkLen) ts = ncsMemMap['timestamp'][recn] nValidSamps = ncsMemMap['nb_valid'][recn] if ts != predTime: curBlock.endRec = recn - 1 curBlock.endTime = predTime curBlock = NcsSection(recn, ts, -1, -1) ncsSects.sects.append(curBlock) startBlockPredTime = NcsSectionsFactory.calc_sample_time( ncsSects.sampFreqUsed, ts, nValidSamps) blkLen = 0 else: blkLen += nValidSamps curBlock.endRec = ncsMemMap.shape[0] - 1 endTime = NcsSectionsFactory.calc_sample_time(ncsSects.sampFreqUsed, startBlockPredTime, blkLen) curBlock.endTime = endTime return ncsSects @staticmethod def _buildGivenActualFrequency(ncsMemMap, actualSampFreq, reqFreq): """ Build NcsSections object for file given actual 
sampling frequency. Requires that frequency in each record agrees with requested frequency. This is normally obtained by rounding the header frequency; however, this value may be different from the rounded actual frequency used in the recording, since the underlying requirement in older Ncs files was that the number of microseconds per sample in the records is the inverse of the sampling frequency stated in the header truncated to whole microseconds. PARAMETERS ncsMemMap: memmap of Ncs file actualSampFreq: actual sampling frequency used reqFreq: frequency to require in records RETURN: NcsSections object """ # check frequency in first record if ncsMemMap['sample_rate'][0] != reqFreq: raise IOError("Sampling frequency in first record doesn't agree with header.") chanNum = ncsMemMap['channel_id'][0] nb = NcsSections() nb.sampFreqUsed = actualSampFreq nb.microsPerSampUsed = NcsSectionsFactory.get_micros_per_samp_for_freq(actualSampFreq) # check if file is one block of records, which is often the case, and avoid full parse lastBlkI = ncsMemMap.shape[0] - 1 ts0 = ncsMemMap['timestamp'][0] nb0 = ncsMemMap['nb_valid'][0] predLastBlockStartTime = NcsSectionsFactory.calc_sample_time(actualSampFreq, ts0, NcsSection._RECORD_SIZE * lastBlkI) lts = ncsMemMap['timestamp'][lastBlkI] lnb = ncsMemMap['nb_valid'][lastBlkI] if ncsMemMap['channel_id'][lastBlkI] == chanNum and \ ncsMemMap['sample_rate'][lastBlkI] == reqFreq and \ lts == predLastBlockStartTime: lastBlkEndTime = NcsSectionsFactory.calc_sample_time(actualSampFreq, lts, lnb) curBlock = NcsSection(0, ts0, lastBlkI, lastBlkEndTime) nb.sects.append(curBlock) return nb # otherwise need to scan looking for breaks else: blkOnePredTime = NcsSectionsFactory.calc_sample_time(actualSampFreq, ts0, nb0) curBlock = NcsSection(0, ts0, -1, -1) nb.sects.append(curBlock) return NcsSectionsFactory._parseGivenActualFrequency(ncsMemMap, nb, chanNum, reqFreq, blkOnePredTime) @staticmethod def _parseForMaxGap(ncsMemMap, ncsSects, maxGapLen): """ 
Parse blocks of records from file, allowing a maximum gap in timestamps between records in sections. Estimates frequency being used based on timestamps. PARAMETERS ncsMemMap: memmap of Ncs file ncsSects: NcsSections object with sampFreqUsed set to nominal frequency to use in computing time for samples (Hz) maxGapLen: maximum difference within a block between predicted time of start of record and recorded time RETURN: NcsSections object with sampFreqUsed and microsPerSamp set based on estimate from largest block """ # track frequency of each block and use estimate with longest block maxBlkLen = 0 maxBlkFreqEstimate = 0 # Parse the record sequence, finding blocks of continuous time with no more than # maxGapLength and same channel number chanNum = ncsMemMap['channel_id'][0] startBlockTime = ncsMemMap['timestamp'][0] blkLen = ncsMemMap['nb_valid'][0] lastRecTime = startBlockTime lastRecNumSamps = blkLen recFreq = ncsMemMap['sample_rate'][0] curBlock = NcsSection(0, startBlockTime, -1, -1) ncsSects.sects.append(curBlock) for recn in range(1, ncsMemMap.shape[0]): if ncsMemMap['channel_id'][recn] != chanNum or \ ncsMemMap['sample_rate'][recn] != recFreq: raise IOError('Channel number or sampling frequency changed in ' + 'records within file') predTime = NcsSectionsFactory.calc_sample_time(ncsSects.sampFreqUsed, lastRecTime, lastRecNumSamps) ts = ncsMemMap['timestamp'][recn] nb = ncsMemMap['nb_valid'][recn] if abs(ts - predTime) > maxGapLen: curBlock.endRec = recn - 1 curBlock.endTime = predTime curBlock = NcsSection(recn, ts, -1, -1) ncsSects.sects.append(curBlock) if blkLen > maxBlkLen: maxBlkLen = blkLen maxBlkFreqEstimate = (blkLen - lastRecNumSamps) * 1e6 / \ (lastRecTime - startBlockTime) startBlockTime = ts blkLen = nb else: blkLen += nb lastRecTime = ts lastRecNumSamps = nb if blkLen > maxBlkLen: maxBlkFreqEstimate = (blkLen - lastRecNumSamps) * 1e6 / \ (lastRecTime - startBlockTime) curBlock.endRec = ncsMemMap.shape[0] - 1 endTime = 
NcsSectionsFactory.calc_sample_time(ncsSects.sampFreqUsed, lastRecTime, lastRecNumSamps) curBlock.endTime = endTime ncsSects.sampFreqUsed = maxBlkFreqEstimate ncsSects.microsPerSampUsed = NcsSectionsFactory.get_micros_per_samp_for_freq( maxBlkFreqEstimate) return ncsSects @staticmethod def _buildForMaxGap(ncsMemMap, nomFreq): """ Determine sections of records in memory mapped Ncs file given a nominal frequency of the file, using the default values of frequency tolerance and maximum gap between blocks. PARAMETERS ncsMemMap: memmap of Ncs file nomFreq: nominal sampling frequency used, normally from header of file RETURN: NcsSections object """ nb = NcsSections() numRecs = ncsMemMap.shape[0] if numRecs < 1: return nb chanNum = ncsMemMap['channel_id'][0] ts0 = ncsMemMap['timestamp'][0] lastBlkI = numRecs - 1 lts = ncsMemMap['timestamp'][lastBlkI] lcid = ncsMemMap['channel_id'][lastBlkI] lnb = ncsMemMap['nb_valid'][lastBlkI] lsr = ncsMemMap['sample_rate'][lastBlkI] # check if file is one block of records, with exact timestamp match, which may be the case numSampsForPred = NcsSection._RECORD_SIZE * lastBlkI predLastBlockStartTime = NcsSectionsFactory.calc_sample_time(nomFreq, ts0, numSampsForPred) freqInFile = math.floor(nomFreq) if lts - predLastBlockStartTime == 0 and lcid == chanNum and lsr == freqInFile: endTime = NcsSectionsFactory.calc_sample_time(nomFreq, lts, lnb) curBlock = NcsSection(0, ts0, lastBlkI, endTime) nb.sects.append(curBlock) nb.sampFreqUsed = numSampsForPred / (lts - ts0) * 1e6 nb.microsPerSampUsed = NcsSectionsFactory.get_micros_per_samp_for_freq(nb.sampFreqUsed) # otherwise parse records to determine blocks using default maximum gap length else: nb.sampFreqUsed = nomFreq nb.microsPerSampUsed = NcsSectionsFactory.get_micros_per_samp_for_freq(nb.sampFreqUsed) maxGapToAllow = round(NcsSectionsFactory._maxGapSampFrac * 1e6 / nomFreq) nb = NcsSectionsFactory._parseForMaxGap(ncsMemMap, nb, maxGapToAllow) return nb @staticmethod def 
build_for_ncs_file(ncsMemMap, nlxHdr): """ Build an NcsSections object for an NcsFile, given as a memmap and NlxHeader, handling gap detection appropriately given the file type as specified by the header. PARAMETERS ncsMemMap: memory map of file nlxHdr: NlxHeader from corresponding file. RETURNS An NcsSections corresponding to the provided ncsMemMap and nlxHdr """ acqType = nlxHdr.type_of_recording() # Old Neuralynx style with truncated whole microseconds for actual sampling. This # restriction arose from the sampling being based on a master 1 MHz clock. if acqType == "PRE4": freq = nlxHdr['sampling_rate'] microsPerSampUsed = math.floor(NcsSectionsFactory.get_micros_per_samp_for_freq(freq)) sampFreqUsed = NcsSectionsFactory.get_freq_for_micros_per_samp(microsPerSampUsed) nb = NcsSectionsFactory._buildGivenActualFrequency(ncsMemMap, sampFreqUsed, math.floor(freq)) nb.sampFreqUsed = sampFreqUsed nb.microsPerSampUsed = microsPerSampUsed # digital lynx style with fractional frequency and micros per samp determined from # block times elif acqType == "DIGITALLYNX" or acqType == "DIGITALLYNXSX": nomFreq = nlxHdr['sampling_rate'] nb = NcsSectionsFactory._buildForMaxGap(ncsMemMap, nomFreq) # BML style with fractional frequency and micros per samp elif acqType == "BML": sampFreqUsed = nlxHdr['sampling_rate'] nb = NcsSectionsFactory._buildGivenActualFrequency(ncsMemMap, sampFreqUsed, math.floor(sampFreqUsed)) else: raise TypeError("Unknown Ncs file type from header.") return nb @staticmethod def _verifySectionsStructure(ncsMemMap, ncsSects): """ Check that the record structure and timestamps for the ncsMemMap agrees with that in ncsSects. Provides a more rapid verification of structure than building a new NcsSections and checking equality. PARAMETERS ncsMemMap: memmap of file to be checked ncsSects existing block structure to be checked RETURN: true if all timestamps and block record starts and stops agree, otherwise false. 
""" for blki in range(0, len(ncsSects.sects)): if ncsMemMap['timestamp'][ncsSects.sects[blki].startRec] != \ ncsSects.sects[blki].startTime: return False ets = ncsMemMap['timestamp'][ncsSects.sects[blki].endRec] enb = ncsMemMap['nb_valid'][ncsSects.sects[blki].endRec] endTime = NcsSectionsFactory.calc_sample_time(ncsSects.sampFreqUsed, ets, enb) if endTime != ncsSects.sects[blki].endTime: return False return True neo-0.10.0/neo/rawio/neuralynxrawio/neuralynxrawio.py0000644000076700000240000007001214077527451023424 0ustar andrewstaff00000000000000""" Class for reading data from Neuralynx files. This IO supports NCS, NEV, NSE and NTT file formats. NCS contains the sampled signal for one channel NEV contains events NSE contains spikes and waveforms for mono electrodes NTT contains spikes and waveforms for tetrodes All Neuralynx files contain a 16 kilobyte text header followed by 0 or more fixed length records. The format of the header has never been formally specified, however, the Neuralynx programs which write them have followed a set of conventions which have varied over the years. Additionally, other programs like Pegasus write files with somewhat varying headers. This variation requires parsing to determine the exact version and type which is handled within this RawIO by the NlxHeader class. Ncs files contain a series of 1044 byte records, each of which contains 512 16 byte samples and header information which includes a 64 bit timestamp in microseconds, a 16 bit channel number, the sampling frequency in integral Hz, and the number of the 512 samples which are considered valid samples (the remaining samples within the record are invalid). The Ncs file header usually contains a specification of the sampling frequency, which may be rounded to an integral number of Hz or may be fractional. The actual sampling frequency in terms of the underlying clock is physically determined by the spacing of the timestamps between records. 
These variations of header format and possible differences between the stated sampling
frequency and actual sampling frequency can create apparent time discrepancies in .Ncs
files. Additionally, the Neuralynx recording software can start and stop recording while
continuing to write samples to a single .Ncs file, which creates larger gaps in the time
sequences of the samples.

This RawIO attempts to correct for these deviations where possible and present a single
section of contiguous samples with one sampling frequency, t_start, and length for each
.Ncs file. These sections are determined by the NcsSectionsFactory class. In the event the
gaps are larger, this RawIO only provides the samples from the first section as belonging
to one Segment.

This RawIO presents only a single Block and Segment.

:TODO: This should likely be changed to provide multiple segments and allow for multiple
.Ncs files in a directory with differing section structures.

Author: Julia Sprenger, Carlos Canova, Samuel Garcia, Peter N. Steinmetz.
"""

from ..baserawio import (BaseRawIO, _signal_channel_dtype, _signal_stream_dtype,
                         _spike_channel_dtype, _event_channel_dtype)

import numpy as np
import os
from collections import (namedtuple, OrderedDict)

from neo.rawio.neuralynxrawio.ncssections import (NcsSection, NcsSectionsFactory)
from neo.rawio.neuralynxrawio.nlxheader import NlxHeader


class NeuralynxRawIO(BaseRawIO):
    """
    Class for reading datasets recorded by Neuralynx.

    This version works with rawmode of one-dir for a single directory of files or
    one-file for a single file.

    Examples:
        >>> reader = NeuralynxRawIO(dirname='Cheetah_v5.5.1/original_data')
        >>> reader.parse_header()

            Inspect all files in the directory.

        >>> print(reader)

            Display all information about signal channels, units, segment size....
""" extensions = ['nse', 'ncs', 'nev', 'ntt'] rawmode = 'one-dir' _ncs_dtype = [('timestamp', 'uint64'), ('channel_id', 'uint32'), ('sample_rate', 'uint32'), ('nb_valid', 'uint32'), ('samples', 'int16', (NcsSection._RECORD_SIZE))] def __init__(self, dirname='', filename='', keep_original_times=False, **kargs): """ Initialize io for either a directory of Ncs files or a single Ncs file. Parameters ---------- dirname: str name of directory containing all files for dataset. If provided, filename is ignored. filename: str name of a single ncs, nse, nev, or ntt file to include in dataset. If used, dirname must not be provided. keep_original_times: if True, keep original start time as in files, otherwise set 0 of time to first time in dataset """ if dirname != '': self.dirname = dirname self.rawmode = 'one-dir' elif filename != '': self.filename = filename self.rawmode = 'one-file' else: raise ValueError("One of dirname or filename must be provided.") self.keep_original_times = keep_original_times BaseRawIO.__init__(self, **kargs) def _source_name(self): if self.rawmode == 'one-file': return self.filename else: return self.dirname def _parse_header(self): stream_channels = [] signal_channels = [] spike_channels = [] event_channels = [] self.ncs_filenames = OrderedDict() # (chan_name, chan_id): filename self.nse_ntt_filenames = OrderedDict() # (chan_name, chan_id): filename self.nev_filenames = OrderedDict() # chan_id: filename self._nev_memmap = {} self._spike_memmap = {} self.internal_unit_ids = [] # channel_index > ((channel_name, channel_id), unit_id) self.internal_event_ids = [] self._empty_ncs = [] # this list contains filenames of empty files self._empty_nev = [] self._empty_nse_ntt = [] # Explore the directory looking for ncs, nev, nse and ntt # and construct channels headers. 
signal_annotations = [] unit_annotations = [] event_annotations = [] if self.rawmode == 'one-dir': filenames = sorted(os.listdir(self.dirname)) dirname = self.dirname else: dirname, fname = os.path.split(self.filename) filenames = [fname] for filename in filenames: filename = os.path.join(dirname, filename) _, ext = os.path.splitext(filename) ext = ext[1:] # remove dot ext = ext.lower() # make lower case for comparisons if ext not in self.extensions: continue # Skip Ncs files with only header. Other empty file types # will have an empty dataset constructed later. if (os.path.getsize(filename) <= NlxHeader.HEADER_SIZE) and ext in ['ncs']: self._empty_ncs.append(filename) continue # All file have more or less the same header structure info = NlxHeader(filename) chan_names = info['channel_names'] chan_ids = info['channel_ids'] for idx, chan_id in enumerate(chan_ids): chan_name = chan_names[idx] chan_uid = (chan_name, chan_id) if ext == 'ncs': # a sampled signal channel units = 'uV' gain = info['bit_to_microVolt'][idx] if info.get('input_inverted', False): gain *= -1 offset = 0. stream_id = 0 signal_channels.append((chan_name, str(chan_id), info['sampling_rate'], 'int16', units, gain, offset, stream_id)) self.ncs_filenames[chan_uid] = filename keys = [ 'DspFilterDelay_µs', 'recording_opened', 'FileType', 'DspDelayCompensation', 'recording_closed', 'DspLowCutFilterType', 'HardwareSubSystemName', 'DspLowCutNumTaps', 'DSPLowCutFilterEnabled', 'HardwareSubSystemType', 'DspHighCutNumTaps', 'ADMaxValue', 'DspLowCutFrequency', 'DSPHighCutFilterEnabled', 'RecordSize', 'InputRange', 'DspHighCutFrequency', 'input_inverted', 'NumADChannels', 'DspHighCutFilterType', ] d = {k: info[k] for k in keys if k in info} signal_annotations.append(d) elif ext in ('nse', 'ntt'): # nse and ntt are pretty similar except for the waveform shape. # A file can contain several unit_id (so several unit channel). 
assert chan_id not in self.nse_ntt_filenames, \ 'Several nse or ntt files have the same unit_id!!!' self.nse_ntt_filenames[chan_uid] = filename dtype = get_nse_or_ntt_dtype(info, ext) if os.path.getsize(filename) <= NlxHeader.HEADER_SIZE: self._empty_nse_ntt.append(filename) data = np.zeros((0,), dtype=dtype) else: data = np.memmap(filename, dtype=dtype, mode='r', offset=NlxHeader.HEADER_SIZE) self._spike_memmap[chan_uid] = data unit_ids = np.unique(data['unit_id']) for unit_id in unit_ids: # a spike channel for each (chan_id, unit_id) self.internal_unit_ids.append((chan_uid, unit_id)) unit_name = "ch{}#{}#{}".format(chan_name, chan_id, unit_id) unit_id = '{}'.format(unit_id) wf_units = 'uV' wf_gain = info['bit_to_microVolt'][idx] if info.get('input_inverted', False): wf_gain *= -1 wf_offset = 0. wf_left_sweep = -1 # NOT KNOWN wf_sampling_rate = info['sampling_rate'] spike_channels.append( (unit_name, '{}'.format(unit_id), wf_units, wf_gain, wf_offset, wf_left_sweep, wf_sampling_rate)) unit_annotations.append(dict(file_origin=filename)) elif ext == 'nev': # an event channel # each ('event_id', 'ttl_input') give a new event channel self.nev_filenames[chan_id] = filename if os.path.getsize(filename) <= NlxHeader.HEADER_SIZE: self._empty_nev.append(filename) data = np.zeros((0,), dtype=nev_dtype) internal_ids = [] else: data = np.memmap(filename, dtype=nev_dtype, mode='r', offset=NlxHeader.HEADER_SIZE) internal_ids = np.unique(data[['event_id', 'ttl_input']]).tolist() for internal_event_id in internal_ids: if internal_event_id not in self.internal_event_ids: event_id, ttl_input = internal_event_id name = '{} event_id={} ttl={}'.format( chan_name, event_id, ttl_input) event_channels.append((name, chan_id, 'event')) self.internal_event_ids.append(internal_event_id) self._nev_memmap[chan_id] = data signal_channels = np.array(signal_channels, dtype=_signal_channel_dtype) spike_channels = np.array(spike_channels, dtype=_spike_channel_dtype) event_channels = 
np.array(event_channels, dtype=_event_channel_dtype) # require all sampled signals, ncs files, to have the same sampling rate if signal_channels.size > 0: sampling_rate = np.unique(signal_channels['sampling_rate']) assert sampling_rate.size == 1 self._sigs_sampling_rate = sampling_rate[0] signal_streams = [('signals', '0')] else: signal_streams = [] signal_streams = np.array(signal_streams, dtype=_signal_stream_dtype) # set 2 attributes needed later for header in case there are no ncs files in dataset, # e.g. Pegasus self._timestamp_limits = None self._nb_segment = 1 # Read ncs files for gap detection and nb_segment computation. self._sigs_memmaps, ncsSegTimestampLimits = self.scan_ncs_files(self.ncs_filenames) if ncsSegTimestampLimits: self._ncs_seg_timestamp_limits = ncsSegTimestampLimits # save copy self._nb_segment = ncsSegTimestampLimits.nb_segment self._sigs_length = ncsSegTimestampLimits.length.copy() self._timestamp_limits = ncsSegTimestampLimits.timestamp_limits.copy() self._sigs_t_start = ncsSegTimestampLimits.t_start.copy() self._sigs_t_stop = ncsSegTimestampLimits.t_stop.copy() # Determine timestamp limits in nev, nse file by scanning them. 
        ts0, ts1 = None, None
        for _data_memmap in (self._spike_memmap, self._nev_memmap):
            for _, data in _data_memmap.items():
                ts = data['timestamp']
                if ts.size == 0:
                    continue
                if ts0 is None:
                    ts0 = ts[0]
                    ts1 = ts[-1]
                ts0 = min(ts0, ts[0])
                ts1 = max(ts1, ts[-1])

        # decide on segment and global start and stop times based on files available
        if self._timestamp_limits is None:
            # case NO ncs but HAVE nev or nse
            self._timestamp_limits = [(ts0, ts1)]
            self._seg_t_starts = [ts0 / 1e6]
            self._seg_t_stops = [ts1 / 1e6]
            self.global_t_start = ts0 / 1e6
            self.global_t_stop = ts1 / 1e6
        elif ts0 is not None:
            # case HAVE ncs AND HAVE nev or nse
            self.global_t_start = min(ts0 / 1e6, self._sigs_t_start[0])
            self.global_t_stop = max(ts1 / 1e6, self._sigs_t_stop[-1])
            self._seg_t_starts = list(self._sigs_t_start)
            self._seg_t_starts[0] = self.global_t_start
            self._seg_t_stops = list(self._sigs_t_stop)
            self._seg_t_stops[-1] = self.global_t_stop
        else:
            # case HAVE ncs but NO nev or nse
            self._seg_t_starts = self._sigs_t_start
            self._seg_t_stops = self._sigs_t_stop
            self.global_t_start = self._sigs_t_start[0]
            self.global_t_stop = self._sigs_t_stop[-1]

        if self.keep_original_times:
            self.global_t_stop = self.global_t_stop - self.global_t_start
            self.global_t_start = 0

        # fill header dictionary
        self.header = {}
        self.header['nb_block'] = 1
        self.header['nb_segment'] = [self._nb_segment]
        self.header['signal_streams'] = signal_streams
        self.header['signal_channels'] = signal_channels
        self.header['spike_channels'] = spike_channels
        self.header['event_channels'] = event_channels

        # Annotations
        self._generate_minimal_annotations()
        bl_annotations = self.raw_annotations['blocks'][0]

        for seg_index in range(self._nb_segment):
            seg_annotations = bl_annotations['segments'][seg_index]

            for c in range(signal_streams.size):  # one or no signal stream
                sig_ann = seg_annotations['signals'][c]
                # handle array annotations
                for key in signal_annotations[0].keys():
                    values = []
                    for c in range(signal_channels.size):
                        value = signal_annotations[c][key]
                        values.append(value)
                    values = np.array(values)
                    if values.ndim == 1:
                        # 'InputRange': is 2D and makes bugs
                        sig_ann['__array_annotations__'][key] = values

            for c in range(spike_channels.size):
                unit_ann = seg_annotations['spikes'][c]
                unit_ann.update(unit_annotations[c])

            for c in range(event_channels.size):
                # annotations for channel events
                event_id, ttl_input = self.internal_event_ids[c]
                chan_id = event_channels[c]['id']
                ev_ann = seg_annotations['events'][c]
                ev_ann['file_origin'] = self.nev_filenames[chan_id]
                # ~ ev_ann['marker_id'] =
                # ~ ev_ann['nttl'] =
                # ~ ev_ann['digital_marker'] =
                # ~ ev_ann['analog_marker'] =

    # Accessors for segment times which are offset by appropriate global start time
    def _segment_t_start(self, block_index, seg_index):
        return self._seg_t_starts[seg_index] - self.global_t_start

    def _segment_t_stop(self, block_index, seg_index):
        return self._seg_t_stops[seg_index] - self.global_t_start

    def _get_signal_size(self, block_index, seg_index, stream_index):
        return self._sigs_length[seg_index]

    def _get_signal_t_start(self, block_index, seg_index, stream_index):
        return self._sigs_t_start[seg_index] - self.global_t_start

    def _get_analogsignal_chunk(self, block_index, seg_index, i_start, i_stop,
                                stream_index, channel_indexes):
        """
        Retrieve chunk of analog signal, a chunk being a set of contiguous samples.

        PARAMETERS
        ----------
        block_index:
            index of block in dataset, ignored as there is only 1 block in this implementation
        seg_index:
            index of segment to use
        i_start:
            sample index of first sample within segment to retrieve
        i_stop:
            sample index of last sample within segment to retrieve
        channel_indexes:
            list of channel indices to return data for

        RETURNS
        -------
        array of samples, with each requested channel in a column
        """
        if i_start is None:
            i_start = 0
        if i_stop is None:
            i_stop = self._sigs_length[seg_index]

        block_start = i_start // NcsSection._RECORD_SIZE
        block_stop = i_stop // NcsSection._RECORD_SIZE + 1
        sl0 = i_start % NcsSection._RECORD_SIZE
        sl1 = sl0 + (i_stop - i_start)

        if channel_indexes is None:
            channel_indexes = slice(None)

        channel_ids = self.header['signal_channels'][channel_indexes]['id'].astype(int)
        channel_names = self.header['signal_channels'][channel_indexes]['name']

        # create buffer for samples
        sigs_chunk = np.zeros((i_stop - i_start, len(channel_ids)), dtype='int16')

        for i, chan_uid in enumerate(zip(channel_names, channel_ids)):
            data = self._sigs_memmaps[seg_index][chan_uid]
            sub = data[block_start:block_stop]
            sigs_chunk[:, i] = sub['samples'].flatten()[sl0:sl1]

        return sigs_chunk

    def _spike_count(self, block_index, seg_index, unit_index):
        chan_uid, unit_id = self.internal_unit_ids[unit_index]
        data = self._spike_memmap[chan_uid]
        ts = data['timestamp']
        ts0, ts1 = self._timestamp_limits[seg_index]

        # only count spikes inside the timestamp limits, inclusive, and for the specified unit
        keep = (ts >= ts0) & (ts <= ts1) & (unit_id == data['unit_id'])
        nb_spike = int(data[keep].size)
        return nb_spike

    def _get_spike_timestamps(self, block_index, seg_index, unit_index, t_start, t_stop):
        chan_uid, unit_id = self.internal_unit_ids[unit_index]
        data = self._spike_memmap[chan_uid]
        ts = data['timestamp']

        ts0, ts1 = self._timestamp_limits[seg_index]
        if t_start is not None:
            ts0 = int((t_start + self.global_t_start) * 1e6)
        if t_stop is not None:
            ts1 = int((t_stop + self.global_t_start) * 1e6)

        keep = (ts >= ts0) & (ts <= ts1) & (unit_id == data['unit_id'])
        timestamps = ts[keep]
        return timestamps

    def _rescale_spike_timestamp(self, spike_timestamps, dtype):
        spike_times = spike_timestamps.astype(dtype)
        spike_times /= 1e6
        spike_times -= self.global_t_start
        return spike_times

    def _get_spike_raw_waveforms(self, block_index, seg_index, unit_index, t_start, t_stop):
        chan_uid, unit_id = self.internal_unit_ids[unit_index]
        data = self._spike_memmap[chan_uid]
        ts = data['timestamp']

        ts0, ts1 = self._timestamp_limits[seg_index]
        if t_start is not None:
            ts0 = int((t_start + self.global_t_start) * 1e6)
        if t_stop is not None:
            ts1 = int((t_stop + self.global_t_start) * 1e6)

        keep = (ts >= ts0) & (ts <= ts1) & (unit_id == data['unit_id'])
        wfs = data[keep]['samples']
        if wfs.ndim == 2:
            # case for nse
            waveforms = wfs[:, None, :]
        else:
            # case for ntt: change (n, 32, 4) to (n, 4, 32)
            waveforms = wfs.swapaxes(1, 2)
        return waveforms

    def _event_count(self, block_index, seg_index, event_channel_index):
        event_id, ttl_input = self.internal_event_ids[event_channel_index]
        chan_id = self.header['event_channels'][event_channel_index]['id']
        data = self._nev_memmap[chan_id]
        ts0, ts1 = self._timestamp_limits[seg_index]
        ts = data['timestamp']
        keep = (ts >= ts0) & (ts <= ts1) & (data['event_id'] == event_id) & \
               (data['ttl_input'] == ttl_input)
        nb_event = int(data[keep].size)
        return nb_event

    def _get_event_timestamps(self, block_index, seg_index, event_channel_index,
                              t_start, t_stop):
        event_id, ttl_input = self.internal_event_ids[event_channel_index]
        chan_id = self.header['event_channels'][event_channel_index]['id']
        data = self._nev_memmap[chan_id]

        ts0, ts1 = self._timestamp_limits[seg_index]
        if t_start is not None:
            ts0 = int((t_start + self.global_t_start) * 1e6)
        if t_stop is not None:
            ts1 = int((t_stop + self.global_t_start) * 1e6)

        ts = data['timestamp']
        keep = (ts >= ts0) & (ts <= ts1) & (data['event_id'] == event_id) & \
               (data['ttl_input'] == ttl_input)

        subdata = data[keep]
        timestamps = subdata['timestamp']
        labels = subdata['event_string'].astype('U')
        durations = None
        return timestamps, durations, labels

    def _rescale_event_timestamp(self, event_timestamps, dtype, event_channel_index):
        event_times = event_timestamps.astype(dtype)
        event_times /= 1e6
        event_times -= self.global_t_start
        return event_times

    def scan_ncs_files(self, ncs_filenames):
        """
        Given a list of ncs files, read their basic structure.

        PARAMETERS
        ----------
        ncs_filenames:
            list of ncs filenames to scan.

        RETURNS
        -------
        memmaps
            [ {} for seg_index in range(self._nb_segment) ][chan_uid]
        seg_time_limits
            SegmentTimeLimits for sections in scanned Ncs files

        Files will be scanned to determine the sections of records. If a file is a single
        section of records, this scan is brief; otherwise each record is checked, which may
        take some time.
        """
        # :TODO: Needs to account for gaps and start and end times potentially
        # being different in different groups of channels. These groups typically
        # correspond to the channels collected by a single ADC card.
        if len(ncs_filenames) == 0:
            return None, None

        # Build dictionary of chan_uid to associated NcsSections, memmap and NlxHeaders. Only
        # construct a new NcsSections when it is different from that of the preceding file.
        chanSectMap = dict()
        for chan_uid, ncs_filename in self.ncs_filenames.items():

            data = np.memmap(ncs_filename, dtype=self._ncs_dtype, mode='r',
                             offset=NlxHeader.HEADER_SIZE)
            nlxHeader = NlxHeader(ncs_filename)

            if not chanSectMap or (chanSectMap and not
                    NcsSectionsFactory._verifySectionsStructure(data, lastNcsSections)):
                lastNcsSections = NcsSectionsFactory.build_for_ncs_file(data, nlxHeader)

            chanSectMap[chan_uid] = [lastNcsSections, nlxHeader, data]

        # Construct an inverse dictionary from NcsSections to list of associated chan_uids
        revSectMap = dict()
        for k, v in chanSectMap.items():
            revSectMap.setdefault(v[0], []).append(k)

        # If there is only one NcsSections structure in the set of ncs files, there should only
        # be one entry. Otherwise this is presently unsupported.
        if len(revSectMap) > 1:
            raise IOError('ncs files have {} different sections structures. Unsupported.'.format(
                len(revSectMap)))

        seg_time_limits = SegmentTimeLimits(nb_segment=len(lastNcsSections.sects),
                                            t_start=[], t_stop=[], length=[],
                                            timestamp_limits=[])
        memmaps = [{} for seg_index in range(seg_time_limits.nb_segment)]

        # create segment with subdata block/t_start/t_stop/length for each channel
        for i, fileEntry in enumerate(self.ncs_filenames.items()):
            chan_uid = fileEntry[0]
            data = chanSectMap[chan_uid][2]

            # create a memmap for each record section of the current file
            curSects = chanSectMap[chan_uid][0]
            for seg_index in range(len(curSects.sects)):

                curSect = curSects.sects[seg_index]
                subdata = data[curSect.startRec:(curSect.endRec + 1)]
                memmaps[seg_index][chan_uid] = subdata

                # create segment timestamp limits based on only NcsSections structure in use
                if i == 0:
                    numSampsLastSect = subdata[-1]['nb_valid']
                    ts0 = subdata[0]['timestamp']
                    ts1 = NcsSectionsFactory.calc_sample_time(curSects.sampFreqUsed,
                                                              subdata[-1]['timestamp'],
                                                              numSampsLastSect)
                    seg_time_limits.timestamp_limits.append((ts0, ts1))
                    t_start = ts0 / 1e6
                    seg_time_limits.t_start.append(t_start)
                    t_stop = ts1 / 1e6
                    seg_time_limits.t_stop.append(t_stop)
                    # :NOTE: This should really be the total of nb_valid in records, but this
                    # allows the last record of a section to be shorter, the most common case.
                    # Have never seen a section with non-full records before the last.
                    length = (subdata.size - 1) * NcsSection._RECORD_SIZE + numSampsLastSect
                    seg_time_limits.length.append(length)

        return memmaps, seg_time_limits


# time limits for set of segments
SegmentTimeLimits = namedtuple("SegmentTimeLimits",
                               ['nb_segment', 't_start', 't_stop', 'length',
                                'timestamp_limits'])

nev_dtype = [ ('reserved', '\S+)' r' At Time: (?P
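The spike, waveform, and event accessors above all share one pattern: convert the optional `t_start`/`t_stop` (seconds, relative to `global_t_start`) into microsecond timestamps, then select records with a boolean mask over the memmap's `timestamp` column. A standalone sketch of that pattern with made-up records (the dtype and numbers here are illustrative, not neo's actual .nse layout):

```python
import numpy as np

# Made-up spike records standing in for a .nse memmap: microsecond timestamps
# plus the sorted unit id of each spike.
data = np.array(
    [(1_000_000, 0), (2_000_000, 1), (3_000_000, 0), (4_000_000, 0)],
    dtype=[('timestamp', 'u8'), ('unit_id', 'i4')])

global_t_start = 1.0        # seconds; in neo this comes from the header scan
t_start, t_stop = 0.5, 2.5  # requested window, seconds relative to global_t_start

# convert the window to absolute microsecond timestamps
ts = data['timestamp']
ts0 = int((t_start + global_t_start) * 1e6)   # 1_500_000 us
ts1 = int((t_stop + global_t_start) * 1e6)    # 3_500_000 us

# boolean mask: inside the window (inclusive) and belonging to unit 0
keep = (ts >= ts0) & (ts <= ts1) & (data['unit_id'] == 0)
timestamps = ts[keep]          # -> [3_000_000]

# rescale back to seconds relative to global_t_start, as _rescale_spike_timestamp does
times = timestamps.astype('float64') / 1e6 - global_t_start  # -> [2.0]
```

The spike at 2,000,000 us falls in the window but belongs to unit 1, so only the unit-0 spike at 3,000,000 us survives the mask; dividing by 1e6 and subtracting `global_t_start` maps it back to 2.0 s.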